Mirror of https://github.com/vmware-tanzu/pinniped.git
Synced 2026-01-15 18:23:10 +00:00

Compare commits (2 commits)
| Author | SHA1 | Date |
|---|---|---|
| | fab814d6c6 | |
| | d5ff2f4447 | |
@@ -18,8 +18,8 @@ gcloud auth login
 # Set some variables.
 project="REDACTED" # Change this to be the actual project name before running these commands.
-region="us-west1"
-zone="us-west1-c"
+region="us-central1"
+zone="us-central1-b"
 vpc_name="ad"
 
 # Create VPC.
 
@@ -1 +1 @@
-Please see https://github.com/vmware/pinniped/blob/main/CODE_OF_CONDUCT.md
+Please see https://github.com/vmware-tanzu/pinniped/blob/main/CODE_OF_CONDUCT.md
@@ -1 +1 @@
-Please see https://github.com/vmware/pinniped/blob/main/CONTRIBUTING.md
+Please see https://github.com/vmware-tanzu/pinniped/blob/main/CONTRIBUTING.md
@@ -1 +1 @@
-Please see https://github.com/vmware/pinniped/blob/main/MAINTAINERS.md
+Please see https://github.com/vmware-tanzu/pinniped/blob/main/MAINTAINERS.md
README.md (16 changed lines)
@@ -1,6 +1,6 @@
 # Pinniped's `ci` branch
 
-This `ci` branch contains the CI/CD tooling for [Pinniped](https://github.com/vmware/pinniped).
+This `ci` branch contains the CI/CD tooling for [Pinniped](https://github.com/vmware-tanzu/pinniped).
 
 The documentation and code in this branch is mainly intended for the maintainers of Pinniped.
@@ -13,20 +13,20 @@ for these files was not copied from the private repository at the time of this m
 ## Reporting an issue in this branch
 
 Found a bug or would like to make an enhancement request?
-Please report issues in [this repo](https://github.com/vmware/pinniped).
+Please report issues in [this repo](https://github.com/vmware-tanzu/pinniped).
 
 ## Reporting security vulnerabilities
 
-Please follow the procedure described in [SECURITY.md](https://github.com/vmware/pinniped/blob/main/SECURITY.md).
+Please follow the procedure described in [SECURITY.md](https://github.com/vmware-tanzu/pinniped/blob/main/SECURITY.md).
 
 ## Creating a release
 
 When the team is preparing to ship a release, a maintainer will create a new
-GitHub [Issue](https://github.com/vmware/pinniped/issues/new/choose) in this repo to
+GitHub [Issue](https://github.com/vmware-tanzu/pinniped/issues/new/choose) in this repo to
 collaboratively track progress on the release checklist. As tasks are completed,
 the team will check them off. When all the tasks are completed, the issue is closed.
 
-The release checklist is committed to this repo as an [issue template](https://github.com/vmware/pinniped/tree/main/.github/ISSUE_TEMPLATE/release_checklist.md).
+The release checklist is committed to this repo as an [issue template](https://github.com/vmware-tanzu/pinniped/tree/main/.github/ISSUE_TEMPLATE/release_checklist.md).
 
 ## Pipelines
 
@@ -115,7 +115,7 @@ Some pipelines use github [webhooks to trigger resource checks](https://concours
 rather than the default of polling every minute, to make these pipelines more responsive and use fewer compute resources
 for running checks. Refer to places where `webhook_token` is configured in various `pipeline.yml` files.
 
-To make these webhooks work, they must be defined on the [GitHub repo's settings](https://github.com/vmware/pinniped/settings/hooks).
+To make these webhooks work, they must be defined on the [GitHub repo's settings](https://github.com/vmware-tanzu/pinniped/settings/hooks).
 
 ## Installing and operating Concourse
 
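As a rough illustration of where `webhook_token` sits in a Concourse pipeline definition (the resource name, branch, and credential variable here are hypothetical, not copied from the real `pipeline.yml` files):

```yaml
resources:
  - name: pinniped-ci        # hypothetical resource name
    type: git
    source:
      uri: https://github.com/vmware-tanzu/pinniped.git
      branch: ci
    # GitHub POSTs to Concourse's per-resource check webhook URL with this
    # token as a query parameter, so Concourse checks immediately instead of
    # waiting for its next polling interval.
    webhook_token: ((github-webhook-token))
    # With webhooks covering the fast path, periodic polling can be slow.
    check_every: 24h
```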
@@ -125,12 +125,12 @@ See [infra/README.md](./infra/README.md) for details about how Concourse was ins
 
 In addition to the many ephemeral Kubernetes clusters we use for testing, we also deploy a long-running acceptance environment.
 
-Google Kubernetes Engine (GKE) in the `gke-acceptance-cluster` cluster in our GCP project in the `us-west1-c` availability zone.
+Google Kubernetes Engine (GKE) in the `gke-acceptance-cluster` cluster in our GCP project in the `us-central1-c` availability zone.
 
 To access this cluster, download the kubeconfig to `gke-acceptance.yaml` by running:
 
 ```cmd
-KUBECONFIG=gke-acceptance.yaml gcloud container clusters get-credentials gke-acceptance-cluster --project "$PINNIPED_GCP_PROJECT" --zone us-west1-c
+KUBECONFIG=gke-acceptance.yaml gcloud container clusters get-credentials gke-acceptance-cluster --project "$PINNIPED_GCP_PROJECT" --zone us-central1-c
 ```
 
 The above command assumes that you have already set `PINNIPED_GCP_PROJECT` to be the name of the GCP project.
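A minimal sketch of wiring that README command into a script: the project name below is a placeholder, and the `gcloud` invocation mirrors the `us-central1-c` variant of the README's command.

```shell
# Placeholder project name; override by exporting PINNIPED_GCP_PROJECT first.
export PINNIPED_GCP_PROJECT="${PINNIPED_GCP_PROJECT:-my-gcp-project}"

fetch_kubeconfig() {
  # Setting KUBECONFIG per-command writes credentials into gke-acceptance.yaml
  # only, leaving ~/.kube/config untouched.
  KUBECONFIG=gke-acceptance.yaml gcloud container clusters get-credentials \
    gke-acceptance-cluster --project "$PINNIPED_GCP_PROJECT" --zone us-central1-c
}

# Only attempt the download when gcloud is actually installed.
if command -v gcloud >/dev/null 2>&1; then
  fetch_kubeconfig
else
  echo "gcloud not installed; skipping kubeconfig download"
fi
```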
@@ -1 +1 @@
-Please see https://github.com/vmware/pinniped/blob/main/SECURITY.md
+Please see https://github.com/vmware-tanzu/pinniped/blob/main/SECURITY.md
@@ -2,13 +2,13 @@
 # SPDX-License-Identifier: Apache-2.0
 
 # For running Go linters
-FROM debian:13.2-slim AS builder
+FROM debian:12.11-slim AS builder
 
 RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
 
 RUN curl -sfLo /tmp/codecov https://uploader.codecov.io/latest/linux/codecov
 RUN chmod +x /tmp/codecov
 
-FROM golang:1.25.5
+FROM golang:1.24.3
 RUN apt-get update -y && apt-get dist-upgrade -y
 COPY --from=builder /tmp/codecov /usr/local/bin/codecov
@@ -2,9 +2,9 @@
 # SPDX-License-Identifier: Apache-2.0
 
 FROM gcr.io/go-containerregistry/crane as crane
-FROM mikefarah/yq:4.50.1 AS yq
+FROM mikefarah/yq:4.45.4 AS yq
 
-FROM golang:1.25.5
+FROM golang:1.24.3
 COPY --from=yq /usr/bin/yq /usr/local/bin
 COPY --from=crane /ko-app/crane /usr/local/bin
 ENTRYPOINT ["bash"]
@@ -1,9 +1,9 @@
 # Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
 # SPDX-License-Identifier: Apache-2.0
 
-FROM mikefarah/yq:4.50.1 AS yq
+FROM mikefarah/yq:4.45.4 AS yq
 
-FROM debian:13.2-slim
+FROM debian:12.11-slim
 
 # Note: libdigest-sha-perl is to get shasum, which is used when installing Carvel tools below.
 RUN apt-get update && apt-get install -y ca-certificates jq curl libdigest-sha-perl && rm -rf /var/lib/apt/lists/*
@@ -3,10 +3,10 @@
 
 # For deploying an EKS cluster and setting it up to run our tests.
 
-FROM weaveworks/eksctl:v0.221.0 AS eksctl
-FROM mikefarah/yq:4.50.1 AS yq
-FROM amazon/aws-cli:2.32.30
-RUN yum update -y && yum install -y jq perl-Digest-SHA openssl && yum clean all
+FROM weaveworks/eksctl:v0.208.0 AS eksctl
+FROM mikefarah/yq:4.45.4 AS yq
+FROM amazon/aws-cli:2.27.24
+RUN yum update -y && yum install -y jq && yum install -y perl-Digest-SHA && yum clean all
 COPY --from=eksctl eksctl /usr/local/bin/eksctl
 COPY --from=yq /usr/bin/yq /usr/local/bin/yq
 
@@ -2,7 +2,7 @@
 # SPDX-License-Identifier: Apache-2.0
 
 # For running the GitHub CLI.
-FROM debian:13.2-slim AS builder
+FROM debian:12.11-slim AS builder
 
 RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
 
@@ -11,5 +11,5 @@ RUN curl \
 https://github.com/cli/cli/releases/download/v2.40.0/gh_2.40.0_linux_amd64.tar.gz \
 && tar -C /tmp --strip-components=1 -xzvf /tmp/gh.tar.gz
 
-FROM golang:1.25.5
+FROM golang:1.24.3
 COPY --from=builder /tmp/bin/gh /usr/local/bin/gh
@@ -3,14 +3,14 @@
 
 # For running the integration tests as a client to a k8s cluster
 
-FROM mikefarah/yq:4.50.1 AS yq
+FROM mikefarah/yq:4.45.4 AS yq
 
 # We need gcloud for running integration tests against GKE
 # because the kubeconfig uses gcloud as an `auth-provider`.
 # Use FROM gcloud-sdk instead of FROM golang because its
 # a lot easier to install Go than to install gcloud in the
 # subsequent commands below.
-FROM google/cloud-sdk:551.0.0-slim
+FROM google/cloud-sdk:524.0.0-slim
 
 # Install apache2-utils (for htpasswd to bcrypt passwords for the
 # local-user-authenticator) and jq.
@@ -36,7 +36,7 @@ RUN google-chrome --version
 
 # Install Go. The download URL that can be used below for any version of Go can be found on https://go.dev/dl/
 ENV PATH /usr/local/go/bin:$PATH
-RUN curl -fsSL https://go.dev/dl/go1.25.5.linux-amd64.tar.gz -o /tmp/go.tar.gz && \
+RUN curl -fsSL https://go.dev/dl/go1.24.3.linux-amd64.tar.gz -o /tmp/go.tar.gz && \
     tar -C /usr/local -xzf /tmp/go.tar.gz && \
     rm /tmp/go.tar.gz && \
     go version
@@ -3,14 +3,14 @@
 
 # For running the integration tests as a client to a k8s cluster
 
-FROM mikefarah/yq:4.50.1 AS yq
+FROM mikefarah/yq:4.45.4 AS yq
 
 # We need gcloud for running integration tests against GKE
 # because the kubeconfig uses gcloud as an `auth-provider`.
 # Use FROM gcloud-sdk instead of FROM golang because its
 # a lot easier to install Go than to install gcloud in the
 # subsequent commands below.
-FROM google/cloud-sdk:551.0.0-slim
+FROM google/cloud-sdk:524.0.0-slim
 
 # Install apache2-utils (for htpasswd to bcrypt passwords for the
 # local-user-authenticator) and jq.
@@ -36,7 +36,7 @@ RUN google-chrome --version
 
 # Install Go. The download URL that can be used below for any version of Go can be found on https://go.dev/dl/
 ENV PATH /usr/local/go/bin:$PATH
-RUN curl -fsSL https://go.dev/dl/go1.25.5.linux-amd64.tar.gz -o /tmp/go.tar.gz && \
+RUN curl -fsSL https://go.dev/dl/go1.24.3.linux-amd64.tar.gz -o /tmp/go.tar.gz && \
     tar -C /usr/local -xzf /tmp/go.tar.gz && \
     rm /tmp/go.tar.gz && \
     go version
@@ -3,7 +3,7 @@
 
 # For deploying apps onto Kubernetes clusters (including GKE)
 
-FROM google/cloud-sdk:551.0.0-slim
+FROM google/cloud-sdk:524.0.0-slim
 
 # Install apache2-utils (for htpasswd to bcrypt passwords for the
 # local-user-authenticator) and jq.
@@ -1,6 +1,6 @@
 #!/bin/bash
 
-# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
+# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
 # SPDX-License-Identifier: Apache-2.0
 
 set -euo pipefail
@@ -51,9 +51,7 @@ require (
 EOF
 
 # Resolve dependencies and download the modules.
-echo "Running go mod tidy ..."
 go mod tidy
-echo "Running go mod download ..."
 go mod download
 
 # Copy the downloaded source code of k8s.io/code-generator so we can "go install" all its commands.
@@ -66,16 +64,7 @@ cp -pr "$(go env GOMODCACHE)/k8s.io/code-generator@v$K8S_PKG_VERSION" "$(go env
 # The sed is a dirty hack to avoid having the code-generator shell scripts run go install again.
 # In version 0.23.0 the line inside the shell script that previously said "go install ..." started
 # to instead say "GO111MODULE=on go install ..." so this sed is a little wrong, but still seems to work.
 echo "Running go install for all k8s.io/code-generator commands ..."
-# Using sed to edit the go.mod file (and then running go mod tidy) is a dirty hack to work around
-# an issue introduced in Go v1.25. See https://github.com/golang/go/issues/74462.
-# The version of code-generator used by Kube 1.30 depends on x/tools v0.18.0.
-# The version of code-generator used by Kube 1.31 depends on x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d.
-# Other versions of Kube use code-generator versions which do not have this problem.
 (cd "$(go env GOPATH)/src/k8s.io/code-generator" &&
-  sed -i -E -e 's#golang\.org/x/tools v0\.18\.0#golang\.org/x/tools v0\.24\.1#g' ./go.mod &&
-  sed -i -E -e 's#golang\.org/x/tools v0\.21\.1-.*#golang\.org/x/tools v0\.24\.1#g' ./go.mod &&
-  go mod tidy &&
   go install -v ./cmd/... &&
   sed -i -E -e 's/(go install.*)/# \1/g' ./*.sh)
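The "pin a dependency by rewriting go.mod with sed" hack shown in this hunk can be tried on a scratch file; a minimal sketch where the directory and module contents are made up for illustration (the substitution itself is the one from the script):

```shell
# Create a scratch go.mod with the problematic x/tools version.
workdir=$(mktemp -d)
cat > "$workdir/go.mod" <<'EOF'
module example.com/scratch

require golang.org/x/tools v0.18.0
EOF

# Same style of substitution the script applies before `go mod tidy`:
# bump the pinned x/tools version in place (GNU sed -i).
sed -i -E -e 's#golang\.org/x/tools v0\.18\.0#golang.org/x/tools v0.24.1#g' "$workdir/go.mod"

grep 'golang.org/x/tools' "$workdir/go.mod"
```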
@@ -85,30 +74,14 @@ if [[ ! -f "$(go env GOPATH)/bin/openapi-gen" ]]; then
 # that is selected as an indirect dependency by the go.mod.
 kube_openapi_version=$(go list -m k8s.io/kube-openapi | cut -f2 -d' ')
 # Install that version of its openapi-gen command.
 echo "Running go install for openapi-gen $kube_openapi_version ..."
-# Using sed to edit the go.mod file (and then running go mod tidy) is a dirty hack to work around
-# an issue introduced in Go v1.25. See https://github.com/golang/go/issues/74462.
-# If this were not needed, then we could just use "go install" directly without
-# copying the source code or editing the go.mod file (which is what this script used to do),
-# like this: go install -v "k8s.io/kube-openapi/cmd/openapi-gen@$kube_openapi_version"
-# The version of kube-openapi used by Kube 1.30 (and maybe 1.31) depends on x/tools v0.18.0.
-# The version of kube-openapi used by Kube 1.32 depends on x/tools v0.24.0.
-# Other versions of Kube use kube-openapi versions which do not have this problem.
-cp -pr "$(go env GOMODCACHE)/k8s.io/kube-openapi@$kube_openapi_version" "$(go env GOPATH)/src/k8s.io/kube-openapi"
-(cd "$(go env GOPATH)/src/k8s.io/kube-openapi" &&
-  sed -i -E -e 's#golang\.org/x/tools v0\.18\.0#golang\.org/x/tools v0\.24\.1#g' ./go.mod &&
-  sed -i -E -e 's#golang\.org/x/tools v0\.24\.0#golang\.org/x/tools v0\.24\.1#g' ./go.mod &&
-  go mod tidy &&
-  go install -v ./cmd/openapi-gen)
+go install -v "k8s.io/kube-openapi/cmd/openapi-gen@$kube_openapi_version"
 fi
 
 echo "Running go install for controller-gen ..."
 go install -v sigs.k8s.io/controller-tools/cmd/controller-gen@v$CONTROLLER_GEN_VERSION
 
 # We use a commit sha instead of a release semver because this project does not create
 # releases very often. They seem to only release 1-2 times per year, but commit to
 # main more often.
 echo "Running go install for crd-ref-docs ..."
 go install -v github.com/elastic/crd-ref-docs@$CRD_REF_DOCS_COMMIT_SHA
 
 # List all the commands that we just installed.
@@ -9,7 +9,7 @@
 # to use newer versions of linux, jq, and git. The "assets" directory's source code is copied from
 # https://github.com/cfmobile/pool-trigger-resource/tree/master/assets as of commit efefe018c88e937.
 
-FROM debian:13.2-slim
+FROM debian:12.11-slim
 
 RUN apt-get update && apt-get install -y ca-certificates jq git && rm -rf /var/lib/apt/lists/*
 
@@ -1,11 +1,4 @@
-# Copyright 2024-2025 the Pinniped contributors. All Rights Reserved.
+# Copyright 2024 the Pinniped contributors. All Rights Reserved.
 # SPDX-License-Identifier: Apache-2.0
 
-# It seems that Bitnami no longer supports openldap.
-# See https://github.com/bitnami/containers/issues/83267
-# All existing container images have been migrated from the public catalog (docker.io/bitnami) to
-# the “Bitnami Legacy” repository (docker.io/bitnamilegacy), where they will no longer receive updates.
-#
-# FROM bitnami/openldap:2.6.10
-
-FROM bitnamilegacy/openldap:2.6.10
+FROM bitnami/openldap:2.6.10
@@ -11,7 +11,7 @@ FROM cfssl/cfssl:v1.6.5 as cfssl
 
 # We just need any basic unix with bash, but we can pick the same
 # base image that they use, just in case they did any dynamic linking.
-FROM golang:1.25.5
+FROM golang:1.24.3
 
 # Thier Docerfile https://github.com/cloudflare/cfssl/blob/master/Dockerfile
 # calls their Makefile https://github.com/cloudflare/cfssl/blob/master/Makefile
@@ -1,4 +1,4 @@
 # Copyright 2021-2024 the Pinniped contributors. All Rights Reserved.
 # SPDX-License-Identifier: Apache-2.0
 
-FROM ghcr.io/dexidp/dex:v2.44.0
+FROM ghcr.io/dexidp/dex:v2.43.1
@@ -2,7 +2,7 @@
 # SPDX-License-Identifier: Apache-2.0
 
 # Use a runtime image based on Debian slim
-FROM debian:13.2-slim
+FROM debian:12.11-slim
 
 # Install Squid and drop in a very basic, open proxy configuration.
 RUN apt-get update && apt-get install -y squid
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 
-# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
+# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
 # SPDX-License-Identifier: Apache-2.0
 
 set -euo pipefail
 
-repo=vmware/pinniped
+repo=vmware-tanzu/pinniped
 current_branch_name=$(git rev-parse --abbrev-ref HEAD)
 
 if [[ "$current_branch_name" != "ci" ]]; then
@@ -1,6 +1,6 @@
 #!/usr/bin/env bash
 
-# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
+# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
 # SPDX-License-Identifier: Apache-2.0
 
 set -euo pipefail
@@ -15,61 +15,30 @@ if [[ -z "${PINNIPED_GCP_PROJECT:-}" ]]; then
 exit 1
 fi
 
-if [[ -z "${SHARED_VPC_PROJECT:-}" ]]; then
-  echo "SHARED_VPC_PROJECT env var must be set"
-  exit 1
-fi
-if [[ -z "${SHARED_VPC_NAME:-}" ]]; then
-  echo "SHARED_VPC_NAME env var must be set"
-  exit 1
-fi
-if [[ -z "${SUBNET_NAME:-}" ]]; then
-  echo "SUBNET_NAME env var must be set"
-  exit 1
-fi
-
-CLUSTER_ZONE="us-west1-c"
-SUBNET_REGION="us-west1"
-
 # Create (or recreate) a GKE acceptance cluster.
 # Pro tip: The GCP Console UI can help you build this command.
 # The following fields were customized, and all of the others are left as the GCP Console's defaults:
 # - Cluster name
-# - Machine type - starting in Aug 2025, the google pods request more than 1 CPU, making them not fit on a single e2-medium node
 # - Cluster version - newest at the time
 # - Num nodes - sized smaller to be cheaper
 # - Maintenance window start and recurrence - to avoid downtime during business hours
 # - Issue client certificate - to make it possible to use an admin kubeconfig without the GKE auth plugin
-# - tags, authorized networks, private nodes, private endpoint, network, subnet, and secondary ranges
-# - service account
 gcloud container --project "$PINNIPED_GCP_PROJECT" clusters create "gke-acceptance-cluster" \
-  --zone "$CLUSTER_ZONE" \
-  --no-enable-basic-auth \
-  --cluster-version "1.32.4-gke.1415000" \
-  --release-channel "regular" \
-  --machine-type "e2-standard-2" \
+  --zone "us-central1-c" --no-enable-basic-auth --cluster-version "1.30.4-gke.1348000" --release-channel "regular" \
+  --machine-type "e2-medium" \
   --image-type "COS_CONTAINERD" --disk-type "pd-balanced" --disk-size "100" --metadata disable-legacy-endpoints=true \
   --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
   --num-nodes "1" \
   --logging=SYSTEM,WORKLOAD --monitoring=SYSTEM,STORAGE,POD,DEPLOYMENT,STATEFULSET,DAEMONSET,HPA,CADVISOR,KUBELET \
+  --enable-ip-alias \
+  --network "projects/$PINNIPED_GCP_PROJECT/global/networks/default" \
+  --subnetwork "projects/$PINNIPED_GCP_PROJECT/regions/us-central1/subnetworks/default" \
   --no-enable-intra-node-visibility \
   --default-max-pods-per-node "110" \
-  --security-posture=standard --workload-vulnerability-scanning=disabled \
+  --security-posture=standard --workload-vulnerability-scanning=disabled --no-enable-master-authorized-networks \
   --addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver \
   --enable-autoupgrade --enable-autorepair --max-surge-upgrade 1 --max-unavailable-upgrade 0 \
-  --binauthz-evaluation-mode=DISABLED --enable-managed-prometheus \
-  --enable-shielded-nodes --shielded-integrity-monitoring --no-shielded-secure-boot \
-  --node-locations "$CLUSTER_ZONE" \
+  --binauthz-evaluation-mode=DISABLED --enable-managed-prometheus --enable-shielded-nodes --node-locations "us-central1-c" \
   --maintenance-window-start "2020-07-01T03:00:00Z" --maintenance-window-end "2020-07-01T11:00:00Z" \
   --maintenance-window-recurrence "FREQ=WEEKLY;BYDAY=MO,TU,WE,TH,FR,SA,SU" \
-  --issue-client-certificate \
-  --tags "gke-broadcom" \
-  --enable-master-authorized-networks \
-  --master-authorized-networks "10.0.0.0/8" \
-  --enable-private-nodes \
-  --enable-private-endpoint \
-  --enable-ip-alias \
-  --network "projects/${SHARED_VPC_PROJECT}/global/networks/${SHARED_VPC_NAME}" \
-  --subnetwork "projects/${SHARED_VPC_PROJECT}/regions/${SUBNET_REGION}/subnetworks/${SUBNET_NAME}" \
-  --cluster-secondary-range-name "services" \
-  --services-secondary-range-name "pods" \
-  --service-account "terraform@${PINNIPED_GCP_PROJECT}.iam.gserviceaccount.com"
+  --issue-client-certificate
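The env-var guards repeated at the top of these scripts all follow one pattern; a sketch of factoring it into a helper, where the function and variable names are mine rather than from the repo:

```shell
set -euo pipefail

# Fail with a message when the named environment variable is unset or empty,
# mirroring the `if [[ -z "${VAR:-}" ]]` checks in the CI scripts.
require_env() {
  local name="$1"
  if [[ -z "${!name:-}" ]]; then
    echo "$name env var must be set" >&2
    return 1
  fi
}

# Hypothetical variable, for demonstration only.
EXAMPLE_VAR="some-value"
require_env EXAMPLE_VAR && echo "EXAMPLE_VAR is set"
```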
@@ -1,6 +1,6 @@
 #!/usr/bin/env bash
 
-# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
+# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
 # SPDX-License-Identifier: Apache-2.0
 
 # Assuming that you have somehow got your hands on a remote GKE or kind cluster,
@@ -240,7 +240,7 @@ gke | aks | eks)
 log_note "KUBECONFIG='$KUBECONFIG' TEST_ENV_PATH='/tmp/integration-test-env' SOURCE_PATH='$pinniped_repo' $ROOT/pipelines/shared-tasks/run-integration-tests/task.sh"
 ;;
 kind)
-log_note "KUBECONFIG='$KUBECONFIG' TEST_ENV_PATH='/tmp/integration-test-env' SOURCE_PATH='$pinniped_repo' START_GCLOUD_PROXY=yes GCP_PROJECT=$PINNIPED_GCP_PROJECT GCP_ZONE=us-west1-a $ROOT/pipelines/shared-tasks/run-integration-tests/task.sh"
+log_note "KUBECONFIG='$KUBECONFIG' TEST_ENV_PATH='/tmp/integration-test-env' SOURCE_PATH='$pinniped_repo' START_GCLOUD_PROXY=yes GCP_PROJECT=$PINNIPED_GCP_PROJECT GCP_ZONE=us-central1-b $ROOT/pipelines/shared-tasks/run-integration-tests/task.sh"
 ;;
 *)
 log_error "Huh? Should never get here."
@@ -1,6 +1,6 @@
 #!/usr/bin/env bash
 
-# Copyright 2021-2025 the Pinniped contributors. All Rights Reserved.
+# Copyright 2021-2024 the Pinniped contributors. All Rights Reserved.
 # SPDX-License-Identifier: Apache-2.0
 
 set -euo pipefail
@@ -10,30 +10,10 @@ if [[ -z "${PINNIPED_GCP_PROJECT:-}" ]]; then
 exit 1
 fi
 
-if [[ -z "${SHARED_VPC_PROJECT:-}" ]]; then
-  echo "SHARED_VPC_PROJECT env var must be set"
-  exit 1
-fi
-
-if [[ -z "${SUBNET_NAME:-}" ]]; then
-  echo "SUBNET_NAME env var must be set"
-  exit 1
-fi
-
-if [[ -z "${DISK_IMAGES_PROJECT:-}" ]]; then
-  echo "DISK_IMAGES_PROJECT env var must be set"
-  exit 1
-fi
-
-if ! gcloud auth print-access-token &>/dev/null; then
-  echo "Please run \`gcloud auth login\` and try again."
-  exit 1
-fi
-
 instance_name="${REMOTE_INSTANCE_NAME:-${USER}}"
 instance_user="${REMOTE_INSTANCE_USERNAME:-${USER}}"
 project="$PINNIPED_GCP_PROJECT"
-zone="us-west1-a"
+zone="us-central1-b"
 here="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 
 # Create a VM called $instance_name with some reasonable compute power and disk.
@@ -41,45 +21,23 @@ echo "Creating VM with name $instance_name..."
 gcloud compute instances create "$instance_name" \
   --project="$project" --zone="$zone" \
   --machine-type="e2-standard-8" \
-  --network-interface=stack-type=IPV4_ONLY,subnet=projects/"$SHARED_VPC_PROJECT"/regions/us-west1/subnetworks/"${SUBNET_NAME}",no-address \
-  --create-disk=auto-delete=yes,boot=yes,device-name="$instance_name",image=projects/"${DISK_IMAGES_PROJECT}"/global/images/labs-saas-gcp-debian12-packer-latest,mode=rw,size=40,type=pd-ssd
+  --boot-disk-size="40GB" --boot-disk-type="pd-ssd" --boot-disk-device-name="$instance_name"
 
-# Make a private key for ssh.
-ssh_key_file="$HOME/.ssh/gcp-remote-workstation-key"
-if [[ ! -f "$ssh_key_file" ]]; then
-  ssh-keygen -t rsa -b 4096 -q -N "" -f "$ssh_key_file"
-fi
-
-# Add the key only to the specific VM instance (as VM metadata).
-echo "${instance_user}:$(cat "${ssh_key_file}.pub")" >/tmp/ssh-key-values
-gcloud compute instances add-metadata "$instance_name" \
-  --metadata-from-file ssh-keys=/tmp/ssh-key-values \
-  --zone "$zone" --project "$project"
-
-# Get the IP so we can use regular ssh (not gcloud ssh).
-gcloud_instance_ip=$(gcloud compute instances describe \
-  --zone "$zone" --project "$project" "${instance_name}" \
-  --format='get(networkInterfaces[0].networkIP)')
-
-ssh_dest="${instance_user}@${gcloud_instance_ip}"
-
-# Wait for the ssh server of the new instance to be ready.
-attempts=0
-while ! ssh -i "$ssh_key_file" -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null "$ssh_dest" echo connection test; do
-  echo "Waiting for ssh server to start ..."
-  attempts=$((attempts + 1))
-  if [[ $attempts -gt 25 ]]; then
-    echo "ERROR: ssh server never accepted connections after waiting for a while"
-    exit 1
-  fi
-  sleep 2
-done
+# Give a little time for the server to be ready.
+while true; do
+  sleep 5
+  if ! "$here"/ssh.sh ls; then
+    echo "Waiting for VM to be accessible via ssh..."
+  else
+    echo "VM ready!"
+    break
+  fi
+done
 
 # Copy the deps script to the new VM.
 echo "Copying deps.sh to $instance_name..."
-scp -i "$ssh_key_file" \
-  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
-  "$here"/lib/deps.sh "$ssh_dest":/tmp
+gcloud compute scp "$here"/lib/deps.sh "$instance_user@$instance_name":/tmp \
+  --project="$project" --zone="$zone"
 
 # Run the deps script on the new VM.
 "$here"/ssh.sh /tmp/deps.sh
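Both sides of this change wait for sshd with a retry loop; the bounded variant (an attempts counter with a cap) is a reusable pattern. A minimal sketch with a stand-in probe command, where every name is hypothetical:

```shell
# Retry a command until it succeeds or a cap is hit, similar to how the
# VM-creation script waits for the ssh server to accept connections.
wait_for() {
  local attempts=0 max=$1; shift
  until "$@"; do
    attempts=$((attempts + 1))
    if [[ $attempts -ge $max ]]; then
      echo "ERROR: gave up after $max attempts" >&2
      return 1
    fi
    sleep 0.1
  done
}

# Stand-in for the ssh check: succeeds on the third call, simulating a
# server that takes a little while to start.
count_file=$(mktemp)
probe() {
  echo x >> "$count_file"
  [[ $(wc -l < "$count_file") -ge 3 ]]
}

wait_for 25 probe && echo "ready"
```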
@@ -1,6 +1,6 @@
 #!/usr/bin/env bash
 
-# Copyright 2021-2025 the Pinniped contributors. All Rights Reserved.
+# Copyright 2021-2024 the Pinniped contributors. All Rights Reserved.
 # SPDX-License-Identifier: Apache-2.0
 
 set -euo pipefail
@@ -10,14 +10,9 @@ if [[ -z "${PINNIPED_GCP_PROJECT:-}" ]]; then
 exit 1
 fi
 
-if ! gcloud auth print-access-token &>/dev/null; then
-  echo "Please run \`gcloud auth login\` and try again."
-  exit 1
-fi
-
 instance_name="${REMOTE_INSTANCE_NAME:-${USER}}"
 project="$PINNIPED_GCP_PROJECT"
-zone="us-west1-a"
+zone="us-central1-b"
 
 # Delete the instance forever. Will prompt for confirmation.
 echo "Destroying VM $instance_name..."
@@ -19,10 +19,12 @@ brew install gcc
 
 # Install go.
 brew install go
+# On linux go really wants gcc5 to also be installed for some reason.
+brew install gcc@5
 
 # Install and configure zsh and plugins.
 brew install zsh zsh-history-substring-search
-brew install fzf
+brew install fasd fzf
 /home/linuxbrew/.linuxbrew/opt/fzf/install --all --no-bash --no-fish
 # Install https://ohmyz.sh
 export PATH=$PATH:/home/linuxbrew/.linuxbrew/bin
@@ -44,11 +46,11 @@ curl -fsSL https://gist.githubusercontent.com/cfryanr/80ada8af9a78f08b368327401e
 
 # Install other useful packages.
 brew tap homebrew/command-not-found
-brew tap carvel-dev/carvel
+brew tap vmware-tanzu/carvel
 brew install ytt kbld kapp imgpkg kwt vendir
 brew install git git-duet/tap/git-duet pre-commit gh
 brew install k9s kind kubectl kubectx stern
-brew install acarl005/homebrew-formulas/ls-go ripgrep procs bat tokei git-delta dust fd httpie chroma
+brew install exa acarl005/homebrew-formulas/ls-go ripgrep procs bat tokei git-delta dust fd httpie chroma
 brew install watch htop wget
 brew install jesseduffield/lazydocker/lazydocker ctop dive
 brew install jq yq
@@ -79,7 +81,9 @@ sudo systemctl enable containerd.service
 mkdir workspace
 pushd workspace
 ssh-keyscan -H github.com >> $HOME/.ssh/known_hosts
-git clone https://github.com/vmware/pinniped.git
+# This assumes that you used `--ssh-flag=-A` when using `gcloud compute ssh` to log in to the host,
+# which will forward your ssh identities.
+git clone git@github.com:vmware-tanzu/pinniped.git
 pushd pinniped
 pre-commit install
 ./hack/install-linter.sh
@@ -1,6 +1,6 @@
 #!/usr/bin/env bash
 
-# Copyright 2022-2025 the Pinniped contributors. All Rights Reserved.
+# Copyright 2022-2024 the Pinniped contributors. All Rights Reserved.
 # SPDX-License-Identifier: Apache-2.0
 
 # This is similar to rsync.sh, but with the src and dest flipped at the end.
@@ -13,36 +13,28 @@ if [[ -z "${PINNIPED_GCP_PROJECT:-}" ]]; then
 exit 1
 fi
 
 if ! gcloud auth print-access-token &>/dev/null; then
 echo "Please run \`gcloud auth login\` and try again."
 exit 1
 fi
 
 SRC_DIR=${SRC_DIR:-"$HOME/workspace/pinniped"}
 src_dir_parent=$(dirname "$SRC_DIR")
 dest_dir="./workspace/pinniped"
 instance_name="${REMOTE_INSTANCE_NAME:-${USER}}"
 instance_user="${REMOTE_INSTANCE_USERNAME:-${USER}}"
 project="$PINNIPED_GCP_PROJECT"
-zone="us-west1-a"
+zone="us-central1-b"
+config_file="/tmp/gcp-ssh-config"
 here="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-ssh_key_file="$HOME/.ssh/gcp-remote-workstation-key"
-
-# Get the IP so we can use regular ssh (not gcloud ssh).
-gcloud_instance_ip=$(gcloud compute instances describe \
-  --zone "$zone" --project "$project" "${instance_name}" \
-  --format='get(networkInterfaces[0].networkIP)')
-
-ssh_dest="${instance_user}@${gcloud_instance_ip}"
 
 if [[ ! -d "$SRC_DIR" ]]; then
 echo "ERROR: $SRC_DIR does not exist"
 exit 1
 fi
 
+# Get the ssh fingerprints of all the GCP VMs.
+gcloud compute config-ssh --ssh-config-file="$config_file" \
+  --project="$project" >/dev/null
+
 cd "$SRC_DIR"
-local_commit=$(git rev-parse HEAD)
-remote_commit=$("$here"/ssh.sh "cd $dest_dir; git rev-parse HEAD" 2>/dev/null | tr -dc '[:print:]')
+local_commit=$(git rev-parse --short HEAD)
+remote_commit=$("$here"/ssh.sh "cd $dest_dir; git rev-parse --short HEAD" 2>/dev/null | tr -dc '[:print:]')
 
 if [[ -z "$local_commit" || -z "$remote_commit" ]]; then
 echo "ERROR: Could not determine currently checked out git commit sha"
@@ -51,8 +43,8 @@ fi
|
||||
|
||||
if [[ "$local_commit" != "$remote_commit" ]]; then
|
||||
echo "ERROR: Local and remote repos are not on the same commit. This is usually a mistake."
|
||||
echo "Local was $SRC_DIR at ${local_commit}"
|
||||
echo "Remote was ${instance_name}:${dest_dir} at ${remote_commit}"
|
||||
echo "Local was $SRC_DIR at *${local_commit}*"
|
||||
echo "Remote was ${instance_name}:${dest_dir} at *${remote_commit}*"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
@@ -63,5 +55,5 @@ rsync \
|
||||
--progress --delete --archive --compress --human-readable \
|
||||
--max-size 200K \
|
||||
--exclude .git/ --exclude .idea/ --exclude .DS_Store --exclude '*.test' --exclude '*.out' \
|
||||
--rsh "ssh -i '$ssh_key_file' -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" \
|
||||
"$ssh_dest:$dest_dir" "$src_dir_parent"
|
||||
--rsh "ssh -F $config_file" \
|
||||
"${instance_user}@${instance_name}.${zone}.${project}:$dest_dir" "$src_dir_parent"
|
||||
|
||||
@@ -1,9 +1,9 @@
#!/usr/bin/env bash

# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

# Copyright 2021-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2021 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

set -euo pipefail
@@ -13,35 +13,27 @@ if [[ -z "${PINNIPED_GCP_PROJECT:-}" ]]; then
exit 1
fi

if ! gcloud auth print-access-token &>/dev/null; then
echo "Please run \`gcloud auth login\` and try again."
exit 1
fi

SRC_DIR=${SRC_DIR:-"$HOME/workspace/pinniped"}
dest_dir="./workspace"
instance_name="${REMOTE_INSTANCE_NAME:-${USER}}"
instance_user="${REMOTE_INSTANCE_USERNAME:-${USER}}"
project="$PINNIPED_GCP_PROJECT"
zone="us-west1-a"
zone="us-central1-b"
config_file="/tmp/gcp-ssh-config"
here="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ssh_key_file="$HOME/.ssh/gcp-remote-workstation-key"

# Get the IP so we can use regular ssh (not gcloud ssh).
gcloud_instance_ip=$(gcloud compute instances describe \
--zone "$zone" --project "$project" "${instance_name}" \
--format='get(networkInterfaces[0].networkIP)')

ssh_dest="${instance_user}@${gcloud_instance_ip}"

if [[ ! -d "$SRC_DIR" ]]; then
echo "ERROR: $SRC_DIR does not exist"
exit 1
fi

# Get the ssh fingerprints of all the GCP VMs.
gcloud compute config-ssh --ssh-config-file="$config_file" \
--project="$project" >/dev/null

cd "$SRC_DIR"
local_commit=$(git rev-parse HEAD)
remote_commit=$("$here"/ssh.sh "cd $dest_dir/pinniped; git rev-parse HEAD" 2>/dev/null | tr -dc '[:print:]')
local_commit=$(git rev-parse --short HEAD)
remote_commit=$("$here"/ssh.sh "cd $dest_dir/pinniped; git rev-parse --short HEAD" 2>/dev/null | tr -dc '[:print:]')

if [[ -z "$local_commit" || -z "$remote_commit" ]]; then
echo "ERROR: Could not determine currently checked out git commit sha"
@@ -50,8 +42,8 @@ fi

if [[ "$local_commit" != "$remote_commit" ]]; then
echo "ERROR: Local and remote repos are not on the same commit. This is usually a mistake."
echo "Local was $SRC_DIR at ${local_commit}"
echo "Remote was ${instance_name}:${dest_dir}/pinniped at ${remote_commit}"
echo "Local was $SRC_DIR at *${local_commit}*"
echo "Remote was ${instance_name}:${dest_dir}/pinniped at *${remote_commit}*"
exit 1
fi

@@ -62,5 +54,5 @@ rsync \
--progress --delete --archive --compress --human-readable \
--max-size 200K \
--exclude .git/ --exclude .idea/ --exclude .DS_Store --exclude '*.test' --exclude '*.out' \
--rsh "ssh -i '$ssh_key_file' -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" \
"$SRC_DIR" "$ssh_dest:$dest_dir"
--rsh "ssh -F $config_file" \
"$SRC_DIR" "${instance_user}@${instance_name}.${zone}.${project}:$dest_dir"

@@ -1,6 +1,6 @@
#!/usr/bin/env bash

# Copyright 2021-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2021-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

set -euo pipefail
@@ -10,25 +10,13 @@ if [[ -z "${PINNIPED_GCP_PROJECT:-}" ]]; then
exit 1
fi

if ! gcloud auth print-access-token &>/dev/null; then
echo "Please run \`gcloud auth login\` and try again."
exit 1
fi

instance_name="${REMOTE_INSTANCE_NAME:-${USER}}"
instance_user="${REMOTE_INSTANCE_USERNAME:-${USER}}"
project="$PINNIPED_GCP_PROJECT"
zone="us-west1-a"
ssh_key_file="$HOME/.ssh/gcp-remote-workstation-key"

# Get the IP so we can use regular ssh (not gcloud ssh).
gcloud_instance_ip=$(gcloud compute instances describe \
--zone "$zone" --project "$project" "${instance_name}" \
--format='get(networkInterfaces[0].networkIP)')

ssh_dest="${instance_user}@${gcloud_instance_ip}"
zone="us-central1-b"

# Run ssh with identities forwarded so you can use them with git on the remote host.
# Optionally run an arbitrary command on the remote host.
# By default, start an interactive session.
ssh -i "$ssh_key_file" -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -A "$ssh_dest" -- "$@"
gcloud compute ssh --ssh-flag=-A "$instance_user@$instance_name" \
--project="$project" --zone="$zone" -- "$@"

@@ -1,6 +1,6 @@
#!/usr/bin/env bash

# Copyright 2021-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2021-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

set -euo pipefail
@@ -10,14 +10,9 @@ if [[ -z "${PINNIPED_GCP_PROJECT:-}" ]]; then
exit 1
fi

if ! gcloud auth print-access-token &>/dev/null; then
echo "Please run \`gcloud auth login\` and try again."
exit 1
fi

instance_name="${REMOTE_INSTANCE_NAME:-${USER}}"
project="$PINNIPED_GCP_PROJECT"
zone="us-west1-a"
zone="us-central1-b"

# Start an instance which was previously stopped to save money.
echo "Starting VM $instance_name..."

@@ -1,6 +1,6 @@
#!/usr/bin/env bash

# Copyright 2021-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2021-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

set -euo pipefail
@@ -10,14 +10,9 @@ if [[ -z "${PINNIPED_GCP_PROJECT:-}" ]]; then
exit 1
fi

if ! gcloud auth print-access-token &>/dev/null; then
echo "Please run \`gcloud auth login\` and try again."
exit 1
fi

instance_name="${REMOTE_INSTANCE_NAME:-${USER}}"
project="$PINNIPED_GCP_PROJECT"
zone="us-west1-a"
zone="us-central1-b"

# Stop the instance, to save money, in a way that it can be restarted.
echo "Stopping VM $instance_name..."

@@ -6,9 +6,7 @@ metadata:
namespace: kube-system
data:
# 240.0.0.0/4 is needed to allow the pod to reach the Cloud SQL server's private IP.
# I was told to also add the whole primary IP range of the cluster's subnet, which is 10.31.141.64/27.
config: |
nonMasqueradeCIDRs:
- 240.0.0.0/4
- 10.31.141.64/27
resyncInterval: 60s

@@ -1,4 +1,4 @@
# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

display:
@@ -11,9 +11,9 @@ resources:
type: git
icon: github
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))

jobs:


@@ -1,4 +1,4 @@
# Copyright 2020-2026 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

display:
@@ -9,25 +9,24 @@ meta:

# GCP account info and which zone the workers should be created in and deleted from.
gke_admin_params: &gke_admin_params
INSTANCE_ZONE: us-west1-c
INSTANCE_ZONE: us-west1-b
PINNIPED_GCP_PROJECT: ((gcp-project-name))
GCP_USERNAME: ((gcp-instance-admin-username))
GCP_JSON_KEY: ((gcp-instance-admin-json-key))
GCP_USERNAME: ((gke-cluster-developer-username))
GCP_JSON_KEY: ((gke-cluster-developer-json-key))

# GCP account info and which zone the workers should be created in and deleted from.
gcp_account_params: &gcp_account_params
INSTANCE_ZONE: us-west1-a
INSTANCE_ZONE: us-central1-b
GCP_PROJECT: ((gcp-project-name))
GCP_USERNAME: ((gcp-instance-admin-username))
GCP_JSON_KEY: ((gcp-instance-admin-json-key))

# GKE account info and which zone the clusters should be created in and deleted from.
gke_account_params: &gke_account_params
CLUSTER_REGION: us-west1
CLUSTER_ZONE: us-west1-c
CLUSTER_ZONE: us-central1-c
GCP_PROJECT: ((gcp-project-name))
GCP_SERVICE_ACCOUNT: ((gcp-instance-admin-username))
GCP_JSON_KEY: ((gcp-instance-admin-json-key))
GCP_SERVICE_ACCOUNT: ((gke-test-pool-manager-username))
GCP_JSON_KEY: ((gke-test-pool-manager-json-key))

# Azure account info and which resource group the clusters should be created in and deleted from.
azure_account_params: &azure_account_params
@@ -43,9 +42,9 @@ resources:
type: git
icon: github
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))

- name: k8s-app-deployer-image
type: registry-image
@@ -65,12 +64,12 @@ resources:
repository: google/cloud-sdk
tag: slim

# - name: aks-deployer-image
# type: registry-image
# icon: docker
# check_every: 5m
# source:
# repository: mcr.microsoft.com/azure-cli
- name: aks-deployer-image
type: registry-image
icon: docker
check_every: 5m
source:
repository: mcr.microsoft.com/azure-cli

- name: hourly
type: time
@@ -163,18 +162,18 @@ jobs:
params:
<<: *gke_account_params

# - name: remove-orphaned-aks-clusters
# public: true # all logs are publicly visible
# plan:
# - in_parallel:
# - get: pinniped-ci
# - get: aks-deployer-image
# - get: hourly
# trigger: true
# - task: remove-orphaned-aks-clusters
# attempts: 2
# timeout: 25m
# file: pinniped-ci/pipelines/shared-tasks/remove-orphaned-aks-clusters/task.yml
# image: aks-deployer-image
# params:
# <<: *azure_account_params
- name: remove-orphaned-aks-clusters
public: true # all logs are publicly visible
plan:
- in_parallel:
- get: pinniped-ci
- get: aks-deployer-image
- get: hourly
trigger: true
- task: remove-orphaned-aks-clusters
attempts: 2
timeout: 25m
file: pinniped-ci/pipelines/shared-tasks/remove-orphaned-aks-clusters/task.yml
image: aks-deployer-image
params:
<<: *azure_account_params

@@ -38,15 +38,15 @@ meta:
# These version numbers should be updated periodically.
codegen-versions: &codegen-versions
# Choose which version of Golang to use in the codegen container images.
BUILD_ARG_GO_VERSION: '1.25.5'
BUILD_ARG_GO_VERSION: '1.24.3'
# Choose which version of sigs.k8s.io/controller-tools/cmd/controller-gen to install
# in the codegen container images.
BUILD_ARG_CONTROLLER_GEN_VERSION: 0.20.0
BUILD_ARG_CONTROLLER_GEN_VERSION: 0.18.0
# Choose which version of github.com/elastic/crd-ref-docs to install in the codegen
# container images. We use a commit sha instead of a release semver because this project
# does not create releases very often. They seem to only release 1-2 times per year, but
# commit to main more often.
BUILD_ARG_CRD_REF_DOCS_COMMIT_SHA: da1c9739
BUILD_ARG_CRD_REF_DOCS_COMMIT_SHA: ade917a

resources:

@@ -55,9 +55,9 @@ resources:
icon: github
<<: *check-every-for-dockerfile
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))
paths: [ dockerfiles/k8s-app-deployer/Dockerfile ]

- name: k8s-app-deployer-image
@@ -75,9 +75,9 @@ resources:
icon: github
<<: *check-every-for-dockerfile
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))
paths: [ dockerfiles/deployment-yaml-formatter/Dockerfile ]

- name: deployment-yaml-formatter-image
@@ -95,9 +95,9 @@ resources:
icon: github
<<: *check-every-for-dockerfile
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))
paths: [ dockerfiles/integration-test-runner/Dockerfile ]

- name: integration-test-runner-beta-dockerfile
@@ -105,9 +105,9 @@ resources:
icon: github
<<: *check-every-for-dockerfile
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))
paths: [ dockerfiles/integration-test-runner-beta/Dockerfile ]

- name: integration-test-runner-image
@@ -135,9 +135,9 @@ resources:
icon: github
<<: *check-every-for-dockerfile
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))
paths: [ dockerfiles/code-coverage-uploader/Dockerfile ]

- name: code-coverage-uploader-image
@@ -155,9 +155,9 @@ resources:
icon: github
<<: *check-every-for-dockerfile
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))
paths:
- dockerfiles/pool-trigger-resource/Dockerfile
- "dockerfiles/pool-trigger-resource/assets/*"
@@ -252,34 +252,14 @@ resources:
password: ((ci-ghcr-pusher-token))
tag: latest

- name: k8s-code-generator-1.34-image-ghcr
type: registry-image
icon: docker
<<: *check-every-for-image
source:
repository: ((ci-ghcr-registry))/k8s-code-generator-1.34
username: ((ci-ghcr-pusher-username))
password: ((ci-ghcr-pusher-token))
tag: latest

- name: k8s-code-generator-1.35-image-ghcr
type: registry-image
icon: docker
<<: *check-every-for-image
source:
repository: ((ci-ghcr-registry))/k8s-code-generator-1.35
username: ((ci-ghcr-pusher-username))
password: ((ci-ghcr-pusher-token))
tag: latest

- name: k8s-code-generator-dockerfile
type: git
icon: github
<<: *check-every-for-dockerfile
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))
paths: [ dockerfiles/k8s-code-generator/* ]

- name: test-forward-proxy-image-ghcr
@@ -297,9 +277,9 @@ resources:
icon: github
<<: *check-every-for-dockerfile
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))
paths: [ dockerfiles/test-forward-proxy/* ]

- name: test-bitnami-ldap-image-ghcr
@@ -317,9 +297,9 @@ resources:
icon: github
<<: *check-every-for-dockerfile
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))
paths: [ dockerfiles/test-bitnami-ldap/Dockerfile ]

- name: test-dex-image
@@ -337,9 +317,9 @@ resources:
icon: github
<<: *check-every-for-dockerfile
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))
paths: [ dockerfiles/test-dex/Dockerfile ]

- name: test-cfssl-image
@@ -357,9 +337,9 @@ resources:
icon: github
<<: *check-every-for-dockerfile
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))
paths: [ dockerfiles/test-cfssl/Dockerfile ]

- name: test-kubectl-image
@@ -377,9 +357,9 @@ resources:
icon: github
<<: *check-every-for-dockerfile
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))
paths: [ dockerfiles/test-kubectl/Dockerfile ]

- name: gh-cli-image
@@ -397,9 +377,9 @@ resources:
icon: github
<<: *check-every-for-dockerfile
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))
paths: [ dockerfiles/gh-cli/Dockerfile ]

- name: crane-image
@@ -417,9 +397,9 @@ resources:
icon: github
<<: *check-every-for-dockerfile
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))
paths: [ dockerfiles/crane/Dockerfile ]

- name: eks-deployer-dockerfile
@@ -427,9 +407,9 @@ resources:
icon: github
<<: *check-every-for-dockerfile
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))
paths: [ dockerfiles/eks-deployer/Dockerfile ]

- name: eks-deployer-image
@@ -824,7 +804,7 @@ jobs:
- path: cache
params:
CONTEXT: k8s-code-generator-dockerfile/dockerfiles/k8s-code-generator
BUILD_ARG_K8S_PKG_VERSION: 0.30.14
BUILD_ARG_K8S_PKG_VERSION: 0.30.12
<<: *codegen-versions
OUTPUT_OCI: true # needed for building multi-arch images
IMAGE_PLATFORM: "linux/amd64,linux/arm64" # build a multi-arch images which includes these platforms
@@ -860,7 +840,7 @@ jobs:
- path: cache
params:
CONTEXT: k8s-code-generator-dockerfile/dockerfiles/k8s-code-generator
BUILD_ARG_K8S_PKG_VERSION: 0.31.14
BUILD_ARG_K8S_PKG_VERSION: 0.31.8
<<: *codegen-versions
OUTPUT_OCI: true # needed for building multi-arch images
IMAGE_PLATFORM: "linux/amd64,linux/arm64" # build a multi-arch images which includes these platforms
@@ -896,7 +876,7 @@ jobs:
- path: cache
params:
CONTEXT: k8s-code-generator-dockerfile/dockerfiles/k8s-code-generator
BUILD_ARG_K8S_PKG_VERSION: 0.32.10
BUILD_ARG_K8S_PKG_VERSION: 0.32.4
<<: *codegen-versions
OUTPUT_OCI: true # needed for building multi-arch images
IMAGE_PLATFORM: "linux/amd64,linux/arm64" # build a multi-arch images which includes these platforms
@@ -932,7 +912,7 @@ jobs:
- path: cache
params:
CONTEXT: k8s-code-generator-dockerfile/dockerfiles/k8s-code-generator
BUILD_ARG_K8S_PKG_VERSION: 0.33.6
BUILD_ARG_K8S_PKG_VERSION: 0.33.0
<<: *codegen-versions
OUTPUT_OCI: true # needed for building multi-arch images
IMAGE_PLATFORM: "linux/amd64,linux/arm64" # build a multi-arch images which includes these platforms
@@ -942,78 +922,6 @@ jobs:
params:
image: image/image # this is a directory for OCI (multi-arch images)

- name: build-k8s-code-generator-1.34
public: true # all logs are publicly visible
serial: true
plan:
- get: k8s-code-generator-dockerfile
trigger: true
- get: daily
trigger: true
- task: build-image
privileged: true
config:
platform: linux
image_resource:
type: registry-image
source:
repository: concourse/oci-build-task
inputs:
- name: k8s-code-generator-dockerfile
outputs:
- name: image
run:
path: build
caches:
- path: cache
params:
CONTEXT: k8s-code-generator-dockerfile/dockerfiles/k8s-code-generator
BUILD_ARG_K8S_PKG_VERSION: 0.34.2
<<: *codegen-versions
OUTPUT_OCI: true # needed for building multi-arch images
IMAGE_PLATFORM: "linux/amd64,linux/arm64" # build a multi-arch images which includes these platforms
- put: k8s-code-generator-1.34-image-ghcr
get_params:
format: oci # needed for multi-arch images
params:
image: image/image # this is a directory for OCI (multi-arch images)

- name: build-k8s-code-generator-1.35
public: true # all logs are publicly visible
serial: true
plan:
- get: k8s-code-generator-dockerfile
trigger: true
- get: daily
trigger: true
- task: build-image
privileged: true
config:
platform: linux
image_resource:
type: registry-image
source:
repository: concourse/oci-build-task
inputs:
- name: k8s-code-generator-dockerfile
outputs:
- name: image
run:
path: build
caches:
- path: cache
params:
CONTEXT: k8s-code-generator-dockerfile/dockerfiles/k8s-code-generator
BUILD_ARG_K8S_PKG_VERSION: 0.35.0
<<: *codegen-versions
OUTPUT_OCI: true # needed for building multi-arch images
IMAGE_PLATFORM: "linux/amd64,linux/arm64" # build a multi-arch images which includes these platforms
- put: k8s-code-generator-1.35-image-ghcr
get_params:
format: oci # needed for multi-arch images
params:
image: image/image # this is a directory for OCI (multi-arch images)

- name: build-test-forward-proxy
public: true # all logs are publicly visible
serial: true

148
pipelines/go-compatibility/pipeline.yml
Normal file
@@ -0,0 +1,148 @@
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

display:

background_image: https://upload.wikimedia.org/wikipedia/commons/6/68/Mirounga_leonina.jpg

meta:

build_pinniped: &build_pinniped
config:
platform: linux
inputs:
- name: pinniped-source
run:
path: bash
args:
- "-c"
- |
set -exuo pipefail
go version
cd pinniped-source/

# compile all of our code
go build -o /dev/null ./...

# compile (but don't actually run) all of our tests
go test ./... -run=nothing

resources:

- name: daily
type: time
icon: calendar-clock
check_every: 10m
source:
location: America/Los_Angeles
start: 4:00 AM
stop: 5:00 AM
days: [ Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday ]

- name: pinniped-source
type: git
icon: github
source:
uri: git@github.com:vmware-tanzu/pinniped.git
branch: main
private_key: ((source-repo-deploy-key))

- name: go-1.22-image
type: registry-image
icon: docker
check_every: 10m
source:
repository: docker.io/golang
tag: "1.22"

jobs:

- name: go-install-cli
public: true # all logs are publicly visible
serial: true
plan:
- get: daily
trigger: true
- task: go-install
config:
platform: linux
image_resource:
type: registry-image
source:
repository: docker.io/golang
run:
path: bash
args:
- "-c"
- |
set -exuo pipefail
go install -v go.pinniped.dev/cmd/pinniped@latest

# This job attempts to check whether it's possible to depend on our API client submodule.
# It creates a simple test application with go.mod and main.go files, then attempts to compile it.
#
# As of now, this is known to be broken so we've decided to disable this job.
# - name: go-get-submodule
# serial: true
# plan:
# - get: daily
# trigger: true
# - task: go-get
# config:
# platform: linux
# image_resource:
# type: registry-image
# source:
# repository: docker.io/golang
# run:
# path: bash
# args:
# - "-c"
# - |
# set -euo pipefail
# mkdir /work
# cd /work
#
# cat << EOF > go.mod
# module testapp
#
# go 1.14
#
# require (
# go.pinniped.dev/generated/1.18/apis v0.0.0-00010101000000-000000000000
# go.pinniped.dev/generated/1.18/client v0.0.0-20200918195624-2d4d7e588a18
# )
#
# replace (
# go.pinniped.dev/generated/1.18/apis v0.0.0-00010101000000-000000000000 => go.pinniped.dev/generated/1.18/apis v0.0.0-20200918195624-2d4d7e588a18
# )
# EOF
#
# cat << EOF > main.go
# package main
#
# import (
# _ "go.pinniped.dev/generated/1.18/apis/idp/v1alpha1"
# _ "go.pinniped.dev/generated/1.18/client/clientset/versioned"
# )
#
# func main() {}
# EOF
#
# head -100 go.mod main.go
# set -x
# go mod download
# go build -o testapp main.go

- name: go-1.22-compatibility
public: true # all logs are publicly visible
serial: true
plan:
- in_parallel:
- get: daily
trigger: true
- get: pinniped-source
- get: go-1.22-image
- task: build
image: go-1.22-image
<<: *build_pinniped
13
pipelines/go-compatibility/update-pipeline.sh
Executable file
@@ -0,0 +1,13 @@
#!/usr/bin/env bash

# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

set -euo pipefail

script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
pipeline=$(basename "$script_dir")
source "$script_dir/../../hack/fly-helpers.sh"

set_pipeline "$pipeline" "$script_dir/pipeline.yml"
ensure_time_resource_has_at_least_one_version "$pipeline" daily
@@ -1,4 +1,4 @@
# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

display:
@@ -18,7 +18,7 @@ meta:

# GCP account info and which zone the workers should be created in and deleted from.
gcp_account_params: &gcp_account_params
INSTANCE_ZONE: us-west1-a
INSTANCE_ZONE: us-central1-b
GCP_PROJECT: ((gcp-project-name))
GCP_USERNAME: ((gcp-instance-admin-username))
GCP_JSON_KEY: ((gcp-instance-admin-json-key))
@@ -50,9 +50,9 @@ resources:
type: git
icon: github
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))

- name: daily
type: time
@@ -86,10 +86,6 @@ jobs:
file: pinniped-ci/pipelines/shared-tasks/create-kind-node-builder-vm/task.yml
image: gcloud-image
params:
SHARED_VPC_PROJECT: ((shared-vpc-project))
SUBNET_REGION: ((subnet-region))
SUBNET_NAME: ((instances-subnet-name))
DISK_IMAGES_PROJECT: ((disk-images-gcp-project-name))
<<: *gcp_account_params
- task: build-kind-node-image
timeout: 90m

File diff suppressed because it is too large
@@ -73,16 +73,15 @@ meta:

# GKE account info and which zone the clusters should be created in and deleted from.
gke_account_params: &gke_account_params
# CLUSTER_ZONE: us-west1-c
CLUSTER_REGION: us-west1
CLUSTER_ZONE: us-central1-c
GCP_PROJECT: ((gcp-project-name))
GCP_SERVICE_ACCOUNT: ((gcp-instance-admin-username))
GCP_JSON_KEY: ((gcp-instance-admin-json-key))
GCP_SERVICE_ACCOUNT: ((gke-test-pool-manager-username))
GCP_JSON_KEY: ((gke-test-pool-manager-json-key))

# GCP account info and which zone the workers should be created in and deleted from.
gcp_account_params: &gcp_account_params
INSTANCE_ZONE: us-west1-a # which zone the kind worker VMs should be created in and deleted from
GCP_ZONE: us-west1-a
INSTANCE_ZONE: us-central1-b # which zone the kind worker VMs should be created in and deleted from
GCP_ZONE: us-central1-b
GCP_PROJECT: ((gcp-project-name))
GCP_USERNAME: ((gcp-instance-admin-username))
GCP_JSON_KEY: ((gcp-instance-admin-json-key))
@@ -92,10 +91,10 @@ meta:
image: integration-test-runner-image
timeout: 15m
params:
GCS_BUCKET: pinniped-ci-logs
GCS_BUCKET: pinniped-ci-archive
GCP_PROJECT: ((gcp-project-name))
GCP_USERNAME: ((gcp-instance-admin-username))
GCP_JSON_KEY: ((gcp-instance-admin-json-key))
GCP_USERNAME: ((gcp-cluster-diagnostic-uploader-username))
GCP_JSON_KEY: ((gcp-cluster-diagnostic-uploaded-json-key))

# Decides which specific patch versions of k8s we would like to deploy when creating kind cluster workers.
# It should be safe to update the patch version numbers here whenever new versions come out.
@@ -106,8 +105,8 @@ meta:
# so always check the tags using the above link.
kube_version_v1-21-x: &kube_version_v1-21-x
KUBE_VERSION: v1.21.14
kube_version_v1-35-x: &kube_version_v1-35-x
KUBE_VERSION: v1.35.0
kube_version_v1-33-x: &kube_version_v1-33-x
KUBE_VERSION: v1.33.0
kube_version_k8s-main: &kube_version_k8s-main
KUBE_VERSION: "k8s-main"
KIND_NODE_IMAGE: "ghcr.io/pinniped-ci-bot/kind-node-image:latest"
@@ -116,7 +115,7 @@ meta:
oldest_kind_kube_version: &oldest_kind_kube_version
<<: *kube_version_v1-21-x
latest_kind_kube_version: &latest_kind_kube_version
<<: *kube_version_v1-35-x
<<: *kube_version_v1-33-x

okta_integration_env_vars: &okta_integration_env_vars
OKTA_CLI_CALLBACK: ((okta-cli-callback))
@@ -138,7 +137,6 @@ meta:
JUMPCLOUD_LDAP_BIND_ACCOUNT_PASSWORD: ((jumpcloud-ldap-bind-account-password))
JUMPCLOUD_LDAP_USERS_SEARCH_BASE: ((jumpcloud-ldap-users-search-base))
JUMPCLOUD_LDAP_GROUPS_SEARCH_BASE: ((jumpcloud-ldap-groups-search-base))
JUMPCLOUD_LDAP_GROUPS_SEARCH_FILTER: ((jumpcloud-ldap-groups-search-filter))
JUMPCLOUD_LDAP_USER_DN: ((jumpcloud-ldap-user-dn))
JUMPCLOUD_LDAP_USER_CN: ((jumpcloud-ldap-user-cn))
JUMPCLOUD_LDAP_USER_PASSWORD: ((jumpcloud-ldap-user-password))
@@ -150,25 +148,6 @@ meta:
JUMPCLOUD_LDAP_EXPECTED_DIRECT_GROUPS_CN: ((jumpcloud-ldap-expected-direct-groups-cn))
JUMPCLOUD_LDAP_EXPECTED_DIRECT_POSIX_GROUPS_CN: ((jumpcloud-ldap-expected-direct-posix-groups-cn))

okta_ldap_integration_env_vars: &okta_ldap_integration_env_vars
OKTA_LDAP_HOST: ((okta-ldap-host))
OKTA_LDAP_STARTTLS_ONLY_HOST: ((okta-ldap-start-tls-only-host))
OKTA_LDAP_BIND_ACCOUNT_USERNAME: ((okta-ldap-bind-account-username))
OKTA_LDAP_BIND_ACCOUNT_PASSWORD: ((okta-ldap-bind-account-password))
OKTA_LDAP_USERS_SEARCH_BASE: ((okta-ldap-users-search-base))
OKTA_LDAP_GROUPS_SEARCH_BASE: ((okta-ldap-groups-search-base))
OKTA_LDAP_GROUPS_SEARCH_FILTER: ((okta-ldap-groups-search-filter))
OKTA_LDAP_USER_DN: ((okta-ldap-user-dn))
OKTA_LDAP_USER_CN: ((okta-ldap-user-cn))
OKTA_LDAP_USER_PASSWORD: ((okta-ldap-user-password))
OKTA_LDAP_USER_UNIQUE_ID_ATTRIBUTE_NAME: ((okta-ldap-user-unique-id-attribute-name))
OKTA_LDAP_USER_UNIQUE_ID_ATTRIBUTE_VALUE: ((okta-ldap-user-unique-id-attribute-value))
OKTA_LDAP_USER_EMAIL_ATTRIBUTE_NAME: ((okta-ldap-user-email-attribute-name))
OKTA_LDAP_USER_EMAIL_ATTRIBUTE_VALUE: ((okta-ldap-user-email-attribute-value))
OKTA_LDAP_EXPECTED_DIRECT_GROUPS_DN: ((okta-ldap-expected-direct-groups-dn))
OKTA_LDAP_EXPECTED_DIRECT_GROUPS_CN: ((okta-ldap-expected-direct-groups-cn))
OKTA_LDAP_EXPECTED_DIRECT_POSIX_GROUPS_CN: ((okta-ldap-expected-direct-posix-groups-cn))

active_directory_integration_env_vars: &active_directory_integration_env_vars
TEST_ACTIVE_DIRECTORY: "yes"
AWS_AD_HOST: ((aws-ad-host))
@@ -222,7 +201,7 @@ resources:
icon: source-pull
check_every: 1m
source:
repository: vmware/pinniped
repository: vmware-tanzu/pinniped
access_token: ((ci-bot-access-token-with-repo-status-permission))
disable_forks: false
base_branch: main
@@ -238,9 +217,9 @@ resources:
type: git
icon: github
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))

- name: ci-build-image
type: registry-image
@@ -329,6 +308,42 @@ resources:
username: ((ci-ghcr-puller-username))
password: ((ci-ghcr-puller-token))

- name: k8s-code-generator-1.26-image
type: registry-image
icon: docker
check_every: 3m
source:
repository: ((ci-ghcr-registry))/k8s-code-generator-1.26
username: ((ci-ghcr-puller-username))
password: ((ci-ghcr-puller-token))

- name: k8s-code-generator-1.27-image
type: registry-image
icon: docker
check_every: 3m
source:
repository: ((ci-ghcr-registry))/k8s-code-generator-1.27
username: ((ci-ghcr-puller-username))
password: ((ci-ghcr-puller-token))

- name: k8s-code-generator-1.28-image
type: registry-image
icon: docker
check_every: 3m
source:
repository: ((ci-ghcr-registry))/k8s-code-generator-1.28
username: ((ci-ghcr-puller-username))
password: ((ci-ghcr-puller-token))

- name: k8s-code-generator-1.29-image
type: registry-image
icon: docker
check_every: 3m
source:
repository: ((ci-ghcr-registry))/k8s-code-generator-1.29
username: ((ci-ghcr-puller-username))
password: ((ci-ghcr-puller-token))

- name: k8s-code-generator-1.30-image
type: registry-image
icon: docker
@@ -365,24 +380,6 @@ resources:
username: ((ci-ghcr-puller-username))
password: ((ci-ghcr-puller-token))

- name: k8s-code-generator-1.34-image
type: registry-image
icon: docker
check_every: 3m
source:
repository: ((ci-ghcr-registry))/k8s-code-generator-1.34
username: ((ci-ghcr-puller-username))
password: ((ci-ghcr-puller-token))

- name: k8s-code-generator-1.35-image
type: registry-image
icon: docker
check_every: 3m
source:
repository: ((ci-ghcr-registry))/k8s-code-generator-1.35
username: ((ci-ghcr-puller-username))
password: ((ci-ghcr-puller-token))

jobs:

- name: start
@@ -466,12 +463,14 @@ jobs:
version: every
passed: [ start ]
- get: pinniped-ci
- get: k8s-code-generator-1.26-image
- get: k8s-code-generator-1.27-image
- get: k8s-code-generator-1.28-image
- get: k8s-code-generator-1.29-image
- get: k8s-code-generator-1.30-image
- get: k8s-code-generator-1.31-image
- get: k8s-code-generator-1.32-image
- get: k8s-code-generator-1.33-image
- get: k8s-code-generator-1.34-image
- get: k8s-code-generator-1.35-image
- { <<: *pr-status-on-pending, params: { <<: *pr-status-on-pending-params, context: verify-codegen } }
- in_parallel:
- task: verify-go-mod-tidy
@@ -482,6 +481,34 @@ jobs:
timeout: 20m
<<: *pinniped-pr-input-mapping
file: pinniped-ci/pipelines/shared-tasks/run-verify-go-generate/task.yml
- task: codegen-1.26
timeout: 20m
<<: *pinniped-pr-input-mapping
file: pinniped-ci/pipelines/shared-tasks/run-verify-codegen/task.yml
image: k8s-code-generator-1.26-image
params:
KUBE_MINOR_VERSION: "1.26"
- task: codegen-1.27
timeout: 20m
<<: *pinniped-pr-input-mapping
file: pinniped-ci/pipelines/shared-tasks/run-verify-codegen/task.yml
image: k8s-code-generator-1.27-image
params:
KUBE_MINOR_VERSION: "1.27"
- task: codegen-1.28
timeout: 20m
<<: *pinniped-pr-input-mapping
file: pinniped-ci/pipelines/shared-tasks/run-verify-codegen/task.yml
image: k8s-code-generator-1.28-image
params:
KUBE_MINOR_VERSION: "1.28"
- task: codegen-1.29
timeout: 20m
<<: *pinniped-pr-input-mapping
file: pinniped-ci/pipelines/shared-tasks/run-verify-codegen/task.yml
image: k8s-code-generator-1.29-image
params:
KUBE_MINOR_VERSION: "1.29"
- task: codegen-1.30
timeout: 20m
<<: *pinniped-pr-input-mapping
@@ -510,20 +537,6 @@ jobs:
image: k8s-code-generator-1.33-image
params:
KUBE_MINOR_VERSION: "1.33"
- task: codegen-1.34
timeout: 20m
<<: *pinniped-pr-input-mapping
file: pinniped-ci/pipelines/shared-tasks/run-verify-codegen/task.yml
image: k8s-code-generator-1.34-image
params:
KUBE_MINOR_VERSION: "1.34"
- task: codegen-1.35
timeout: 20m
<<: *pinniped-pr-input-mapping
file: pinniped-ci/pipelines/shared-tasks/run-verify-codegen/task.yml
image: k8s-code-generator-1.35-image
params:
KUBE_MINOR_VERSION: "1.35"

- name: unit-test
on_success: { <<: *pr-status-on-success, params: { <<: *pr-status-on-success-params, context: unit-test } }
@@ -572,7 +585,7 @@ jobs:
type: registry-image
source:
repository: golang
tag: '1.25.5'
tag: '1.24.3'
inputs:
- name: pinniped-pr
outputs:
@@ -602,9 +615,6 @@ jobs:
tag: alpine
inputs:
- name: pinniped-modules
params:
SONATYPE_API_KEY: ((sonatype-api-key))
SONATYPE_USERNAME: ((sonatype-username))
run:
path: 'sh'
args:
@@ -633,10 +643,7 @@ jobs:

EOF

cat pinniped-modules/modules.json | nancy sleuth \
--exclude-vulnerability-file=exclusions.txt \
--token ${SONATYPE_API_KEY} \
--username ${SONATYPE_USERNAME}
nancy sleuth --exclude-vulnerability-file=exclusions.txt < pinniped-modules/modules.json

- name: run-go-vuln-scan
on_success: { <<: *pr-status-on-success, params: { <<: *pr-status-on-success-params, context: run-go-vuln-scan } }
@@ -1209,14 +1216,13 @@ jobs:
# We don't need to run these on every version of Kubernetes for Kind in this pipeline, so we choose to run
# them on one version to get some coverage.
<<: *okta_integration_env_vars
# The following Okta LDAP params will cause the integration tests to use Okta LDAP instead of OpenLDAP.
# The following Jumpcloud params will cause the integration tests to use Jumpcloud instead of OpenLDAP.
# We don't need to run these on every version of Kubernetes for Kind in this pipeline, so we choose to run
# them on one version to get some coverage.
<<: *okta_ldap_integration_env_vars
<<: *jumpcloud_integration_env_vars
# The following AD params enable the ActiveDirectory integration tests. We don't need to run these on every
# version of Kubernetes for Kind in this pipeline, so we choose to run them on one version to get some coverage.
# TODO: bring this back with a new AD server
# <<: *active_directory_integration_env_vars
<<: *active_directory_integration_env_vars
# The following params enable the GitHub integration tests. We don't need to run these on every
# version of Kubernetes for Kind in this pipeline, so we choose to run them on one version to get some coverage.
<<: *github_integration_env_vars
@@ -1361,8 +1367,7 @@ jobs:
# The following AD params enable the ActiveDirectory integration tests. We don't need to run these on every
# version of Kubernetes for Kind in this pipeline, but it is useful to know if we can communicate with our
# AD server when using FIPS cipher suites.
# TODO: bring this back with a new AD server
# <<: *active_directory_integration_env_vars
<<: *active_directory_integration_env_vars
# The following params enable the GitHub integration tests. We don't need to run these on every
# version of Kubernetes for Kind in this pipeline, but it is useful to know if we can communicate with
# GitHub when using FIPS cipher suites.
@@ -1819,7 +1824,6 @@ jobs:
on_error: { <<: *pr-status-on-error, params: { <<: *pr-status-on-error-params, context: integration-test-gke-rapid } }
on_abort: { <<: *pr-status-on-abort, params: { <<: *pr-status-on-abort-params, context: integration-test-gke-rapid } }
public: true # all logs are publicly visible
serial: true # since we need to choose a subnet, we can't run this in parallel anymore
plan:
- in_parallel:
- get: pinniped-pr
@@ -1842,10 +1846,6 @@ jobs:
image: k8s-app-deployer-image
params:
GKE_CHANNEL: rapid
SHARED_VPC_PROJECT: ((shared-vpc-project))
SHARED_VPC_NAME: ((shared-vpc-name))
SUBNET_REGION: ((subnet-region))
SUBNET_NAME: ((gke-subnet-name-3)) # globally unique to this job
<<: *gke_account_params
- task: pre-warm-cluster
timeout: 10m
@@ -1884,7 +1884,6 @@ jobs:
ensure:
task: remove-cluster
timeout: 10m
attempts: 5
file: pinniped-ci/pipelines/shared-tasks/remove-gke-cluster/task.yml
image: k8s-app-deployer-image
input_mapping:
@@ -1,4 +1,4 @@
# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

display:
@@ -56,23 +56,23 @@ resources:
type: git
icon: github
source:
uri: https://github.com/vmware/pinniped.git
uri: https://github.com/vmware-tanzu/pinniped.git
branch: main

- name: pinniped-ci
type: git
icon: github
source:
uri: https://github.com/vmware/pinniped.git
uri: git@github.com:vmware-tanzu/pinniped.git
branch: ci
username: ((ci-bot-access-token-with-read-only-public-repos))
private_key: ((source-repo-deploy-key))

- name: pinniped-latest-release-image
type: registry-image
icon: docker
check_every: 10m
source:
repository: ghcr.io/vmware/pinniped/pinniped-server
repository: ghcr.io/vmware-tanzu/pinniped/pinniped-server
tag: latest

- name: pinniped-latest-main-image
@@ -173,9 +173,6 @@ jobs:
tag: alpine
inputs:
- name: pinniped-modules
params:
SONATYPE_API_KEY: ((sonatype-api-key))
SONATYPE_USERNAME: ((sonatype-username))
run:
path: 'sh'
args:
@@ -198,10 +195,7 @@ jobs:
CVE-2020-8561
EOF

cat pinniped-modules/modules.json | nancy sleuth \
--exclude-vulnerability-file=exclusions.txt \
--token ${SONATYPE_API_KEY} \
--username ${SONATYPE_USERNAME}
nancy sleuth --exclude-vulnerability-file=exclusions.txt < pinniped-modules/modules.json

- name: trivy-release
public: true # all logs are publicly visible
@@ -269,11 +263,8 @@ jobs:
image: gh-cli-image
file: pinniped-ci/pipelines/shared-tasks/create-or-update-pr/task.yml
params:
DEPLOY_KEY: ((source-repo-deploy-key))
GH_TOKEN: ((ci-bot-access-token-with-public-repo-write-permission))
BRANCH: "pinny/bump-deps"
COMMIT_MESSAGE: "Bump dependencies"
PR_TITLE: "Bump dependencies"
PR_BODY: "Automatically bumped all go.mod direct dependencies and/or images in dockerfiles."
input_mapping:
pinniped: pinniped-out
@@ -42,7 +42,7 @@ set -euo pipefail
# - $DEPLOY_LOCAL_USER_AUTHENTICATOR, when set to "yes", will deploy and use the
# local-user-authenticator instead of using the TMC webhook authenticator.
# - $DEPLOY_TEST_TOOLS will deploy the squid proxy, Dex, and OpenLDAP into the cluster.
# If the OKTA_* and JUMPCLOUD_*/OKTA_LDAP* variables are not present, then Dex and OpenLDAP
# If the OKTA_* and JUMPCLOUD_* variables are not present, then Dex and OpenLDAP
# will be configured for the integration tests.
# - To use Okta instead of Dex, use the variables $OKTA_ISSUER, $OKTA_CLI_CLIENT_ID,
# $OKTA_CLI_CALLBACK, $OKTA_ADDITIONAL_SCOPES, $OKTA_USERNAME_CLAIM, $OKTA_GROUPS_CLAIM,
@@ -51,28 +51,19 @@ set -euo pipefail
# - To use Jumpcloud instead of OpenLDAP, use the variables $JUMPCLOUD_LDAP_HOST,
# $JUMPCLOUD_LDAP_STARTTLS_ONLY_HOST,
# $JUMPCLOUD_LDAP_BIND_ACCOUNT_USERNAME, $JUMPCLOUD_LDAP_BIND_ACCOUNT_PASSWORD,
# $JUMPCLOUD_LDAP_USERS_SEARCH_BASE, $JUMPCLOUD_LDAP_GROUPS_SEARCH_BASE, $JUMPCLOUD_LDAP_GROUPS_SEARCH_FILTER,
# $JUMPCLOUD_LDAP_USERS_SEARCH_BASE, $JUMPCLOUD_LDAP_GROUPS_SEARCH_BASE,
# $JUMPCLOUD_LDAP_USER_DN, $JUMPCLOUD_LDAP_USER_CN, $JUMPCLOUD_LDAP_USER_PASSWORD,
# $JUMPCLOUD_LDAP_USER_UNIQUE_ID_ATTRIBUTE_NAME, $JUMPCLOUD_LDAP_USER_UNIQUE_ID_ATTRIBUTE_VALUE,
# $JUMPCLOUD_LDAP_USER_EMAIL_ATTRIBUTE_NAME, $JUMPCLOUD_LDAP_USER_EMAIL_ATTRIBUTE_VALUE,
# $JUMPCLOUD_LDAP_EXPECTED_DIRECT_GROUPS_DN, $JUMPCLOUD_LDAP_EXPECTED_DIRECT_POSIX_GROUPS_CN,
# and $JUMPCLOUD_LDAP_EXPECTED_DIRECT_GROUPS_CN to configure the LDAP tests.
# - To use Okta LDAP instead of OpenLDAP, use the variables $OKTA_LDAP_HOST,
# $OKTA_LDAP_STARTTLS_ONLY_HOST,
# $OKTA_LDAP_BIND_ACCOUNT_USERNAME, $OKTA_LDAP_BIND_ACCOUNT_PASSWORD,
# $OKTA_LDAP_USERS_SEARCH_BASE, $OKTA_LDAP_GROUPS_SEARCH_BASE, $OKTA_LDAP_GROUPS_SEARCH_FILTER,
# $OKTA_LDAP_USER_DN, $OKTA_LDAP_USER_CN, $OKTA_LDAP_USER_PASSWORD,
# $OKTA_LDAP_USER_UNIQUE_ID_ATTRIBUTE_NAME, $OKTA_LDAP_USER_UNIQUE_ID_ATTRIBUTE_VALUE,
# $OKTA_LDAP_USER_EMAIL_ATTRIBUTE_NAME, $OKTA_LDAP_USER_EMAIL_ATTRIBUTE_VALUE,
# $OKTA_LDAP_EXPECTED_DIRECT_GROUPS_DN, $OKTA_LDAP_EXPECTED_DIRECT_POSIX_GROUPS_CN,
# and $OKTA_LDAP_EXPECTED_DIRECT_GROUPS_CN to configure the LDAP tests.
# - $FIREWALL_IDPS, when set to "yes" will add NetworkPolicies to effectively firewall the Concierge
# and Supervisor pods such that they need to use the Squid proxy server to reach several of the IDPs.
# Note that NetworkPolicy is not supported on all flavors of Kube, but can be enabled on GKE by using
# `--enable-network-policy` when creating the GKE cluster, and is supported in recent versions of Kind.
# - $TEST_ACTIVE_DIRECTORY determines whether to test against AWS Managed Active
# Directory. Note that there's no "local" equivalent-- for OIDC we use Dex's internal
# user store or Okta, for LDAP we deploy OpenLDAP or use Jumpcloud/Okta LDAP,
# user store or Okta, for LDAP we deploy OpenLDAP or use Jumpcloud,
# but for AD there is only the hosted version.
# When set, the tests are configured with the variables
# $AWS_AD_HOST, $AWS_AD_DOMAIN, $AWS_AD_BIND_ACCOUNT_USERNAME, $AWS_AD_BIND_ACCOUNT_PASSWORD,
@@ -98,12 +89,19 @@ set -euo pipefail
# to choose its own IP address, and dynamically register that address as the name
# specified in $SUPERVISOR_LOAD_BALANCER_DNS_NAME using the Cloud DNS service.
# - $SUPERVISOR_INGRESS, when set to "yes", will deploy the Supervisor with a
# ClusterIP Service defined and create an internal Ingress connected to that Service.
# NodePort Service defined and create an Ingress connected to that Service.
# When set to "yes" the following additional variables are expected:
# - $SUPERVISOR_INGRESS_STATIC_IP_NAME: The name of the static IP resource from the
# underlying cloud infrastructure platform. Required when $SUPERVISOR_INGRESS is "yes".
# underlying cloud infrastructure platform. Optional.
# - $SUPERVISOR_INGRESS_DNS_NAME: The DNS hostname associated with the
# ingress' IP address. Required when $SUPERVISOR_INGRESS is "yes".
# - $SUPERVISOR_INGRESS_PATH_PATTERN: The path that will be set in the Ingress object
# (e.g., "/", "/*"; this depends on what is supported by the underlying platform).
# Required when $SUPERVISOR_INGRESS is "yes".
# - If the $SUPERVISOR_INGRESS_DNS_NAME is given without the
# $SUPERVISOR_INGRESS_STATIC_IP_NAME, then allow the ingress service
# to choose its own IP address, and dynamically register that address as the name
# specified in $SUPERVISOR_INGRESS_DNS_NAME using the Cloud DNS service.
# - When neither $SUPERVISOR_LOAD_BALANCER nor $SUPERVISOR_INGRESS is set, then we will use
# nodeport services to make the supervisor available. In this case you may specify
# $PINNIPED_SUPERVISOR_HTTP_NODEPORT and $PINNIPED_SUPERVISOR_HTTPS_NODEPORT if you
@@ -178,6 +176,64 @@ function print_redacted_manifest() {
print_or_redact_doc "$doc"
}
function update_gcloud_dns_record() {
  if [[ -z "${PINNIPED_GCP_PROJECT:-}" ]]; then
    echo "PINNIPED_GCP_PROJECT env var must be set when using update_gcloud_dns_record"
    exit 1
  fi

  local dns_name=$1
  local new_ip=$2
  local dns_record_name="${dns_name}."
  local dns_zone="pinniped-dev"
  local dns_project="$PINNIPED_GCP_PROJECT"

  # Login to gcloud CLI
  gcloud auth activate-service-account "$GKE_USERNAME" --key-file <(echo "$GKE_JSON_KEY") --project "$dns_project"

  # Get the current value of the DNS A record.
  # We assume that this record already exists because it was manually created.
  # We also assume in the transaction commands below that it was created with a TTL of 30 seconds.
  current_dns_record_ip=$(gcloud dns record-sets list --zone "$dns_zone" \
    --project "$dns_project" --name "$dns_record_name" --format json |
    jq -r ".[] | select(.name ==\"${dns_record_name}\") | .rrdatas[0]")

  if [[ "$current_dns_record_ip" == "$new_ip" ]]; then
    echo "No update needed: DNS record $dns_record_name was already set to $new_ip"
  else
    echo "Changing DNS record $dns_record_name from $current_dns_record_ip to $new_ip ..."

    # Updating a DNS record with gcloud must be done with a remove and an add wrapped in a transaction.
    gcloud dns record-sets transaction start --zone "$dns_zone" --project "$dns_project"
    gcloud dns record-sets transaction remove "$current_dns_record_ip" --name "$dns_name" \
      --ttl "30" --type "A" --zone "$dns_zone" --project "$dns_project"
    gcloud dns record-sets transaction add "$new_ip" --name "$dns_name" \
      --ttl "30" --type "A" --zone "$dns_zone" --project "$dns_project"
    change_id=$(gcloud dns record-sets transaction execute --zone "$dns_zone" --project "$dns_project" --format json | jq -r '.id')

    # Wait for that transaction to commit. This is usually quick.
    change_status="not-done"
    while [[ "$change_status" != "done" ]]; do
      sleep 3
      change_status=$(gcloud dns record-sets changes describe "$change_id" \
        --zone "$dns_zone" --project "$dns_project" --format json | jq -r '.status')
      echo "Waiting for change $change_id to have status 'done'. Current status: $change_status"
    done

    # Wait for DNS propagation. The TTL is 30 seconds, so this shouldn't take too long.
    echo "Waiting for new IP address $new_ip to appear in the result of a local DNS query. This may take a few minutes..."
    while true; do
      dig_result=$(dig +short "$dns_name")
      echo "dig result for $dns_name: $dig_result"
      if [[ "$dig_result" == "$new_ip" ]]; then
        echo "New IP address has finished DNS propagation. Done with DNS update!"
        break
      fi
      sleep 5
    done
  fi
}
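Both waits in `update_gcloud_dns_record` above (for the transaction to commit, and for the new A record to propagate) follow the same poll-until-true pattern. The helper below is an illustrative standalone generalization of that pattern, not part of the script itself; the `retry_until` name, the attempt budget, and the demo `check` function are all hypothetical:

```shell
# Poll a command until it succeeds or the attempt budget runs out.
# Usage: retry_until <max_attempts> <sleep_seconds> <command...>
retry_until() {
  max="$1"; pause="$2"; shift 2
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "gave up after $attempt attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep "$pause"
  done
  echo "succeeded on attempt $attempt"
}

# Demo: a check that starts passing on its third invocation.
count_file=$(mktemp)
echo 0 > "$count_file"
check() {
  n=$(($(cat "$count_file") + 1))
  echo "$n" > "$count_file"
  [ "$n" -ge 3 ]
}
retry_until 10 0 check
```

The real script uses fixed `sleep 3` / `sleep 5` intervals with no attempt cap; a bounded retry like this is one way to avoid hanging a CI task forever if propagation never completes.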
if [[ "${TMC_API_TOKEN:-}" == "" && "${DEPLOY_LOCAL_USER_AUTHENTICATOR:-no}" != "yes" ]]; then
echo "Must use either \$TMC_API_TOKEN or \$DEPLOY_LOCAL_USER_AUTHENTICATOR"
exit 1
@@ -305,7 +361,7 @@ EOF
# Also annotate the service so that GKE ingress knows to use HTTP2 for the backend connection.
cat <<EOF >>/tmp/add-annotations-for-gke-ingress-overlay.yaml
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Service", "metadata":{"name":"${supervisor_app_name}-clusterip"}}), expects=1
#@overlay/match by=overlay.subset({"kind": "Service", "metadata":{"name":"${supervisor_app_name}-nodeport"}}), expects=1
---
metadata:
annotations:
@@ -313,20 +369,6 @@ metadata:
cloud.google.com/app-protocols: '{"https":"HTTP2"}'
#@overlay/match missing_ok=True
cloud.google.com/backend-config: '{"default":"healthcheck-backendconfig"}'
#@overlay/match missing_ok=True
cloud.google.com/neg: '{"ingress": true}'
EOF

# Save this file for possible later use. When we want to make a Supervisor load balancer service,
# we need to make sure that we tell it that it should use an internal IP address.
cat <<EOF >>/tmp/add-annotations-for-supervisor-lb-service-overlay.yaml
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Service", "metadata":{"name":"${supervisor_app_name}-loadbalancer"}}), expects=1
---
metadata:
annotations:
#@overlay/match missing_ok=True
networking.gke.io/load-balancer-type: "Internal"
EOF
if [[ "${DEPLOY_LOCAL_USER_AUTHENTICATOR:-no}" == "yes" ]]; then
@@ -453,7 +495,6 @@ metadata:
app: ${supervisor_app_name}
annotations:
kapp.k14s.io/disable-default-label-scoping-rules: ""
networking.gke.io/load-balancer-type: "Internal"
spec:
type: LoadBalancer
selector:
@@ -482,7 +523,6 @@ metadata:
app: dex
annotations:
kapp.k14s.io/disable-default-label-scoping-rules: ""
networking.gke.io/load-balancer-type: "Internal"
spec:
type: LoadBalancer
selector:
@@ -632,7 +672,6 @@ if [[ "${DEPLOY_TEST_TOOLS:-no}" == "yes" ]]; then
pinniped_test_ldap_bind_account_password=password
pinniped_test_ldap_users_search_base="ou=users,dc=pinniped,dc=dev"
pinniped_test_ldap_groups_search_base="ou=groups,dc=pinniped,dc=dev"
pinniped_test_ldap_groups_search_filter=""
pinniped_test_ldap_user_dn="cn=pinny,ou=users,dc=pinniped,dc=dev"
pinniped_test_ldap_user_cn="pinny"
pinniped_test_ldap_user_password=${ldap_test_password}
@@ -692,7 +731,6 @@ if [[ "${JUMPCLOUD_LDAP_HOST:-no}" != "no" ]]; then
pinniped_test_ldap_bind_account_password="$JUMPCLOUD_LDAP_BIND_ACCOUNT_PASSWORD"
pinniped_test_ldap_users_search_base="$JUMPCLOUD_LDAP_USERS_SEARCH_BASE"
pinniped_test_ldap_groups_search_base="$JUMPCLOUD_LDAP_GROUPS_SEARCH_BASE"
pinniped_test_ldap_groups_search_filter="$JUMPCLOUD_LDAP_GROUPS_SEARCH_FILTER"
pinniped_test_ldap_user_dn="$JUMPCLOUD_LDAP_USER_DN"
pinniped_test_ldap_user_cn="$JUMPCLOUD_LDAP_USER_CN"
pinniped_test_ldap_user_password="$JUMPCLOUD_LDAP_USER_PASSWORD"
@@ -707,31 +745,6 @@ if [[ "${JUMPCLOUD_LDAP_HOST:-no}" != "no" ]]; then
pinniped_test_ldap_expected_indirect_groups_cn=""
fi
# Whether or not the tools namespace is deployed, we can configure the integration
# tests to use Jumpcloud instead of Okta LDAP as the LDAP provider.
if [[ "${OKTA_LDAP_HOST:-no}" != "no" ]]; then
pinniped_test_ldap_host="$OKTA_LDAP_HOST"
pinniped_test_ldap_starttls_only_host="$OKTA_LDAP_STARTTLS_ONLY_HOST"
pinniped_test_ldap_ldaps_ca_bundle=""
pinniped_test_ldap_bind_account_username="$OKTA_LDAP_BIND_ACCOUNT_USERNAME"
pinniped_test_ldap_bind_account_password="$OKTA_LDAP_BIND_ACCOUNT_PASSWORD"
pinniped_test_ldap_users_search_base="$OKTA_LDAP_USERS_SEARCH_BASE"
pinniped_test_ldap_groups_search_base="$OKTA_LDAP_GROUPS_SEARCH_BASE"
pinniped_test_ldap_groups_search_filter="$OKTA_LDAP_GROUPS_SEARCH_FILTER"
pinniped_test_ldap_user_dn="$OKTA_LDAP_USER_DN"
pinniped_test_ldap_user_cn="$OKTA_LDAP_USER_CN"
pinniped_test_ldap_user_password="$OKTA_LDAP_USER_PASSWORD"
pinniped_test_ldap_user_unique_id_attribute_name="$OKTA_LDAP_USER_UNIQUE_ID_ATTRIBUTE_NAME"
pinniped_test_ldap_user_unique_id_attribute_value="$OKTA_LDAP_USER_UNIQUE_ID_ATTRIBUTE_VALUE"
pinniped_test_ldap_user_email_attribute_name="$OKTA_LDAP_USER_EMAIL_ATTRIBUTE_NAME"
pinniped_test_ldap_user_email_attribute_value="$OKTA_LDAP_USER_EMAIL_ATTRIBUTE_VALUE"
pinniped_test_ldap_expected_direct_groups_dn="$OKTA_LDAP_EXPECTED_DIRECT_GROUPS_DN"
pinniped_test_ldap_expected_indirect_groups_dn=""
pinniped_test_ldap_expected_direct_groups_cn="$OKTA_LDAP_EXPECTED_DIRECT_GROUPS_CN"
pinniped_test_ldap_expected_direct_posix_groups_cn="$OKTA_LDAP_EXPECTED_DIRECT_POSIX_GROUPS_CN"
pinniped_test_ldap_expected_indirect_groups_cn=""
fi
if [[ "${TEST_ACTIVE_DIRECTORY:-no}" == "yes" ]]; then
# there's no way to test active directory locally... it has to be aws managed ad or nothing.
# this is a separate toggle from $DEPLOY_TEST_TOOLS so we can run against ad once in the pr pipeline
@@ -840,7 +853,6 @@ ytt --file . \
--data-value "log_level=debug" \
--data-value-yaml "custom_labels=$concierge_custom_labels" \
--data-value "discovery_url=$discovery_url" \
--data-value-yaml "impersonation_proxy_spec.service.annotations={'networking.gke.io/load-balancer-type': 'Internal', 'service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout': '4000'}" \
${concierge_optional_ytt_values[@]+"${concierge_optional_ytt_values[@]}"} \
>"$manifest"

@@ -854,7 +866,7 @@ echo
set -x
kapp deploy --yes --app "$concierge_app_name" --diff-changes --file "$manifest"

if ! { (($(kubectl version --output json | jq -r .serverVersion.major) == 1)) && (($(kubectl version --output json | jq -r .serverVersion.minor) < 27)); }; then
if ! { (($(kubectl version --output json | jq -r .serverVersion.major) == 1)) && (($(kubectl version --output json | jq -r .serverVersion.minor) < 19)); }; then
# Also perform a dry-run create with kubectl just to see if there are any validation errors.
# Skip this on very old clusters, since we use some API fields (like seccompProfile) which did not exist back then.
# You can still install on these clusters by using kapp or by using kubectl --validate=false.
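The version gate in the hunk above reads: run the extra `kubectl create --dry-run` validation unless the server reports version 1.x with minor below the cutoff. A minimal sketch of that comparison with the `kubectl`/`jq` parsing stubbed out (`should_validate` and its arguments are illustrative names, not part of the script; note that on some managed clusters the reported minor can carry a suffix like `27+`, which this sketch does not handle):

```shell
# Decide whether to run the extra kubectl dry-run validation,
# mirroring the gate: skip only when the server is 1.x with minor < 19.
should_validate() {
  major="$1"
  minor="$2"
  if [ "$major" -eq 1 ] && [ "$minor" -lt 19 ]; then
    return 1  # too old: some fields (e.g. seccompProfile) do not exist yet
  fi
  return 0
}

should_validate 1 18 && echo "1.18: validate" || echo "1.18: skip"
should_validate 1 27 && echo "1.27: validate" || echo "1.27: skip"
```

The commit lowers the cutoff from 27 to 19 for the Concierge manifest, since only the Supervisor CRDs depend on CEL validations (Kubernetes 1.23+) as noted further below.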
@@ -879,7 +891,8 @@ if [[ "${USE_LOAD_BALANCERS_FOR_DEX_AND_SUPERVISOR:-no}" != "yes" ]]; then
  fi
fi
if [[ "${SUPERVISOR_INGRESS:-no}" == "yes" ]]; then
  supervisor_ytt_service_flags+=("--data-value-yaml=service_https_clusterip_port=443")
  # even when we have functioning ingress, we need a TCP connection to the supervisor https port to test its TLS config
  supervisor_ytt_service_flags+=("--data-value-yaml=service_https_nodeport_port=443")
fi
if [[ "${SUPERVISOR_LOAD_BALANCER:-no}" == "no" && "${SUPERVISOR_INGRESS:-no}" == "no" ]]; then
  # When no specific service was requested for the supervisor, we assume we are running on

@@ -908,10 +921,6 @@ fi
if [[ "${SUPERVISOR_INGRESS:-no}" == "yes" && "$cluster_has_gke_backend_config" == "yes" ]]; then
  supervisor_optional_ytt_values+=("--file=/tmp/add-annotations-for-gke-ingress-overlay.yaml")
fi
if [[ "${USE_LOAD_BALANCERS_FOR_DEX_AND_SUPERVISOR:-no}" != "yes" && "${SUPERVISOR_LOAD_BALANCER:-no}" == "yes" ]]; then
  # When using the ytt templates to create a LB service, then also tell the service to use an internal IP.
  supervisor_optional_ytt_values+=("--file=/tmp/add-annotations-for-supervisor-lb-service-overlay.yaml")
fi

echo "Deploying the Supervisor app to the cluster..."
echo "Using ytt service flags:" "${supervisor_ytt_service_flags[@]}"

@@ -939,7 +948,7 @@ echo
set -x
kapp deploy --yes --app "$supervisor_app_name" --diff-changes --file "$manifest"

if ! { (($(kubectl version --output json | jq -r .serverVersion.major) == 1)) && (($(kubectl version --output json | jq -r .serverVersion.minor) < 27)); }; then
if ! { (($(kubectl version --output json | jq -r .serverVersion.major) == 1)) && (($(kubectl version --output json | jq -r .serverVersion.minor) < 23)); }; then
  # Also perform a dry-run create with kubectl just to see if there are any validation errors.
  # Skip this on very old clusters, since we use some API fields (like seccompProfile) which did not exist back then.
  # In the Supervisor CRDs we began to use CEL validations which were introduced in Kubernetes 1.23.

@@ -1036,6 +1045,12 @@ if [[ "${SUPERVISOR_LOAD_BALANCER:-no}" == "yes" ]]; then
  echo "Load balancer reported ingress: $ingress_json"
  ingress_ip=$(echo "$ingress_json" | jq -r '.ingress[0].ip')

  if [[ "${SUPERVISOR_LOAD_BALANCER_STATIC_IP:-}" == "" ]]; then
    # No static IP was provided, so the load balancer was allowed to choose its own IP.
    # Update the DNS record associated with $SUPERVISOR_LOAD_BALANCER_DNS_NAME to make it match the new IP.
    update_gcloud_dns_record "$SUPERVISOR_LOAD_BALANCER_DNS_NAME" "$ingress_ip"
  fi

  # Use the published ingress address for the integration test env vars below.
  supervisor_https_address="https://${SUPERVISOR_LOAD_BALANCER_DNS_NAME}:443"
elif [[ "${USE_LOAD_BALANCERS_FOR_DEX_AND_SUPERVISOR:-no}" == "yes" ]]; then
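`update_gcloud_dns_record` is a helper defined elsewhere in this script, so its body is not visible in this diff. As a rough sketch of what such a helper might wrap (the managed-zone name and TTL below are placeholders, and the `gcloud` command is echoed rather than executed):

```shell
# Hypothetical sketch of a DNS-update helper like update_gcloud_dns_record.
# It echoes the gcloud command instead of running it, so the --zone and --ttl
# values here are assumptions, not the real script's configuration.
update_dns_record() {
  local dns_name="$1" ip="$2"
  echo gcloud dns record-sets update "${dns_name}." \
    --type=A --ttl=300 --rrdatas="$ip" --zone=example-managed-zone
}

update_dns_record supervisor.example.com 10.1.2.3
```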
@@ -1142,7 +1157,17 @@ EOF
  kubectl get -n "$supervisor_namespace" secret "$ingress_tls_secret" -o jsonpath=\{.data.'tls\.crt'\} | base64 -d >"$ingress_tls_cert_file"
fi

# If a static IP name was provided then use it. Otherwise, don't include the annotation at all.
static_ip_annotation=""
if [[ "${SUPERVISOR_INGRESS_STATIC_IP_NAME:-}" != "" ]]; then
  static_ip_annotation="kubernetes.io/ingress.global-static-ip-name: ${SUPERVISOR_INGRESS_STATIC_IP_NAME}"
fi

if [[ "$cluster_has_gke_backend_config" == "yes" ]]; then
  # Get the nodePort port number that was dynamically assigned to the nodeport service.
  nodeport_service_port=$(kubectl get service -n "${supervisor_namespace}" "${supervisor_app_name}-nodeport" -o jsonpath='{.spec.ports[0].nodePort}')
  echo "${supervisor_app_name}-nodeport Service was assigned nodePort $nodeport_service_port"

  # Create or update a BackendConfig to configure the health checks that will be used by the Ingress for its backend Service.
  # The annotation already added to the Service by an overlay above tells the Service to use this BackendConfig.
  cat <<EOF | kubectl apply --wait -f -

@@ -1159,10 +1184,11 @@ spec:
    checkIntervalSec: 30
    healthyThreshold: 1
    unhealthyThreshold: 10
    port: ${nodeport_service_port}
EOF
fi

# Create or update an Ingress to sit in front of our supervisor-clusterip service.
# Create or update an Ingress to sit in front of our supervisor-nodeport service.
cat <<EOF | kubectl apply --wait -f -
apiVersion: networking.k8s.io/v1
kind: Ingress

@@ -1170,9 +1196,6 @@ metadata:
  name: ${supervisor_app_name}
  namespace: ${supervisor_namespace}
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
    kubernetes.io/ingress.class: "gce-internal"
    kubernetes.io/ingress.regional-static-ip-name: "${SUPERVISOR_INGRESS_STATIC_IP_NAME}"
    kubernetes.io/ingress.allow-http: "false"
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    # TODO Re-enable backend TLS cert verification once the Supervisor's default TLS cert is generated by automation in this script.

@@ -1180,10 +1203,11 @@ metadata:
    #nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
    #nginx.ingress.kubernetes.io/proxy-ssl-secret: ${supervisor_namespace}/${supervisor_app_name}-default-tls-certificate
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "off"
    ${static_ip_annotation}
spec:
  defaultBackend:
    service:
      name: ${supervisor_app_name}-clusterip
      name: ${supervisor_app_name}-nodeport
      port:
        number: 443
  tls:

@@ -1192,6 +1216,25 @@ spec:
    - ${SUPERVISOR_INGRESS_DNS_NAME}
EOF

# If no static IP was provided for the ingress, then register the dynamic IP of the ingress with the DNS provider.
if [[ "${SUPERVISOR_INGRESS_STATIC_IP_NAME:-}" == "" ]]; then
  # Wait for the ingress to get an IP
  ingress_json='{}'
  while [[ "$ingress_json" == '{}' ]]; do
    echo "Checking for ingress address..."
    sleep 1
    ingress_json=$(kubectl get ingress "${supervisor_app_name}" -n "$supervisor_namespace" -o json |
      jq -r '.status.loadBalancer')
  done

  echo "Ingress reported address: $ingress_json"
  ingress_ip=$(echo "$ingress_json" | jq -r '.ingress[0].ip')

  # No static IP was provided, so the load balancer was allowed to choose its own IP.
  # Update the DNS record associated with $SUPERVISOR_INGRESS_DNS_NAME to make it match the new IP.
  update_gcloud_dns_record "$SUPERVISOR_INGRESS_DNS_NAME" "$ingress_ip"
fi

# Wait for the Ingress frontend to be up and running. Wait forever... until this Concourse task times out.
healthz_via_ingress_url="https://${SUPERVISOR_INGRESS_DNS_NAME}/healthz"
echo "The Ingress TLS CA bundle is:"
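The `while` loop in this diff polls forever and relies on the Concourse task timeout to give up. The same pattern with an explicit attempt cap, using a stubbed status fetch standing in for the real `kubectl get ingress ... | jq` pipeline so it runs standalone:

```shell
# Stub standing in for: kubectl get ingress ... -o json | jq -r '.status.loadBalancer'
fetch_ingress_status() { echo '{"ingress":[{"ip":"10.9.8.7"}]}'; }

ingress_json='{}'
attempts=0
while [[ "$ingress_json" == '{}' ]]; do
  attempts=$((attempts + 1))
  if [[ "$attempts" -gt 60 ]]; then
    echo "ERROR: ingress never reported an address" >&2
    exit 1
  fi
  ingress_json="$(fetch_ingress_status)"
  sleep 0  # the real loop sleeps 1 second between polls
done
echo "Ingress reported address: $ingress_json"
```

Bounding the loop this way surfaces a clear error message instead of an opaque task timeout; the cap of 60 attempts is an arbitrary example value.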
@@ -1239,7 +1282,6 @@ export PINNIPED_TEST_LDAP_BIND_ACCOUNT_USERNAME='${pinniped_test_ldap_bind_accou
export PINNIPED_TEST_LDAP_BIND_ACCOUNT_PASSWORD='${pinniped_test_ldap_bind_account_password}'
export PINNIPED_TEST_LDAP_USERS_SEARCH_BASE='${pinniped_test_ldap_users_search_base}'
export PINNIPED_TEST_LDAP_GROUPS_SEARCH_BASE='${pinniped_test_ldap_groups_search_base}'
export PINNIPED_TEST_LDAP_GROUPS_SEARCH_FILTER='${pinniped_test_ldap_groups_search_filter}'
export PINNIPED_TEST_LDAP_USER_DN='${pinniped_test_ldap_user_dn}'
export PINNIPED_TEST_LDAP_USER_CN='${pinniped_test_ldap_user_cn}'
export PINNIPED_TEST_LDAP_USER_PASSWORD='${pinniped_test_ldap_user_password}'

@@ -1258,9 +1300,6 @@ export PINNIPED_TEST_CLI_OIDC_ISSUER_CA_BUNDLE='${test_cli_oidc_issuer_ca_bundle
export PINNIPED_TEST_CLI_OIDC_ISSUER='${test_cli_oidc_issuer}'
export PINNIPED_TEST_CLI_OIDC_PASSWORD='${test_cli_oidc_password}'
export PINNIPED_TEST_CLI_OIDC_USERNAME='${test_cli_oidc_username}'
export PINNIPED_TEST_CLI_OIDC_USERNAME_CLAIM='${test_supervisor_upstream_oidc_username_claim}'
export PINNIPED_TEST_CLI_OIDC_GROUPS_CLAIM='${test_supervisor_upstream_oidc_groups_claim}'
export PINNIPED_TEST_CLI_OIDC_EXPECTED_GROUPS='${test_supervisor_upstream_oidc_groups}'
export PINNIPED_TEST_SUPERVISOR_UPSTREAM_OIDC_CALLBACK_URL='${test_supervisor_upstream_oidc_callback_url}'
export PINNIPED_TEST_SUPERVISOR_UPSTREAM_OIDC_ADDITIONAL_SCOPES='${test_supervisor_upstream_oidc_additional_scopes}'
export PINNIPED_TEST_SUPERVISOR_UPSTREAM_OIDC_USERNAME_CLAIM='${test_supervisor_upstream_oidc_username_claim}'
@@ -3,7 +3,12 @@
# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

FROM golang:1.25.5-bookworm as build-env

# Using bullseye (debian 11) until google/cloud-sdk starts using bookworm (debian 12) because the
# test binaries built by this dockerfile are run in a container built by dockerfiles/integration-test-runner/Dockerfile
# which uses google/cloud-sdk as the base image. Mismatching debian versions causes the pinniped-integration-test
# built below to error upon execution complaining that the expected version of GLIBC is not found.
FROM golang:1.24.3-bullseye as build-env
WORKDIR /work
COPY . .
ARG GOPROXY

@@ -6,7 +6,11 @@
# we need a separate dockerfile for the fips test image so that the integration tests
# use the right ciphers etc.

FROM golang:1.25.5-bookworm as build-env
# Using bullseye (debian 11) until google/cloud-sdk starts using bookworm (debian 12) because the
# test binaries built by this dockerfile are run in a container built by dockerfiles/integration-test-runner/Dockerfile
# which uses google/cloud-sdk as the base image. Mismatching debian versions causes the pinniped-integration-test
# built below to error upon execution complaining that the expected version of GLIBC is not found.
FROM golang:1.24.3-bullseye as build-env
WORKDIR /work
COPY . .
ARG GOPROXY

@@ -7,7 +7,7 @@ image_resource:
  type: registry-image
  source:
    repository: golang
    tag: '1.25.5'
    tag: '1.24.3'
inputs:
- name: pinniped
- name: pinniped-ci
@@ -1,27 +1,12 @@
#!/usr/bin/env bash

# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

# This procedure is inspired by https://github.com/aojea/kind-images/blob/master/.circleci/config.yml

set -euo pipefail

# Put the original apt source list back.
sudo cp /etc/apt/sources.list.bak /etc/apt/sources.list

# Note that the sources.list.bak file should have this content for debian 11,
# noted here in case the file ever gets removed from the OS disk image:

# deb https://deb.debian.org/debian bullseye main
# deb-src https://deb.debian.org/debian bullseye main
# deb https://deb.debian.org/debian-security bullseye-security main
# deb-src https://deb.debian.org/debian-security bullseye-security main
# deb https://deb.debian.org/debian bullseye-updates main
# deb-src https://deb.debian.org/debian bullseye-updates main
# deb https://deb.debian.org/debian bullseye-backports main
# deb-src https://deb.debian.org/debian bullseye-backports main

# Choose the tag for the new image that we will build below.
full_repo="${PUSH_TO_IMAGE_REGISTRY}/${PUSH_TO_IMAGE_REPO}"
image_tag="${full_repo}:latest"
@@ -1,6 +1,6 @@
#!/usr/bin/env bash

# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

set -euo pipefail

@@ -17,61 +17,27 @@ gcloud auth activate-service-account \

# Create a temporary username because we can't ssh as root. Note that this username must be 32 characters or less.
ssh_user="kind-node-builder-$(openssl rand -hex 4)"
echo "ssh user will be ${ssh_user}"
ssh_dest="${ssh_user}@${instance_name}"
echo "ssh user@dest will be ${ssh_dest}"

# Make a private key for ssh.
# gcloud scp/ssh commands will interactively prompt to create an ssh key unless one already exists, so create one.
mkdir -p "$HOME/.ssh"
ssh_key_file="$HOME/.ssh/kind-node-builder-key"
ssh-keygen -t rsa -b 4096 -q -N "" -f "$ssh_key_file"

# When run in CI, the service account should not have permission to create project-wide keys, so explicitly add the
# key only to the specific VM instance (as VM metadata). We don't want to pollute the project-wide keys with these.
# See https://cloud.google.com/compute/docs/connect/add-ssh-keys#after-vm-creation for explanation of these commands.
# Note that this overwrites all ssh keys in the metadata. At the moment, these VMs have no ssh keys in the metadata
# upon creation, so it should always be okay to overwrite the empty value. However, if someday they need to have some
# initial ssh keys in the metadata for some reason, and if those keys need to be preserved for some reason, then
# these commands could be enhanced to instead read the keys, add to them, and write back the new list.
future_time="$(date --utc --date '+3 hours' '+%FT%T%z')"
echo \
  "${ssh_user}:$(cat "${ssh_key_file}.pub") google-ssh {\"userName\":\"${ssh_user}\",\"expireOn\":\"${future_time}\"}" \
  >/tmp/ssh-key-values
gcloud compute instances add-metadata "$instance_name" \
  --metadata-from-file ssh-keys=/tmp/ssh-key-values \
  --zone "$INSTANCE_ZONE" --project "$GCP_PROJECT"

# Get the IP so we can use regular ssh (not gcloud ssh), now that it has been set up.
gcloud_instance_ip=$(gcloud compute instances describe \
  --zone "$INSTANCE_ZONE" --project "$GCP_PROJECT" "${instance_name}" \
  --format='get(networkInterfaces[0].networkIP)')

ssh_dest="${ssh_user}@${gcloud_instance_ip}"

# Wait for the ssh server of the new instance to be ready.
attempts=0
while ! ssh -i "$ssh_key_file" -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null "$ssh_dest" echo connection test; do
  echo "Waiting for ssh server to start ..."
  attempts=$((attempts + 1))
  if [[ $attempts -gt 25 ]]; then
    echo "ERROR: ssh server never accepted connections after waiting for a while"
    exit 1
  fi
  sleep 2
done

# Copy the build script to the VM.
echo "Copying $local_build_script to $instance_name as $remote_build_script..."
scp -i "$ssh_key_file" \
  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
gcloud compute scp --zone "$INSTANCE_ZONE" --project "$GCP_PROJECT" \
  --ssh-key-file "$ssh_key_file" --ssh-key-expire-after 1h --strict-host-key-checking no \
  "$local_build_script" "$ssh_dest":"$remote_build_script"

# Run the script that was copied to the server above.
# Note that this assumes that there is no single quote character inside the values of PUSH_TO_IMAGE_REPO,
# DOCKER_USERNAME, and DOCKER_PASSWORD, which would cause quoting problems in the command below.
echo "Running $remote_build_script on $instance_name..."
ssh -i "$ssh_key_file" \
  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  "$ssh_dest" \
  "chmod 755 $remote_build_script && export PUSH_TO_IMAGE_REGISTRY='${PUSH_TO_IMAGE_REGISTRY}' && export PUSH_TO_IMAGE_REPO='${PUSH_TO_IMAGE_REPO}' && export DOCKER_USERNAME='${DOCKER_USERNAME}' && export DOCKER_PASSWORD='${DOCKER_PASSWORD}' && $remote_build_script"
gcloud compute ssh --zone "$INSTANCE_ZONE" --project "$GCP_PROJECT" "$ssh_dest" \
  --ssh-key-file "$ssh_key_file" --ssh-key-expire-after 1h --strict-host-key-checking no \
  --command "chmod 755 $remote_build_script && export PUSH_TO_IMAGE_REGISTRY='${PUSH_TO_IMAGE_REGISTRY}' && export PUSH_TO_IMAGE_REPO='${PUSH_TO_IMAGE_REPO}' && export DOCKER_USERNAME='${DOCKER_USERNAME}' && export DOCKER_PASSWORD='${DOCKER_PASSWORD}' && $remote_build_script"

echo
echo "Done!"
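The expiring per-instance ssh key registered above follows GCE's `google-ssh` metadata entry format. A sketch of how that one-line entry is assembled (fake key material below; the real script substitutes the freshly generated public key, and GNU `date` is assumed):

```shell
# Build a GCE ssh-keys metadata entry that expires 3 hours from now.
ssh_user="kind-node-builder-$(printf '%04x' "$RANDOM")"
pub_key="ssh-rsa AAAAB3FAKEKEY user@example"  # placeholder public key
future_time="$(date --utc --date '+3 hours' '+%FT%T%z')"

printf '%s:%s google-ssh {"userName":"%s","expireOn":"%s"}\n' \
  "$ssh_user" "$pub_key" "$ssh_user" "$future_time"
```

The `expireOn` timestamp is what lets the key be added without polluting the project-wide keys permanently: GCE ignores the entry after that time.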
@@ -28,7 +28,7 @@ then
  exit 1
fi
# check whether the kube-cert-agent binary has particular symbols that only exist when it's compiled with non-boring crypto
kube_cert_agent_has_regular_crypto="$(go tool nm './image/rootfs/usr/local/bin/pinniped-concierge-kube-cert-agent' | grep sha256 | grep di | grep -v fips)"
kube_cert_agent_has_regular_crypto="$(go tool nm './image/rootfs/usr/local/bin/pinniped-concierge-kube-cert-agent' | grep sha256 | grep di)"
# if any of these symbols exist, that means it was compiled wrong and it should fail.
if [ -n "$kube_cert_agent_has_regular_crypto" ]
then

@@ -10,6 +10,6 @@ image_resource:
  type: registry-image
  source:
    repository: golang
    tag: '1.25.5'
    tag: '1.24.3'
run:
  path: pinniped-ci/pipelines/shared-tasks/confirm-built-with-fips/task.sh

@@ -10,7 +10,7 @@ image_resource:
  type: registry-image
  source:
    repository: golang
    tag: '1.25.5'
    tag: '1.24.3'
run:
  # Confirm that the correct git sha was baked into the executables and that they log the version as their
  # first line of output. Do this by directly running the server binary from the rootfs of the built image.

@@ -1,6 +1,6 @@
#!/usr/bin/env bash

# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

set -euo pipefail

@@ -20,10 +20,10 @@ echo "Creating $INSTANCE_NAME in $INSTANCE_ZONE..."
gcloud compute instances create "${INSTANCE_NAME}" \
  --zone "${INSTANCE_ZONE}" \
  --machine-type=e2-standard-2 \
  --image=debian-11-bullseye-v20210916 --image-project=debian-cloud \
  --boot-disk-size=30GB --boot-disk-type=pd-ssd \
  --labels "kind-node-builder=" \
  --no-service-account --no-scopes \
  --network-interface=stack-type=IPV4_ONLY,subnet=projects/"$SHARED_VPC_PROJECT"/regions/"${SUBNET_REGION}"/subnetworks/"${SUBNET_NAME}",no-address \
  --create-disk=auto-delete=yes,boot=yes,device-name="${INSTANCE_NAME}",image=projects/"${DISK_IMAGES_PROJECT}"/global/images/labs-saas-gcp-debian11-packer-latest,mode=rw,size=30,type=pd-ssd \
  --tags=kind-node-image-builder

echo "$INSTANCE_NAME" > name

@@ -1,4 +1,4 @@
# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

---

@@ -12,9 +12,5 @@ params:
  GCP_PROJECT:
  GCP_USERNAME:
  GCP_JSON_KEY:
  SHARED_VPC_PROJECT:
  SUBNET_REGION:
  SUBNET_NAME:
  DISK_IMAGES_PROJECT:
run:
  path: pinniped-ci/pipelines/shared-tasks/create-kind-node-builder-vm/task.sh
@@ -1,24 +1,33 @@
#!/usr/bin/env bash

# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

set -euo pipefail

if [[ -z "${BRANCH:-}" || -z "${COMMIT_MESSAGE:-}" || -z "${PR_TITLE:-}" || -z "${PR_BODY:-}" ]]; then
  echo "BRANCH, COMMIT_MESSAGE, PR_TITLE, and PR_BODY env vars are all required"
  exit 1
fi
branch="${BRANCH:-"pinny/bump-deps"}"

cd pinniped

# Print the current status to the log.
git status

# Copied from https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/githubs-ssh-key-fingerprints
github_hosts='
github.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl
github.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=
github.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj7ndNxQowgcQnjshcLrqPEiiphnt+VTTvDP6mHBL9j1aNUkY4Ue1gvwnGLVlOhGeYrnZaMgRK6+PKCUXaDbC7qtbW8gIkhL7aGCsOr/C56SJMy/BCZfxd1nWzAOxSDPgVsmerOBYfNqltV9/hWCqBywINIR+5dIg6JTJ72pcEpEjcYgXkE2YEFXV1JHnsKgbLWNlhScqb2UmyRkQyytRLtL+38TGxkxCflmO+5Z8CSSNY7GidjMIZ7Q4zMjA2n1nGrlTDkzwDCsw+wqFPGQA179cnfGWOWRVruj16z6XyvxvjJwbz0wQZ75XK5tKSb7FNyeIEs4TT4jk+S4dhPeAUC5y+bDYirYgM4GC7uEnztnZyaVWQ7B381AK4Qdrwt51ZqExKbQpTUNn+EjqoTwvqNj4kqx5QUCI0ThS/YkOxJCXmPUWZbhjpCg56i+2aB6CmK2JGhn57K5mj0MNdBXA4/WnwH6XoPWJzK5Nyu2zB3nAZp+S5hpQs+p1vN1/wsjk=
'

# Prepare to be able to do commits and pushes.
ssh_dir="$HOME"/.ssh/
mkdir "$ssh_dir"
echo "$github_hosts" >"$ssh_dir"/known_hosts
echo "${DEPLOY_KEY}" >"$ssh_dir"/id_rsa
chmod 600 "$ssh_dir"/id_rsa
git config user.email "pinniped-ci-bot@users.noreply.github.com"
git config user.name "Pinny"
git remote add https_origin "https://${GH_TOKEN}@github.com/vmware/pinniped.git"
git remote add ssh_origin "git@github.com:vmware-tanzu/pinniped.git"

# Add all the changed files.
git add .

@@ -36,9 +45,9 @@ fi

# Check if the branch already exists on the remote.
new_branch="no"
if [[ -z "$(git ls-remote https_origin "$BRANCH")" ]]; then
if [[ -z "$(git ls-remote ssh_origin "$branch")" ]]; then
  echo "The branch does not already exist, so create it."
  git checkout -b "$BRANCH"
  git checkout -b "$branch"
  git status
  new_branch="yes"
else

@@ -47,9 +56,9 @@ else
  git status
  git stash
  # Fetch all the remote branches so we can use one of them.
  git fetch https_origin
  git fetch ssh_origin
  # The branch already exists, so reuse it.
  git checkout "$BRANCH"
  git checkout "$branch"
  # Pull to sync up commits with the remote branch.
  git pull --rebase --autostash
  # Throw away all previous commits on the branch and set it up to look like main again.

@@ -67,14 +76,14 @@ git --no-pager diff --staged
echo

# Commit.
echo "Committing changes to branch $BRANCH. New branch? $new_branch."
git commit -m "$COMMIT_MESSAGE"
echo "Committing changes to branch $branch. New branch? $new_branch."
git commit -m "Bump dependencies"

# Push.
if [[ "$new_branch" == "yes" ]]; then
  # Push the new branch to the remote.
  echo "Pushing the new branch."
  git push --set-upstream https_origin "$BRANCH"
  git push --set-upstream ssh_origin "$branch"
else
  # Force push the existing branch to the remote.
  echo "Force pushing the existing branch."

@@ -84,10 +93,11 @@ fi
# Now check if there is already a PR open for our branch.
# If there is already an open PR, then we just updated it by force pushing the branch.
# Note that using the gh CLI without login depends on setting the GH_TOKEN env var.
open_pr=$(gh pr list --head "$BRANCH" --json title --jq '. | length')
open_pr=$(gh pr list --head "$branch" --json title --jq '. | length')
if [[ "$open_pr" == "0" ]]; then
  # There is no currently open PR for this branch, so open a new PR for this branch
  # against main, and set the title and body.
  echo "Creating PR."
  gh pr create --head "$BRANCH" --base main --title "$PR_TITLE" --body "$PR_BODY"
  gh pr create --head "$branch" --base main \
    --title "Bump dependencies" --body "Automatically bumped all go.mod direct dependencies and/or images in dockerfiles."
fi
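The create-vs-reuse decision in the script above keys off whether `git ls-remote` prints anything for the branch. The same branching logic, sketched with a stub in place of the real remote call so it runs standalone:

```shell
# Stub for: git ls-remote ssh_origin "$branch" (prints nothing => branch absent)
ls_remote() { :; }

branch="pinny/bump-deps"
if [[ -z "$(ls_remote "$branch")" ]]; then
  new_branch="yes"
  echo "branch $branch does not exist yet; would create it and push"
else
  new_branch="no"
  echo "branch $branch exists; would reuse it and force-push"
fi
```

Because the existing-branch path force-pushes a freshly rebuilt commit, the whole task stays idempotent: re-running it simply replaces the previous bump commit and updates any open PR in place.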
@@ -1,4 +1,4 @@
# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

---

@@ -10,8 +10,5 @@ params:
  DEPLOY_KEY:
  GH_TOKEN:
  BRANCH:
  COMMIT_MESSAGE:
  PR_TITLE:
  PR_BODY:
run:
  path: pinniped-ci/pipelines/shared-tasks/create-or-update-pr/task.sh
@@ -13,7 +13,7 @@ aws configure set credential_source Environment --profile service-account
aws configure set role_arn "$AWS_ROLE_ARN" --profile service-account

# Set some variables.
CLUSTER_NAME="eks-$(openssl rand -hex 8)"
CLUSTER_NAME="eks-$(python -c 'import os,binascii; print binascii.b2a_hex(os.urandom(8))')"
ADMIN_USERNAME="$CLUSTER_NAME-admin"
export CLUSTER_NAME
export ADMIN_USERNAME
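Both name generators in the hunk above produce an `eks-` prefix followed by 16 hex characters; note that the python one-liner on one side uses the Python 2 `print` statement, so it would fail under Python 3. A Python 3 equivalent (assuming `python3` is on the PATH) would be:

```shell
# Python 3 equivalent of the random cluster-name generators shown above.
CLUSTER_NAME="eks-$(python3 -c 'import os, binascii; print(binascii.b2a_hex(os.urandom(8)).decode())')"
echo "$CLUSTER_NAME"
```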
@@ -1,6 +1,6 @@
#!/bin/bash

# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

set -euo pipefail

@@ -13,25 +13,13 @@ export USE_GKE_GCLOUD_AUTH_PLUGIN=True
cd deploy-gke-cluster-output
gcloud auth activate-service-account "$GCP_SERVICE_ACCOUNT" --key-file <(echo "$GCP_JSON_KEY") --project "$GCP_PROJECT"

# Decide if we want a regional or zonal cluster.
if [[ -n "$CLUSTER_REGION" ]]; then
  region_or_zone_flag="--region=$CLUSTER_REGION"
  region_or_zone_suffix="region-$CLUSTER_REGION"
  # regional clusters have 3 nodes (one per zone, minimum 3 zones), so use a smaller machine type
  machine_type="e2-medium"
else
  region_or_zone_flag="--zone=$CLUSTER_ZONE"
  region_or_zone_suffix="zone-$CLUSTER_ZONE"
  # zonal clusters have 1 node, so use a bigger machine type
  machine_type="e2-standard-4"
fi

if [ -n "$KUBE_VERSION" ]; then
  echo
  echo "Trying to use Kubernetes version $KUBE_VERSION"

  # Look up the latest GKE version for KUBE_VERSION.
  GKE_VERSIONS="$(gcloud container get-server-config "$region_or_zone_flag" --format json \
  GKE_VERSIONS="$(gcloud container get-server-config --zone "$CLUSTER_ZONE" --format json \
    | jq -r '.validMasterVersions[]')"
  echo
  echo "Found all versions of Kubernetes supported by GKE:"

@@ -48,38 +36,28 @@ else
  export VERSION_FLAG="--release-channel=${GKE_CHANNEL:-"regular"}"
fi

# Include the region or zone of the cluster in its name. This will allow us to change our preferred region/zone for new
# clusters anytime we want, and the existing clusters can still be deleted because the old region/zone can
# Include the zone of the cluster in its name. This will allow us to change our preferred zone for new
# clusters anytime we want, and the existing clusters can still be deleted because the old zone can
# be parsed out from the cluster name at deletion time.
CLUSTER_NAME="gke-$(openssl rand -hex 4)-${region_or_zone_suffix}"
CLUSTER_NAME="gke-$(openssl rand -hex 4)-zone-${CLUSTER_ZONE}"

# The cluster name becomes the name of the lock in the pool.
echo "$CLUSTER_NAME" >name
echo "$CLUSTER_NAME" > name

# Start the cluster
# Note that --enable-network-policy is required to enable NetworkPolicy resources. Otherwise they are ignored.
gcloud container clusters create "$CLUSTER_NAME" \
  "$region_or_zone_flag" \
  --zone "$CLUSTER_ZONE" \
  "$VERSION_FLAG" \
  --num-nodes 1 \
  --machine-type "$machine_type" \
  --machine-type e2-standard-4 \
  --preemptible \
  --issue-client-certificate \
  --no-enable-basic-auth \
  --enable-network-policy \
  --tags "gke-broadcom" \
  --enable-master-authorized-networks \
  --master-authorized-networks "10.0.0.0/8" \
  --enable-private-nodes \
  --enable-private-endpoint \
  --enable-ip-alias \
  --network "projects/${SHARED_VPC_PROJECT}/global/networks/${SHARED_VPC_NAME}" \
  --subnetwork "projects/${SHARED_VPC_PROJECT}/regions/${SUBNET_REGION}/subnetworks/${SUBNET_NAME}" \
  --cluster-secondary-range-name "services" \
  --services-secondary-range-name "pods"
  --enable-network-policy

# Get the cluster details back, including the admin certificate:
gcloud container clusters describe "$CLUSTER_NAME" "$region_or_zone_flag" --format json \
gcloud container clusters describe "$CLUSTER_NAME" --zone "$CLUSTER_ZONE" --format json \
  > /tmp/cluster.json

# Make a new kubeconfig user "cluster-admin" using the admin cert.
@@ -1,4 +1,4 @@
# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

---

@@ -10,14 +10,9 @@ outputs:
params:
  KUBE_VERSION:
  CLUSTER_ZONE:
  CLUSTER_REGION:
  GCP_PROJECT:
  GCP_SERVICE_ACCOUNT:
  GCP_JSON_KEY:
  GKE_CHANNEL:
  SHARED_VPC_PROJECT:
  SHARED_VPC_NAME:
  SUBNET_REGION:
  SUBNET_NAME:
run:
  path: pinniped-ci/pipelines/shared-tasks/deploy-gke-cluster/task.sh
@@ -1,22 +1,22 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
|
||||
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
|
||||
# This is the script that runs at startup to launch Kind on GCE.
|
||||
# A log of the output of this script can be viewed by running this command on the VM:
# sudo journalctl -u google-startup-scripts.service --no-pager
# sudo journalctl -u google-startup-scripts.service

set -euo pipefail

function cleanup() {
  # Upon exit, try to save the log of everything that happened to make debugging errors easier.
  curl --retry-all-errors --retry 5 -X PUT --data "$(journalctl -u google-startup-scripts.service --no-pager)" \
  curl --retry-all-errors --retry 5 -X PUT --data "$(journalctl -u google-startup-scripts.service)" \
    http://metadata.google.internal/computeMetadata/v1/instance/guest-attributes/kind/init_log -H "Metadata-Flavor: Google"
}
trap "cleanup" EXIT SIGINT

INTERNAL_IP="$(curl --retry-all-errors --retry 5 http://metadata/computeMetadata/v1/instance/network-interfaces/0/ip -H "Metadata-Flavor: Google")"
PUBLIC_IP="$(curl --retry-all-errors --retry 5 http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip -H "Metadata-Flavor: Google")"
KIND_VERSION="$(curl --retry-all-errors --retry 5 http://metadata.google.internal/computeMetadata/v1/instance/attributes/kind_version -H "Metadata-Flavor: Google")"
K8S_VERSION="$(curl --retry-all-errors --retry 5 http://metadata.google.internal/computeMetadata/v1/instance/attributes/k8s_version -H "Metadata-Flavor: Google")"
KIND_NODE_IMAGE="$(curl --retry-all-errors --retry 5 http://metadata.google.internal/computeMetadata/v1/instance/attributes/kind_node_image -H "Metadata-Flavor: Google")"
@@ -92,18 +92,9 @@ kubeadmConfigPatches:
apiVersion: ${KUBE_ADM_VERSION}
kind: ClusterConfiguration
# ControlPlaneEndpoint sets a stable IP address or DNS name for the control plane.
# Although this worked when the VM had a public IP address that we could use here,
# this does not work when using the VM's internal IP address. kubeadm fails to connect
# to this endpoint during liveness probes, so it thinks that the api-server is not
# running (when it actually is running), which causes cluster creation to fail.
# Instead, we will add the internal IP as a SAN on the api-server's TLS certificate below,
# which will still allow us to validate TLS when connecting to the cluster using the
# VM's internal IP.
#controlPlaneEndpoint: "${INTERNAL_IP}:6443"
controlPlaneEndpoint: "${PUBLIC_IP}:6443"
# mount the kind extraMounts into the API server static pod so we can use the audit config
apiServer:
  certSANs:
  - "${INTERNAL_IP}"
  extraVolumes:
  - name: audit-config
    hostPath: /audit-config/audit-config.yaml
@@ -186,8 +177,8 @@ fi

/var/lib/google/kind create cluster --wait 5m --kubeconfig /tmp/kubeconfig.yaml --image "$image" --config /tmp/kind.yaml |& tee /tmp/kind-cluster-create.log

# Change the kubeconfig to make the server address match the IP configured as controlPlaneEndpoint above.
sed -i "s/0\\.0\\.0\\.0/${INTERNAL_IP}/" /tmp/kubeconfig.yaml
# Change the kubeconfig to make the server address match the public IP configured as controlPlaneEndpoint above.
sed -i "s/0\\.0\\.0\\.0/${PUBLIC_IP}/" /tmp/kubeconfig.yaml

# The above YAML config file specifies one node, and Kind should never put the "control-plane"
# taint on the node for single-node clusters. Due to the issue described in
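The `sed` lines in the hunk above rewrite the kind-generated kubeconfig so its server address matches the configured `controlPlaneEndpoint`. A minimal local sketch of the same substitution (the kubeconfig snippet and IP below are made-up placeholders, not values from any real cluster):

```shell
# Create a stand-in kubeconfig whose server address is 0.0.0.0, as kind emits it.
cat > /tmp/demo-kubeconfig.yaml <<'EOF'
clusters:
- cluster:
    server: https://0.0.0.0:6443
  name: kind-kind
EOF

# Placeholder IP; the real script reads PUBLIC_IP from the GCE metadata server.
PUBLIC_IP="203.0.113.10"
sed -i "s/0\.0\.0\.0/${PUBLIC_IP}/" /tmp/demo-kubeconfig.yaml

grep "server:" /tmp/demo-kubeconfig.yaml
```

After the substitution the server line reads `https://203.0.113.10:6443`, so kubectl can validate TLS against the SAN added to the api-server certificate.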

@@ -1,6 +1,6 @@
#!/usr/bin/env bash

# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

set -euo pipefail
@@ -20,7 +20,7 @@ gcloud auth activate-service-account "$GKE_USERNAME" --key-file <(echo "$GKE_JSO

# https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gcloud container clusters get-credentials "$GKE_CLUSTER_NAME" --zone us-west1-c --project "$PINNIPED_GCP_PROJECT"
gcloud container clusters get-credentials "$GKE_CLUSTER_NAME" --zone us-central1-c --project "$PINNIPED_GCP_PROJECT"

pushd pinniped >/dev/null

@@ -48,8 +48,9 @@ CONCIERGE_NAMESPACE=concierge-acceptance \
SUPERVISOR_LOAD_BALANCER_DNS_NAME="$LOAD_BALANCER_DNS_NAME" \
SUPERVISOR_LOAD_BALANCER_STATIC_IP="$RESERVED_LOAD_BALANCER_STATIC_IP" \
SUPERVISOR_INGRESS=yes \
SUPERVISOR_INGRESS_DNS_NAME="$INGRESS_DNS_NAME" \
SUPERVISOR_INGRESS_DNS_NAME="$INGRESS_DNS_ENTRY_GCLOUD_NAME" \
SUPERVISOR_INGRESS_STATIC_IP_NAME="$INGRESS_STATIC_IP_GCLOUD_NAME" \
SUPERVISOR_INGRESS_PATH_PATTERN='/*' \
IMAGE_PULL_SECRET="$image_pull_secret" \
IMAGE_REPO="$CI_BUILD_IMAGE_NAME" \
IMAGE_DIGEST="$digest" \
@@ -80,7 +81,7 @@ cp /tmp/integration-test-env integration-test-env-vars/

# So that the tests can avoid using the GKE auth plugin, create an admin kubeconfig which uses certs (without the plugin).
# Get the cluster details back, including the admin certificate:
gcloud container clusters describe "$GKE_CLUSTER_NAME" --zone us-west1-c --format json >/tmp/cluster.json
gcloud container clusters describe "$GKE_CLUSTER_NAME" --zone us-central1-c --format json >/tmp/cluster.json
# Make a new kubeconfig user "cluster-admin" using the admin cert.
jq -r .masterAuth.clientCertificate /tmp/cluster.json | base64 -d >/tmp/client.crt
jq -r .masterAuth.clientKey /tmp/cluster.json | base64 -d >/tmp/client.key

@@ -1,4 +1,4 @@
# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

---
@@ -29,10 +29,10 @@ params:

# Set up a LoadBalancer for the Supervisor.
RESERVED_LOAD_BALANCER_STATIC_IP: # An IP reserved for this purpose in our GCP project.
LOAD_BALANCER_DNS_NAME: # A DNS name for the above IP address. Must be created manually in the DNS provider.
LOAD_BALANCER_DNS_NAME: # A DNS entry in our GCP project for the above IP address.
# Set up an Ingress for the Supervisor, as an alternate way to access it.
INGRESS_STATIC_IP_GCLOUD_NAME: # The name of a static IP reservation in our GCP project used for this purpose.
INGRESS_DNS_NAME: # A DNS name for the above static IP. Must be created manually in the DNS provider.
INGRESS_DNS_ENTRY_GCLOUD_NAME: # A DNS entry in our GCP project for the IP address represented by the above static IP reservation name.

# Set to a non-empty value to remove the CPU requests from these deployments.
SUPERVISOR_AND_CONCIERGE_NO_CPU_REQUEST:

@@ -1,6 +1,6 @@
#!/usr/bin/env bash

# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

set -euo pipefail
@@ -175,7 +175,6 @@ pinniped_test_ldap_bind_account_username="cn=admin,dc=pinniped,dc=dev"
pinniped_test_ldap_bind_account_password=password
pinniped_test_ldap_users_search_base="ou=users,dc=pinniped,dc=dev"
pinniped_test_ldap_groups_search_base="ou=groups,dc=pinniped,dc=dev"
pinniped_test_ldap_groups_search_filter=""
pinniped_test_ldap_user_dn="cn=pinny,ou=users,dc=pinniped,dc=dev"
pinniped_test_ldap_user_cn="pinny"
pinniped_test_ldap_user_password=${ldap_test_password}
@@ -292,7 +291,6 @@ export PINNIPED_TEST_LDAP_BIND_ACCOUNT_USERNAME='${pinniped_test_ldap_bind_accou
export PINNIPED_TEST_LDAP_BIND_ACCOUNT_PASSWORD='${pinniped_test_ldap_bind_account_password}'
export PINNIPED_TEST_LDAP_USERS_SEARCH_BASE='${pinniped_test_ldap_users_search_base}'
export PINNIPED_TEST_LDAP_GROUPS_SEARCH_BASE='${pinniped_test_ldap_groups_search_base}'
export PINNIPED_TEST_LDAP_GROUPS_SEARCH_FILTER='${pinniped_test_ldap_groups_search_filter}'
export PINNIPED_TEST_LDAP_USER_DN='${pinniped_test_ldap_user_dn}'
export PINNIPED_TEST_LDAP_USER_CN='${pinniped_test_ldap_user_cn}'
export PINNIPED_TEST_LDAP_USER_PASSWORD='${pinniped_test_ldap_user_password}'
@@ -311,9 +309,6 @@ export PINNIPED_TEST_CLI_OIDC_ISSUER_CA_BUNDLE='${test_cli_oidc_issuer_ca_bundle
export PINNIPED_TEST_CLI_OIDC_ISSUER='${test_cli_oidc_issuer}'
export PINNIPED_TEST_CLI_OIDC_PASSWORD='${test_cli_oidc_password}'
export PINNIPED_TEST_CLI_OIDC_USERNAME='${test_cli_oidc_username}'
export PINNIPED_TEST_CLI_OIDC_USERNAME_CLAIM='${test_supervisor_upstream_oidc_username_claim}'
export PINNIPED_TEST_CLI_OIDC_GROUPS_CLAIM='${test_supervisor_upstream_oidc_groups_claim}'
export PINNIPED_TEST_CLI_OIDC_EXPECTED_GROUPS='${test_supervisor_upstream_oidc_groups}'
export PINNIPED_TEST_SUPERVISOR_UPSTREAM_OIDC_CALLBACK_URL='${test_supervisor_upstream_oidc_callback_url}'
export PINNIPED_TEST_SUPERVISOR_UPSTREAM_OIDC_ADDITIONAL_SCOPES='${test_supervisor_upstream_oidc_additional_scopes}'
export PINNIPED_TEST_SUPERVISOR_UPSTREAM_OIDC_USERNAME_CLAIM='${test_supervisor_upstream_oidc_username_claim}'

@@ -1,4 +1,4 @@
# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

---
@@ -75,7 +75,6 @@ params:
JUMPCLOUD_LDAP_BIND_ACCOUNT_USERNAME:
JUMPCLOUD_LDAP_BIND_ACCOUNT_PASSWORD:
JUMPCLOUD_LDAP_USERS_SEARCH_BASE:
JUMPCLOUD_LDAP_GROUPS_SEARCH_FILTER:
JUMPCLOUD_LDAP_GROUPS_SEARCH_BASE:
JUMPCLOUD_LDAP_USER_DN:
JUMPCLOUD_LDAP_USER_CN:
@@ -88,26 +87,7 @@ params:
JUMPCLOUD_LDAP_EXPECTED_DIRECT_GROUPS_CN:
JUMPCLOUD_LDAP_EXPECTED_DIRECT_POSIX_GROUPS_CN:

# only needed when wanting to test using Okta LDAP instead of OpenLDAP.
OKTA_LDAP_HOST:
OKTA_LDAP_STARTTLS_ONLY_HOST:
OKTA_LDAP_BIND_ACCOUNT_USERNAME:
OKTA_LDAP_BIND_ACCOUNT_PASSWORD:
OKTA_LDAP_USERS_SEARCH_BASE:
OKTA_LDAP_GROUPS_SEARCH_BASE:
OKTA_LDAP_GROUPS_SEARCH_FILTER:
OKTA_LDAP_USER_DN:
OKTA_LDAP_USER_CN:
OKTA_LDAP_USER_PASSWORD:
OKTA_LDAP_USER_UNIQUE_ID_ATTRIBUTE_NAME:
OKTA_LDAP_USER_UNIQUE_ID_ATTRIBUTE_VALUE:
OKTA_LDAP_USER_EMAIL_ATTRIBUTE_NAME:
OKTA_LDAP_USER_EMAIL_ATTRIBUTE_VALUE:
OKTA_LDAP_EXPECTED_DIRECT_GROUPS_DN:
OKTA_LDAP_EXPECTED_DIRECT_GROUPS_CN:
OKTA_LDAP_EXPECTED_DIRECT_POSIX_GROUPS_CN:

# only needed when wanting to test using GitHub as an identity provider
# only needed when wanting to test using GitHub as an identity provider
PINNIPED_TEST_GITHUB_APP_CLIENT_ID:
PINNIPED_TEST_GITHUB_APP_CLIENT_SECRET:
PINNIPED_TEST_GITHUB_OAUTH_APP_CLIENT_ID:

@@ -1,4 +1,4 @@
# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

---
@@ -6,8 +6,8 @@ platform: linux
image_resource:
  type: registry-image
  source:
    repository: golang
    tag: '1.25.5'
    repository: debian
    tag: 10.8-slim
inputs:
- name: pinniped
- name: release-semver
@@ -22,6 +22,8 @@ run:
  args:
  - -xeuc
  - |
    ( apt update && apt install -y git ) 2>&1 > install.log || cat install.log

THIS_VERSION="v$(cat release-semver/version)"
PREVIOUS_VERSION="v$(cat previous-release-semver/version)"

@@ -43,7 +45,7 @@ run:

| Image | Registry |
| -------------- | ------------- |
| \`ghcr.io/vmware/pinniped/pinniped-server:$THIS_VERSION\` | GitHub Container Registry |
| \`ghcr.io/vmware-tanzu/pinniped/pinniped-server:$THIS_VERSION\` | GitHub Container Registry |
| \`docker.io/getpinniped/pinniped-server:$THIS_VERSION\` | DockerHub |

These images can also be referenced by their digest: \`$(cat ci-build-image/digest)\`.
@@ -67,7 +69,7 @@ run:
### Diffs

*TODO*: Make sure the following references the correct version tags. Note that the link will not work until the release is published (made public):<br/>
A complete list of changes can be found [here](https://github.com/vmware/pinniped/compare/$PREVIOUS_VERSION...$THIS_VERSION).
A complete list of changes can be found [here](https://github.com/vmware-tanzu/pinniped/compare/$PREVIOUS_VERSION...$THIS_VERSION).

## Acknowledgements


@@ -1,6 +1,6 @@
#!/bin/bash

# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

set -euo pipefail
@@ -9,41 +9,15 @@ CLUSTER_NAME="$(cat gke-cluster-pool/name)"
export CLUSTER_NAME
export KUBECONFIG="gke-cluster-pool/metadata"

# Parse the region or zone name from the cluster name, in case it was created in a different region/zone
# compared to the region/zone in which we are currently creating new clusters.
# Parse the zone name from the cluster name, in case it was created in a different zone
# compared to the zone in which we are currently creating new clusters.
zone=${CLUSTER_NAME##*-zone-}
region=${CLUSTER_NAME##*-region-}

# If the region/zone name was empty, or if there was no region/zone delimiter in the cluster name to start with...
if [[ (-z $zone || "$CLUSTER_NAME" != *"-zone-"*) && (-z $region || "$CLUSTER_NAME" != *"-region-"*) ]]; then
  echo "Umm... the cluster name $CLUSTER_NAME did not contain either region or zone name."
# If the zone name was empty, or if there was no zone delimiter in the cluster name to start with...
if [[ -z $zone || "$CLUSTER_NAME" != *"-zone-"* ]]; then
  echo "Umm... the cluster name did not contain a zone name."
  exit 1
fi

# Decide if we have a regional or zonal cluster.
if [[ -n "$region" ]]; then
  region_or_zone_flag="--region=$region"
else
  region_or_zone_flag="--zone=$zone"
fi

gcloud auth activate-service-account "$GCP_SERVICE_ACCOUNT" --key-file <(echo "$GCP_JSON_KEY") --project "$GCP_PROJECT"

for i in $(seq 1 10); do
  echo "Checking $CLUSTER_NAME for ongoing operations (iteration $i)...."
  running_ops=$(gcloud container operations list \
    --filter="targetLink:$CLUSTER_NAME AND status != done" \
    --project "$GCP_PROJECT" "$region_or_zone_flag" --format yaml)
  if [[ -z "$running_ops" ]]; then
    echo
    break
  fi
  echo "Found a running cluster operation:"
  echo "$running_ops"
  echo
  # Give some time for the operation to finish before checking again.
  sleep 30
done

echo "Removing $CLUSTER_NAME..."
gcloud container clusters delete "$CLUSTER_NAME" "$region_or_zone_flag" --quiet
gcloud auth activate-service-account "$GCP_SERVICE_ACCOUNT" --key-file <(echo "$GCP_JSON_KEY") --project "$GCP_PROJECT"
gcloud container clusters delete "$CLUSTER_NAME" --zone "$zone" --quiet
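The zone/region recovery at the top of this script relies on bash suffix-stripping parameter expansion. A standalone sketch with a made-up cluster name in the same generated format:

```shell
# Example name in the "gke-<hex>-zone-<zone>" format the pool scripts generate.
CLUSTER_NAME="gke-abc123-zone-us-central1-b"

# ##*-zone- strips the longest prefix ending in "-zone-", leaving the zone.
zone=${CLUSTER_NAME##*-zone-}
region=${CLUSTER_NAME##*-region-}

echo "zone=$zone"  # the zone suffix, us-central1-b

# When the delimiter is absent, the expansion returns the whole name unchanged,
# which is why the script also checks for the literal "-zone-" substring.
echo "region=$region"
[[ "$CLUSTER_NAME" == *"-zone-"* ]] && echo "name contains -zone-"
```

This is why an empty-string check alone is not enough: a name with no `-zone-` delimiter yields the full name, not an empty zone.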

@@ -1,6 +1,6 @@
#!/usr/bin/env bash

# Copyright 2024-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

# Sometimes something goes wrong with a GKE test job's cleanup and a
@@ -11,8 +11,7 @@
# 1. Are running in GCP with a name that indicates that it was auto-created for testing,
# 2. And are older than some number of hours since their creation time.
#
# Params are CLUSTER_REGION, CLUSTER_ZONE, GCP_PROJECT, GCP_SERVICE_ACCOUNT, and GCP_JSON_KEY.
# Search for both zonal and regional orphaned clusters.
# Params are CLUSTER_ZONE, GCP_PROJECT, GCP_SERVICE_ACCOUNT, and GCP_JSON_KEY.

set -euo pipefail

@@ -21,23 +20,17 @@ gcloud auth activate-service-account \
  --key-file <(echo "$GCP_JSON_KEY") \
  --project "$GCP_PROJECT"

all_zonal=($(gcloud container clusters list \
all_cloud=($(gcloud container clusters list \
  --zone "$CLUSTER_ZONE" --project "$GCP_PROJECT" \
  --filter "name~gke-[a-f0-9]+-zone-${CLUSTER_ZONE}" --format 'table[no-heading](name)' | sort))

all_regional=($(gcloud container clusters list \
  --region "$CLUSTER_REGION" --project "$GCP_PROJECT" \
  --filter "name~gke-[a-f0-9]+-region-${CLUSTER_REGION}" --format 'table[no-heading](name)' | sort))

now_in_seconds_since_epoch=$(date +"%s")
hours_ago_to_delete=2
regional_clusters_to_remove=()
zonal_clusters_to_remove=()
clusters_to_remove=()

echo
echo "All auto-created GKE clusters (with creation time in UTC):"

for i in "${all_zonal[@]}"; do
for i in "${all_cloud[@]}"; do
  creation_time=$(gcloud container clusters describe "$i" \
    --zone "$CLUSTER_ZONE" --project "$GCP_PROJECT" \
    --format 'table[no-heading](createTime.date(tz=UTC))')
@@ -46,7 +39,7 @@ for i in "${all_zonal[@]}"; do
  # Note: on MacOS this date command would be: date -ju -f '%Y-%m-%dT%H:%M:%S' "$creation_time" '+%s'
  creation_time_seconds_since_epoch=$(date -u -d "$creation_time" '+%s')
  if (($((now_in_seconds_since_epoch - creation_time_seconds_since_epoch)) > $((hours_ago_to_delete * 60 * 60)))); then
    zonal_clusters_to_remove+=("$i")
    clusters_to_remove+=("$i")
    echo "$i $creation_time (older than $hours_ago_to_delete hours)"
  else
    echo "$i $creation_time (less than $hours_ago_to_delete hours old)"
@@ -56,45 +49,16 @@ for i in "${all_zonal[@]}"; do
    exit 1
  fi
done

for i in "${all_regional[@]}"; do
  creation_time=$(gcloud container clusters describe "$i" \
    --region "$CLUSTER_REGION" --project "$GCP_PROJECT" \
    --format 'table[no-heading](createTime.date(tz=UTC))')
  # UTC date format example: 2022-04-01T17:01:59
  if [[ "$creation_time" =~ ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}$ ]]; then
    # Note: on MacOS this date command would be: date -ju -f '%Y-%m-%dT%H:%M:%S' "$creation_time" '+%s'
    creation_time_seconds_since_epoch=$(date -u -d "$creation_time" '+%s')
    if (($((now_in_seconds_since_epoch - creation_time_seconds_since_epoch)) > $((hours_ago_to_delete * 60 * 60)))); then
      regional_clusters_to_remove+=("$i")
      echo "$i $creation_time (older than $hours_ago_to_delete hours)"
    else
      echo "$i $creation_time (less than $hours_ago_to_delete hours old)"
    fi
  else
    echo "GKE cluster creation time not in expected time format: $creation_time"
    exit 1
  fi
done

if [[ ${#all_zonal[@]} -eq 0 && ${#all_regional[@]} -eq 0 ]]; then
if [[ ${#all_cloud[@]} -eq 0 ]]; then
  echo "none"
fi

echo
if [[ ${#zonal_clusters_to_remove[@]} -eq 0 ]]; then
  echo "No old orphaned zonal GKE clusters found to remove."
if [[ ${#clusters_to_remove[@]} -eq 0 ]]; then
  echo "No old orphaned GKE clusters found to remove."
else
  echo "Removing ${#zonal_clusters_to_remove[@]} GKE cluster(s) which are older than $hours_ago_to_delete hours in $CLUSTER_ZONE: ${zonal_clusters_to_remove[*]} ..."
  gcloud container clusters delete --zone "${CLUSTER_ZONE}" --quiet ${zonal_clusters_to_remove[*]}
fi

echo
if [[ ${#regional_clusters_to_remove[@]} -eq 0 ]]; then
  echo "No old orphaned regional GKE clusters found to remove."
else
  echo "Removing ${#regional_clusters_to_remove[@]} GKE cluster(s) which are older than $hours_ago_to_delete hours in $CLUSTER_REGION: ${regional_clusters_to_remove[*]} ..."
  gcloud container clusters delete --region "${CLUSTER_REGION}" --quiet ${regional_clusters_to_remove[*]}
  echo "Removing ${#clusters_to_remove[@]} GKE cluster(s) which are older than $hours_ago_to_delete hours in $CLUSTER_ZONE: ${clusters_to_remove[*]} ..."
  gcloud container clusters delete --zone "${CLUSTER_ZONE}" --quiet ${clusters_to_remove[*]}
fi

echo
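The age check in the loops above boils down to epoch arithmetic with GNU `date`. A runnable sketch (the timestamp is the example from the script's own comment, not real cluster data):

```shell
hours_ago_to_delete=2
creation_time="2022-04-01T17:01:59"  # example timestamp in gcloud's UTC format

now_in_seconds_since_epoch=$(date +"%s")
# GNU date parses the timestamp directly; on macOS this would instead be:
# date -ju -f '%Y-%m-%dT%H:%M:%S' "$creation_time" '+%s'
creation_time_seconds_since_epoch=$(date -u -d "$creation_time" '+%s')

age_seconds=$((now_in_seconds_since_epoch - creation_time_seconds_since_epoch))
if ((age_seconds > hours_ago_to_delete * 60 * 60)); then
  echo "older than $hours_ago_to_delete hours"
else
  echo "less than $hours_ago_to_delete hours old"
fi
```

For a 2022 timestamp this prints "older than 2 hours", so such a cluster would be queued for deletion.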

@@ -1,4 +1,4 @@
# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

---
@@ -7,7 +7,6 @@ inputs:
- name: pinniped-ci
outputs:
params:
  CLUSTER_REGION:
  CLUSTER_ZONE:
  GCP_PROJECT:
  GCP_SERVICE_ACCOUNT:

@@ -7,7 +7,7 @@ image_resource:
  type: registry-image
  source:
    repository: golang
    tag: '1.25.5'
    tag: '1.24.3'
inputs:
- name: pinniped
- name: pinniped-ci

@@ -1,6 +1,6 @@
#!/usr/bin/env bash

# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

# Run the integration tests against a remote target cluster.
@@ -156,7 +156,7 @@ if [[ "${START_GCLOUD_PROXY:-no}" == "yes" ]]; then
  # seems to be no way to avoid it. :( So we'll use regular ssh.
  gcloud_instance_ip=$(gcloud compute instances describe \
    --zone "$GCP_ZONE" --project "$GCP_PROJECT" "${cluster_name}" \
    --format='get(networkInterfaces[0].networkIP)')
    --format='get(networkInterfaces[0].accessConfigs[0].natIP)')

  # Now start some simultaneous background jobs.
  for mapping in "${ssh_mappings[@]}"; do
@@ -256,9 +256,6 @@ fi
# and that kubectl is configured to talk to the cluster. They also have the
# k14s tools available (ytt, kapp, etc) in case they want to do more deploys.
if [[ "$(id -u)" == "0" ]]; then
  # Give the testrunner user permission to create the Go cache dirs that we configured at the top of this script.
  chmod 777 "$initial_working_directory/cache"

  # Downgrade to a non-root user to run the tests. We don't want them reading the
  # environment of any parent process, e.g. by reading from /proc. This user account
  # was created in the Dockerfile of the container image used to run this script in CI.

@@ -14,4 +14,9 @@ export GOCACHE="$PWD/cache/gocache"
export GOMODCACHE="$PWD/cache/gomodcache"

cd pinniped
go test -short -timeout 15m -race -coverprofile "${COVERAGE_OUTPUT}" -covermode atomic ./...

# Temporarily avoid using the race detector for the impersonator package due to https://github.com/kubernetes/kubernetes/issues/128548
# Note that this will exclude the impersonator package from the code coverage for now as a side effect.
# TODO: change this back to using the race detector everywhere
go test -short -timeout 15m -race -coverprofile "${COVERAGE_OUTPUT}" -covermode atomic $(go list ./... | grep -v internal/concierge/impersonator)
go test -short ./internal/concierge/impersonator
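The two `go test` lines above use a common exclusion idiom: filter one package out of `go list`'s output with `grep -v`, then test the excluded package separately without `-race`. The filtering itself is plain text processing; a standalone sketch with made-up package paths standing in for `go list ./...` output:

```shell
# Stand-in for the output of `go list ./...` (hypothetical module paths).
packages="example.com/m/cmd
example.com/m/internal/concierge/impersonator
example.com/m/internal/oidc"

# Everything except the impersonator package would get the race detector...
with_race=$(echo "$packages" | grep -v internal/concierge/impersonator)
echo "$with_race"

# ...and the excluded package would then be tested on its own, without -race.
```

The filtered list keeps `example.com/m/cmd` and `example.com/m/internal/oidc` while dropping the impersonator package.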

@@ -7,7 +7,7 @@ image_resource:
  type: registry-image
  source:
    repository: golang
    tag: '1.25.5'
    tag: '1.24.3'
inputs:
- name: pinniped
- name: pinniped-ci

@@ -7,7 +7,7 @@ image_resource:
  type: registry-image
  source:
    repository: golang
    tag: '1.25.5'
    tag: '1.24.3'
inputs:
- name: pinniped
- name: pinniped-ci

@@ -7,7 +7,7 @@ image_resource:
  type: registry-image
  source:
    repository: golang
    tag: '1.25.5'
    tag: '1.24.3'
inputs:
- name: pinniped
- name: pinniped-ci

@@ -7,7 +7,7 @@ image_resource:
  type: registry-image
  source:
    repository: golang
    tag: '1.25.5'
    tag: '1.24.3'
inputs:
- name: pinniped
- name: pinniped-ci

@@ -1,6 +1,6 @@
#!/usr/bin/env bash

# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

set -xeuo pipefail
@@ -27,7 +27,8 @@ if [[ "$current_revision" != "$new_revision" ]]; then
  > homebrew-pinniped-out/pinniped-cli.rb

  cd homebrew-pinniped-out

  apt update >/dev/null
  apt install git -y >/dev/null
  git config user.email "pinniped-ci-bot@users.noreply.github.com"
  git config user.name "Pinny"
  git commit -a -m "pinniped-cli.rb: update to $new_version"

@@ -1,4 +1,4 @@
# Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
# Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

---
@@ -6,8 +6,8 @@ platform: linux
image_resource:
  type: registry-image
  source:
    repository: golang
    tag: latest
    repository: debian
    tag: 10.8
inputs:
- name: pinniped-ci
- name: github-release

@@ -47,13 +47,6 @@ git add "$configdoc"
# Print the current status to the log.
git status

# Restore the unstaged changes, if any.
echo "Restoring any unstaged changes."
git restore .

# Print the current status to the log.
git status

# Did we just stage any changes?
staged=$(git --no-pager diff --staged)
if [[ "$staged" == "" ]]; then
@@ -64,4 +57,8 @@ else
  echo "Found changes for $clidoc or $configdoc:"
  echo
  echo "$staged"
  echo
  # Commit.
  echo "Committing changes."
  git commit -m "Updated versions in docs for $pinniped_tag release"
fi

@@ -7,7 +7,7 @@ image_resource:
  type: registry-image
  source:
    repository: golang
    tag: '1.25.5'
    tag: '1.24.3'
inputs:
- name: pinniped-ci
- name: github-final-release