Merge pull request #2461 from vmware/new_ci

update docs and tests for new internal CI
Ryan Richard
2025-07-01 12:12:13 -07:00
committed by GitHub
5 changed files with 69 additions and 25 deletions

View File

@@ -11,7 +11,7 @@ assignees: ''
# Release checklist
- [ ] Ensure that Pinniped's dependencies have been upgraded, to the extent desired by the team (refer to the diff output from the latest run of the [all-golang-deps-updated](https://ci.pinniped.dev/teams/main/pipelines/security-scan/jobs/all-golang-deps-updated/) CI job)
- [ ] Ensure that Pinniped's dependencies have been upgraded, to the extent desired by the team (refer to the diff output from the latest run of the [all-golang-deps-updated](https://ci.pinniped.broadcom.net/teams/main/pipelines/security-scan/jobs/all-golang-deps-updated/) CI job)
- [ ] If you are updating golang in Pinniped, be sure to update golang in CI as well. Do a search-and-replace to update the version number everywhere in the pinniped `ci` branch.
- [ ] If the Fosite library is being updated and the format of the content of the Supervisor's storage Secrets is changed, or if any change to our own code changes the format of the content of the Supervisor's session storage Secrets, then be sure to update the `accessTokenStorageVersion`, `authorizeCodeStorageVersion`, `oidcStorageVersion`, `pkceStorageVersion`, and `refreshTokenStorageVersion` variables in files such as `internal/fositestorage/accesstoken/accesstoken.go` (see the illustrative sketch after this hunk). Failing tests should signal the need to update these values.
- [ ] For go.mod direct dependencies that are v2 or above, such as `github.com/google/go-github/vXX`, check to see if there is a new major version available. Try using `hack/update-go-mod/update-majors.sh`.
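As a purely hypothetical illustration of the kind of edit the storage-version item above describes (the constant name comes from the checklist; the declaration shape and value shown here are assumptions, not the actual contents of that file):

// internal/fositestorage/accesstoken/accesstoken.go (illustrative sketch only)
const (
	// accessTokenStorageVersion must be bumped whenever the stored Secret format changes,
	// so that Secrets written in an older format are not misinterpreted.
	accessTokenStorageVersion = "2" // hypothetical value; check the real constant in the file
)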
@@ -19,13 +19,14 @@ assignees: ''
- [ ] Ensure that Pinniped's codegen is up-to-date with the latest Kubernetes releases by making sure this [file](https://github.com/vmware-tanzu/pinniped/blob/main/hack/lib/kube-versions.txt) is updated compared to the latest releases listed [here for active branches](https://kubernetes.io/releases/) and [here for non-active branches](https://kubernetes.io/releases/patch-releases/#non-active-branch-history)
- [ ] Ensure that the `k8s-code-generator` CI job definitions are up-to-date with the latest Go, K8s, and `controller-gen` versions
- [ ] All relevant feature and docs PRs are merged
- [ ] The [main pipeline](https://ci.pinniped.dev/teams/main/pipelines/main) is green, up to and including the `ready-to-release` job. Check that the expected git commit has passed the `ready-to-release` job.
- [ ] The [main pipeline](https://ci.pinniped.broadcom.net/teams/main/pipelines/main) is green, up to and including the `ready-to-release` job. Check that the expected git commit has passed the `ready-to-release` job.
- [ ] Manually trigger the jobs `run-int-misc`, `run-int-cloud-providers`, and `run-int-k8s-versions` in the main pipeline to run other pre-release tests. Depending on the number of Concourse workers, you may need to run these one at a time.
- [ ] Optional: a blog post for the release is written and submitted as a PR but not merged yet
- [ ] All merged user stories are accepted (manually tested)
- [ ] Only after all stories are accepted, manually trigger the `release` job to create a draft GitHub release
- [ ] Manually edit the draft release notes on the [GitHub release](https://github.com/vmware-tanzu/pinniped/releases) to describe the contents of the release, using the format which was automatically added to the draft release
- [ ] Publish (i.e. make public) the draft release
- [ ] After making the release public, the jobs in the [main pipeline](https://ci.pinniped.dev/teams/main/pipelines/main) beyond the release job should auto-trigger, so check to make sure that they passed
- [ ] After making the release public, the jobs in the [main pipeline](https://ci.pinniped.broadcom.net/teams/main/pipelines/main) beyond the release job should auto-trigger, so check to make sure that they passed
- [ ] Edit the blog post's date to make it match the actual release date, and merge the blog post PR to make it live on the website
- [ ] Publicize the release via tweets, etc.
- [ ] Close this issue

View File

@@ -177,17 +177,20 @@ Each run of `hack/prepare-for-integration-tests.sh` can result in different valu
### Observing Tests on the Continuous Integration Environment
[CI](https://ci.pinniped.dev/teams/main/pipelines/pull-requests)
will not be triggered on a pull request until the pull request is reviewed and
CI will not be triggered on a pull request until the pull request is reviewed and
approved for CI by a project [maintainer](MAINTAINERS.md). Once CI is triggered,
the progress and results will appear on the GitHub page for that
[pull request](https://github.com/vmware-tanzu/pinniped/pulls) as checks. Links
will appear to view the details of each check.
Starting in mid-2025, Pinniped's CI system is no longer externally visible due to corporate policies.
Please contact the maintainers for help with your PR if you encounter any CI failures.
They will be happy to share the CI logs for your PR with you directly.
## CI
Pinniped's CI configuration and code is in the [`ci`](https://github.com/vmware-tanzu/pinniped/tree/ci)
branch of this repo. The CI results are visible to the public at https://ci.pinniped.dev.
branch of this repo.
## Documentation

View File

@@ -66,13 +66,22 @@ func TestCredentialIssuer(t *testing.T) {
require.NotNil(t, actualStatusStrategy)
if env.HasCapability(testlib.ClusterSigningKeyIsAvailable) {
kubernetesAPIServerURLFromKubeconfig := config.Host
expectedServer := kubernetesAPIServerURLFromKubeconfig
if actualStatusStrategy.Frontend.TokenCredentialRequestAPIInfo.Server == "https://kind-control-plane:6443" {
// When our Kind clusters running in CI are on a VM with only an internal IP address,
// then the Kind cluster will not know its own hostname and will instead advertise kind-control-plane.
// This is okay, so adjust our expectation in this case.
expectedServer = "https://kind-control-plane:6443"
}
require.Equal(t, conciergeconfigv1alpha1.SuccessStrategyStatus, actualStatusStrategy.Status)
require.Equal(t, conciergeconfigv1alpha1.FetchedKeyStrategyReason, actualStatusStrategy.Reason)
require.Equal(t, "key was fetched successfully", actualStatusStrategy.Message)
require.NotNil(t, actualStatusStrategy.Frontend)
require.Equal(t, conciergeconfigv1alpha1.TokenCredentialRequestAPIFrontendType, actualStatusStrategy.Frontend.Type)
expectedTokenRequestAPIInfo := conciergeconfigv1alpha1.TokenCredentialRequestAPIInfo{
Server: config.Host,
Server: expectedServer,
CertificateAuthorityData: base64.StdEncoding.EncodeToString(config.CAData),
}
require.Equal(t, &expectedTokenRequestAPIInfo, actualStatusStrategy.Frontend.TokenCredentialRequestAPIInfo)

View File

@@ -212,15 +212,18 @@ func TestImpersonationProxy(t *testing.T) { //nolint:gocyclo // yeah, it's compl
// this point depending on the capabilities of the cluster under test. We handle each possible case here.
switch {
case impersonatorShouldHaveStartedAutomaticallyByDefault && clusterSupportsLoadBalancers:
var originalSpecAnnotations map[string]string
if oldCredentialIssuer.Spec.ImpersonationProxy != nil {
originalSpecAnnotations = oldCredentialIssuer.Spec.ImpersonationProxy.Service.Annotations
}
// configure the credential issuer spec to have the impersonation proxy in auto mode
updateCredentialIssuer(ctx, t, env, adminConciergeClient, conciergeconfigv1alpha1.CredentialIssuerSpec{
ImpersonationProxy: &conciergeconfigv1alpha1.ImpersonationProxySpec{
Mode: conciergeconfigv1alpha1.ImpersonationProxyModeAuto,
Service: conciergeconfigv1alpha1.ImpersonationProxyServiceSpec{
Type: conciergeconfigv1alpha1.ImpersonationProxyServiceTypeLoadBalancer,
Annotations: map[string]string{
"service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout": "4000",
},
// Use the pre-existing annotations, which might include an annotation to request a private IP.
Annotations: originalSpecAnnotations,
},
},
})
@@ -1762,6 +1765,7 @@ func TestImpersonationProxy(t *testing.T) { //nolint:gocyclo // yeah, it's compl
if env.Proxy == "" {
t.Skip("Skipping ClusterIP test because squid proxy is not present")
}
clusterIPServiceURL := fmt.Sprintf("%s.%s.svc.cluster.local", impersonationProxyClusterIPName(env), env.ConciergeNamespace)
updateCredentialIssuer(ctx, t, env, adminConciergeClient, conciergeconfigv1alpha1.CredentialIssuerSpec{
ImpersonationProxy: &conciergeconfigv1alpha1.ImpersonationProxySpec{
@@ -1793,6 +1797,10 @@ func TestImpersonationProxy(t *testing.T) { //nolint:gocyclo // yeah, it's compl
})
t.Run("using externally provided TLS serving cert with stringData", func(t *testing.T) {
if env.Proxy == "" {
t.Skip("Skipping ClusterIP test because squid proxy is not present")
}
var externallyProvidedCA *certauthority.CA
externallyProvidedCA, err = certauthority.New("Impersonation Proxy Integration Test CA", 1*time.Hour)
require.NoError(t, err)
@@ -1864,6 +1872,10 @@ func TestImpersonationProxy(t *testing.T) { //nolint:gocyclo // yeah, it's compl
})
t.Run("using externally provided TLS serving cert with data []byte arrays", func(t *testing.T) {
if env.Proxy == "" {
t.Skip("Skipping ClusterIP test because squid proxy is not present")
}
var externallyProvidedCA *certauthority.CA
externallyProvidedCA, err = certauthority.New("Impersonation Proxy Integration Test CA", 1*time.Hour)
require.NoError(t, err)
@@ -1935,6 +1947,10 @@ func TestImpersonationProxy(t *testing.T) { //nolint:gocyclo // yeah, it's compl
})
t.Run("manually disabling the impersonation proxy feature", func(t *testing.T) {
if env.Proxy == "" {
t.Skip("Skipping disable test because squid proxy is not present")
}
// Update configuration to force the proxy to disabled mode
updateCredentialIssuer(ctx, t, env, adminConciergeClient, conciergeconfigv1alpha1.CredentialIssuerSpec{
ImpersonationProxy: &conciergeconfigv1alpha1.ImpersonationProxySpec{
@@ -1952,19 +1968,13 @@ func TestImpersonationProxy(t *testing.T) { //nolint:gocyclo // yeah, it's compl
}
// Check that the impersonation proxy port has shut down.
// Ideally we could always check that the impersonation proxy's port has shut down, but on clusters where we
// do not run the squid proxy we have no easy way to see beyond the load balancer to see inside the cluster,
// so we'll skip this check on clusters which have load balancers but don't run the squid proxy.
// The other cluster types that do run the squid proxy will give us sufficient coverage here.
if env.Proxy != "" {
testlib.RequireEventually(t, func(requireEventually *require.Assertions) {
// It's okay if this returns RBAC errors because this user has no role bindings.
// What we want to see is that the proxy eventually shuts down entirely.
_, err := impersonationProxyViaSquidKubeClientWithoutCredential(t, proxyServiceEndpoint).CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
isErr, _ := isServiceUnavailableViaSquidError(err, proxyServiceEndpoint)
requireEventually.Truef(isErr, "wanted service unavailable via squid error, got %v", err)
}, 20*time.Second, 500*time.Millisecond)
}
testlib.RequireEventually(t, func(requireEventually *require.Assertions) {
// It's okay if this returns RBAC errors because this user has no role bindings.
// What we want to see is that the proxy eventually shuts down entirely.
_, err := impersonationProxyViaSquidKubeClientWithoutCredential(t, proxyServiceEndpoint).CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
isErr, _ := isServiceUnavailableViaSquidError(err, proxyServiceEndpoint)
requireEventually.Truef(isErr, "wanted service unavailable via squid error, got %v", err)
}, 20*time.Second, 500*time.Millisecond)
// Check that the generated TLS cert Secret was deleted by the controller because it's supposed to clean this up
// when we disable the impersonator.
@@ -2317,6 +2327,8 @@ func updateCredentialIssuer(ctx context.Context, t *testing.T, env *testlib.Test
return err
}
t.Logf("updating CredentialIssuer spec from %+v to new spec %+v", newCredentialIssuer.Spec, spec)
spec.DeepCopyInto(&newCredentialIssuer.Spec)
_, err = adminConciergeClient.ConfigV1alpha1().CredentialIssuers().Update(ctx, newCredentialIssuer, metav1.UpdateOptions{})
return err

View File

@@ -1,10 +1,12 @@
// Copyright 2020-2024 the Pinniped contributors. All Rights Reserved.
// Copyright 2020-2025 the Pinniped contributors. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package testlib
import (
"encoding/base64"
"fmt"
"net/url"
"os"
"sort"
"strings"
@@ -145,10 +147,27 @@ type TestGithubUpstream struct {
// ProxyEnv returns a set of environment variable strings (e.g., to combine with os.Environ()) which set up the configured test HTTP proxy.
func (e *TestEnv) ProxyEnv() []string {
e.t.Helper()
if e.Proxy == "" {
return nil
}
return []string{"http_proxy=" + e.Proxy, "https_proxy=" + e.Proxy, "no_proxy=127.0.0.1"}
// We should never need to use the proxy to access the Kube API server from the kubeconfig.
// When the cluster is a Kind cluster running in CI, and if the VM has no external IP, then
the squid proxy running inside the cluster will not be able to reach the IP of the VM at all
// due to limitations of Docker networking, so in that case we must ensure that we are not
// trying to use the proxy to reach the Kubernetes API server from the outside. Therefore,
// always add the Kube API server's address or hostname to the no_proxy list.
kubeClientConfig := NewClientConfig(e.t)
parsedKubeAPIServerURL, err := url.Parse(kubeClientConfig.Host)
require.NoError(e.t, err)
return []string{
"http_proxy=" + e.Proxy,
"https_proxy=" + e.Proxy,
fmt.Sprintf("no_proxy=127.0.0.1,%s", parsedKubeAPIServerURL.Host),
}
}
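A hypothetical usage sketch of the pattern suggested by ProxyEnv's own doc comment, combining its output with os.Environ() before launching a subprocess; the helper name, the binary, and its arguments below are illustrative assumptions, not part of this change:

// runThroughTestProxy is an illustrative helper, not part of the real testlib package.
// It assumes the standard "context", "os", and "os/exec" imports are available.
func runThroughTestProxy(ctx context.Context, t *testing.T, testEnv *TestEnv, name string, args ...string) string {
	t.Helper()
	cmd := exec.CommandContext(ctx, name, args...) // hypothetical command to run under the proxy settings
	// Route the subprocess's HTTP(S) traffic through the configured test proxy while
	// keeping the rest of the inherited environment intact. If no proxy is configured,
	// ProxyEnv returns nil and the environment is passed through unchanged.
	cmd.Env = append(os.Environ(), testEnv.ProxyEnv()...)
	output, err := cmd.CombinedOutput()
	require.NoError(t, err, string(output))
	return string(output)
}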
// memoizedTestEnvsByTest maps *testing.T pointers to *TestEnv. It exists so that we don't do all the