- "Ready" condition & supporting conditions
- Legacy "Phase" for convenience
- Refactor the newCachedJWTAuthenticator() func to make it easier to
  provide additional conditions
- Update JWTAuthenticator.Status type (sketched below)
- Update RBAC for SA to get/watch/update JWTAuthenticator.Status
- Update logger to plog, add tests for logs & statuses
- Update Sync() to reduce re-enqueueing when the error is config/user
  managed; perhaps remove validateJWKSResponse()
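A rough sketch of the status shape described above, assuming the
standard metav1.Condition type is used; the field and constant names
here are illustrative assumptions, not the actual API:

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// JWTAuthenticatorPhase is the legacy coarse-grained summary, kept for
// convenience alongside the more detailed conditions.
type JWTAuthenticatorPhase string

const (
	// PhaseReady means the "Ready" condition is true.
	PhaseReady JWTAuthenticatorPhase = "Ready"
	// PhaseError means at least one supporting condition is failing.
	PhaseError JWTAuthenticatorPhase = "Error"
)

// JWTAuthenticatorStatus is the observed state: a top-level "Ready"
// condition that summarizes the supporting conditions, plus the phase.
type JWTAuthenticatorStatus struct {
	// Conditions holds "Ready" and its supporting conditions.
	Conditions []metav1.Condition `json:"conditions,omitempty"`
	// Phase mirrors the "Ready" condition for convenience.
	Phase JWTAuthenticatorPhase `json:"phase,omitempty"`
}
```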
- Update the kind config to include a local registry
- Configure the kind cluster to talk to the local registry
- Docker build & push Pinniped dev code to the local registry
- Deploy dev code of the following via the local registry:
- concierge
- supervisor
- local-user-authenticator
- Update values.yaml for supervisor & concierge to be schema files
- Update values.yaml for local-user-authenticator to be a schema file
- Add ytt openapi-v3 generation to the build carvel package script
- Add supervisor carvel package files
- Add concierge carvel package files
- Add local-user-authenticator carvel package files
- Add hack script to build openapi-v3 files
- Add --post-install to hack/prepare-for-integration-tests.sh
- Clean up the local registry in kind-down.sh
- Move webhook_ca_bundle in the hack script
- Adjust where the post-install script is called
- Change the image_pull_dockerconfigjson type in deploy/{}/values.yml
  to a base64-encoded string
- Add PINNIPED_USE_LOCAL_KIND_REGISTRY env var
- Ensures that regular use of hack/prepare-for-integration-tests.sh
  still works, both with and without the new env var:
- PINNIPED_USE_LOCAL_KIND_REGISTRY=1 ./hack/prepare-for-integration-tests.sh --clean --alternate-deploy ./hack/noop.sh --post-install ./hack/build-carvel-packages.sh
- ./hack/prepare-for-integration-tests.sh --clean
- Check PINNIPED_USE_LOCAL_KIND_REGISTRY when calling kind-down.sh from
  hack/prepare-for-integration-tests.sh
- Split carvel build & deploy scripts, add --pre-install flag
- Add a --pre-install flag to hack/prepare-for-integration-tests.sh
- Split /hack/build-carvel-packages.sh and /hack/deploy-carvel-packages.sh
- Remove --alternate-deploy-* flags from hack script
- Move scripts to hack/lib/carvel_packages
- Split build.sh and deploy.sh
- Separate template files from install artifacts
- Generate all install artifacts in $root/deploy_carvel
- Remove $root/deploy_carvel from git
- Extract ytt values to a file in hack/prepare-for-integration-tests.sh
- Pass the registry/repo to the Carvel build scripts
Where possible, use securityContext settings which will work with the
most restrictive Pod Security Admission policy level (as of Kube 1.25);
a sketch of such settings follows below. Where privileged containers
are needed, use the namespace-level annotation to allow them.
Also make similar adjustments to some integration tests so that they
pass on test clusters which use restricted PSA policies.
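For illustration, here is a securityContext built in Go (as a
controller creating pods might do) that satisfies the "restricted" Pod
Security Standards level; this is a generic sketch, not necessarily the
exact settings used:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool { return &b }

// restrictedSecurityContext returns container settings that pass the
// "restricted" Pod Security Admission level as of Kube 1.25.
func restrictedSecurityContext() *corev1.SecurityContext {
	return &corev1.SecurityContext{
		RunAsNonRoot:             boolPtr(true),
		AllowPrivilegeEscalation: boolPtr(false),
		ReadOnlyRootFilesystem:   boolPtr(true),
		Capabilities: &corev1.Capabilities{
			Drop: []corev1.Capability{"ALL"},
		},
		SeccompProfile: &corev1.SeccompProfile{
			Type: corev1.SeccompProfileTypeRuntimeDefault,
		},
	}
}
```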
- Note that v0.8.0 no longer supports the "trivialVersions=true"
  command-line option, so remove that from update-codegen.sh.
  It doesn't seem to impact the output (our generated CRD YAML files).
- Used to determine on which port the impersonation proxy will bind
- Defaults to 8444, which is the old hard-coded port value
- Allow the port number to be configured to any value within the
  range 1024 to 65535 (see the sketch after this list)
- This commit does not include adding new config knobs to the ytt
values file, so while it is possible to change this port without
needing to recompile, it is not convenient
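A minimal sketch of the port plumbing described above (the names are
assumptions, not the real config code): default to the old hard-coded
8444 and reject anything outside 1024 to 65535:

```go
package impersonatorconfig

import "fmt"

// defaultPort is the old hard-coded port value.
const defaultPort = 8444

// validatePort applies the default when the port is unset and enforces
// the allowed range for configured values.
func validatePort(port int) (int, error) {
	if port == 0 {
		return defaultPort, nil
	}
	if port < 1024 || port > 65535 {
		return 0, fmt.Errorf("port %d is outside the allowed range 1024 to 65535", port)
	}
	return port, nil
}
```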
This makes it so that our service selector will match exactly the
YAML we specify instead of including an extra "kapp.k14s.io/app" key.
This will take us closer to the standard kubectl behavior, which is
desirable since we want to avoid future bugs that only manifest when
kapp is not used.
Signed-off-by: Monis Khan <mok@vmware.com>
Fixes #801. The solution is complicated by the fact that the Selector
field of Deployments is immutable. It would have been easy to just
make the Selectors of the main Concierge Deployment, the Kube cert agent
Deployment, and the various Services use more specific labels, but
that would break upgrades. Instead, we make the Pod template labels and
the Service selectors more specific, because those are not immutable,
and then handle the Deployment selectors in a special way.
For the main Concierge and Supervisor Deployments, we cannot change
their selectors, so they remain "app: app_name", and we make other
changes to ensure that only the intended pods are selected. We keep the
original "app" label on those pods and remove the "app" label from the
pods of the Kube cert agent Deployment. By removing it from the Kube
cert agent pods, there is no longer any chance that they will
accidentally get selected by the main Concierge Deployment.
For the Kube cert agent Deployment, we can change the immutable selector
by deleting and recreating the Deployment. The new selector uses only
the unique label that has always been applied to the pods of that
deployment. Upon recreation, these pods no longer have the "app" label,
so they will not be selected by the main Concierge Deployment's
selector. (A sketch of this delete-and-recreate approach appears below.)
The selectors of all Services have been updated to use new labels to
more specifically target the intended pods. For the Concierge Services,
this will prevent them from accidentally including the Kube cert agent
pods. For the Supervisor Services, we follow the same convention just
to be consistent and to help future-proof the Supervisor app in case it
ever has a second Deployment added to it.
The selector of the auto-created impersonation proxy Service was
also previously using the "app" label. There is no change to this
Service because that label will now select the correct pods, since
the Kube cert agent pods no longer have that label. It would be possible
to update that selector to use the new more specific label, but then we
would need to invent a way to pass that label into the controller, so
it seemed like more work than was justified.
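A rough client-go sketch of the delete-and-recreate step described
above for the Kube cert agent Deployment; the function shape and error
handling are simplified assumptions, not the actual controller code:

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	apiequality "k8s.io/apimachinery/pkg/api/equality"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureDeployment updates the Deployment in place when possible, but
// deletes and recreates it when the immutable selector must change.
func ensureDeployment(ctx context.Context, client kubernetes.Interface, desired *appsv1.Deployment) error {
	deployments := client.AppsV1().Deployments(desired.Namespace)

	existing, err := deployments.Get(ctx, desired.Name, metav1.GetOptions{})
	if err != nil {
		// Real code would check for a NotFound error specifically.
		_, err = deployments.Create(ctx, desired, metav1.CreateOptions{})
		return err
	}

	if !apiequality.Semantic.DeepEqual(existing.Spec.Selector, desired.Spec.Selector) {
		// The selector is immutable, so delete and recreate instead.
		if err := deployments.Delete(ctx, existing.Name, metav1.DeleteOptions{}); err != nil {
			return err
		}
		_, err = deployments.Create(ctx, desired, metav1.CreateOptions{})
		return err
	}

	existing.Spec = desired.Spec
	_, err = deployments.Update(ctx, existing, metav1.UpdateOptions{})
	return err
}
```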
These fields were changed as a minor hardening attempt when we switched
to Distroless, but I bungled the field names and we never noticed
because kapp doesn't apply API validations.
This change fixes the field names so they act as originally intended.
We should also follow up with a change that validates all of our
installation manifests in CI (one possible approach is sketched below).
Signed-off-by: Matt Moyer <moyerm@vmware.com>
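One possible CI check for this class of mistake (a sketch; the file
path is a placeholder): strict-decode each rendered manifest into its
typed Kubernetes object, so misspelled fields are rejected rather than
silently ignored:

```go
package main

import (
	"fmt"
	"os"

	appsv1 "k8s.io/api/apps/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Hypothetical path to a rendered installation manifest.
	raw, err := os.ReadFile("deployment.yaml")
	if err != nil {
		panic(err)
	}

	// UnmarshalStrict fails on fields that don't exist in the API
	// type, catching typos that kapp would not flag.
	var dep appsv1.Deployment
	if err := yaml.UnmarshalStrict(raw, &dep); err != nil {
		fmt.Fprintf(os.Stderr, "invalid manifest: %v\n", err)
		os.Exit(1)
	}
}
```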