mirror of https://github.com/google/nomulus synced 2025-12-23 06:15:42 +00:00

Convert gsutil to gcloud storage (#2670)

Use of gsutil is discouraged and deprecated; see https://cloud.google.com/storage/docs/gsutil
Author: gbrodman
Committed: 2025-02-07 16:01:19 -05:00 (via GitHub)
Parent: a63812160e
Commit: 34103ec815
13 changed files with 56 additions and 56 deletions
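The change is a mechanical substitution of `gsutil` invocations with their `gcloud storage` equivalents. As a rough sketch of the mapping applied across these files (`gsutil_to_gcloud` is a hypothetical helper, not part of this commit), only subcommand names are translated; flag changes are noted in comments:

```shell
#!/bin/sh
# Sketch of the gsutil -> gcloud storage subcommand mapping used in this commit.
# Flag changes handled by hand in the diff, not by this helper:
#   gsutil -m        -> dropped (gcloud storage parallelizes by default)
#   rsync -d -r      -> rsync --delete-unmatched-destination-objects --recursive
#   mb -p <project>  -> buckets create --project <project>
gsutil_to_gcloud() {
  sub="$1"
  case "$sub" in
    cp|ls|cat|rsync)
      # These subcommands keep their names under "gcloud storage".
      echo "gcloud storage $*" ;;
    mb)
      # Bucket creation moves to the "buckets create" surface.
      shift; echo "gcloud storage buckets create $*" ;;
    compose)
      # Object composition moves to the "objects compose" surface.
      shift; echo "gcloud storage objects compose $*" ;;
    stat)
      # Existence/metadata checks move to "objects describe".
      shift; echo "gcloud storage objects describe $*" ;;
    *)
      echo "unmapped gsutil subcommand: $sub" >&2; return 1 ;;
  esac
}
```

For example, `gsutil_to_gcloud cp file gs://bucket/` prints `gcloud storage cp file gs://bucket/`, matching the pattern of most edits below.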

View File

@@ -61,7 +61,7 @@ $ mkdir /tmp/brda.$$; for date in 2015-02-26 2015-03-05; \
* Store the generated files to the GCS bucket.
```shell
-$ gsutil -m cp /tmp/brda.$$/*.{ryde,sig} gs://{PROJECT-ID}-icann-brda/`
+$ gcloud storage cp /tmp/brda.$$/*.{ryde,sig} gs://{PROJECT-ID}-icann-brda/`
```
* Mirror the files in the GCS bucket to the sFTP server.

View File

@@ -99,12 +99,12 @@ that no cooldown period is necessary.
## Listing deposits in Cloud Storage
-You can list the files in Cloud Storage for a given TLD using the gsutil tool.
+You can list the files in Cloud Storage for a given TLD using the gcloud storage tool.
All files are stored in the {PROJECT-ID}-rde bucket, where {PROJECT-ID} is the
name of the App Engine project for the particular environment you are checking.
```shell
-$ gsutil ls gs://{PROJECT-ID}-rde/zip_2015-05-16*
+$ gcloud storage ls gs://{PROJECT-ID}-rde/zip_2015-05-16*
gs://{PROJECT-ID}-rde/zip_2015-05-16-report.xml.ghostryde
gs://{PROJECT-ID}-rde/zip_2015-05-16.xml.ghostryde
gs://{PROJECT-ID}-rde/zip_2015-05-16.xml.length
@@ -167,7 +167,7 @@ Sometimes you'll want to take a peek at the contents of a deposit that's been
staged to cloud storage. Use this command:
```shell
-$ gsutil cat gs://{PROJECT-ID}-rde/foo.ghostryde | nomulus -e production ghostryde --decrypt | less
+$ gcloud storage cat gs://{PROJECT-ID}-rde/foo.ghostryde | nomulus -e production ghostryde --decrypt | less
```
## Identifying which phase of the process failed
@@ -242,7 +242,7 @@ $ nomulus -e production ghostryde --encrypt \
# 3. Copy to Cloud Storage so RdeUploadTask can find them.
-$ gsutil cp ${tld}_${date}_full_S1_R0{,-report}.xml.ghostryde gs://{PROJECT-ID}-rde/
+$ gcloud storage cp ${tld}_${date}_full_S1_R0{,-report}.xml.ghostryde gs://{PROJECT-ID}-rde/
```
## Updating an RDE cursor

View File

@@ -29,12 +29,11 @@ service like [Spinnaker](https://www.spinnaker.io/) for release management.
## Detailed Instruction
We use [`gcloud`](https://cloud.google.com/sdk/gcloud/) and
-[`terraform`](https://terraform.io) to configure the proxy project on GCP. We
-use [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) to
-deploy the proxy to the project. Additionally,
-[`gsutil`](https://cloud.google.com/storage/docs/gsutil) is used to create GCS
-bucket for storing the terraform state file. These instructions assume that all
-four tools are installed.
+[`terraform`](https://terraform.io) to configure the proxy project on GCP and to create a GCS
+bucket for storing the terraform state file. We use
+[`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) to deploy
+the proxy to the project. These instructions assume that all three tools are
+installed.
### Setup GCP project
@@ -59,8 +58,8 @@ environment.
In the proxy project, create a GCS bucket to store the terraform state file:
```bash
-$ gsutil config # only if you haven't run gsutil before.
-$ gsutil mb -p <proxy-project> gs://<bucket-name>/
+$ gcloud auth login # only if you haven't run gcloud before.
+$ gcloud storage buckets create gs://<bucket-name>/ --project <proxy-project>
```
### Obtain a domain and SSL certificate
@@ -185,7 +184,7 @@ This encrypted file is then uploaded to a GCS bucket specified in the
`config.tf` file.
```bash
-$ gsutil cp <combined_secret.pem.enc> gs://<your-certificate-bucket>
+$ gcloud storage cp <combined_secret.pem.enc> gs://<your-certificate-bucket>
```
### Edit proxy config file
@@ -379,8 +378,8 @@ A file named `ssl-cert-key.pem.enc` will be created. Upload it to a GCS bucket
in the proxy project. To create a bucket and upload the file:
```bash
-$ gsutil mb -p <proxy-project> gs://<bucket-name>
-$ gustil cp ssl-cert-key.pem.enc gs://<bucket-name>
+$ gcloud storage buckets create gs://<bucket-name> --project <proxy-project>
+$ gcloud storage cp ssl-cert-key.pem.enc gs://<bucket-name>
```
The proxy service account needs the "Cloud KMS CryptoKey Decrypter" role to
@@ -396,9 +395,9 @@ The service account also needs the "Storage Object Viewer" role to retrieve the
encrypted file from GCS:
```bash
-$ gsutil iam ch \
-    serviceAccount:<service-account-email>:roles/storage.objectViewer \
-    gs://<bucket-name>
+$ gcloud storage buckets add-iam-policy-binding gs://<bucket-name> \
+    --member=serviceAccount:<service-account-email> \
+    --role=roles/storage.objectViewer
```
### Proxy configuration

View File

@@ -21,7 +21,7 @@ function fetchVersion() {
local deployed_system=${1}
local env=${2}
local dev_project=${3}
-echo $(gsutil cat \
+echo $(gcloud storage cat \
gs://${dev_project}-deployed-tags/${deployed_system}.${env}.tag)
}

View File

@@ -37,9 +37,9 @@ steps:
else
project_id="domain-registry-${_ENV}"
fi
-gsutil cp gs://$PROJECT_ID-deploy/${TAG_NAME}/nomulus-config-${_ENV}.yaml .
-gsutil cp gs://$PROJECT_ID-deploy/${TAG_NAME}/cloud-scheduler-tasks-${_ENV}.xml .
-gsutil cp gs://$PROJECT_ID-deploy/${TAG_NAME}/cloud-tasks-queue.xml .
+gcloud storage cp gs://$PROJECT_ID-deploy/${TAG_NAME}/nomulus-config-${_ENV}.yaml .
+gcloud storage cp gs://$PROJECT_ID-deploy/${TAG_NAME}/cloud-scheduler-tasks-${_ENV}.xml .
+gcloud storage cp gs://$PROJECT_ID-deploy/${TAG_NAME}/cloud-tasks-queue.xml .
deployCloudSchedulerAndQueue nomulus-config-${_ENV}.yaml cloud-scheduler-tasks-${_ENV}.xml $project_id --gke
deployCloudSchedulerAndQueue nomulus-config-${_ENV}.yaml cloud-tasks-queue.xml $project_id --gke
# Save the deployed tag for the current environment on GCS, and update the
@@ -51,12 +51,12 @@ steps:
- |
set -e
echo ${TAG_NAME} | \
-gsutil cp - gs://$PROJECT_ID-deployed-tags/nomulus-gke.${_ENV}.tag
+gcloud storage cp - gs://$PROJECT_ID-deployed-tags/nomulus-gke.${_ENV}.tag
now=$(TZ=UTC date '+%Y-%m-%dT%H:%M:%S.%3NZ')
echo "${TAG_NAME},$now" | \
-gsutil cp - gs://$PROJECT_ID-deployed-tags/nomulus-gke.${_ENV}.tmp
+gcloud storage cp - gs://$PROJECT_ID-deployed-tags/nomulus-gke.${_ENV}.tmp
# Atomically append uploaded tmp file to nomulus-gke.${_ENV}.versions
-gsutil compose \
+gcloud storage objects compose \
gs://$PROJECT_ID-deployed-tags/nomulus-gke.${_ENV}.versions \
gs://$PROJECT_ID-deployed-tags/nomulus-gke.${_ENV}.tmp \
gs://$PROJECT_ID-deployed-tags/nomulus-gke.${_ENV}.versions

View File

@@ -34,9 +34,9 @@ steps:
else
project_id="domain-registry-${_ENV}"
fi
-gsutil cp gs://$PROJECT_ID-deploy/${TAG_NAME}/nomulus-config-${_ENV}.yaml .
-gsutil cp gs://$PROJECT_ID-deploy/${TAG_NAME}/cloud-scheduler-tasks-${_ENV}.xml .
-gsutil cp gs://$PROJECT_ID-deploy/${TAG_NAME}/cloud-tasks-queue.xml .
+gcloud storage cp gs://$PROJECT_ID-deploy/${TAG_NAME}/nomulus-config-${_ENV}.yaml .
+gcloud storage cp gs://$PROJECT_ID-deploy/${TAG_NAME}/cloud-scheduler-tasks-${_ENV}.xml .
+gcloud storage cp gs://$PROJECT_ID-deploy/${TAG_NAME}/cloud-tasks-queue.xml .
deployCloudSchedulerAndQueue nomulus-config-${_ENV}.yaml cloud-scheduler-tasks-${_ENV}.xml $project_id
deployCloudSchedulerAndQueue nomulus-config-${_ENV}.yaml cloud-tasks-queue.xml $project_id
# Deploy the GAE config files.
@@ -54,7 +54,7 @@ steps:
else
project_id="domain-registry-${_ENV}"
fi
-gsutil cp gs://$PROJECT_ID-deploy/${TAG_NAME}/${_ENV}.tar .
+gcloud storage cp gs://$PROJECT_ID-deploy/${TAG_NAME}/${_ENV}.tar .
tar -xvf ${_ENV}.tar
unzip default/WEB-INF/lib/core.jar
gcloud -q --project $project_id app deploy default/WEB-INF/appengine-generated/dispatch.yaml
@@ -67,7 +67,7 @@ steps:
- |
set -e
echo ${TAG_NAME} | \
-gsutil cp - gs://$PROJECT_ID-deployed-tags/nomulus.${_ENV}.tag
+gcloud storage cp - gs://$PROJECT_ID-deployed-tags/nomulus.${_ENV}.tag
# Update the release to AppEngine version mapping.
if [ ${_ENV} == production ]; then
project_id="domain-registry"
@@ -85,9 +85,9 @@ steps:
echo "Expecting exactly five active services. Found $num_versions"
exit 1
fi
-gsutil cp "$local_map" gs://$PROJECT_ID-deployed-tags/nomulus.${_ENV}.tmp
+gcloud storage cp "$local_map" gs://$PROJECT_ID-deployed-tags/nomulus.${_ENV}.tmp
# Atomically append uploaded tmp file to nomulus.${_ENV}.versions
-gsutil compose \
+gcloud storage objects compose \
gs://$PROJECT_ID-deployed-tags/nomulus.${_ENV}.versions \
gs://$PROJECT_ID-deployed-tags/nomulus.${_ENV}.tmp \
gs://$PROJECT_ID-deployed-tags/nomulus.${_ENV}.versions

View File

@@ -19,15 +19,16 @@ steps:
# uploading process.
- name: 'gcr.io/${PROJECT_ID}/builder:live'
entrypoint: /bin/bash
-args: ['gsutil', '-m', 'rsync', '-d', '-r', 'build/docs/javadoc', 'gs://${PROJECT_ID}-javadoc']
+args: ['gcloud', 'storage', 'rsync', '--delete-unmatched-destination-objects', '--recursive', 'build/docs/javadoc',
+       'gs://${PROJECT_ID}-javadoc']
# Upload the files to GCS
# We don't use GCB's built-in artifacts uploader because we want to delete
# the existing files in the bucket first, and we want to parallelize the
# uploading process.
- name: 'gcr.io/${PROJECT_ID}/builder:live'
entrypoint: /bin/bash
-args: ['gsutil', '-m', 'rsync', '-d', '-r', 'db/src/main/resources/sql/er_diagram',
-       'gs://${PROJECT_ID}-er-diagram']
+args: ['gcloud', 'storage', 'rsync', '--delete-unmatched-destination-objects', '--recursive',
+       'db/src/main/resources/sql/er_diagram', 'gs://${PROJECT_ID}-er-diagram']
timeout: 3600s
options:
machineType: 'E2_HIGHCPU_32'

View File

@@ -226,7 +226,7 @@ steps:
done
done
# Upload the Gradle binary to GCS if it does not exist and point URL in Gradle wrapper to it.
-- name: 'gcr.io/cloud-builders/gsutil'
+- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: /bin/bash
args:
- -c
@@ -237,17 +237,17 @@ steps:
gradle_bin=$(basename $gradle_url)
gcs_loc="domain-registry-maven-repository/gradle"
curl -O -L ${gradle_url}
-if gsutil -q stat gs://${gcs_loc}/${gradle_bin}
+if gcloud storage objects describe gs://${gcs_loc}/${gradle_bin}
then
local_md5=$(md5sum ${gradle_bin} | awk '{print $1}')
-remote_md5=$(gsutil hash -h gs://${gcs_loc}/${gradle_bin} | grep md5 | awk '{print $3}')
+remote_md5=$(gcloud storage hash -h gs://${gcs_loc}/${gradle_bin} | grep md5 | awk '{print $3}')
if [[ ${local_md5} != ${remote_md5} ]]
then
echo "${gradle_bin} HAS CHANGED ON GRADLE WEBSITE, USING THE BINARY ON GCS."
fi
else
-gsutil cp $gradle_bin gs://${gcs_loc}/
-gsutil acl ch -u AllUsers:R gs://${gcs_loc}/${gradle_bin}
+gcloud storage cp $gradle_bin gs://${gcs_loc}/
+gcloud storage objects update --predefined-acl=publicRead gs://${gcs_loc}/${gradle_bin}
fi
rm ${gradle_bin}
sed -i s%services.gradle.org/distributions%storage.googleapis.com/${gcs_loc}% \

View File

@@ -62,7 +62,7 @@ steps:
- -c
- |
set -e
-gsutil cp gs://$PROJECT_ID-deploy/${TAG_NAME}/schema.jar \
+gcloud storage cp gs://$PROJECT_ID-deploy/${TAG_NAME}/schema.jar \
/flyway/jars
# Deploy SQL schema
- name: 'gcr.io/$PROJECT_ID/schema_deployer:latest'
@@ -83,7 +83,7 @@ steps:
- |
set -e
echo ${TAG_NAME} | \
-gsutil cp - gs://$PROJECT_ID-deployed-tags/sql.${_ENV}.tag\
+gcloud storage cp - gs://$PROJECT_ID-deployed-tags/sql.${_ENV}.tag\
timeout: 3600s
options:
machineType: 'E2_HIGHCPU_32'

View File

@@ -64,9 +64,9 @@ steps:
- -c
- |
set -e
-deployed_schema_tag=$(gsutil cat \
+deployed_schema_tag=$(gcloud storage cat \
gs://$PROJECT_ID-deployed-tags/sql.${_ENV}.tag)
-gsutil cp gs://$PROJECT_ID-deploy/$deployed_schema_tag/schema.jar \
+gcloud storage cp gs://$PROJECT_ID-deploy/$deployed_schema_tag/schema.jar \
/schema
# Verify the schema
- name: 'gcr.io/$PROJECT_ID/schema_verifier:latest'

View File

@@ -18,11 +18,11 @@ steps:
# Rsync the folder where deployment artifacts are uploaded.
- name: 'gcr.io/$PROJECT_ID/builder:latest'
args:
-- gsutil
-- -m
+- gcloud
+- storage
- rsync
-- -d
-- -r
+- --delete-unmatched-destination-objects
+- --recursive
- gs://$PROJECT_ID-deploy/${TAG_NAME}
- gs://$PROJECT_ID-deploy/live
- # Tag nomulus

View File

@@ -122,8 +122,8 @@ class RollbackTestCase(unittest.TestCase):
'.*gcloud app services set-traffic.*')
self.assertRegex(steps[9].info(), '.*gcloud app versions stop.*')
self.assertRegex(steps[13].info(),
-'.*echo nomulus-20201014-RC00 | gsutil cat -.*')
-self.assertRegex(steps[14].info(), '.*gsutil -m rsync -d .*')
+'.*echo nomulus-20201014-RC00 | gcloud storage cat -.*')
+self.assertRegex(steps[14].info(), '.*gcloud storage rsync --delete-unmatched-destination-objects .*')
if __name__ == '__main__':

View File

@@ -26,8 +26,8 @@ import common
class RollbackStep:
"""One rollback step.
-Most steps are implemented using commandline tools, e.g., gcloud and
-gsutil, and execute their commands by forking a subprocess. Each step
+Most steps are implemented using commandline tools, e.g., gcloud,
+and execute their commands by forking a subprocess. Each step
also has a info method that returns its command with a description.
Two steps are handled differently. The _UpdateDeployTag step gets a piped
@@ -147,7 +147,7 @@ class _UpdateDeployTag(RollbackStep):
destination: str
def execute(self) -> None:
-with subprocess.Popen(('gsutil', 'cp', '-', self.destination),
+with subprocess.Popen(('gcloud', 'storage', 'cp', '-', self.destination),
stdin=subprocess.PIPE) as p:
try:
p.communicate(self.nom_tag.encode('utf-8'))
@@ -165,7 +165,7 @@ def update_deploy_tags(dev_project: str, env: str,
return _UpdateDeployTag(
f'Update Nomulus tag in {env}',
-(f'echo {nom_tag} | gsutil cp - {destination}', ''), nom_tag,
+(f'echo {nom_tag} | gcloud storage cp - {destination}', ''), nom_tag,
destination)
@@ -183,4 +183,4 @@ def sync_live_release(dev_project: str, nom_tag: str) -> RollbackStep:
return RollbackStep(
f'Syncing {artifacts_folder} to {live_folder}.',
-('gsutil', '-m', 'rsync', '-d', artifacts_folder, live_folder))
+('gcloud', 'storage', 'rsync', '--delete-unmatched-destination-objects', artifacts_folder, live_folder))