mirror of https://github.com/google/nomulus synced 2026-01-31 10:02:28 +00:00

Compare commits

...

240 Commits

Author SHA1 Message Date
Rachel Guan
c9efa61198 Update expiring certificate notification email content (#1294)
* Update expiring certificate notification email content

* Improve test cases
2021-08-30 11:51:05 -04:00
gbrodman
054c0625a8 Add SQL functionality to DeleteProberDataAction (#1218)
This includes a change to how the JPA transaction manager handles
existence and load checks for entities with compound IDs. Previously, we
relied on the fields all being named the same in the ID entity and the
parent entity. This didn't work for History objects (e.g. DomainHistory)
so existence checks were broken. Now, we use the methods the same way
that Hibernate does (if possible).

Note as well that there's a bit of semi-duplicated logic in
DeleteProberDataAction (between the mapper and the SQL logic). The
mapper code will be deleted once we've shifted to SQL, and for now it's
better to keep it in place for logging purposes.
2021-08-27 21:09:08 -04:00
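For context, a minimal sketch of the kind of compound-ID existence check described above, using plain JPA and a hypothetical composite ID class (the names and the Nomulus transaction-manager plumbing are assumptions, not the actual implementation):

```java
import java.io.Serializable;
import java.util.Objects;
import javax.persistence.EntityManager;

/** Illustrative only: existence check for an entity with a compound primary key. */
public final class CompoundIdCheck {

  /** Hypothetical composite ID mirroring an entity's @IdClass fields. */
  public static final class DomainHistoryId implements Serializable {
    private String domainRepoId;
    private long revisionId;

    public DomainHistoryId(String domainRepoId, long revisionId) {
      this.domainRepoId = domainRepoId;
      this.revisionId = revisionId;
    }

    @Override
    public boolean equals(Object o) {
      return o instanceof DomainHistoryId
          && ((DomainHistoryId) o).domainRepoId.equals(domainRepoId)
          && ((DomainHistoryId) o).revisionId == revisionId;
    }

    @Override
    public int hashCode() {
      return Objects.hash(domainRepoId, revisionId);
    }
  }

  /** Looks up by the composite key object rather than by matching individually named fields. */
  public static boolean exists(EntityManager em, Class<?> entityClass, DomainHistoryId id) {
    return em.find(entityClass, id) != null;
  }
}
```
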
gbrodman
b03639d7fc Implement read-only transaction manager modes for R3.0 migration (#1241)
This involves:
- Altering both transaction managers to check for a read-only mode at
the start of standard write actions (e.g. delete, put).
- Altering both raw layers (entity manager, ofy) to throw exceptions on
write actions as well
- Implementing bypass routes for reading / setting / removing the schedule itself
so that we don't get "stuck"
2021-08-27 15:59:16 -04:00
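A hedged sketch of the read-only guard idea: a helper consulted before any put/delete, driven by a (hypothetical) migration-schedule lookup. The enum and method names are illustrative, not the actual Nomulus API.

```java
import java.util.function.Supplier;

/** Illustrative read-only guard placed at the start of standard write actions. */
public final class ReadOnlyGuard {

  /** Hypothetical snapshot of the current migration phase. */
  public enum MigrationState {
    READ_WRITE,
    READ_ONLY
  }

  private final Supplier<MigrationState> stateSupplier;

  public ReadOnlyGuard(Supplier<MigrationState> stateSupplier) {
    this.stateSupplier = stateSupplier;
  }

  /** Throws before any write when the schedule says we are in a read-only phase. */
  public void checkWritable() {
    if (stateSupplier.get() == MigrationState.READ_ONLY) {
      throw new IllegalStateException("Database is in read-only mode; writes are disallowed");
    }
  }
}
```
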
Rachel Guan
bd9af0de84 Improve logging for SendExpiringCertificateNotificationEmailAction.java (#1302)
* Improve logging for SendExpiringCertificateNotificationEmailAction.java
2021-08-27 13:11:54 -04:00
gbrodman
ae911a5280 Fix semantic merge conflict accidentally introduced (#1301) 2021-08-26 16:15:56 -04:00
gbrodman
d57597f40f Clean up ReplicateToDatastoreAction and tests (#1299)
* Clean up ReplicateToDatastoreAction and tests

1. applyTransaction should throw an error if it fails; this allows us to
have more information in the caller (and it shouldn't usually happen)
2. Set a response code + payload now, since this is an action that is
called by cron
3. Add a method to the test log subject that allows us to check if a
severe log with a particular Throwable cause was logged (since the cause
isn't contained in the log message itself directly)
2021-08-25 14:45:05 -06:00
gbrodman
2641d0d462 Save indexes when replaying EppResources SQL->DS (#1300)
* Save indexes when replaying EppResources SQL->DS

We implement this similarly to how we implement the
beforeSqlSaveOnReplay callback in the other direction -- a
beforeDatastoreSaveOnReplay method that is called when replaying a
Mutation to Datastore. This means that the asynchronous replay will
create the relevant ForeignKeyIndex and EppResourceIndex objects for
EppResources saved when SQL is primary.
2021-08-25 14:44:44 -06:00
sarahcaseybot
5b41f0b9b6 Remove ClaimsList from Datastore Schema (#1298)
* Remove ClaimsList from Datastore schema

* Remove some Datastore references

* Remove unnecessary annotations
2021-08-25 11:58:44 -04:00
Lai Jiang
1a26677d72 Implement a util class to manage push queues using Cloud Tasks API (#1290)
* Implement a util class to manage push queues using Cloud Tasks API

Push queues were part of App Engine when they debuted. As a result, the
Task Queue API was part of the App Engine SDK and can only be used in the
App Engine classic runtime. The new Cloud Tasks API can be used in any
runtime, but it only supports push queues. In this PR we implement a util
class (CloudTasksUtils), analogous to TaskQueueUtils, to handle enqueuing tasks
to push queues using Cloud Tasks. One action (TldFanoutAction) was
converted to use the new API as a demo. Mass migration of other call sites of
the old API will follow in a separate PR.

TESTED=deployed to alpha and verified that tasks are correctly enqueued
and executed.
2021-08-24 21:13:54 -04:00
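For reference, a minimal sketch of enqueuing a push-queue task with the google-cloud-tasks client library; the project, location, queue, and endpoint names are placeholders, and CloudTasksUtils itself wraps this differently.

```java
import com.google.cloud.tasks.v2.AppEngineHttpRequest;
import com.google.cloud.tasks.v2.CloudTasksClient;
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.cloud.tasks.v2.QueueName;
import com.google.cloud.tasks.v2.Task;

public final class EnqueueExample {
  public static void main(String[] args) throws Exception {
    // Placeholder project/location/queue; in Nomulus these come from configuration.
    QueueName queue = QueueName.of("my-project", "us-central1", "example-queue");
    try (CloudTasksClient client = CloudTasksClient.create()) {
      Task task =
          Task.newBuilder()
              .setAppEngineHttpRequest(
                  AppEngineHttpRequest.newBuilder()
                      .setHttpMethod(HttpMethod.GET)
                      // Hypothetical relative URI of the action handling the task.
                      .setRelativeUri("/_dr/cron/fanout?queue=example")
                      .build())
              .build();
      client.createTask(queue, task);
    }
  }
}
```
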
gbrodman
f1beeb4016 Add double-replay to remaining existing ReplayExtension calls (#1297)
The only other change is that we need to reconstitute
serverApproveEntities for DomainTransferData in more situations (to fill
out the ofy keys)
2021-08-23 15:08:09 -04:00
gbrodman
5c33286056 Compare SQL and Datastore objects in SQL->DS replay testing (#1291)
Add double-replay to the Host*Flow tests to show how this works. The
only change to the double replay itself is that now we store the
Datastore entity in the TransactionEntity object -- this is because we
use Objectify to serialize the objects into bytes and we need it to know
about the entity in question.
2021-08-23 11:05:14 -04:00
gbrodman
603a95d719 Add DS->SQL replay cron job to production (#1292)
* Add DS->SQL replay cron job to production

This won't do anything until we set the migration schedule to
DATASTORE_PRIMARY. Actions in order:

1. Add this cron job (it'll be a no-op)
2. Run the init-sql-pipeline to populate production's SQL DB
3. Set the SqlReplayCheckpoint to a time before the smear backup that
was used in step #1 (maybe 30 minutes)
4. Set the database migration schedule to transition to
DATASTORE_PRIMARY at some point
2021-08-23 07:59:51 -06:00
gbrodman
0a3774d3f7 Add withDsAndCloudSql to flow test (#1293)
* Add withDsAndCloudSql to flow test

Not sure why this wasn't failing before
2021-08-20 09:07:38 -06:00
Rachel Guan
cc60b27dd3 Add sending notification email mechanism for expiring certificates (#1179)
* Resolve rebase conflict

* Fix and improve based on feedback.
2021-08-19 12:49:45 -04:00
Rachel Guan
52c18f9967 Remove files that are no longer used for create/update premium list (#1288)
* Remove files that are no longer used for create/update premium list

* Remove comments/notes related to create/update premium list action files
2021-08-18 14:04:57 -04:00
gbrodman
5339b3cb6c Remove -- from crash cron comment (#1289)
This is causing the release build to fail, see https://pantheon.corp.google.com/cloud-build/builds;region=global/22ec980b-c2b6-43fe-994a-aa98c0dbc9d4?project=domain-registry-dev
2021-08-18 11:30:01 -04:00
sarahcaseybot
d18dab3327 Remove ReservedList from Datastore schema (#1285)
* Remove ReservedList from Datastore schema

* Remove some Datastore references

* Add a different non-replicated entity to ReplayCommitLogsToSqlActionTest
2021-08-17 16:56:00 -04:00
gbrodman
61932c1809 Use direct ofyTm reference when clearing cache in tool (#1287)
We shouldn't reference tm() at all before initializing the JPA
transaction manager, since tm() looks at the database migration schedule
when figuring out which transaction manager to use.
2021-08-17 13:20:17 -06:00
sarahcaseybot
8eb8c810e8 Remove DeleteEntityAction (#1282) 2021-08-16 13:21:00 -04:00
Weimin Yu
c03a7b0b33 Update cron jobs in crash (#1284)
* Update cron jobs in crash

Add wipeout cron jobs for the duration of migration testing with
production data.

* Disable Datastore-related cron jobs
2021-08-16 12:03:45 -04:00
gbrodman
7a4c109b36 Remove recursive load in DBMSS cache (#1286)
* Remove recursive load in DBMSS cache

This occurs because if we do a standard transaction, the JpaTxnManager
checks to see if we should be doing backups, which involves loading the
migration state schedule (causing the recursion). When starting the
transaction to load the schedule, we should explicitly
transactWithoutBackup so there's no need to check.

This wasn't hit in tests because we previously manually set the
replication to not occur in the JpaTransactionManagerExtension -- we
remove that and related setters.
2021-08-14 12:34:23 -06:00
Ben McIlwain
22b1b8d21a Add instructions for two-step DB schema updates (#1283)
* Add instructions for two-step DB schema updates

These expanded steps are required by the recent enabling of the SQL integration test suite.
2021-08-13 17:21:38 -04:00
Weimin Yu
5bbabadafd Generate string to uniquely identify a SqlEntity (#1271)
* Generate string to uniquely identify a SqlEntity

Add a method to SqlEntity that returns a string built from the entity's
primary key(s). This string can be used in logging.
2021-08-13 16:22:54 -04:00
Ben McIlwain
6c73161ff8 Add the domain DNS refresh request time field to the DB schema (#1280)
* Add the domain DNS refresh request time field to the DB schema

This isn't used yet, but it will eventually be the replacement for the dns-pull
task queue once we get further in the migration.

* Remove index
2021-08-13 15:32:18 -04:00
Rachel Guan
7faee04422 Modify class name to remove checkstyleTest warning (#1281) 2021-08-13 14:16:58 -04:00
Ben McIlwain
b340b2b5e9 Add tx/s instrumentation to replay action and re-enable it on sandbox (#1276) 2021-08-12 18:33:47 -04:00
gbrodman
7f733cd16d Store DatabaseMigrationSchedule in SQL instead of Datastore (#1269)
* Store DatabaseMigrationSchedule in SQL instead of Datastore

This requires messing around with some of the JPA unit test rule
creation since it requires saving / retrieving the schedule pretty much
always (which itself includes the hstore extension).
2021-08-12 15:57:31 -06:00
Ben McIlwain
60469479a4 Consolidate all remaining schema classes into model package (#1278) 2021-08-12 13:38:50 -04:00
Ben McIlwain
5158673f21 Consolidate all Registry/TLD-related classes into google.registry.model.tld (#1277) 2021-08-11 18:04:51 -04:00
gbrodman
743ca4106c Add SQL schema additions for DatabaseMigrationStateSchedule (#1274) 2021-08-10 16:46:07 -04:00
Ben McIlwain
2b99ee61d4 Load DatabaseMigrationStateSchedule in a more performant way (#1273)
This performs a direct load-by-key (the most efficient Datastore operation),
rather than attempting to load all entities by type using an ancestor query. The
existing implementation is possibly more error-prone as well, and might be
responsible for the "cross-group transaction need to be explicitly specified"
error we're seeing.
2021-08-10 14:47:59 -04:00
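In Objectify terms, the difference looks roughly like this sketch with a generic entity type; the real schedule entity and its parent key are Nomulus-specific.

```java
import static com.googlecode.objectify.ObjectifyService.ofy;

import com.googlecode.objectify.Key;

public final class LoadByKeyExample {

  /** Direct load-by-key: a single-entity get, the cheapest Datastore read. */
  public static <T> T loadByKey(Key<T> key) {
    return ofy().load().key(key).now();
  }

  /** The slower alternative being replaced: load all entities of a type under an ancestor. */
  public static <T> Iterable<T> loadAllWithAncestor(Class<T> clazz, Key<?> ancestor) {
    return ofy().load().type(clazz).ancestor(ancestor);
  }
}
```
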
sarahcaseybot
28a1cc613c Remove PremiumList from Datastore schema (#1256)
* Remove PremiumList from Datastore schema

* Remove commented out code

* Change lastUpdateTime to creationTimestamp

* Remove extra file

* Remove currency unit from input data to parse

* Revert extra file

* Check currency in parse

* Create all PremiumEntries before saving them in bulk

* small fixes

* Fix merge conflict
2021-08-10 13:26:13 -04:00
sarahcaseybot
9811cdb85c Initialize data in cloudSqlOnly tests (#1266)
* Initialize data in cloudSqlOnly tests

* combine conditionals
2021-08-09 13:04:35 -04:00
Lai Jiang
761ae612fd Remove backported LocalStorageHelper (#1267)
* Remove backported LocalStorageHelper

The released version on Maven Central now contains the fix to the
serialization bug.
2021-08-06 21:10:32 -04:00
gbrodman
e2fa60a9c6 Use one SQL transaction per Datastore transaction in replay to SQL (#1268)
There was a subtle issue that we encountered in sandbox when using one
transaction per file that was difficult to replicate. Basically,
1. Save a domain with dsData
2. Save the domain without dsData
3. Save the domain with the same dsData as step 1
4. Delete literally any object

If one performs steps 2-4 in the same transaction, Hibernate will throw
an exception (cascade re-saving a cascade-deleted object). Note that
step 4 is in fact necessary to reproduce the issue, yay Hibernate.

We will test this, and if one SQL transaction per Datastore transaction is too slow,
we'll figure out ways to reduce the number of SQL transactions.
2021-08-06 16:05:36 -04:00
sarahcaseybot
b04dfbf740 Migrate invoicing pipeline to read from Cloud SQL (#1220)
* Save entities to Cloud SQL for tests

* Fix merge conflict

* Filter out non-real registrars and non-invoicing TLDs

* Add 1 month filter

* Handle cancellations

* Add to pipeline

* Use database in pipeline

* fix formatting

* Add a full pipeline test

* Fix repo ids in tests

* Move query to separate file

* Remove unused variables

* Remove unnecessary debugging remnant

* Reformat sql file

* Add jpql issue description

* Use DateTimeUtils

* Fix license header year

* Fix SQL formatting

* Use regex pattern

* Fix string building

* Add test for makeCloudSqlQuery

* Add clarifying comment
2021-08-06 15:56:04 -04:00
Weimin Yu
a1668ceafd Drop the KmsSecret table (#1258)
* Drop the KmsSecret table

Code using this table has been removed in PR 1252.
2021-08-04 23:23:58 -04:00
Lai Jiang
406d49ac99 Fix GCS bucket/subdir handling in IcannReportingStager (#1265)
After the migration to the new GCS API it became apparent that the
BlobId.of() method needs to take the bucket name (without any trailing
directories) as the first argument. I did a search on all occurrences of
"BlobId.of" in the code base and verified that it is only in the ICANN
reporting job that the API was misused.

2021-08-04 14:01:04 -04:00
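A minimal illustration of the BlobId contract described above, with placeholder bucket and object names: the bucket must be passed on its own, and any directory path belongs in the object name.

```java
import com.google.cloud.storage.BlobId;

public final class BlobIdExample {
  public static void main(String[] args) {
    // Correct: bucket name alone, full object path (with "directories") as the object name.
    BlobId ok = BlobId.of("my-reporting-bucket", "icann/monthly/report.csv");

    // Incorrect: folding the directory into the bucket argument points at a nonexistent bucket.
    BlobId broken = BlobId.of("my-reporting-bucket/icann/monthly", "report.csv");

    System.out.println(ok + " vs " + broken);
  }
}
```
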
Weimin Yu
f7b82bc190 Allow db wipeouts in non-prod/sandbox environments (#1263)
* Allow db wipeouts in non-prod/sandbox environments
2021-08-03 17:41:10 -04:00
Lai Jiang
45c398149b Write RDE files and advance cursors in Beam pipeline (#1249)
This PR re-implements most of the logic in the RdeStagingReducer, with
the exception of the final enqueue operations, because the task queue API
is not available outside of the App Engine SDK. This part will come in a
separate PR.

Another deviation from the reducer is that we forwent the lock -- it is
difficult to do across different Beam transforms. Instead we write each
report to a different folder according to its unique Beam job name. When
enqueueing the publish tasks we will then pass the folder prefix as a
URL parameter.

2021-07-30 16:24:58 -04:00
gbrodman
183c0653fb Add a test for replaying cascading deletes to SQL (#1259)
* Add a test for replaying cascading deletes to SQL
2021-07-30 15:31:46 -04:00
gbrodman
fb002953c8 Add more logging in the event of replay put/delete failure (#1262)
* Add more logging in the event of replay put/delete failure
2021-07-30 15:09:45 -04:00
Rachel Guan
a369e57e5c Apply registrar code change after registrar schema change has taken effect (#1254)
* Apply code change after registrar schema change has taken effect
2021-07-28 16:40:49 -04:00
gbrodman
f32d80fb9d Use the DB migration schedule for SQL->DS replay (#1242)
This is instead of the current configuration parameter.

In addition, this adds some helpers to DatabaseHelper to make the
transitions easier, since we more frequently need to alter + reset the
schedule.
2021-07-27 16:05:59 -04:00
gbrodman
afa5a353f1 Use raw EntityManager to load during beforeSqlSave (#1253)
If we use the transaction manager methods, JpaTransactionManagerImpl
will attempt to detach the EppResource in question that we're loading --
this fails because that entity has been saved in the same transaction
already. We don't need detaching during these methods (it's just for
resource population) so we can use the raw loads to get around it.
2021-07-26 19:14:49 -04:00
Rachel Guan
c4c5ac85da Remove isNearingExpiration() after shouldReceiveExpiringNotification() has been added to the code base (#1255)
* Resolve merge conflict
2021-07-26 18:23:14 -04:00
Ben McIlwain
4d0078607f Add SECURITY.md security policy (#1257)
* Add SECURITY.md security policy
2021-07-26 17:35:59 -04:00
Rachel Guan
2b78433682 Add method that checks if client should be notified for expiring certificate (#1245)
* fix merge conflict
2021-07-26 17:20:12 -04:00
Weimin Yu
a0fcd02ed2 Remove KmsSecret model entities (#1252)
* Remove KmsSecret model entities

Now that we have been using the SecretManager for almost a month,
remove the KmsSecret and KmsSecretRevision entities from the Java code base.
A follow-up PR will drop the relevant tables in the schema.

Also removed a few unused classes in the beam package.
2021-07-26 17:09:09 -04:00
Rachel Guan
58e413af89 Expand registrar schema to support sending expiring certificate notification emails (#1247)
* Expand registrar schema to support sending expiring certificate notification emails

* Remove Java change (strictly schema change only)
2021-07-22 17:11:32 -04:00
gbrodman
38c8e81690 Fix runtime issues with commit-log-to-SQL replay (#1240)
* Fix runtime issues with commit-log-to-SQL replay

- We now use a more intelligent prefix to narrow the listObjects search
space in GCS. Otherwise, we're returning >30k objects which can take
roughly 50 seconds. This results in a listObjects time of 1-3 seconds.

- We now search hour by hour to efficiently make use of the prefixing.
Basically, we keep searching for new files until we hit the current time
or until we hit the overall replay timeout.

- Dry-run only prints out the first hour's worth of files
2021-07-22 13:59:28 -04:00
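A rough sketch of prefix-scoped listing with the google-cloud-storage client, assuming a placeholder bucket and an hour-based object-naming scheme; the real diff-file naming in Nomulus is more involved.

```java
import com.google.api.gax.paging.Page;
import com.google.cloud.storage.Blob;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import org.joda.time.DateTime;
import org.joda.time.format.DateTimeFormat;

public final class PrefixListingExample {

  /** Lists only the objects for one hour instead of scanning the whole bucket. */
  public static Iterable<Blob> listForHour(String bucket, DateTime hour) {
    Storage storage = StorageOptions.getDefaultInstance().getService();
    // Placeholder naming scheme, e.g. "commit_diff_until_2021-08-01T05...".
    String prefix =
        "commit_diff_until_" + DateTimeFormat.forPattern("yyyy-MM-dd'T'HH").print(hour);
    Page<Blob> page = storage.list(bucket, Storage.BlobListOption.prefix(prefix));
    return page.iterateAll();
  }
}
```
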
Rachel Guan
3beb207fcc Add email set up for sending expiring certificate notification emails (#1248)
* Add email set up for sending expiring certificate notification emails
2021-07-21 15:47:27 -04:00
gbrodman
8cf88b7e18 Avoid unnecessary tm() calls without ofy init in Spec11PipelineTest (#1250)
* Avoid unnecessary tm() calls without ofy init in Spec11PipelineTest
2021-07-20 15:10:50 -04:00
gbrodman
6ec2e9501d Fix flaky test issues caused by lack of ofy init (#1246) 2021-07-20 13:14:41 -04:00
sarahcaseybot
6849bf6914 Use less strict isolation level in Spec11 pipeline (#1244) 2021-07-16 15:46:34 -04:00
gbrodman
34f3823960 Fix hanging threads in GcsDiffFileLister (#1243)
* Fix hanging threads in GcsDiffFileLister

Basically, whenever we request threads using the request thread factory,
we must be on the request thread itself. Dagger doesn't guarantee this
for us if we provide the ExecutorService directly in the action (or in
the GcsDiffFileLister), but we can guarantee that we're on the request
thread itself by simply injecting a Lazy, so that the executor is
instantiated inside the request itself.

In addition, add a timeout on the futures just in case.
2021-07-16 14:13:20 -04:00
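In Dagger terms, the fix looks roughly like the sketch below: inject Lazy&lt;ExecutorService&gt; so the executor is only created when get() is called on the request thread. The class name is illustrative, and it assumes an ExecutorService binding exists in the component.

```java
import dagger.Lazy;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import javax.inject.Inject;

/** Illustrative action that defers executor creation until it is running on the request thread. */
public final class DiffListerExample {

  private final Lazy<ExecutorService> lazyExecutor;

  @Inject
  DiffListerExample(Lazy<ExecutorService> lazyExecutor) {
    this.lazyExecutor = lazyExecutor;
  }

  public String run() throws Exception {
    // The ExecutorService is instantiated here, inside the request, not at injection time.
    Future<String> future = lazyExecutor.get().submit(() -> "listed files");
    // Time-bound the future so a stuck thread cannot hang the request forever.
    return future.get(30, TimeUnit.SECONDS);
  }
}
```
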
gbrodman
bb5d2dcf0a Use the DatabaseMigrationSchedule to determine which TM to use (#1233)
* Use the DatabaseMigrationSchedule to determine which TM to use

We still allow the "manual" specification of a particular transaction
manager, most useful in @DualDatabaseTest classes. If that isn't
specified, we examine the migration schedule to see which to return.

Notes:
- This requires that any test that sets the migration schedule clean up
after itself so that it won't affect future test runs of other classes
(because the migration schedule cache is static)
- One alternative would, instead of having a "test override" for the
transaction manager, be to examine the registry environment and only
override the transaction manager in the UNIT_TEST environment. This
doesn't work because there are many instances in which tests simulate
non-test environments.
2021-07-14 13:05:01 -04:00
sarahcaseybot
6ce0211537 Remove key references from BaseDomainLabelList (#1239) 2021-07-13 16:49:34 -04:00
Lai Jiang
676616a172 Remove the use of GCS APIs provided from GAE SDK (#1228)
The API provided by the GAE SDK will not be available outside the GAE
runtime. This presents a problem when we migrate off of GAE. More
pressingly, the RDE pipeline migration to Beam requires that we write to
GCS on GCE. Previously we were able to sidestep the issue by delegating
the writes to FileIO provided by Beam, which knows how to write to GCS.
However, the RDE pipeline cannot use FileIO directly as it needs to write
to multiple files in one go, and explicit use of the GCS API is needed.

An unfortunate side effect of the API migration is that the new testing
library contains a bug which makes serializing GcsUtils impossible. It
is fixed upstream but not released yet. The fix has been backported for
the time being.

2021-07-13 14:52:37 -04:00
Weimin Yu
62c556cebf Restore commit logs from other project (#1236)
* Restore commit logs from other project

Allow non-production projects to restore commit logs from another
project. This feature can be used to duplicate a realistic testing
environment.

An optional parameter is added that can override the default commit log
location.

Tested successfully in QA.
2021-07-12 16:56:47 -04:00
Ben McIlwain
535f84a912 Add better logging/error messages for Cloud DNS failures (#1237)
* Add better logging/error messages for Cloud DNS failures
2021-07-09 17:04:57 -04:00
sarahcaseybot
d283cf1c90 Remove old DomainList fields from Registry (#1231)
* Remove old DomainList fields from Registry

I also resaved all Registry objects in sandbox and production to make sure that the new field is populated on all entity objects.

* small fixes

* Some more small fixes

* Delete commented out code

* Remove existence check in tests
2021-07-08 17:19:11 -04:00
Rachel Guan
f5d344d5c9 Add cc support to email service (#1230)
* Add cc support to email service
2021-07-08 12:03:03 -04:00
Weimin Yu
61d029d955 Ensure VKey is actually serializable (#1235)
* Ensure VKey is actually serializable

Tighten the field type so that a non-serializable object cannot be set as
the sqlKey.

This would make it easier to make EppResource entities Serializable in
the future.
2021-07-08 10:54:22 -04:00
Lai Jiang
2195ba90fa Add a method to set a "not in" WHERE clause in CriteriaQueryBuilder (#1225)
2021-07-07 15:49:29 -04:00
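The underlying JPA criteria construct is roughly the following sketch, written against a generic entity class and field name; CriteriaQueryBuilder wraps this pattern.

```java
import java.util.Collection;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

public final class NotInExample {

  /** Selects all rows of the given entity whose field value is NOT in the given collection. */
  public static <T> List<T> whereNotIn(
      EntityManager em, Class<T> clazz, String field, Collection<String> excluded) {
    CriteriaBuilder cb = em.getCriteriaBuilder();
    CriteriaQuery<T> query = cb.createQuery(clazz);
    Root<T> root = query.from(clazz);
    // "NOT IN" is expressed by negating the IN predicate.
    query.select(root).where(cb.not(root.get(field).in(excluded)));
    return em.createQuery(query).getResultList();
  }
}
```
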
gbrodman
e5b9ff1498 Add a dry-run option to commit-log-replay action and use it in Sandbox (#1234) 2021-07-03 11:18:00 -04:00
Ben McIlwain
3f9fec98d5 Add more logging to replay commit logs action (#1232) 2021-07-02 18:06:04 -04:00
Ben McIlwain
4e30d020ca Set payload response in happy path of ReplayCommitLogsToSqlAction (#1229)
* Set payload response in happy path of ReplayCommitLogsToSqlAction

I suspect this may be the reason the logs are missing on the happy path (when it
runs successfully), but are visible on the exception paths (which do set the
payload response). I don't think App Engine likes it when a Web request
terminates without a response.

This also adds more logging and error handling.
2021-07-01 18:21:17 -04:00
Lai Jiang
047444831b Add a Beam pipeline to generate RDE deposit (part 1) (#1219)
This is the first part of the RdeStagingAction SQL migration where the
mapper logic is implemented in Beam.

A few helper methods are added to convert the DomainContent, HostBase
and ContactBase to their respective terminal child classes. This is
necessary and possible because the child classes do not have extra
fields and the base classes exist only to be embedded to other entities
(such as the various HistoryEntry entities). The conversion is necessary
because most of our code expects the terminal classes, such as the
RdeMarshaller's various marshallXXX() methods. The alternative would be
to change all the call sites, which seems to be much more disruptive.

Unfortunately there is no better way to do this conversion than to
create a builder and set every field there is.

2021-06-30 13:54:24 -04:00
Weimin Yu
7adcbee5ad Retry flaky tests for ReplicateToDatastoreAction (#1226)
* Retry flaky tests for ReplicateToDatastoreAction

The occasional failures seem to be caused by the test Datastore.
2021-06-29 17:12:02 -04:00
Michael Muller
78a750b7e1 Support testing SQL -> DS replication in ReplayExt (#1216)
* Support testing SQL -> DS replication in ReplayExt

Support testing of Postgres -> Datastore replication in the ReplayExtension
when running in SQL mode in a DualDatabaseTest.

This is currently only enabled for one test (HostInfoFlowTest) since this form
of replication is likely to be problematic in many cases.

As part of this change:

- Add a thread-local flag so that we don't attempt to do certain data
  transformations when serializing entities for storage in a Transaction
  record. (These typically need to be called in a datastore transaction).
- Replace tm() in datastore translators with ofyTm() (these should only be
  called from within an ofy transaction) and also in the replay system itself.
- Add a transactWithoutBackup() method for use within the replay itself.
- Prevent replication of entities that are not intended to be replicated.
- Make some of the ReplicateToDatastoreAction methods public so we can invoke
  them from ReplayExtension.
- Change the way that the test type is stored in the extension context in a
  DualDatabaseTest so that we can check for it from the ReplayExtension.

* Limit number of tests and show output

Trying to debug why these are failing in kokoro.

* Move HostInfoFlowTest to fragile for now

The test now manipulates a global variable that causes problems for other
tests.  There's likely a better fix for this, but for purposes of this PR we
can just move it to "fragile."

* Fix a few more problems

-   "replay" flag should have been initialized to false -- as it stands,
    replay wasn't happening.
-   disable "always save with backup" in the datastore helper, we were
    apparently getting some unwanted commit log entries that were causing
    timestamp inversions in other tests.  Also clear out the replay queue
    just for good hygiene.
-   Check for a null replicator in replayToOfy before proceeding.
-   Use a local inOfyContext flag to track whether we're in ofy context, as
    the tm() function is less reliable in dual-database tests.
2021-06-29 10:00:39 -04:00
Ben McIlwain
2e8a1c422d Set HistoryEntry modification time in FlowModule (#1222)
* Set HistoryEntry modification time in FlowModule

Rather than having to set it individually to now (the current transaction time)
in every transactional flow, just do it once at the beginning when the
HistoryEntry.Builder is first being provided. This is also safer, as just doing
it in one place gives us stronger guarantees that it always corresponds to the
execution time of the flow, rather than leaving the potential open that in one
flow it's unintentionally set to the wrong thing.
2021-06-29 09:05:12 -04:00
gbrodman
0e5605b175 Set a 5min time limit on the SQL replay action (#1224)
This means we avoid GAE request timeouts and can get progress logs more
quickly (logs weren't showing up on GAE in Sandbox).
2021-06-28 17:01:16 -04:00
Ben McIlwain
a10b5d8b30 Rename a few soy files for consistency (#1223)
* Rename a few soy files for consistency

This prefers the ResourceAction.soy naming convention for .soy files that
contain EPP XMLs so that they match the name of the corresponding EPP flow. E.g.
DomainDelete.soy now matches DomainDeleteFlow.java
2021-06-28 12:00:08 -04:00
Ben McIlwain
b7ce08dfdc Fix BigDecimal precision of PremiumList.getLabelsToPrices() (#1221)
* Fix BigDecimal precision of PremiumList.getLabelsToPrices()

Different currencies have different numbers of decimal places (e.g. USD has 2,
JPY has 0, and some even have 3). Thus, when loading the contents of a premium
list, we need to set the precision correctly on all of the BigDecimal prices.

This issue was introduced as part of the Registry 3.0 database migration when we
changed each PremiumEntry from being a Money to a BigDecimal (to remove the
redundancy of storing the same currency value over and over).
2021-06-25 19:10:21 -04:00
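A small sketch of the scale fix using Joda Money's per-currency decimal places; the helper name is illustrative, not the actual Nomulus code.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import org.joda.money.CurrencyUnit;

public final class PremiumPriceScaling {

  /** Rescales a raw database value to the currency's number of decimal places. */
  public static BigDecimal withCurrencyScale(BigDecimal raw, CurrencyUnit currency) {
    // USD -> 2 decimal places, JPY -> 0, BHD -> 3, etc.
    return raw.setScale(currency.getDecimalPlaces(), RoundingMode.HALF_EVEN);
  }

  public static void main(String[] args) {
    System.out.println(withCurrencyScale(new BigDecimal("10"), CurrencyUnit.USD)); // 10.00
    System.out.println(withCurrencyScale(new BigDecimal("10.00"), CurrencyUnit.JPY)); // 10
  }
}
```
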
Lai Jiang
a3e8bf219f Remove some unnecessary Ofy key creation (#1212) 2021-06-24 17:35:39 -04:00
gbrodman
546eba68bd Add SQL functionality to DeleteLoadTestDataAction (#1211)
* Add SQL functionality to DeleteLoadTestDataAction

This isn't directly meant to be run in production so some of the rough
edges (doesn't delete domains, can't delete contacts that are referenced
by an existing domain) are fine. We can handle those in
DeleteProberTestAction when we do the more comprehensive deletions.
2021-06-23 15:39:22 -04:00
Weimin Yu
81fcdbdcea Make SQL queries return scrollable results (#1214)
* Make SQL queries return scrollable results

With PostgreSQL, we must override the default fetchSize (0) to enable
scrollable result sets. Previously we only did this in QueryComposer.

In this change we enable scrollable results for all queries by default.
We also provide a helper function
(JpaTransactionManager.setQueryFetchSize) that can override the default.
2021-06-22 22:13:57 -04:00
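As a sketch of the mechanism (not the Nomulus helper itself), the Hibernate fetch-size hint can be set per query; with PostgreSQL the driver only streams results when a nonzero fetch size is set and the query runs inside a transaction. The JPQL string here assumes the entity name matches the simple class name.

```java
import java.util.stream.Stream;
import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;

public final class FetchSizeExample {

  /** Streams results instead of buffering the whole result set client-side. */
  public static <T> Stream<T> streamAll(EntityManager em, Class<T> clazz, int fetchSize) {
    TypedQuery<T> query =
        em.createQuery("SELECT e FROM " + clazz.getSimpleName() + " e", clazz)
            // Hibernate translates this hint into Statement.setFetchSize(fetchSize).
            .setHint("org.hibernate.fetchSize", fetchSize);
    return query.getResultStream();
  }
}
```
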
Weimin Yu
2b91e3bb89 Fix appId during cross-project commitlog imports (#1213)
* Fix appId during cross-project commitlog imports

When importing commit logs from another project, we must override the
appId in every entity key instance.

The fixEntity method in the EntityImports class is a straightforward
translation of the Python function of the same name used by the
storage team.
2021-06-22 15:59:58 -04:00
Lai Jiang
ce03556683 Fix a GCB job description (#1215)
2021-06-22 13:51:26 -04:00
Lai Jiang
967304588b Make RegistryJpaIO use CriteriaQuery instead of QueryComposer (#1209)
QueryComposer can be used when the transaction manager is not yet
determined (i.e. it supports both ofy and SQL), but this also imposes
limits on what you can do with it. For example, it does not support the IN
operator in the where clause.

Since QueryComposer itself creates a CriteriaQuery for the JPA TM, it makes
sense to have RegistryJpaIO take a CriteriaQuery directly as it only
uses JPA.

Also add some more helper methods to use native queries and typed
queries, and fix some generic type warnings.

2021-06-18 10:29:00 -04:00
sarahcaseybot
a2754a0eff Add new domain list fields to Registry objects (#1208)
* Add domain list name fields to Registry objects

* Add some comments

* Added scrap command

* Fix typo

* capitalize TLD
2021-06-16 15:13:46 -04:00
Michael Muller
276bbc09c2 Add RDE Staging to QA crontab. (#1210)
* Add RDE Staging to QA crontab.
2021-06-15 15:02:47 -04:00
Lai Jiang
fd461a78e7 Unwrap the return value of loadAtPointInTime (#1205)
In SQL we do not need to wrap it in a Result. Unfortunately we cannot
overload a function based on its return value, so we renamed the existing
one and created a new one with the old name that returns the resource
directly. Once we no longer use Datastore, we can delete the now-renamed
function that returns a Result<? extends EppResource>.

2021-06-14 11:55:24 -04:00
gbrodman
0374ad60d8 Add ReplayCommitLogsToSqlAction to backend routing (#1203)
Necessary so that we can actually call it from the cron job
2021-06-14 09:59:06 -04:00
sarahcaseybot
fcc027e0c8 Add Cloud SQL read to Spec11Pipeline (#1173)
* Add Cloud SQL read to Spec11Pipeline

* Add database option

* Add database parameter

* Add a test of the full pipeline

* Use DatabaseHelper in tests

* restore the original tm

* More test fixes
2021-06-11 14:25:20 -04:00
Weimin Yu
c3a4887845 Fix timestamp inversion error in a test (#1207)
* Fix timestamp inversion error in a test
2021-06-11 11:05:10 -04:00
Ben McIlwain
a0b6437f4c Add reason/registrar request options when creating/updating domains (#1202)
* Add reason/registrar_request options when creating/updating domains
2021-06-11 10:50:32 -04:00
Lai Jiang
a7210a26b4 Make RefreshDnsForAllDomains SQL-aware (#1197)
Also marks a few mapreduce actions as @Deprecated as they are no longer
needed in SQL.

2021-06-10 21:09:19 -04:00
Lai Jiang
c7096a1b71 Fix a flaky test (#1204)
In testSuccess_expandSingleEvent_notIdempotentforDifferentRecurring(),
two Recurring entities are created with the only difference being their IDs. If
we don't order the Recurrings by ID when loading them there is no guarantee
which one is expanded first. In this test the expected OneTime entities are
created with the assumption that the first loaded DomainHistory (parent of a
OneTime) corresponds to expanding the Recurring with the smaller ID (2L).
Since the DomainHistory entities are loaded in order of IDs, and the IDs are
created monotonically in time in tests, we need to load the Recurrings in
order of their IDs to ensure that the first DomainHistory is the result of
expanding the Recurring with ID 2L. This should impose a minimal performance
penalty as we are ordering by the primary key.

2021-06-10 14:06:05 -04:00
gbrodman
30634ff404 Convert EppResourceUtils::loadAtPointInTime to SQL+DS (#1194)
* Convert EppResourceUtils::loadAtPointInTime to SQL+DS

This required the following changes:
- The branching / conversion logic itself, where we load the most recent
history object for the resource in question (or just return the resource
itself)
- For simplicity's sake, adding a method in the *History objects that
returns the generic resource -- this means that it can be called when we
don't know or care which subclass it is.
- Populating the domain's dsData and gracePeriods fields from the
DomainHistory fields, and adding factories in the relevant classes to
allow us to do the conversions nicely (the history classes are almost
the same as the regular ones, but not quite).
- Change the tests to use the clocks properly and to allow comparison of
e.g. DomainContent to DomainBase. The objects aren't the same (one is a
superclass of the other) but the fields are.

Note as well a slight behavioral change: commit logs only allow us
24-hour granularity, so two updates in the same day mean that the
earlier update is ignored and inaccessible. This is not the case for
*History objects in SQL; all versions are accessible.
2021-06-10 12:25:06 -04:00
Lai Jiang
4f71d780ab Make ExportDomainListsAction SQL-aware (#1195)
2021-06-10 12:03:17 -04:00
Michael Muller
14ad56a392 Fix Datastore "count" queries (#1201)
* Fix Datastore "count" queries

The objectify "count()" method doesn't work for result sets larger than 1000
elements, use the original trick from "count domains" that fetches the keys
and counts them.

* Added an SO link
2021-06-08 15:23:25 -04:00
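The keys-only trick reads roughly like this Objectify sketch, with a generic entity type; counting keys client-side sidesteps the count() limitation described above and is far cheaper than loading full entities.

```java
import static com.googlecode.objectify.ObjectifyService.ofy;

import com.google.common.collect.Iterables;

public final class CountExample {

  /** Counts entities of a type by fetching their keys and counting them client-side. */
  public static <T> int countAll(Class<T> clazz) {
    // keys() makes this a keys-only query, which avoids materializing the entities.
    return Iterables.size(ofy().load().type(clazz).keys());
  }
}
```
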
gbrodman
a1b56b0521 Convert remaining ofy() calls to auditedOfy() (#1200)
* Convert remaining ofy() calls to auditedOfy()
2021-06-08 13:52:13 -04:00
gbrodman
3f41f7f444 Start the DS->SQL replay cron job in non-prod environments (#1199)
* Start the DS->SQL replay in non-prod environments

This should be a no-op since we haven't enabled it but this means that
when we set the schedule, we'll start replaying
2021-06-08 11:35:47 -04:00
gbrodman
4f6bcea63f Fix a test flake in SetDatabaseMigrationScheduleCommandTest (#1198)
* Fix a test flake in SetDatabaseMigrationScheduleCommandTest

The cache is static so some odd state may stick around between tests --
we should clear it
2021-06-08 11:35:29 -04:00
Lai Jiang
bd0ef626a1 Fix a few test annotations (#1196) 2021-06-08 00:40:58 -04:00
Lai Jiang
68304133c4 Make RefreshDnsOnHostRenameAction SQL-aware (#1190)
2021-06-07 10:24:49 -04:00
Weimin Yu
16392c3808 Fix access to a nullable field in HistoryEntry (#1193)
* Fix access to a nullable field in HistoryEntry
2021-06-04 16:30:25 -04:00
gbrodman
5f479488fa Use DB migration state to determine running async replay SQL->DS (#1191)
* Use DB migration state to determine running async replay SQL->DS

The SQL->DS replay likely could use more work (locking, returning the
right codes, things like that) but that's outside the scope of this PR.
2021-06-04 16:18:25 -04:00
Michael Muller
886a970ed6 Use detaching queries for all criteria queries (#1192)
* Make all criteria queries use jpaTm().query()

This causes all criteria queries to detach-on-load.

* Detach results of criteria queries

Wrap the criteria queries in DetachingTypedQuery now that the latter is
merged.
2021-06-04 14:37:53 -04:00
Michael Muller
d7f7568761 Fix copy causing premature hash calculation (#1189)
* Fix copy causing premature hash calculation

The creation of a builder to set the DomainContent repo id in DomainHistory
triggers an equality check which causes the hash code of an associated
transfer data object to be calculated prematurely, before the Ofy keys are
reconstituted.  Replace this with a simple setter, which is acceptable in this
case because the object is being loaded and is considered to be not fully
constructed yet.

* Do setRepoId() in Contact and Host history

Not essential for these as far as we know, but it's safer and more consistent.

* Fixed typos
2021-06-04 11:38:42 -04:00
gbrodman
2017930a8f Add commands to set and check the database migration state (#1174) 2021-06-04 09:57:08 -04:00
gbrodman
ed07fc8181 Use DB migration state to determine running async replay DS->SQL (#1175)
* Use DB migration state to determine running async replay DS->SQL
2021-06-03 11:43:26 -04:00
Lai Jiang
aa2898ebfc Make ExpandRecurringBillingEventAction SQL-aware (#1181)
There is some complication regarding how the
CancellationMatchingBillingEvent of the generated OneTime can be
reconstructed when loading from SQL. I decided to only address it in
testing as there is no real value to fully reconstruct this VKey in
production, where we are either in SQL or Ofy mode, but never in both.
Therefore the VKey in a particular mode only needs to contain the
corresponding key in order to function.

2021-06-03 10:21:16 -04:00
gbrodman
586189d7ee Use a TimedTransitionProperty for the DB migration schedule (#1186)
This includes the following changes:
- Convert the single-valued database migration state to a timed
transition property, meaning that we can switch all instances over at
the same time and schedule it in advance
- Use a "cache" (technically an expiring memoized supplier) when
retrieving the database migration state value
- Delete the old DatabaseTransitionSchedule because it is no longer
necessary. We took the idea from that and used it for the new
DatabaseMigrationStateSchedule, though we cannot reuse the entity itself
because the structure is fundamentally different.
- Removed references to the DatabaseTransitionSchedule, mainly in the
getter/setter commands+tests and a few odd references elsewhere.
2021-06-02 14:06:28 -04:00
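Conceptually, a timed transition property is a sorted map from transition time to value, where the value in effect at any instant is the latest entry at or before that instant. A minimal sketch with placeholder names (the real MigrationState enum has more states):

```java
import java.util.Map;
import java.util.TreeMap;
import org.joda.time.DateTime;

public final class MigrationScheduleSketch {

  /** Hypothetical migration phases; illustrative only. */
  public enum Phase {
    DATASTORE_ONLY,
    DATASTORE_PRIMARY,
    SQL_PRIMARY
  }

  private final TreeMap<DateTime, Phase> transitions = new TreeMap<>();

  public MigrationScheduleSketch(Map<DateTime, Phase> schedule) {
    transitions.putAll(schedule);
  }

  /** Returns the phase in effect at the given time: the latest transition at or before it. */
  public Phase phaseAt(DateTime now) {
    Map.Entry<DateTime, Phase> entry = transitions.floorEntry(now);
    // Before the first scheduled transition, assume the starting phase.
    return entry == null ? Phase.DATASTORE_ONLY : entry.getValue();
  }
}
```
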
Lai Jiang
275f364dcb Handle cases where periodYears is NULL in a OneTime (#1187)
There are cases where periodYears is not set when creating a OneTime
billing event, for example when performing a registry lock (default cost = $0)
or when performing a server status update, such as applying the
serverUpdateProhibited status (default cost = $20). This is not currently
handled in the billing pipeline because the parseFromRecord
method checks all fields for nullness. Even if it did not validate
the fields, a null periodYears would still cause problems when the
billing event is converted to CSV files.

This PR alters the BigQuery SQL file to convert a NULL to 0 when
creating the BillingEvent in the invoicing pipeline. It also sets the EndDate
in the invoice CSV to an empty string when periodYears is 0. Note that when the
cost is also 0, the billing event is filtered out of the invoice CSV, so only
non-free OneTimes with a null periodYears will have an impact on the output.
For detailed reports all billing events are included and the zero
periodYears is printed as is.

Setting the EndDate to empty is the correct behavior per
go/manual-integration-csv#end-date.
2021-06-02 11:52:47 -04:00
Weimin Yu
66867e4397 Use SecretManager for nomulus-tool-cloudbuild cred (#1188)
* Use SecretManager for nomulus-tool-cloudbuild cred

Store cloudbuild's nomulus-tool credential in SecretManager and make the
deployment pipeline load it from the SecretManager.

The tool-credential.json.enc file in the
gs://domain-registry-dev-deploy/secrets folder is no longer needed.
2021-06-02 09:32:57 -04:00
Weimin Yu
3fa56dec45 Make keyring use SecretManager as sole storage (#1185)
* Make keyring use SecretManager as sole storage

The Keyring will only use the SecretManager as storage. Accesses to the
Datastore are removed.

Also consolidated KmsKeyringTest into KmsKeyingUpdaterTest. The latter
is left with its original name to facilitate code reviews. It will be
renamed in planned cleanups.

Additional cleanup is left for a future PR. These include:

- Remove KmsConnection and its associated injection modules

- Remove KmsSecretRevision from SQL schema and code

- Rename relevant files to more appropriate names.
2021-06-01 15:28:22 -04:00
Michael Muller
92f5f8989b Detach entities loaded by loadSingleton() (#1184)
* Detach entities loaded by loadSingleton()

* Reformatted
2021-06-01 14:22:57 -04:00
Michael Muller
810adf0158 Detach result objects obtained through jpaTm().query() (#1183)
* Added TransformingTypedQuery class

Added class to wrap TypedQuery so that we can detach all objects on load.

* Don't detach non-entity results; complete tests

* Changes for review

* Make non-static and call detach directly
2021-06-01 14:20:04 -04:00
gbrodman
f6004181f8 Convert DeleteExpiredDomainsAction to QueryComposer (#1180)
I think this one needed to wait until the detach-on-load PR went in, but
now we should be all set.
2021-06-01 13:32:25 -04:00
Michael Muller
296440b277 Remove labels from output of list_premium_lists (#1182)
* Remove labels from output of list_premium_lists

Remove the ability to show all of the labels associated with a premium list in
the list_premium_lists command.  Supporting this requires loading the entire
contents of all premium lists from the database as opposed to just the list
records, and the information can be obtained using get_premium_list.
2021-05-27 10:39:15 -04:00
Lai Jiang
50f80744d8 Change BillingEvent parent to Key<DomainHistory> (#1178) 2021-05-25 18:48:47 -04:00
Michael Muller
826320c7fd Always detach entities during load (#1116)
* Always detach entities during load

The mutations on non-transient fields that we do in some of the PostLoad
methods have been causing the objects to be marked as "dirty", and Hibernate
has been quietly persisting them during transaction commit.

By detaching the entities on load, we avoid any possibility of this, which
works in our case because we treat all of our model objects as immutable
during normal use.

There is another mixed blessing to this: lazy loading won't work on these
objects once they are detached from a session, meaning that all lazy-loaded
fields must be loaded up front.  This is unfortunate in that we don't always
need those lazy-loaded fields and there is a performance cost to loading them,
but it is also useful in that objects will now be complete when used outside
of the transaction that loaded them (prior to this, an attempt to access a
lazy-loaded field after its transaction closed would have caused an error at
runtime).

* Changes requested in review

* A few improvements to test logic

* Deal with premature detachment of mutated objects

* Add unit tests, use a more specific exception

* Changes for review

- Deal with DomainDeleteFlow, which appears to be the only case in the
  codebase where we're doing a load-after-save.
- Display the object that is being loaded after save in the exception message.
- Add a TODO for figuring out why Eager loads aren't working as expected.

* Move the recurring billing event into a parameter

* Changes for review and rebase error fix

* Remove initialization of list entries

Remove initialization of list entries that we want to be lazy loaded (premium,
reserved, and claims lists).

* Post-rebase cleanups
2021-05-25 14:34:24 -04:00
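A bare-bones sketch of the detach-on-load idea from the commit above, in plain JPA; the actual Nomulus transaction manager wraps this, and the signature here is illustrative.

```java
import javax.persistence.EntityManager;

public final class DetachOnLoadExample {

  /**
   * Loads an entity and immediately detaches it, so later mutations (for example in post-load
   * hooks or by callers) can never be silently flushed back to the database at commit time.
   */
  public static <T> T loadDetached(EntityManager em, Class<T> clazz, Object id) {
    T entity = em.find(clazz, id);
    if (entity != null) {
      em.detach(entity);
    }
    return entity;
  }
}
```
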
Michael Muller
8099789012 Safely lazy load claims and reserved lists (#1177)
* Safely lazy load claims and reserved lists

This moves the entries of all of these lists into "insignificant" fields and
manages them explicitly.

* Additional fixes

Fix a few problems that came up in the merge or weren't caught in earlier
local test runs.

* Changes for review

- removed debug code
- added comments
- improved some methods that were loading the entire claims list
  unnecessarily.

* Fixed javadoc links

* Reformatted

* Minor fix for review
2021-05-25 11:28:30 -04:00
gbrodman
20a0e4ce3f Remove a couple additional ofy() calls (#1171)
* Remove a couple additional ofy() calls
2021-05-24 13:12:40 -04:00
Lai Jiang
2f2e9dd49f Add methods to return subtypes of HistoryEntry when querying (#1172)
This is useful when we expect a specific subtype in the return value so
that we can set the parent resource (e.g. DomainContent for
DomainHistory) on it, or when a specific subtype is needed from the call
site.

This PR also fixes some use of generic return values. It is always better to
return <HistoryEntry> than a wildcard <? extends HistoryEntry>, because for
immutable collections, <? extends HistoryEntry> is no different from
<HistoryEntry> as a return value -- you can only get a HistoryEntry from it.
The wildcard return value means that even if you are indeed getting a
<DomainHistory> from the query, the call site has no compile time knowledge of
it and can only assume it is a <HistoryEntry>.

2021-05-24 11:36:11 -04:00
gbrodman
5e28694053 Add an object to store database migration stages (#1170)
* Add an object to store database migration stages

See go/registry-3.0-stage-management for more details

This basically boils down to storing an enum in the database so that we
can tell what stage of the migration we're in.

We use a cross-TLD parent so that we can have strong transactional
consistency on retrieval.
2021-05-21 11:49:35 -04:00
sarahcaseybot
642405375b Stop writing ClaimsList to Datastore (#1169)
* Stop writing ClaimsList to Datastore

* Fix some failing tests

* Rename ClaimsListShard to ClaimsList
2021-05-20 15:44:40 -04:00
Lai Jiang
02eb7cfcc3 Switch from using raw HistoryEntries to typed subclasses thereof (#1150)
HistoryEntry is used to record all histories (contact, domain, host) in
Datastore. In SQL it is now split into three subclasses (and thus
tables): ContactHistory, DomainHistory and HostHistory. Its builder is
genericized as a result which led to a lot of compiler warnings for the
use of a raw HistoryEntry in the existing code base.

This PR cleans things up by replacing all the explicit use of
raw HistoryEntry with the corresponding subclass and also adds some
guardrails to prevent the use of raw HistoryEntry accidentally.

Note that because DomainHistory includes nsHosts and gracePeriodHistory,
both of which are assigned a roid from ofy when built, the assigned roids for
resources after history entries are built are incremented compared to
when only HistoryEntrys are built (before this PR) in
RdapDomainSearchActionTest.

Also added a convenient tm().updateAll() varargs method.

2021-05-20 11:58:41 -04:00
Michael Muller
f7dca7fa96 Make PremiumList.labelsToPrices "insignificant" (#1167)
* Make PremiumList.labelsToPrices "insignificant"

Add the ImmutableObject.Insignificant annotation to labelsToPrices and also
mark it as Transient.  In order to do lazy-loads on this field, we need to do
so explicitly: doing otherwise breaks the immutability contract and prevents
detaching the object upon load.

Note that this is an expedient solution to this problem, but not the optimal
one.  Ideally, the disassociation between PremiumList and its PremiumEntry's
would be more explicit.  However, breaking labelsToPrices out would at minimum
require reworking the Create/UpdatePremiumList commands, which currently rely
on passing around a self-contained PremiumList object, both from the parser
interfaces and to the database.

If this approach is acceptable, we can apply it to ReservedList and ClaimsList
as well (though it may be easier to break the association in those cases).

* Fix premium list "delete" to support a test

* Fix a few more tests

* Changes for review (updated javadocs)

* Minor fixes

* Updated getLabelsToPrices() comment

* Format fixes, fixed PremiumEntry interfaces

PremiumEntry can now be SQL only.
2021-05-20 11:21:37 -04:00
gbrodman
a7e8ae5a2c Add loadOnlyOf method to tm() (#1162)
* Add loadOnlyOf method to tm()

In addition, there's a bit of a refactor of SqlReplayCheckpoint to make it
more in line with the other singletons. This method is useful for the
singleton classes where we expect at most one entity to exist, e.g.
ServerSecret.
2021-05-20 10:59:01 -04:00
Michael Muller
dc7f21ca68 Convert most poll message queries to QueryComposer (#1151)
* Convert most poll message queries to QueryComposer

* Add unit test and a better exception for datastore

* Remove datastorePollMessageQuery from PollFlowUtils

* Reformatted.

* Improved test equality checks

* Changes for review

* Converted concatenated string to String.format()
2021-05-19 15:58:20 -04:00
Weimin Yu
e96873f2d0 Support text-based JPQL query for BEAM (#1168)
* Support text-based JPQL query for BEAM
2021-05-19 14:45:04 -04:00
Lai Jiang
b5f05405a0 Fix linter warnings (#1165) 2021-05-18 18:30:01 -04:00
gbrodman
f702f2670b Use a flatMap in StaticPremiumPricingEngine (#1166)
* Use a flatMap in StaticPremiumPricingEngine
2021-05-18 12:20:04 -04:00
sarahcaseybot
21aeedae11 Fix NullPointerException in StaticPremiumPricingEngine (#1164)
* Fix NullPointerException in StaticPremiumPricingEngine

* Make getPremiumList return optional

* add isPresent checks
2021-05-18 10:55:27 -04:00
sarahcaseybot
c1f0c29134 Stop writing ReservedList to Datastore (#1163) 2021-05-17 17:46:21 -04:00
gbrodman
16641e05a1 Update GCL dependency to avoid security alert (#1139)
* Update GCL dependency to avoid security alert

This required a few changes in addition to the dependency update.

- a few transitive / required dependency updates as well
- updating soyutils_usegoog.js and adding checks.js because they're
necessary as part of the Soy compilation process
- Using a trustedResourceUri in the buildSrc Soy compilation instead of
a string
- changing the arguments to the Soy-to-Java compiler to comply with the
new version
- Moving all Soy UI files to be in the registrar directory. This was
not the case before due to previous thinking that we'd have separate
admin and registrar consoles -- this is no longer the case so it's no
longer necessary. This necessitated various refactorings and reference
changes.
  - The new soy-to-javascript compiler requires this, as it removes the
  "deps" param that we were previously using to say "use the general UI
  utils as dependencies for the registrar-console files".
- Creating a SQL environment and loading test data in the test server
main method -- previously, the local test server did not work.
- Fix some JS code that was referencing now-deleted library functions
- Removal of the Karma tests, as the karma-closure library hasn't been
updated since 2018 and it no longer works. We never noticed any errors
from the Karma tests, we never change the JS, and we have the
Java+Selenium screenshot differ tests to test the UI anyway.
2021-05-17 13:21:26 -04:00
Ben McIlwain
bf1c34cc3b Add sanity checks to history entry construction (#1156)
* Add sanity checks to history entry construction

* Add more missing setClientId() calls and delete scrap tool

* Merge branch 'master' into synthetic-requestedby

* Set more client IDs

* Merge branch 'master' into synthetic-requestedby
2021-05-14 19:54:35 -04:00
sarahcaseybot
93dc812ea2 Stop writing PremiumList to Datastore (#1160)
* Stop writing PremiumList to Datastore

* Fix formatting

* Format fix

* Rename the DAO

* Fix merge conflicts and add comment
2021-05-14 16:13:05 -04:00
Weimin Yu
e09138645f Fix RegistryJpaIO.Read problem with large data (#1161)
* Fix RegistryJpaIO.Read problem with large data

The read connector needs to detach loaded entities. This
is now the default behavior in QueryComposer.

Also removed the 'transaction mode' property from the Read connector.
There are no obvious use cases for a non-transactional query, and the
implementation is not straightforward with the current code base.

Also changed the return type of QueryComposer.list() to ImmutableList.
2021-05-14 15:19:12 -04:00
gbrodman
238deb25ec Clean up some SqlEntity classes (#1158)
* Clean up some SqlEntity classes

This started as having a better check for when to run the
ReplayCommitLogsToSqlAction but that'll require a bit more thought, and
this is a fairly simple PR that can be split out.
2021-05-14 11:25:11 -04:00
Ben McIlwain
6ce2926c6d Remove final vestiges of domain applications (#1153)
* Remove final vestiges of domain applications
2021-05-14 10:39:25 -04:00
Rachel Guan
27f431b9cf Change premium list command to be based off of mutating command (#1123)
* Change premium list command to be based off of mutating command

* Modify test cases and add comments for better readability

* Fix typo
2021-05-14 08:40:03 -04:00
gbrodman
2bb0e7305d Convert even more classes to auditedOfy() (#1157)
* Convert even more classes to auditedOfy()

This covers almost all of the classes in the second round of the sheet.
There are still some classes that need conversion but this is the vast
majority of them.

https://docs.google.com/spreadsheets/d/1aFEFuyH6vVW6b-h71O9f5CuUc6Y7YjZ2kdRL3lwXcVk/edit?resourcekey=0-guwZVKfSH-pntER1tUit6w#gid=1355213322
for notes
2021-05-13 14:12:13 -04:00
Lai Jiang
10757863ce Reorder steps (#1159) 2021-05-13 13:15:46 -04:00
gbrodman
02079010c6 Add mapreduce action to create synthetic history entries (#1125)
* Add mapreduce action to create synthetic history entries

RDE and zone file generation require being able to tell what objects
looked like in the past (though not beyond 30 days, or whatever the
Datastore retention period is set to). In Datastore, to answer this we
look at commit logs, and in SQL we will look at the History objects
stored for each EPP resource. This action can be run once while in
Datastore-primary-SQL-secondary to make sure that every EPP resource has
at least one history entry for which the resource-at-this-time field is
filled out in the SQL world.
2021-05-13 11:48:19 -04:00
Lai Jiang
4246e7e4e0 Add indexes on contacts in the Domain table (#1145)
These indexes are used to find whether a contact is linked to a domain
during a contact delete.

<!-- Reviewable:start -->
---
This change is [<img src="https://reviewable.io/review_button.svg" height="34" align="absmiddle" alt="Reviewable"/>](https://reviewable.io/reviews/google/nomulus/1145)
<!-- Reviewable:end -->
2021-05-13 10:47:35 -04:00
Lai Jiang
9f21989f13 Remove the logic to add full certificate in the headers (#1143)
2021-05-12 20:52:16 -04:00
gbrodman
2073f5b59f Populate the host in HostHistory objects in Host flows (#1129)
* Populate the host in HostHistory objects in Host flows
2021-05-12 19:11:30 -04:00
Weimin Yu
66ac000ef4 Fix the JPA Read connector for large data (#1155)
* Fix the JPA Read connector for large data

Allow result set streaming by setting the fetchSize on JDBC statements.
Many JDBC drivers buffer the entire result set by default, causing
delays before the first result and/or out-of-memory errors.

Also fixed an entity instantiation problem exposed in production runs.

Lastly, removed incorrect comments.
2021-05-12 19:07:38 -04:00
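
At the JDBC level, result-set streaming generally needs a non-zero fetch size (and, on PostgreSQL, autoCommit off); a generic sketch, with the table and column names made up:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

class StreamingReadSketch {
  // Without setFetchSize (and a transaction on PostgreSQL), many drivers
  // buffer the entire result set before returning the first row.
  static void streamRepoIds(Connection conn) throws Exception {
    conn.setAutoCommit(false); // PostgreSQL only streams inside a transaction
    try (PreparedStatement stmt =
        conn.prepareStatement("SELECT repo_id FROM \"Domain\"")) { // illustrative SQL
      stmt.setFetchSize(1000); // fetch rows in batches instead of all at once
      try (ResultSet rs = stmt.executeQuery()) {
        while (rs.next()) {
          System.out.println(rs.getString(1));
        }
      }
    }
  }
}
```
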
Rachel Guan
85bac9834f Add stageEntityChange() method to display difference when creating a reserved list (#1149)
* Add stageEntityChange() method to display difference before execution when creating a reserved list
2021-05-12 17:32:57 -04:00
Weimin Yu
484e30cd80 Restore a fix for flaky test (#1154)
* Restore a fix for flaky test

Restore a speculative fix for the flakiness in
DeleteExpiredDomainsActionTest. Although we identified a bug and fixed
it in a previous commit, it may not be the only bug. The removed fix may
still be necessary.
2021-05-12 16:03:42 -04:00
gbrodman
af67356aa0 Convert more ofy() to auditedOfy() calls (#1152)
A couple of these use the QueryComposer interface to avoid branching.

In addition, we enforce the Datastore restriction that there can be at
most one field with an inequality filter; see https://cloud.google.com/appengine/docs/standard/go111/datastore/query-restrictions#inequality_filters_are_limited_to_at_most_one_property
2021-05-12 15:06:19 -04:00
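
As a reminder of the restriction being enforced, an Objectify-style sketch (the entity and field names are hypothetical):

```java
import com.googlecode.objectify.Objectify;
import org.joda.time.DateTime;

class InequalityFilterSketch {
  static class OneTimeEvent {} // made-up entity used only for illustration

  // Allowed: both inequality filters are on the same property ("eventTime").
  static void sameProperty(Objectify ofy, DateTime start, DateTime end) {
    ofy.load().type(OneTimeEvent.class)
        .filter("eventTime >=", start)
        .filter("eventTime <", end)
        .list();
  }
  // Rejected by Datastore: inequality filters on two different properties
  // (e.g. "eventTime >=" and "cost >") in the same query.
}
```
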
Rachel Guan
8c9a2b5f4a Fix typo in comment of premium list example file (#1148)
* Fix typo in comment of premium list example file
2021-05-11 18:25:05 -04:00
gbrodman
0d67ea3a6e Combine the two Lock classes into one class (#1126)
* Combine the two Lock classes into one class

This allows us to remove the DAO and to just treat locks the same as we
would treat any other object -- generically grabbing them from the
transaction manager.

We do not need to be concerned about the changeover between Datastore
and SQL because we assume that any such changeover will require
sufficient downtime that any currently-valid acquired locks will expire
during the downtime. Otherwise, we could get into a situation where an
action has acquired a particular lock in Datastore but not SQL.
2021-05-11 16:37:40 -04:00
Rachel Guan
5b56e8b71b Create key based on the change type (#1147)
* Create key based on the change type
2021-05-11 15:24:35 -04:00
Weimin Yu
6eba8aa1c4 Fix timestamp inversion bug (#1144)
* Fix timestamp inversion bug

Set the number of commitLog buckets to 1 in CommitLog replay tests to
expose all timestamp inversion problems due to replay. Fixed
PollAckFlowTest which is related to this problem.

Also fixed a few tests that failed to advance the fake clock when they
should, using the following approaches:

- If DatabaseHelper is used but the clock is not injected, inject it. This
  allows us to remove some unnecessary manual clock advances.
- Manually advance the clock where convenient.
- Enable clock autoIncrement mode when calling production classes that
  perform multiple transactions.

We should consider making 1-bucket the default setting for tests. This
is left to another PR.
2021-05-11 14:51:10 -04:00
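
A sketch of the manual-advance approach, assuming the FakeClock test helper's constructor and advance methods (advanceOneMilli, advanceBy) used elsewhere in the tests:

```java
import google.registry.testing.FakeClock;
import org.joda.time.DateTime;
import org.joda.time.Duration;

class ClockAdvanceSketch {
  static void example() {
    FakeClock clock = new FakeClock(DateTime.parse("2021-05-11T00:00:00Z"));
    // ... first transaction commits at T ...
    clock.advanceOneMilli(); // the next commit gets a strictly later timestamp
    // ... second transaction commits at T + 1ms ...
    clock.advanceBy(Duration.standardDays(1)); // jump ahead where a test needs it
  }
}
```
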
Lai Jiang
8d18450e56 Update README.md (#1146)
2021-05-11 13:40:07 -04:00
sarahcaseybot
65be65fb24 Always use Cloud SQL as primary for ClaimsList (#1127)
* Always use Cloud SQL as primary for ClaimsList

* Add a test back
2021-05-10 16:47:34 -04:00
Weimin Yu
984f1118e3 Make secretmanager primary storage for keyring (#1124)
* Make secretmanager primary storage for keyring

Also removed the migrate_kms_keyring command.
2021-05-10 11:11:26 -04:00
gbrodman
0bcb142bc9 Add an auditedOfy marker method for allow-listed ofy() calls (#1138)
* Add an auditedOfy marker method for allow-listed ofy() calls

This will allow us to make sure that every usage of ofy() has been
hand-examined and specifically allowed.
2021-05-10 10:55:28 -04:00
Lai Jiang
d93a4e562a Delete hosts synchronously when using SQL (#1141)
Also put some common logic in helper functions in ContactDeleteFlowTest
to reduce clutter.
2021-05-10 10:22:01 -04:00
Lai Jiang
420a579e01 Fix flaky Spec11PipelineTest (#1133) 2021-05-07 15:01:11 -04:00
Lai Jiang
1ec96b66e2 Perform synchronous contact delete in SQL (#1137)
In SQL the contact of a domain is an indexed field and therefore we can
find linked domains synchronously, without the need for MapReduce.

The delete logic is mostly lifted from DeleteContactsAndHostsAction, but
because everything happens in a transaction we do not need to recheck a
lot of the preconditions that were necessary to ensure that the async
delete request still met the conditions that held when the request was
enqueued.
2021-05-07 10:48:51 -04:00
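
A hedged sketch of what the synchronous check can look like in SQL, using a hypothetical Domain entity with string contact-ID columns (the real schema and field names differ):

```java
import javax.persistence.EntityManager;

class LinkedDomainCheckSketch {
  // Because contact references are ordinary indexed columns in SQL, a
  // linked-domain check is a single query instead of a MapReduce scan.
  static boolean isContactLinked(EntityManager em, String contactRepoId) {
    Long count =
        em.createQuery(
                "SELECT COUNT(d) FROM Domain d WHERE d.adminContact = :id "
                    + "OR d.techContact = :id OR d.registrantContact = :id "
                    + "OR d.billingContact = :id",
                Long.class)
            .setParameter("id", contactRepoId)
            .getSingleResult();
    return count > 0;
  }
}
```
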
gbrodman
51a7ba249e Populate the contact in ContactHistory objects created in Contact flows (#1111)
* Populate the contact in ContactHistory objects created in Contact flows

Minimal interesting changes here
- a bit of reconstruction in ContactHistory to get the repo ID from the
key
- making the History revision ID Long instead of long so that it can be
null in non-built intermediate entities
- adding a copyFrom(HistoryEntry.Builder) method in HistoryEntry.Builder
so that we don't need to allocate quite as many unnecessary IDs, i.e.
removing the .build() lines in provideContactHistory and
provideDomainHistory
2021-05-06 14:38:55 -04:00
Lai Jiang
5120397607 Upload the GCB delete job yaml file to GCS (#1135)
2021-05-05 21:43:51 -04:00
sarahcaseybot
038825f254 Always use Cloud SQL as primary for Reserved and Premium Lists (#1113)
* Always use Cloud SQL as primary for Reserved and Premium Lists

* small typos

* Add a state check

* Add test for bloom filter

* fix import
2021-05-05 17:24:06 -04:00
Weimin Yu
b38574a9fc Add a BEAM read connector for JPA entities (#1132)
* Add a BEAM read connector for JPA entities

Added a Read connector to load JPA entities from Cloud SQL.

Also attempted a fix for the null threadfactory problem.
2021-05-05 15:45:03 -04:00
Lai Jiang
3f6ec8f1b0 Re-enable tests in RC build (#1130)
There has been a case where the CI was broken on Friday and no one
noticed or fixed it, and an RC build was built with broken tests.
The tests were disabled due to unknown test failures that have since
been fixed.

Also update the machine type used by GCB to be more powerful. This is
necessary for the tests to pass because N1_HIGHCPU_8 is RAM constrained
and the tests crash. I updated all jobs to use the new type, which
hopefully will make the build faster as well.

2021-05-05 13:53:21 -04:00
gbrodman
65fb0c6cff Update Karma version to avoid security hole in dependency (#1134)
This also forces the karma test to use the Gradle-installed version of
node instead of the global version. The global version installed on the
Kokoro machines is too old to function with some of the newer libraries.
2021-05-05 13:50:45 -04:00
Lai Jiang
e63085fb6a Add a GCB job to delete stopped GAE versions (#1128) 2021-05-05 11:27:46 -04:00
gbrodman
b5363e9457 Populate the domain in DomainHistory objects created in Domain flows (#1106)
Unfortunately, much of the time there's a bit of a circular dependency
in the object creation, e.g. the Domain object stores references to the
billing events which store references to the history object which
contains the Domain object. As a result, we allocate the history
object's ID before creating it, so that it can be referenced in the
other objects that store that reference, e.g. billing events.

In addition, we add a utility copyFrom method in HistoryEntry.Builder to
avoid unnecessary ID allocations.
2021-05-04 19:09:27 -04:00
Ben McIlwain
cb16df235a Remove unnecessary MockitoExtension from Spec11PipelineTest (#1115)
* Remove unnecessary MockitoExtension from Spec11PipelineTest

This is kind of a shot in the dark here, but this is one of the obvious
differences between this test class (which frequently experiences flakes) and
the other pipeline test classes which do not.

It's also possible we were getting the wrong runner if the test framework was
incorrectly detecting an App Engine runtime environment, so I added an assert
that will make it very clear if this is the cause of any failures.
2021-05-04 18:38:24 -04:00
Lai Jiang
d285edef3d Fix a few linter warnings (#1122) 2021-05-04 13:35:31 -04:00
Weimin Yu
509c0dcd17 Handle bad production data when migrating to SQL (#1120)
* Handle bad production data when migrating to SQL

Ignore or fix bad entities when populating SQL with production data from
Datastore. These are mostly inconsistent foreign keys.

See b/185954992 for details.
2021-05-03 16:09:43 -04:00
sarahcaseybot
ce18bf0690 Use FakeClock to prevent Expired Certificate Violations (#1121)
* Use FakeClock to prevent Expired Certificate Violations

* Format fixes

* Make CertificateChecker static
2021-05-03 15:10:26 -04:00
Lai Jiang
8d63cbfca0 Remove enforcement date from the SslServerInitializer (#1117)
The enforcement date has passed and ICANN has confirmed that their web
WHOIS prober conforms to our requirements.

2021-04-30 15:44:03 -04:00
Lai Jiang
eb6a1fe1ed Remove Pipeline as a field in pipeline classes (#1119)
In tests we use a TestPipelineExtension which does some static
initialization that should not be repeated in the same JVM. In our
XXXPipeline classes we save the pipeline as a field and usually write lambdas
that are passed to the pipeline. Because lambdas are effectively anonymous inner
classes, they are bound to their enclosing instances. When they get serialized
during pipeline execution, their enclosing instances are serialized too. This might result
in undefined behavior when multiple lambdas in the same XXXPipeline are used
on the same JVM (such as in tests), where the static initialization may be done
multiple times if different class loaders are used. This is very
unlikely to happen, but as a best practice we still remove the pipeline
fields.
2021-04-30 14:32:33 -04:00
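
The capture problem in miniature, as a generic Java sketch (not the actual pipeline code): a lambda that reads an instance field implicitly holds `this`, so serializing it drags the enclosing class along, while a lambda that touches no instance state does not.

```java
import java.io.Serializable;

class CaptureSketch { // deliberately not Serializable
  interface SerFn extends Serializable { // stands in for Beam's SerializableFunction
    String apply(String input);
  }

  private final Object pipeline = new Object(); // stands in for a Pipeline field

  // Capturing: references an instance field, so it holds `this`; serializing it
  // at pipeline submission would try to serialize CaptureSketch too.
  SerFn capturing() {
    return s -> s + pipeline.hashCode();
  }

  // Non-capturing: no instance state is referenced, so only the lambda itself
  // needs to be serialized.
  SerFn nonCapturing() {
    return s -> s + "-ok";
  }
}
```
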
Weimin Yu
431710c95b Improve usability of WipeOutCloudSqlAction (#1118)
* Improve usability of WipeOutCloudSqlAction

Replace the "drop owned" statement with ones that drops only tables and
sequences. The former statement also drops default grants for the
nomulus user, which must be restored before the database can be used by
the nomulus server and tools.
2021-04-29 23:09:20 -04:00
Michael Muller
1fdf9cb979 Convert GenerateLordnCommand to tm (#1091)
* Convert GenerateLordnCommand to tm

This makes use of QueryComposer and adds a `list()` method to it.

Since there was no test for GenerateLordnCommand, this also implements one.

* Changes requested in review

* Add test for list queries

* Stream domains instead of listing them

* Reformatted
2021-04-29 13:14:56 -04:00
Michael Muller
95fdd36c77 Make nom_build not check for ".git" directory (#1110)
* Make nom_build not check for ".git" directory

nom_build tries to verify that it is in the root of the tree prior to doing
anything; however, checking for a .git directory doesn't work in a merged
directory.

* Minor formatting fix to attempt to force rebuild
2021-04-28 11:23:39 -04:00
Ben McIlwain
d239a4d706 Make the ReadDnsQueueAction tests retry on failures (#1114)
These tests are flaky due to some kind of contention/collision on the mock task
queue. Retrying seems to fix the vast majority of flakes, is easy to implement,
and is more performant than moving these tests into the fragileTests test suite.
2021-04-28 10:20:36 -04:00
gbrodman
d99278e723 Convert remaining read-only flow tests to dual-DB (#1107)
Note that there are many flow tests that aren't
@DualDatabaseTest-annotated yet but those will come later, as they will
require more changes to the flows (other PRs are coming or in progress).
This only includes the remaining EppResource flows that don't create a
history entry.
2021-04-27 20:37:09 -04:00
Ben McIlwain
9d4de806f5 Improve error when creating domain label lists for non-existent TLDs (#1112)
* Improve error message when creating domain label lists for non-existent TLDs
2021-04-27 19:17:23 -04:00
sarahcaseybot
2528ee05dd Remove SMDRL completely from Datastore (#1104)
* Remove SMDRL completely from Datastore

* Remove some unnecessary stuff

* Change row count to 10000

* Remove implement EntityTestCase
2021-04-26 17:15:50 -04:00
Rachel Guan
367a38c5b0 Display changes when updating reserved list (#1093)
* add stageEntityChange to show diff

* add test cases
2021-04-26 13:31:57 -04:00
Lai Jiang
8884425a05 Fix build (#1109) 2021-04-26 10:34:29 -04:00
gbrodman
2c4c0bf9f8 Convert more tests to use @DualDatabaseTest and SQL in general (#1101)
Nothing super crazy here other than persisting the entity changes in
DomainDeleteFlow at the end of the flow rather than almost at the end.
This means that when we return the results we give the results as they
were originally present, rather than the subsequently-changed values.
2021-04-23 18:26:44 -04:00
Michael Muller
9c89643367 Fix Spec11 domain check (#1105)
* Fix Spec11 domain check

We should be checking to see if there are _any_ active domains for a given
reported domain, not to see if _the_ domain for the name is active.

The last change caused an exception for domains with soft-deleted past domains
of the same name.  The original code only checked the first domain returned
from the query, which may have been soft-deleted.  This version checks all
domain records to see if any are active.

* filter().count() -> anyMatch()
2021-04-23 14:20:31 -04:00
gbrodman
9f69a0bf2e Begin saving the EppResource parent in *History objects (#1090)
* Begin saving the EppResource parent in *History objects

We use DomainCreateFlow as an example here of how this will work. There
were a few changes necessary:

- various changes around GracePeriod / GracePeriodHistory so that we can
actually store them without throwing NPEs
- Creating one injectable *History.Builder field and using in place of
the HistoryEntry.Builder injected field in DomainCreateFlow
- Saving the EppResource as the parent in the *History.Builder setParent
calls
- Converting to/from HistoryEntry/*History classes in
DatastoreTransactionManager. Basically, we'll want to return the
*History subclasses (and similar in the ofy portions of HistoryEntryDao)
- Converting a few HistoryEntry.Builder usages to DomainHistory.Builder
usages. Eventually we should convert all of them.
2021-04-22 15:03:37 -04:00
sarahcaseybot
40db04db8d Use CommandWithRemoteApi in SetDatabaseTransitionScheduleCommand (#1099)
* Use CommandWithRemoteApi in ConfirmingCommand

* Remove unnecessary extensions

* Remove from ConfirmingCommand
2021-04-22 14:50:19 -04:00
Lai Jiang
217b37b9d5 Migrate the billing pipeline to flex template (#1100)
This is similar to the migration of the spec11 pipeline in #1073. Also removed
a few Dagger providers that are no longer needed.

TESTED=tested the dataflow job on alpha.

2021-04-22 10:26:15 -04:00
Lai Jiang
09b6e300fc Remove unused BeamJpaExtension and related classes (#1102)
* Remove unused BeamJpaExtension and related classes

* Remove unused qualifiers
2021-04-22 10:02:18 -04:00
Lai Jiang
4d99a5dd35 Remove a linter warning (#1103)
* Remove a linter warning

* Remove duplicate
2021-04-22 09:42:05 -04:00
gbrodman
5d3e9da750 Defer all foreign keys in SQL (#1094)
* Defer all foreign keys in SQL

The main difference here is that the constraint violation exceptions
won't be thrown until the transaction is completed, rather than when the
insert is first performed within the transaction. We get the same error
message either way. The primary benefit to this is that when dealing
with large operations inside a single transaction (flows), we don't need
to worry about the order of insertions or removals with regard to
foreign keys.
2021-04-21 14:29:20 -04:00
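
Roughly what deferring a constraint means at the PostgreSQL level, shown as DDL issued through JDBC (the table and constraint names are made up): the check moves from each statement to commit time.

```java
import java.sql.Connection;
import java.sql.Statement;

class DeferredConstraintSketch {
  // A DEFERRABLE INITIALLY DEFERRED foreign key is validated at COMMIT rather
  // than at each INSERT/DELETE, so statement order inside the transaction no
  // longer matters for referential integrity.
  static void deferForeignKey(Connection conn) throws Exception {
    try (Statement stmt = conn.createStatement()) {
      stmt.execute(
          "ALTER TABLE \"BillingEvent\" " // illustrative table name
              + "ALTER CONSTRAINT fk_billing_event_history " // illustrative constraint name
              + "DEFERRABLE INITIALLY DEFERRED");
    }
  }
}
```
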
Lai Jiang
464f9aed1f Migrate Spec11 pipeline to flex template (#1073)
* Migrate Spec11 pipeline to flex template

Unfortunately this PR has turned out to be much bigger than I initially
conceived. However, there is no good way to separate it out because the
changes are intertwined. This PR includes 3 main changes:

1. Change the spec11 pipeline to use a Dataflow Flex Template.
2. Retire the use of the old JPA layer that relies on credentials saved
   in KMS.
3. Some extensive refactoring to streamline the logic and improve test
   isolation.

* Fix job name and remove projectId from options

* Add parameter logs

* Set RegistryEnvironment

* Remove logging and modify safe browsing API key regex

* Rename a test method and rebase

* Remove unused Junit extension

* Specify job region
2021-04-21 00:09:50 -04:00
sarahcaseybot
a0995fa0eb Stop dual read and dual write of SMDRL (#1095)
* Stop dual read and dual write of SMDRL

* Remove some more stuff from SignedMarkRevocationListDaoTest

* Change some names
2021-04-20 17:08:59 -04:00
Weimin Yu
fff95b20e6 Skip undefined secrets in keyring migration (#1098)
* Skip undefined secrets in keyring migration

If a secret does not exist in Datastore, log and skip it.
2021-04-20 16:26:40 -04:00
gbrodman
23896b64c7 Set default value of 1 for new not-null columns (#1097)
Use 1 since it's the constant singleton ID
2021-04-20 15:25:20 -04:00
Ben McIlwain
844b5ab713 Send an immediate poll message for superuser domain deletes (#1096)
* Send an immediate poll message for superuser domain deletes

This poll message is in addition to the normal poll message that is sent when
the domain's deletion is effective (typically 35 days later). It's needed
because, in the event of a superuser deletion, the owning registrar won't
otherwise necessarily know it's happening.

Note that, in the case of a --immediate superuser deletion, the normal poll
message is already being sent immediately, so this additional poll message is
not necessary.
2021-04-20 15:22:49 -04:00
sarahcaseybot
aac952d6a3 Return to using hash for login validation (#1084)
* Return to using hash for login validation

This PR also removes the start date for certificate enforcement.

* Inline verify certificate compliance
2021-04-20 14:07:01 -04:00
gbrodman
ee31f1fd95 Update various tests to work with SQL as well (#1078)
* Update various tests to work with SQL as well

The main weird bit here is adding a method in DatabaseHelper to
retrieve and initialize all objects in either database. The
initialization is necessary since it's used post-command-dry-run to make
sure that no changes were actually made.
2021-04-20 11:52:53 -04:00
Michael Muller
4657be21b7 Convert CountDomainsCommand to tm (#1092)
* Convert CountDomainsCommand to tm

As part of this, implement "select count(*)" queries in the QueryComposer.

* Replaced kludgy trick for objectify count
2021-04-20 10:38:38 -04:00
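
For reference, a "select count(*)" expressed through the standard JPA Criteria API (a generic sketch; Domain stands in for the real entity class):

```java
import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;

class CountQuerySketch {
  // Equivalent of: SELECT COUNT(d) FROM Domain d
  static long countDomains(EntityManager em) {
    CriteriaBuilder cb = em.getCriteriaBuilder();
    CriteriaQuery<Long> query = cb.createQuery(Long.class);
    query.select(cb.count(query.from(Domain.class))); // Domain is illustrative
    return em.createQuery(query).getSingleResult();
  }
}
```
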
sarahcaseybot
48732c51e8 Always use Cloud SQL as primary in SignedMarkRevocationListDao (#1061)
* Modify ClaimsList DAO to always use Cloud SQL as primary

* Revert ClaimsList add changes to SignedMarkRevocationList

* Fix flow tests

* Use start of time for empty list

* replace lambda with method reference
2021-04-19 14:51:00 -04:00
Weimin Yu
7893ba746a Upload latest version of RDE report to icann (#1089)
* Upload latest version of RDE report to icann

Currently the RdeReportAction is hard coded to load the initial version
of a report. This is wrong when reports have been regenerated.

Changed lines are copied from RdeUploadAction.
2021-04-16 17:12:02 -04:00
Michael Muller
1c96cd64fe Implement query abstraction (#1069)
* Implement query abstraction

Implement a query abstraction layer ("QueryComposer") that allows us to
construct fluent-style queries that work across both Objectify and JPA.

As a demonstration of the concept, convert Spec11EmailUtils and its test to
use the new API.

Limitations:
-  The primary limitations of this system are imposed by Datastore; for
   example, all queryable fields must be indexed, orderBy must coincide with
   the order of any inequality queries, inequality filters are limited to one
   property...
-  JPA queries are limited to a set of where clauses (all of which must match)
   and an "order by" clause.  Joins, functions, complex where logic and
   multi-table queries are simply not allowed.
-  Descending sort order is currently unsupported (this is simple enough to
   add).
2021-04-16 12:21:03 -04:00
Ben McIlwain
bc2a5dbc02 Fix bug that was incorrectly assuming Cursor would always exist (#1088)
* Fix bug that was incorrectly assuming Cursor would always exist

In fact, the Cursor entity does not always exist (i.e. if an upload has never
previously been done on this TLD, i.e. it's a new TLD), and the code needs to be
resilient to its non-existence.

This bug was introduced in #1044.
2021-04-15 17:03:25 -04:00
Weimin Yu
98d259449b Use lazy injection in SendEscrow command (#1086)
* Use lazy injection in SendEscrow command

The injected object in SendEscrowReportToIcannCommand creates Ofy keys
in its static initialization routine. This happens before the RemoteApi
setup. Use lazy injection to prevent failure.
2021-04-15 16:15:01 -04:00
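
How lazy injection defers that initialization, as a minimal Dagger sketch (the dependency type is a placeholder):

```java
import dagger.Lazy;
import javax.inject.Inject;

class LazyInjectionSketch {
  interface EscrowReportSender { // placeholder for the real dependency type
    void send();
  }

  // Dagger injects a Lazy wrapper; the underlying object (whose initialization
  // creates Ofy keys) is not constructed until get() is called.
  @Inject Lazy<EscrowReportSender> sender;

  void runAfterRemoteApiSetup() {
    sender.get().send(); // construction happens here, after setup is done
  }
}
```
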
gbrodman
1cc8af4acd Specify explicit ofyTm usage in SetDatabaseTransitionScheduleCommand (#1081)
* Specify explicit ofyTm usage in SetDatabaseTransitionScheduleCommand

We cannot use the standard MutatingCommand because the DB schedule is
explicitly always stored in Datastore, and once we transition to
SQL-as-primary, MutatingCommand will stage the entity changes to SQL.

In addition, we remove the raw ofy() call from the test.
2021-04-15 11:59:04 -04:00
Rachel Guan
fbef643488 make transitionId a required parameter (#1083) 2021-04-15 10:42:15 -04:00
Lai Jiang
2161e46a4b Fix a typo (#1085) 2021-04-15 08:15:31 -04:00
Lai Jiang
d7f27bdad3 Update the gradle appengine plugin (#1082) 2021-04-14 19:33:55 -04:00
sarahcaseybot
78e139b2c8 Add a ComparePremiumLists command (#1056)
* Add a ComparePremiumLists command

* Add a command description

* fix output

* Fix comment format

* Add periods

* Small output message change

* Inline getting stdout

* Use sets

* Inline Sets.difference
2021-04-14 18:10:47 -04:00
gbrodman
87d511d5e3 Convert more classes to using SQL / TM (#1067)
* Convert more classes to using SQL / TM

Nothing much particularly crazy here
2021-04-14 16:45:06 -04:00
sarahcaseybot
eff79e9c99 Remove unnecessary ClaimsList in FlowTest (#1077) 2021-04-14 13:49:35 -04:00
Weimin Yu
bb453b1982 Migrate Keyring secrets to Secret Manager (#1072)
* Migrate Keyring secrets to Secret Manager

Implemented dual-read of Keyring secrets with Datastore as primary.

Implemented dual-write of keyring secrets with Datastore as primary.
Secret Manager write failures are simply thrown. This is fine since all
keyring writes are manual, through the update_kms_keyring command.

Added a one-way migration command that copies all data to secret manager
(unencrypted).
2021-04-14 10:17:33 -04:00
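
For context, reading a secret with the Google Cloud Secret Manager client looks roughly like this (generic sketch; project and secret IDs are placeholders):

```java
import com.google.cloud.secretmanager.v1.AccessSecretVersionResponse;
import com.google.cloud.secretmanager.v1.SecretManagerServiceClient;
import com.google.cloud.secretmanager.v1.SecretVersionName;

class SecretManagerReadSketch {
  // Fetch the latest version of a named secret and decode its payload.
  static String readSecret(String projectId, String secretId) throws Exception {
    try (SecretManagerServiceClient client = SecretManagerServiceClient.create()) {
      SecretVersionName name = SecretVersionName.of(projectId, secretId, "latest");
      AccessSecretVersionResponse response = client.accessSecretVersion(name);
      return response.getPayload().getData().toStringUtf8();
    }
  }
}
```
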
Weimin Yu
8b41b5c76f Upgrade testcontainers to work around a race (#1080)
* Upgrade testcontainers to work around a race

testcontainers 1.15.? has a race condition that occasionally causes deadlocks.
This can be worked around by upgrading to 1.15.2 and setting the transport type to
http5.

See https://github.com/testcontainers/testcontainers-java/issues/3531
for more information.

There are two changes that are not lockfiles:
- dependencies.gradle
- java_common.gradle
2021-04-14 09:45:09 -04:00
Lai Jiang
881f0f5f09 Make cross referencing work in Kythe, take 2 (#1079)
* Make cross referencing work in Kythe, take 2

Per suggestions on b/184284124.
2021-04-14 09:13:05 -04:00
Weimin Yu
abe6a193a8 Ad hoc tool to fix duplicate contactId (#1076)
* Ad hoc tool to fix duplicate contactId
2021-04-13 22:29:22 -04:00
gbrodman
d35460f14c Convert TmchCrl and ServerSecret to cleaner tm() impls (#1068)
* Convert TmchCrl and ServerSecret to cleaner tm() impls

When I implemented this originally I knew a lot less than I know now
about how we'll be storing and retrieving these singletons from SQL. The
optimal way here is to use the single SINGLETON_ID as the primary key,
that way we always know how to create the key that we can use in the
tm() retrieval.

This allows us to use generic tm() methods and to remove the handcrafted
SQL queries.
2021-04-13 20:50:07 -04:00
gbrodman
245e2ea5a8 Enforce consistency in non-cached FKI loads (#1075)
* Enforce consistency in non-cached FKI loads

For the cached code path, we do not require consistency but we do
require the ability to load / operate on large numbers of entities (so,
we must do so without a Datastore transaction). For the non-cached code
path, we require consistency but do not care about large numbers of
entities, so we must remain in the transaction that we're already in.
2021-04-13 15:14:02 -04:00
sarahcaseybot
65f35ac8c1 Fix TimestampInversionException (#1065)
* Fix TimestampInversionException

* Add autoIncrement

* unset auto increment mode
2021-04-13 11:59:14 -04:00
sarahcaseybot
994af085d8 Add a CompareReservedListCommand (#1054)
* Add a CompareReservedListCommand

* compare maps

* output format fixes

* Clean up loops

* Inline Sets.difference()

* Remove ImmutableCopy()
2021-04-13 11:45:45 -04:00
Lai Jiang
ce25cea134 Disable TLS tests related to v1.1 (#1074)
There is no need for this test now because we've passed the enforcement
date. We should take out the entire enforcement date logic, but right now
this test is failing because TLS 1.1 is no longer supported by
the latest release of JDK 11.

The other test is a bit tricky to fix, see comment.

Disable these tests for now to unblock development.
2021-04-13 10:30:58 -04:00
gbrodman
92dcacf78c Add a beforeSqlSave callback to ReplaySpecializer (#1062)
* Add a beforeSqlSave callback to ReplaySpecializer

When in the Datastore-primary and SQL-secondary stage, we will want to
save the EppResource-at-this-point-in-time field in the *History
objects so that later on we can examine the *History objects to see what
the resource looked like at that point in time.

Without this PR, the full object at that point in time would be lost
during the asynchronous replay since Datastore doesn't know about it.

In addition, we modify the HistoryEntry weight / priority so that
additions to it come after the additions to the resource off of which it
is based. As a result, we need to DEFER some foreign keys so that we can
write the billing / poll message objects before the history object that
they're referencing.
2021-04-12 12:11:20 -04:00
Lai Jiang
020273b184 Make Nomulus compile on macOS (#1070)
* Make Nomulus compile on macOS

BSD sed behaves differently than Linux sed. By adding a "-e" flag, the
command works on both systems.

See: https://unix.stackexchange.com/questions/101059/sed-behaves-different-on-freebsd-and-on-linux

* Make the regex easier to understand
2021-04-12 10:12:26 -04:00
Weimin Yu
0156a29f93 Try again to fix a flaky test (#1066)
* Try again to fix a flaky test

Fix DeleteExpiredDomainsActionTest.test_deletesThreeDomainsInOneRun
2021-04-08 11:47:35 -04:00
gbrodman
0b520f3885 Partially convert EppResourceUtils to SQL (#1060)
* Partially convert EppResourceUtils to SQL

Some of the rest will depend on b/184578521.

The primary conversion in this PR is the change in
NameserverLookupByIpCommand as that is the only place where the removed
EppResourceUtils method was called. We also convert to DualDatabaseTest
the tests of the callers of NLBIC, and use a CriteriaQueryBuilder in the
foreign key index SQL lookup (allowing us to avoid the String.format
call).
2021-04-07 19:20:13 -04:00
Weimin Yu
da6d90755e Add a wipeout action for Datastore in QA (#1064)
* Add a wipeout action for Datastore in QA
2021-04-07 16:17:51 -04:00
Weimin Yu
4d04e4fd15 Add -r when rsync a release to the live folder (#1063)
* Add -r when rsync a release to the live folder

Release folders are no longer flat. Each of them has a 'beam'
subfolder with pipeline metadata files.
2021-04-07 10:07:00 -04:00
Weimin Yu
928b272d89 Remove SQL credentials from Keyring (#1059)
* Remove SQL credentials from Keyring

Remove SQL credentials from Keyring. SQL credentials will be managed by
an automated system (go/dr-sql-security) and the keyring is no longer a
suitable place to hold them.

Also stopped loading SQL credentials from the keyring for comparison
with those from the secret manager.
2021-04-07 10:05:59 -04:00
Ben McIlwain
e31f0cb9ba Don't send email notification when 0 uploads were attempted (#1058)
* Don't send email notification when 0 uploads were attempted
2021-04-06 18:17:57 -04:00
Michael Muller
06b0887c51 Convert RefreshDnsOnHostRenameAction to tm (#1053)
* Convert RefreshDnsOnHostRenameAction to tm

This is not quite complete because it also requires the conversion of a
map-reduce which is in scope for an entirely different work.  Tests of the
map-reduce functionality are excluded from the SQL run.

This also requires the following additional fixes:

-  Convert Lock to tm, as doing so was necessary to get this action to work.
   As Lock is being targeted as DatastoreOnly, we convert all calls in it to
   use ofyTm()
-  Fix a bug in DualDatabaseTest (the check for an AppEngineExtension field is
   wrong, and captures fields of type Object as AppEngineExtensions)
-  Introduce another VKey.from() method that creates a VKey from a stringified
   Ofy Key.

* Rename VKey.from(String) to fromWebsafeKey

* Throw NoSuchElementE. instead of NPE
2021-04-06 14:28:30 -04:00
Lai Jiang
73dcb4de4e Enable cross referencing for generated sources (#1057)
This change should allow generated classes like AutoValue or Dagger
classes to be cross-referencable on cs.nomulus.foo

See b/184284124 for context.
2021-04-06 10:35:20 -04:00
Weimin Yu
9dd08c48bc Use credential in secretmanager to deploy schema (#1055)
* Use credential in secretmanager to deploy schema

Fetch the schema_deployer credential from SecretManager when deploying
the schema to Cloud SQL.
2021-04-06 09:43:15 -04:00
sarahcaseybot
eabf056f9b Correctly get the primary database value in PremiumListDualDao (#1052)
* Correctly get the primary database value in PremiumListDualDao

* Remove extra AppEngineExtension

* get rid of ofy call

* Remove extra duration skip in test
2021-04-05 13:44:30 -04:00
gbrodman
7c3ef52026 Convert poll-message-related classes to use SQL as well (#1050)
* Convert poll-message-related classes to use SQL as well

Two relatively complex parts. The first is that we needed a small
refactor on the AckPollMessagesCommand because we could theoretically be
acking more poll messages than the Datastore transaction size boundary.
This means that the normal flow of "gather the poll messages from the DB
into one collection, then act on it" needs to be changed to a more
functional flow.

The second is that acking the poll message (deleting it in most cases)
reduces the number of remaining poll messages in SQL but not in
Datastore, since in Datastore the deletion does not take effect until
after the transaction is over.
2021-04-02 19:57:26 -04:00
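
One way to stay under the transaction size boundary is to partition the poll message IDs into fixed-size batches and ack each batch in its own transaction; a sketch using Guava, with the batch size and the transactional callback left abstract:

```java
import com.google.common.collect.ImmutableList;
import com.google.common.collect.Lists;
import java.util.List;
import java.util.function.Consumer;

class AckInBatchesSketch {
  // Split the IDs into batches small enough for one transaction each, then
  // hand every batch to a callback that acks it transactionally.
  static void ackInBatches(
      ImmutableList<Long> pollMessageIds, Consumer<List<Long>> ackBatchInOneTransaction) {
    for (List<Long> batch : Lists.partition(pollMessageIds, 20)) { // 20 is arbitrary
      ackBatchInOneTransaction.accept(batch);
    }
  }
}
```
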
sarahcaseybot
75e74f013d Add a getReservedList command (#1041)
* Add a getReservedList command

* add tests

* Remove multiple lists parameter

* print error to stderr
2021-04-02 19:23:36 +00:00
gbrodman
c077aca433 Convert AuthenticatedRegAccessor and OteStats to SQL (#1039)
This required adding a new HistoryEntryDao method but it's fairly
similar to the ones we already have.
2021-04-02 11:41:26 -04:00
gbrodman
4e7dd7a95a Convert DomainTCF and DomainContent to tm() (#1046)
Note: this also includes conversions of the tests of any class that
called the converted DomainContent method to make sure that we caught
everything.
2021-04-02 11:41:00 -04:00
sarahcaseybot
8952687207 Add CommandWithRemoteApi to DeleteReservedListCommand (#1051) 2021-04-01 21:32:40 -04:00
Ben McIlwain
0164bceb95 Fix some low-hanging code quality issue fruits (#1047)
* Fix some low-hanging code quality issue fruits

These include problems such as: use of raw types, unnecessary throw clauses,
unused variables, and more.
2021-04-01 18:04:21 -04:00
Michael Muller
dc51019fd2 Convert ofy -> tm for two more classes (#1049)
* Convert ofy -> tm for two more classes

Convert ofy -> tm for MutatingCommand and DedupeOneTimeBillingEventIdsCommand.

Note that DedupeOneTimeBillingEventIdsCommand will not be needed after
migration, so this conversion is just to remove the ofy uses from the
codebase.  We don't update the test (other than to keep it working) and it
wouldn't currently work in SQL.

* Fixed a test broken by this PR
2021-04-01 07:27:43 -04:00
gbrodman
36762b5e08 Convert ResaveEntityAction and RelockDomainAction to tm() (#1048)
In addition, we move the deleteTestDomain method to DatabaseHelper since
it'll be useful in other places (e.g. RelockDomainActionTest) and remove
the duplicate definition of ResaveEntityAction.PATH.

We also can ignore deletions of non-persisted entities in the JPA
transaction manager.
2021-03-31 15:52:25 -04:00
gbrodman
c9980fcdec Update RegistrarSettingsAction and RegistrarContact to SQL calls (#1042)
* Update RegistrarSettingsAction and RegistrarContact to SQL calls

Relevant potentially-unclear changes:
- Making sure the last update time is always correct and up to date in
the auto timestamp object
- Reloading the domain upon return when updating in a new transaction to
make sure that we use the properly-updated last update time (SQL returns
the correct result if retrieved within the same txn but DS does not)
2021-03-30 16:41:26 -04:00
gbrodman
d30ab08f6d Convert DomainTAF and DomainFlowUtils to SQL (#1045)
* Convert DomainTAF and DomainFlowUtils to SQL

The only tricky part to this is that the order of entities that we're
saving during the DomainTransferApproveFlow matters -- some entities
have dependencies on others so we need to save the latter first. We
change `entitiesToSave` to be a list to reinforce this.
2021-03-30 16:33:35 -04:00
gbrodman
b90b9af80e Convert RDE classes to use tm() (#1044)
This is mostly just using the generic Cursor load methods with the
slight difference that before we relied on ofy() returning null on
absent entities.
2021-03-30 13:09:33 -04:00
1032 changed files with 35339 additions and 29884 deletions

@@ -1,4 +1,5 @@
python/
node_modules/
**/build/
**/out/
.*/

.gitignore

@@ -4,6 +4,7 @@
######################################################################
# Java Ignores
gjf.out
*.class
# Mobile Tools for Java (J2ME)
@@ -111,3 +112,7 @@ core/**/registrar_bin*.js
core/**/registrar_dbg*.js
core/**/registrar_bin*.css
core/**/registrar_dbg*.css
# Appengine generated files
core/WEB-INF/appengine-generated/*.bin
core/WEB-INF/appengine-generated/*.xml

SECURITY.md

@@ -0,0 +1,4 @@
To report a security issue, please use http://g.co/vulnz. We use
http://g.co/vulnz for our intake, and do coordination and disclosure here on
GitHub (including using GitHub Security Advisory). The Google Security Team will
respond within 5 working days of your report on g.co/vulnz.

@@ -24,7 +24,7 @@ buildscript {
}
dependencies {
classpath 'com.google.cloud.tools:appengine-gradle-plugin:2.0.1'
classpath 'com.google.cloud.tools:appengine-gradle-plugin:2.4.1'
classpath 'net.ltgt.gradle:gradle-errorprone-plugin:0.6.1'
classpath 'org.sonatype.aether:aether-api:1.13.1'
classpath 'org.sonatype.aether:aether-impl:1.13.1'
@@ -318,7 +318,7 @@ subprojects {
// expose to users.
if (project.name != 'docs') {
javadocSource << project.sourceSets.main.allJava
javadocClasspath << project.sourceSets.main.compileClasspath
javadocClasspath << project.sourceSets.main.runtimeClasspath
javadocClasspath << "${buildDir}/generated/sources/annotationProcessor/java/main"
javadocDependentTasks << project.tasks.compileJava
}
@@ -457,6 +457,8 @@ task javaIncrementalFormatApply {
task javadoc(type: Javadoc) {
source javadocSource
classpath = files(javadocClasspath)
// Exclude the misbehaving generated-by-Soy Java files
exclude "**/*SoyInfo.java"
destinationDir = file("${buildDir}/docs/javadoc")
options.encoding = "UTF-8"
// In a lot of places we don't write @return so suppress warnings about that.

@@ -72,6 +72,7 @@ dependencies {
compile deps['com.google.auth:google-auth-library-credentials']
compile deps['com.google.auth:google-auth-library-oauth2-http']
compile deps['com.google.auto.value:auto-value-annotations']
compile deps['com.google.common.html.types:types']
compile deps['com.google.cloud:google-cloud-core']
compile deps['com.google.cloud:google-cloud-storage']
compile deps['com.google.guava:guava']

@@ -20,12 +20,12 @@ com.google.cloud:google-cloud-core:1.94.3
com.google.cloud:google-cloud-storage:1.113.12
com.google.code.findbugs:jsr305:3.0.2
com.google.code.gson:gson:2.8.6
com.google.common.html.types:types:1.0.4
com.google.common.html.types:types:1.0.6
com.google.errorprone:error_prone_annotations:2.5.1
com.google.escapevelocity:escapevelocity:0.9.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.gwt:gwt-user:2.8.0-beta1
com.google.http-client:google-http-client-apache-v2:1.39.0
com.google.http-client:google-http-client-appengine:1.39.0
com.google.http-client:google-http-client-gson:1.39.0
@@ -34,10 +34,11 @@ com.google.http-client:google-http-client:1.39.0
com.google.inject.extensions:guice-multibindings:4.1.0
com.google.inject:guice:4.1.0
com.google.j2objc:j2objc-annotations:1.3
com.google.jsinterop:jsinterop-annotations:1.0.1
com.google.oauth-client:google-oauth-client:1.31.4
com.google.protobuf:protobuf-java-util:3.15.3
com.google.protobuf:protobuf-java:3.15.3
com.google.template:soy:2018-03-14
com.google.template:soy:2021-02-01
com.ibm.icu:icu4j:57.1
commons-codec:commons-codec:1.11
commons-logging:commons-logging:1.2
@@ -47,17 +48,16 @@ io.opencensus:opencensus-contrib-http-util:0.28.0
javax.annotation:javax.annotation-api:1.3.2
javax.annotation:jsr250-api:1.0
javax.inject:javax.inject:1
javax.validation:validation-api:1.0.0.GA
org.apache.commons:commons-lang3:3.8.1
org.apache.commons:commons-text:1.6
org.apache.httpcomponents:httpclient:4.5.13
org.apache.httpcomponents:httpcore:4.4.14
org.checkerframework:checker-compat-qual:2.5.5
org.checkerframework:checker-qual:3.5.0
org.checkerframework:checker-qual:3.8.0
org.json:json:20160212
org.ow2.asm:asm-analysis:6.0
org.ow2.asm:asm-commons:6.0
org.ow2.asm:asm-tree:6.0
org.ow2.asm:asm-util:6.0
org.ow2.asm:asm:6.0
org.ow2.asm:asm-analysis:7.0
org.ow2.asm:asm-commons:7.0
org.ow2.asm:asm-tree:7.0
org.ow2.asm:asm-util:7.0
org.ow2.asm:asm:7.0
org.threeten:threetenbp:1.5.0

@@ -20,12 +20,12 @@ com.google.cloud:google-cloud-core:1.94.3
com.google.cloud:google-cloud-storage:1.113.12
com.google.code.findbugs:jsr305:3.0.2
com.google.code.gson:gson:2.8.6
com.google.common.html.types:types:1.0.4
com.google.common.html.types:types:1.0.6
com.google.errorprone:error_prone_annotations:2.5.1
com.google.escapevelocity:escapevelocity:0.9.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.gwt:gwt-user:2.8.0-beta1
com.google.http-client:google-http-client-apache-v2:1.39.0
com.google.http-client:google-http-client-appengine:1.39.0
com.google.http-client:google-http-client-gson:1.39.0
@@ -34,10 +34,11 @@ com.google.http-client:google-http-client:1.39.0
com.google.inject.extensions:guice-multibindings:4.1.0
com.google.inject:guice:4.1.0
com.google.j2objc:j2objc-annotations:1.3
com.google.jsinterop:jsinterop-annotations:1.0.1
com.google.oauth-client:google-oauth-client:1.31.4
com.google.protobuf:protobuf-java-util:3.15.3
com.google.protobuf:protobuf-java:3.15.3
com.google.template:soy:2018-03-14
com.google.template:soy:2021-02-01
com.google.truth.extensions:truth-java8-extension:1.1.2
com.google.truth:truth:1.1.2
com.ibm.icu:icu4j:57.1
@@ -49,7 +50,6 @@ io.opencensus:opencensus-contrib-http-util:0.28.0
javax.annotation:javax.annotation-api:1.3.2
javax.annotation:jsr250-api:1.0
javax.inject:javax.inject:1
javax.validation:validation-api:1.0.0.GA
junit:junit:4.13.1
net.bytebuddy:byte-buddy-agent:1.10.19
net.bytebuddy:byte-buddy:1.10.19
@@ -70,9 +70,9 @@ org.junit:junit-bom:5.6.2
org.mockito:mockito-core:3.7.7
org.objenesis:objenesis:3.1
org.opentest4j:opentest4j:1.2.0
org.ow2.asm:asm-analysis:6.0
org.ow2.asm:asm-commons:6.0
org.ow2.asm:asm-tree:6.0
org.ow2.asm:asm-util:6.0
org.ow2.asm:asm-analysis:7.0
org.ow2.asm:asm-commons:7.0
org.ow2.asm:asm-tree:7.0
org.ow2.asm:asm-util:7.0
org.ow2.asm:asm:9.0
org.threeten:threetenbp:1.5.0

@@ -23,6 +23,7 @@ import static google.registry.gradle.plugin.GcsPluginUtils.toByteArraySupplier;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.ImmutableSetMultimap;
import com.google.common.html.types.TrustedResourceUrls;
import com.google.template.soy.SoyFileSet;
import com.google.template.soy.tofu.SoyTofu;
import google.registry.gradle.plugin.ProjectData.TaskData;
@@ -118,7 +119,7 @@ final class CoverPageGenerator {
builder.put("projectState", state.toString());
builder.put("title", title);
builder.put("cssFiles", ImmutableSet.of("css/style.css"));
builder.put("cssFiles", ImmutableSet.of(TrustedResourceUrls.fromConstant("css/style.css")));
builder.put("invocation", getInvocation());
builder.put("tasksByState", getTasksByStateSoyData());
return builder.build();

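For context: the cover-page template (see the template diff below) now types cssFiles as list<trusted_resource_uri>, so the Java side wraps the stylesheet path with TrustedResourceUrls.fromConstant instead of passing a raw string. A minimal standalone sketch of that wrapping, assuming the com.google.common.html.types API used in the diff above; the example class name and println are illustrative only:

import com.google.common.collect.ImmutableSet;
import com.google.common.html.types.TrustedResourceUrl; // assumed return type of fromConstant
import com.google.common.html.types.TrustedResourceUrls;

public class TrustedCssExample {
  public static void main(String[] args) {
    // fromConstant is meant for compile-time constant strings, so runtime data
    // cannot end up in a stylesheet URL rendered by the template.
    ImmutableSet<TrustedResourceUrl> cssFiles =
        ImmutableSet.of(TrustedResourceUrls.fromConstant("css/style.css"));
    System.out.println(cssFiles);
  }
}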

@@ -91,7 +91,7 @@ abstract class ProjectData {
/** The task was actually run and has finished successfully. */
SUCCESS,
/** The task was up-to-date and successful, and hence didn't need to run again. */
UP_TO_DATE;
UP_TO_DATE
}
abstract String uniqueName();


@@ -16,7 +16,7 @@
{template .coverPage}
{@param title: string}
{@param cssFiles: list<string>}
{@param cssFiles: list<trusted_resource_uri>}
{@param projectState: string}
{@param invocation: string}
{@param tasksByState: map<string, list<[uniqueName: string, description: string, log: string, reports: map<string, string>]>>}


@@ -2,11 +2,11 @@
# Manual edits can break the build and are not advised.
# This file is expected to be part of source control.
com.google.code.findbugs:jsr305:3.0.2
com.google.errorprone:error_prone_annotations:2.3.4
com.google.errorprone:error_prone_annotations:2.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.j2objc:j2objc-annotations:1.3
javax.inject:javax.inject:1
joda-time:joda-time:2.9.2
org.checkerframework:checker-qual:3.5.0
org.checkerframework:checker-qual:3.8.0


@@ -2,11 +2,11 @@
# Manual edits can break the build and are not advised.
# This file is expected to be part of source control.
com.google.code.findbugs:jsr305:3.0.2
com.google.errorprone:error_prone_annotations:2.3.4
com.google.errorprone:error_prone_annotations:2.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.j2objc:j2objc-annotations:1.3
javax.inject:javax.inject:1
joda-time:joda-time:2.9.2
org.checkerframework:checker-qual:3.5.0
org.checkerframework:checker-qual:3.8.0


@@ -2,11 +2,11 @@
# Manual edits can break the build and are not advised.
# This file is expected to be part of source control.
com.google.code.findbugs:jsr305:3.0.2
com.google.errorprone:error_prone_annotations:2.3.4
com.google.errorprone:error_prone_annotations:2.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.j2objc:j2objc-annotations:1.3
javax.inject:javax.inject:1
joda-time:joda-time:2.9.2
org.checkerframework:checker-qual:3.5.0
org.checkerframework:checker-qual:3.8.0


@@ -2,11 +2,11 @@
# Manual edits can break the build and are not advised.
# This file is expected to be part of source control.
com.google.code.findbugs:jsr305:3.0.2
com.google.errorprone:error_prone_annotations:2.3.4
com.google.errorprone:error_prone_annotations:2.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.j2objc:j2objc-annotations:1.3
javax.inject:javax.inject:1
joda-time:joda-time:2.9.2
org.checkerframework:checker-qual:3.5.0
org.checkerframework:checker-qual:3.8.0


@@ -2,11 +2,11 @@
# Manual edits can break the build and are not advised.
# This file is expected to be part of source control.
com.google.code.findbugs:jsr305:3.0.2
com.google.errorprone:error_prone_annotations:2.3.4
com.google.errorprone:error_prone_annotations:2.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.j2objc:j2objc-annotations:1.3
javax.inject:javax.inject:1
joda-time:joda-time:2.9.2
org.checkerframework:checker-qual:3.5.0
org.checkerframework:checker-qual:3.8.0


@@ -2,11 +2,11 @@
# Manual edits can break the build and are not advised.
# This file is expected to be part of source control.
com.google.code.findbugs:jsr305:3.0.2
com.google.errorprone:error_prone_annotations:2.3.4
com.google.errorprone:error_prone_annotations:2.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.j2objc:j2objc-annotations:1.3
javax.inject:javax.inject:1
joda-time:joda-time:2.9.2
org.checkerframework:checker-qual:3.5.0
org.checkerframework:checker-qual:3.8.0


@@ -6,7 +6,7 @@ com.google.code.findbugs:jsr305:3.0.2
com.google.errorprone:error_prone_annotations:2.5.1
com.google.flogger:flogger:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.j2objc:j2objc-annotations:1.3
com.google.truth:truth:1.1.2


@@ -6,7 +6,7 @@ com.google.code.findbugs:jsr305:3.0.2
com.google.errorprone:error_prone_annotations:2.5.1
com.google.flogger:flogger:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.j2objc:j2objc-annotations:1.3
com.google.truth:truth:1.1.2


@@ -7,7 +7,7 @@ com.google.errorprone:error_prone_annotations:2.5.1
com.google.flogger:flogger-system-backend:0.5.1
com.google.flogger:flogger:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.j2objc:j2objc-annotations:1.3
com.google.truth:truth:1.1.2


@@ -7,7 +7,7 @@ com.google.errorprone:error_prone_annotations:2.5.1
com.google.flogger:flogger-system-backend:0.5.1
com.google.flogger:flogger:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.j2objc:j2objc-annotations:1.3
com.google.truth:truth:1.1.2


@@ -6,7 +6,7 @@ com.google.code.findbugs:jsr305:3.0.2
com.google.errorprone:error_prone_annotations:2.5.1
com.google.flogger:flogger:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.j2objc:j2objc-annotations:1.3
com.google.truth:truth:1.1.2


@@ -6,7 +6,7 @@ com.google.code.findbugs:jsr305:3.0.2
com.google.errorprone:error_prone_annotations:2.5.1
com.google.flogger:flogger:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.j2objc:j2objc-annotations:1.3
com.google.truth:truth:1.1.2


@@ -7,7 +7,7 @@ com.google.errorprone:error_prone_annotations:2.5.1
com.google.flogger:flogger-system-backend:0.5.1
com.google.flogger:flogger:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.j2objc:j2objc-annotations:1.3
com.google.truth:truth:1.1.2


@@ -7,7 +7,7 @@ com.google.errorprone:error_prone_annotations:2.5.1
com.google.flogger:flogger-system-backend:0.5.1
com.google.flogger:flogger:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.j2objc:j2objc-annotations:1.3
com.google.truth:truth:1.1.2


@@ -100,6 +100,15 @@ public class DateTimeUtils {
return ZonedDateTime.ofInstant(instant, ZoneId.of(dateTime.getZone().getID()).normalized());
}
/**
* Converts a Joda {@link DateTime} object to an equivalent java.time {@link ZonedDateTime}
* object at the specified {@code zoneId}.
*/
public static ZonedDateTime toZonedDateTime(DateTime dateTime, ZoneId zoneId) {
java.time.Instant instant = java.time.Instant.ofEpochMilli(dateTime.getMillis());
return ZonedDateTime.ofInstant(instant, zoneId);
}
/**
* Converts a java.time {@link ZonedDateTime} object to an equivalent Joda {@link DateTime}
* object.

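A brief usage sketch of the new toZonedDateTime(DateTime, ZoneId) overload added above; the sample date, target zone, and example class are illustrative, and the fully qualified google.registry.util package is assumed:

import java.time.ZoneId;
import java.time.ZonedDateTime;
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;

public class ToZonedDateTimeExample {
  public static void main(String[] args) {
    DateTime joda = new DateTime(2021, 8, 30, 12, 0, DateTimeZone.UTC);
    // Same instant, expressed in an explicitly chosen zone rather than the DateTime's own zone.
    ZonedDateTime zoned =
        google.registry.util.DateTimeUtils.toZonedDateTime( // package assumed
            joda, ZoneId.of("America/New_York"));
    System.out.println(zoned); // 2021-08-30T08:00-04:00[America/New_York]
  }
}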

@@ -22,6 +22,7 @@ import google.registry.util.Clock;
import java.util.concurrent.atomic.AtomicLong;
import javax.annotation.concurrent.ThreadSafe;
import org.joda.time.DateTime;
import org.joda.time.Duration;
import org.joda.time.ReadableDuration;
import org.joda.time.ReadableInstant;
@@ -35,6 +36,8 @@ public final class FakeClock implements Clock {
// threads should see a consistent flow.
private final AtomicLong currentTimeMillis = new AtomicLong();
private volatile long autoIncrementStepMs;
/** Creates a FakeClock that starts at START_OF_TIME. */
public FakeClock() {
this(START_OF_TIME);
@@ -48,7 +51,21 @@ public final class FakeClock implements Clock {
/** Returns the current time. */
@Override
public DateTime nowUtc() {
return new DateTime(currentTimeMillis.get(), UTC);
return new DateTime(currentTimeMillis.addAndGet(autoIncrementStepMs), UTC);
}
/**
* Sets the increment applied to the clock whenever it is queried. The increment is zero by
* default: the clock is left unchanged when queried.
*
* <p>Passing a duration of zero to this method effectively unsets the auto increment mode.
*
* @param autoIncrementStep the new auto increment duration
* @return this
*/
public FakeClock setAutoIncrementStep(ReadableDuration autoIncrementStep) {
this.autoIncrementStepMs = autoIncrementStep.getMillis();
return this;
}
/** Advances clock by one millisecond. */
@@ -65,4 +82,14 @@ public final class FakeClock implements Clock {
public void setTo(ReadableInstant time) {
currentTimeMillis.set(time.getMillis());
}
/** Invokes {@link #setAutoIncrementStep} with a one-millisecond step. */
public FakeClock setAutoIncrementByOneMilli() {
return setAutoIncrementStep(Duration.millis(1));
}
/** Disables the auto-increment mode. */
public FakeClock disableAutoIncrement() {
return setAutoIncrementStep(Duration.ZERO);
}
}

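A short test-style sketch of the auto-increment mode added above: each nowUtc() call advances the fake clock by the configured step, so successive reads get strictly increasing timestamps without manually advancing the clock between reads. The example class and start time are illustrative, and FakeClock's package is assumed:

import google.registry.testing.FakeClock; // package assumed
import org.joda.time.DateTime;

public class FakeClockAutoIncrementExample {
  public static void main(String[] args) {
    // One-arg constructor taking a start time is implied by this(START_OF_TIME) above.
    FakeClock clock = new FakeClock(DateTime.parse("2021-08-30T00:00:00Z"));
    clock.setAutoIncrementByOneMilli();
    DateTime first = clock.nowUtc();   // 2021-08-30T00:00:00.001Z
    DateTime second = clock.nowUtc();  // 2021-08-30T00:00:00.002Z
    System.out.println(second.isAfter(first)); // true
    clock.disableAutoIncrement();
    System.out.println(clock.nowUtc().equals(clock.nowUtc())); // true: queries no longer move the clock
  }
}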

@@ -256,6 +256,7 @@ GRADLE_FLAGS = [
'Specify a task to be excluded from execution.',
True),
]
def generate_gradle_properties() -> str:
"""Returns the expected contents of gradle.properties."""
out = io.StringIO()
@@ -270,7 +271,7 @@ def generate_gradle_properties() -> str:
def get_root() -> str:
"""Returns the root of the nomulus build tree."""
cur_dir = os.getcwd()
if not os.path.exists(os.path.join(cur_dir, '.git')) or \
if not os.path.exists(os.path.join(cur_dir, 'buildSrc')) or \
not os.path.exists(os.path.join(cur_dir, 'core')) or \
not os.path.exists(os.path.join(cur_dir, 'gradle.properties')):
raise Exception('You must run this script from the root directory')


@@ -79,9 +79,9 @@ PRESUBMITS = {
r".*Copyright 20\d{2} The Nomulus Authors\. All Rights Reserved\.",
("java", "js", "soy", "sql", "py", "sh", "gradle"), {
".git", "/build/", "/generated/", "/generated_tests/",
"node_modules/", "JUnitBackports.java", "registrar_bin.",
"registrar_dbg.", "google-java-format-diff.py",
"nomulus.golden.sql", "soyutils_usegoog.js"
"node_modules/", "LocalStorageHelper.java", "FakeStorageRpc.java",
"registrar_bin.", "registrar_dbg.", "google-java-format-diff.py",
"nomulus.golden.sql", "soyutils_usegoog.js", "javascript/checks.js"
}, REQUIRED):
"File did not include the license header.",
@@ -202,6 +202,8 @@ PRESUBMITS = {
"java",
# ActivityReportingQueryBuilder deals with Dremel queries
{"src/test", "ActivityReportingQueryBuilder.java",
# This class contains helper methods to make queries in Beam.
"RegistryJpaIO.java",
# TODO(b/179158393): Remove everything below, which should be done
# using Criteria
"ForeignKeyIndex.java",
@@ -213,6 +215,8 @@ PRESUBMITS = {
"RdapDomainSearchAction.java",
"RdapNameserverSearchAction.java",
"RdapSearchActionBase.java",
"ReadOnlyCheckingEntityManager.java",
"RegistryQuery",
},
):
"The first String parameter to EntityManager.create(Native)Query "


@@ -44,7 +44,6 @@ def outcastTestPatterns = [
"google/registry/flows/domain/DomainCreateFlowTest.*",
"google/registry/flows/domain/DomainUpdateFlowTest.*",
"google/registry/tools/CreateDomainCommandTest.*",
"google/registry/tools/server/CreatePremiumListActionTest.*",
]
// Tests that fail when running Gradle in a docker container, e.g. when
@@ -70,15 +69,14 @@ def dockerIncompatibleTestPatterns = [
// Nomulus classes, e.g., threads and objects retained by frameworks.
// TODO(weiminyu): identify cause and fix offending tests.
def fragileTestPatterns = [
// Problem seems to lie with AppEngine TaskQueue for test.
"google/registry/cron/TldFanoutActionTest.*",
// Test Datastore inexplicably aborts transaction.
"google/registry/model/tmch/ClaimsListShardTest.*",
// Creates large object (64MBytes), occasionally throws OOM error.
"google/registry/model/server/KmsSecretRevisionTest.*",
// Changes cache timeouts and for some reason appears to have contention
// with other tests.
"google/registry/whois/WhoisCommandFactoryTest.*",
// Currently changes a global configuration parameter that for some reason
// results in timestamp inversions for other tests. TODO(mmuller): fix.
"google/registry/flows/host/HostInfoFlowTest.*",
] + dockerIncompatibleTestPatterns
sourceSets {
@@ -184,6 +182,7 @@ dependencies {
compile deps['com.google.monitoring-client:metrics']
compile deps['com.google.monitoring-client:stackdriver']
compile deps['com.google.api-client:google-api-client-java6']
compile deps['com.google.api.grpc:proto-google-cloud-tasks-v2']
compile deps['com.google.apis:google-api-services-admin-directory']
compile deps['com.google.apis:google-api-services-appengine']
compile deps['com.google.apis:google-api-services-bigquery']
@@ -194,6 +193,7 @@ dependencies {
compile deps['com.google.apis:google-api-services-groupssettings']
compile deps['com.google.apis:google-api-services-monitoring']
compile deps['com.google.apis:google-api-services-sheets']
compile deps['com.google.apis:google-api-services-storage']
testCompile deps['com.google.appengine:appengine-api-stubs']
compile deps['com.google.appengine.tools:appengine-gcs-client']
compile deps['com.google.appengine.tools:appengine-mapreduce']
@@ -215,9 +215,13 @@ dependencies {
compile deps['com.google.flogger:flogger']
runtime deps['com.google.flogger:flogger-system-backend']
compile deps['com.google.guava:guava']
compile deps['com.google.protobuf:protobuf-java']
gradleLint.ignore('unused-dependency') {
compile deps['com.google.gwt:gwt-user']
}
compile deps['com.google.cloud:google-cloud-core']
compile deps['com.google.cloud:google-cloud-storage']
compile deps['com.google.cloud:google-cloud-tasks']
compile deps['com.google.http-client:google-http-client']
compile deps['com.google.http-client:google-http-client-appengine']
compile deps['com.google.http-client:google-http-client-jackson2']
@@ -312,11 +316,13 @@ dependencies {
annotationProcessor project(':processor')
testAnnotationProcessor project(':processor')
testCompile deps['com.google.cloud:google-cloud-nio']
testCompile deps['com.google.appengine:appengine-testing']
testCompile deps['com.google.guava:guava-testlib']
testCompile deps['com.google.monitoring-client:contrib']
testCompile deps['com.google.truth:truth']
testCompile deps['com.google.truth.extensions:truth-java8-extension']
testCompile deps['org.checkerframework:checker-qual']
testCompile deps['org.hamcrest:hamcrest']
testCompile deps['org.hamcrest:hamcrest-core']
testCompile deps['org.hamcrest:hamcrest-library']
@@ -423,7 +429,7 @@ task jaxbToJava {
}
}
execInBash(
'find . -name *.java -exec sed -i /\\*\\ \\<p\\>\\$/d {} +',
"find . -name *.java -exec sed -i -e '/" + /\* <p>$/ + "/d' {} +",
generatedDir)
}
}
@@ -432,12 +438,9 @@ task soyToJava {
// Relative paths of soy directories.
def spec11SoyDir = "google/registry/reporting/spec11/soy"
def toolsSoyDir = "google/registry/tools/soy"
def uiSoyDir = "google/registry/ui/soy"
def registrarSoyDir = "google/registry/ui/soy/registrar"
def soyRelativeDirs = [
spec11SoyDir, toolsSoyDir, uiSoyDir, registrarSoyDir,
]
def soyRelativeDirs = [spec11SoyDir, toolsSoyDir, registrarSoyDir]
soyRelativeDirs.each {
inputs.dir "${resourcesSourceDir}/${it}"
outputs.dir "${generatedDir}/${it}"
@@ -451,7 +454,8 @@ task soyToJava {
"--outputDirectory", "${outputDirectory}",
"--javaClassNameSource", "filename",
"--allowExternalCalls", "true",
"--srcs", "${soyFiles.join(',')}"
"--srcs", "${soyFiles.join(',')}",
"--compileTimeGlobalsFile", "${resourcesSourceDir}/google/registry/ui/globals.txt"
}
}
@@ -468,14 +472,6 @@ task soyToJava {
dir: "${resourcesSourceDir}/${registrarSoyDir}",
include: ['**/*.soy']))
soyToJava('google.registry.ui.soy',
"${generatedDir}/${uiSoyDir}",
files {
file("${resourcesSourceDir}/${uiSoyDir}").listFiles()
}.filter {
it.name.endsWith(".soy")
})
soyToJava('google.registry.reporting.spec11.soy',
"${generatedDir}/${spec11SoyDir}",
fileTree(
@@ -484,42 +480,24 @@ task soyToJava {
}
}
task soyToJS {
def rootSoyDirectory = "${resourcesSourceDir}/google/registry/ui/soy"
def outputSoyDirectory = "${generatedDir}/google/registry/ui/soy"
task soyToJS(type: JavaExec) {
def rootSoyDirectory = "${resourcesSourceDir}/google/registry/ui/soy/registrar"
def outputSoyDirectory = "${generatedDir}/google/registry/ui/soy/registrar"
inputs.dir rootSoyDirectory
outputs.dir outputSoyDirectory
ext.soyToJS = { outputDirectory, soyFiles , deps->
javaexec {
main = "com.google.template.soy.SoyToJsSrcCompiler"
classpath configurations.soy
def inputSoyFiles = files {
file("${rootSoyDirectory}").listFiles()
}.filter {
it.name.endsWith(".soy")
}
args "--outputPathFormat", "${outputDirectory}/{INPUT_FILE_NAME}.js",
classpath configurations.soy
main = "com.google.template.soy.SoyToJsSrcCompiler"
args "--outputPathFormat", "${outputSoyDirectory}/{INPUT_FILE_NAME}.js",
"--allowExternalCalls", "false",
"--srcs", "${soyFiles.join(',')}",
"--shouldProvideRequireSoyNamespaces", "true",
"--srcs", "${inputSoyFiles.join(',')}",
"--compileTimeGlobalsFile", "${resourcesSourceDir}/google/registry/ui/globals.txt"
if (deps != "") {
args "--deps", "${deps.join(',')}"
}
}
}
doLast {
def rootSoyFiles =
fileTree(
dir: "${rootSoyDirectory}",
include: ['*.soy'])
soyToJS("${outputSoyDirectory}", rootSoyFiles, "")
soyToJS("${outputSoyDirectory}/registrar",
files {
file("${rootSoyDirectory}/registrar").listFiles()
}.filter {
it.name.endsWith(".soy")
}, rootSoyFiles)
}
}
task stylesheetsToJavascript {
@@ -602,8 +580,8 @@ task compileProdJS(type: JavaExec) {
closureArgs << "--generate_exports"
// manually include all the required js files
closureArgs << "--js=${nodeModulesDir}/google-closure-library/**.js"
closureArgs << "--js=${jsDir}/soyutils_usegoog.js"
closureArgs << "--js=${nodeModulesDir}/google-closure-library/**/*.js"
closureArgs << "--js=${jsDir}/*.js"
closureArgs << "--js=${cssSourceDir}/registrar_bin.css.js"
closureArgs << "--js=${jsSourceDir}/**.js"
closureArgs << "--js=${externsDir}/json.js"
@@ -630,15 +608,6 @@ compileProdJS.dependsOn processResources
compileProdJS.dependsOn processTestResources
compileProdJS.dependsOn soyToJS
task karmaTest(type: Exec) {
dependsOn ':npmInstall'
workingDir rootProject.projectDir
executable 'node_modules/karma/bin/karma'
args('start', "${project.projectDir}/karma.conf.js")
}
test.dependsOn karmaTest
// Make testing artifacts available to be depended up on by other projects.
// TODO: factor out google.registry.testing to be a separate project.
task testJar(type: Jar) {
@@ -805,6 +774,18 @@ if (environment in ['alpha', 'crash']) {
mainClass: 'google.registry.beam.datastore.BulkDeleteDatastorePipeline',
metaData: 'google/registry/beam/bulk_delete_datastore_pipeline_metadata.json'
],
[
mainClass: 'google.registry.beam.spec11.Spec11Pipeline',
metaData: 'google/registry/beam/spec11_pipeline_metadata.json'
],
[
mainClass: 'google.registry.beam.invoicing.InvoicingPipeline',
metaData: 'google/registry/beam/invoicing_pipeline_metadata.json'
],
[
mainClass: 'google.registry.beam.rde.RdePipeline',
metaData: 'google/registry/beam/rde_pipeline_metadata.json'
],
]
project.tasks.create("stage_beam_pipelines") {
doLast {


@@ -14,14 +14,14 @@ com.google.dagger:dagger-producers:2.33
com.google.dagger:dagger-spi:2.33
com.google.dagger:dagger:2.33
com.google.errorprone:error_prone_annotation:2.3.4
com.google.errorprone:error_prone_annotations:2.3.4
com.google.errorprone:error_prone_annotations:2.5.1
com.google.errorprone:error_prone_check_api:2.3.4
com.google.errorprone:error_prone_core:2.3.4
com.google.errorprone:error_prone_type_annotations:2.3.4
com.google.errorprone:javac-shaded:9-dev-r4023-3
com.google.googlejavaformat:google-java-format:1.5
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.j2objc:j2objc-annotations:1.3
com.google.protobuf:protobuf-java:3.4.0
@@ -32,7 +32,7 @@ javax.inject:javax.inject:1
javax.persistence:javax.persistence-api:2.2
net.ltgt.gradle.incap:incap:0.2
org.checkerframework:checker-compat-qual:2.5.3
org.checkerframework:checker-qual:3.5.0
org.checkerframework:checker-qual:3.8.0
org.checkerframework:dataflow:3.0.0
org.checkerframework:javacutil:3.0.0
org.jetbrains.kotlin:kotlin-stdlib-common:1.4.20


@@ -1,15 +1,4 @@
# This is a Gradle generated file for dependency locking.
# Manual edits can break the build and are not advised.
# This file is expected to be part of source control.
args4j:args4j:2.0.26
com.google.code.findbugs:jsr305:3.0.2
com.google.code.gson:gson:2.7
com.google.errorprone:error_prone_annotations:2.3.1
com.google.guava:guava:25.1-jre
com.google.j2objc:j2objc-annotations:1.1
com.google.javascript:closure-compiler-externs:v20190301
com.google.javascript:closure-compiler:v20190301
com.google.jsinterop:jsinterop-annotations:1.0.0
com.google.protobuf:protobuf-java:3.0.2
org.checkerframework:checker-qual:2.0.0
org.codehaus.mojo:animal-sniffer-annotations:1.14
com.google.javascript:closure-compiler:v20210505


@@ -54,12 +54,15 @@ com.google.api.grpc:proto-google-cloud-secretmanager-v1beta1:1.4.0
com.google.api.grpc:proto-google-cloud-spanner-admin-database-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-admin-instance-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-v1:2.0.2
com.google.api.grpc:proto-google-common-protos:2.1.0
com.google.api.grpc:proto-google-iam-v1:1.0.9
com.google.api:api-common:1.10.1
com.google.api:gax-grpc:1.62.0
com.google.api:gax-httpjson:0.76.1
com.google.api:gax:1.62.0
com.google.api.grpc:proto-google-cloud-tasks-v2:1.33.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta2:0.89.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta3:0.89.2
com.google.api.grpc:proto-google-common-protos:2.3.2
com.google.api.grpc:proto-google-iam-v1:1.0.14
com.google.api:api-common:1.10.4
com.google.api:gax-grpc:1.66.0
com.google.api:gax-httpjson:0.79.0
com.google.api:gax:1.66.0
com.google.apis:google-api-services-admin-directory:directory_v1-rev118-1.25.0
com.google.apis:google-api-services-appengine:v1-rev130-1.25.0
com.google.apis:google-api-services-bigquery:v2-rev20200916-1.30.10
@@ -76,17 +79,17 @@ com.google.apis:google-api-services-monitoring:v3-rev540-1.25.0
com.google.apis:google-api-services-pubsub:v1-rev20200713-1.30.10
com.google.apis:google-api-services-sheets:v4-rev612-1.25.0
com.google.apis:google-api-services-sqladmin:v1beta4-rev20210119-1.31.0
com.google.apis:google-api-services-storage:v1-rev20200927-1.30.10
com.google.apis:google-api-services-storage:v1-rev20210127-1.31.0
com.google.appengine.tools:appengine-gcs-client:0.8.1
com.google.appengine.tools:appengine-mapreduce:0.9
com.google.appengine.tools:appengine-pipeline:0.2.13
com.google.appengine:appengine-api-1.0-sdk:1.9.86
com.google.appengine:appengine-remote-api:1.9.86
com.google.appengine:appengine-testing:1.9.86
com.google.auth:google-auth-library-credentials:0.24.1
com.google.auth:google-auth-library-oauth2-http:0.24.1
com.google.auth:google-auth-library-credentials:0.26.0
com.google.auth:google-auth-library-oauth2-http:0.26.0
com.google.auto.service:auto-service-annotations:1.0-rc7
com.google.auto.value:auto-value-annotations:1.7.4
com.google.auto.value:auto-value-annotations:1.8.1
com.google.auto.value:auto-value:1.7.4
com.google.cloud.bigdataoss:gcsio:2.1.6
com.google.cloud.bigdataoss:util:2.1.6
@@ -97,30 +100,33 @@ com.google.cloud:google-cloud-bigquery:1.122.2
com.google.cloud:google-cloud-bigquerystorage:1.5.5
com.google.cloud:google-cloud-bigtable:1.14.0
com.google.cloud:google-cloud-core-grpc:1.93.9
com.google.cloud:google-cloud-core-http:1.93.9
com.google.cloud:google-cloud-core:1.93.9
com.google.cloud:google-cloud-core-http:1.94.1
com.google.cloud:google-cloud-core:1.94.3
com.google.cloud:google-cloud-pubsub:1.110.0
com.google.cloud:google-cloud-pubsublite:0.7.0
com.google.cloud:google-cloud-secretmanager:1.4.0
com.google.cloud:google-cloud-spanner:2.0.2
com.google.cloud:google-cloud-storage:1.113.12
com.google.cloud:google-cloud-tasks:1.33.2
com.google.code.findbugs:jsr305:3.0.2
com.google.code.gson:gson:2.8.6
com.google.common.html.types:types:1.0.4
com.google.code.gson:gson:2.8.7
com.google.common.html.types:types:1.0.6
com.google.dagger:dagger:2.33
com.google.errorprone:error_prone_annotations:2.5.1
com.google.errorprone:error_prone_annotations:2.7.1
com.google.escapevelocity:escapevelocity:0.9.1
com.google.flogger:flogger-system-backend:0.5.1
com.google.flogger:flogger:0.5.1
com.google.flogger:google-extensions:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.gwt:gwt-user:2.9.0
com.google.http-client:google-http-client-apache-v2:1.39.0
com.google.http-client:google-http-client-appengine:1.39.0
com.google.http-client:google-http-client-gson:1.39.0
com.google.http-client:google-http-client-gson:1.39.2
com.google.http-client:google-http-client-jackson2:1.39.0
com.google.http-client:google-http-client-protobuf:1.33.0
com.google.http-client:google-http-client:1.39.0
com.google.http-client:google-http-client:1.39.2
com.google.inject.extensions:guice-multibindings:4.1.0
com.google.inject:guice:4.1.0
com.google.j2objc:j2objc-annotations:1.3
@@ -132,10 +138,10 @@ com.google.oauth-client:google-oauth-client-java6:1.31.4
com.google.oauth-client:google-oauth-client-jetty:1.31.4
com.google.oauth-client:google-oauth-client-servlet:1.31.4
com.google.oauth-client:google-oauth-client:1.31.4
com.google.protobuf:protobuf-java-util:3.15.2
com.google.protobuf:protobuf-java:3.15.2
com.google.protobuf:protobuf-java-util:3.17.3
com.google.protobuf:protobuf-java:3.17.3
com.google.re2j:re2j:1.6
com.google.template:soy:2018-03-14
com.google.template:soy:2021-02-01
com.googlecode.charts4j:charts4j:1.3
com.googlecode.json-simple:json-simple:1.1.1
com.ibm.icu:icu4j:68.2
@@ -149,17 +155,17 @@ commons-logging:commons-logging:1.2
dnsjava:dnsjava:3.3.1
io.dropwizard.metrics:metrics-core:3.2.6
io.github.classgraph:classgraph:4.8.65
io.grpc:grpc-alts:1.36.0
io.grpc:grpc-api:1.36.0
io.grpc:grpc-auth:1.36.0
io.grpc:grpc-context:1.36.0
io.grpc:grpc-core:1.36.0
io.grpc:grpc-grpclb:1.36.0
io.grpc:grpc-netty-shaded:1.36.0
io.grpc:grpc-alts:1.39.0
io.grpc:grpc-api:1.39.0
io.grpc:grpc-auth:1.39.0
io.grpc:grpc-context:1.39.0
io.grpc:grpc-core:1.39.0
io.grpc:grpc-grpclb:1.39.0
io.grpc:grpc-netty-shaded:1.39.0
io.grpc:grpc-netty:1.32.2
io.grpc:grpc-protobuf-lite:1.36.0
io.grpc:grpc-protobuf:1.36.0
io.grpc:grpc-stub:1.36.0
io.grpc:grpc-protobuf-lite:1.39.0
io.grpc:grpc-protobuf:1.39.0
io.grpc:grpc-stub:1.39.0
io.netty:netty-buffer:4.1.51.Final
io.netty:netty-codec-http2:4.1.51.Final
io.netty:netty-codec-http:4.1.51.Final
@@ -220,7 +226,7 @@ org.bouncycastle:bcpg-jdk15on:1.61
org.bouncycastle:bcpkix-jdk15on:1.61
org.bouncycastle:bcprov-jdk15on:1.61
org.checkerframework:checker-compat-qual:2.5.5
org.checkerframework:checker-qual:3.7.0
org.checkerframework:checker-qual:3.8.0
org.codehaus.jackson:jackson-core-asl:1.9.13
org.codehaus.jackson:jackson-mapper-asl:1.9.13
org.codehaus.mojo:animal-sniffer-annotations:1.20
@@ -253,11 +259,11 @@ org.postgresql:postgresql:42.2.18
org.rnorth.duct-tape:duct-tape:1.0.8
org.rnorth.visible-assertions:visible-assertions:2.1.2
org.slf4j:slf4j-api:1.7.30
org.testcontainers:database-commons:1.15.1
org.testcontainers:jdbc:1.15.1
org.testcontainers:postgresql:1.15.1
org.testcontainers:testcontainers:1.15.1
org.threeten:threetenbp:1.5.0
org.testcontainers:database-commons:1.15.2
org.testcontainers:jdbc:1.15.2
org.testcontainers:postgresql:1.15.2
org.testcontainers:testcontainers:1.15.2
org.threeten:threetenbp:1.5.1
org.tukaani:xz:1.5
org.w3c.css:sac:1.3
org.xerial.snappy:snappy-java:1.1.4


@@ -53,12 +53,15 @@ com.google.api.grpc:proto-google-cloud-secretmanager-v1beta1:1.4.0
com.google.api.grpc:proto-google-cloud-spanner-admin-database-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-admin-instance-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-v1:2.0.2
com.google.api.grpc:proto-google-common-protos:2.1.0
com.google.api.grpc:proto-google-iam-v1:1.0.9
com.google.api:api-common:1.10.1
com.google.api:gax-grpc:1.62.0
com.google.api:gax-httpjson:0.76.1
com.google.api:gax:1.62.0
com.google.api.grpc:proto-google-cloud-tasks-v2:1.33.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta2:0.89.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta3:0.89.2
com.google.api.grpc:proto-google-common-protos:2.3.2
com.google.api.grpc:proto-google-iam-v1:1.0.14
com.google.api:api-common:1.10.4
com.google.api:gax-grpc:1.66.0
com.google.api:gax-httpjson:0.79.0
com.google.api:gax:1.66.0
com.google.apis:google-api-services-admin-directory:directory_v1-rev118-1.25.0
com.google.apis:google-api-services-appengine:v1-rev130-1.25.0
com.google.apis:google-api-services-bigquery:v2-rev20200916-1.30.10
@@ -75,17 +78,17 @@ com.google.apis:google-api-services-monitoring:v3-rev540-1.25.0
com.google.apis:google-api-services-pubsub:v1-rev20200713-1.30.10
com.google.apis:google-api-services-sheets:v4-rev612-1.25.0
com.google.apis:google-api-services-sqladmin:v1beta4-rev20210119-1.31.0
com.google.apis:google-api-services-storage:v1-rev20200927-1.30.10
com.google.apis:google-api-services-storage:v1-rev20210127-1.31.0
com.google.appengine.tools:appengine-gcs-client:0.8.1
com.google.appengine.tools:appengine-mapreduce:0.9
com.google.appengine.tools:appengine-pipeline:0.2.13
com.google.appengine:appengine-api-1.0-sdk:1.9.86
com.google.appengine:appengine-remote-api:1.9.86
com.google.appengine:appengine-testing:1.9.86
com.google.auth:google-auth-library-credentials:0.24.1
com.google.auth:google-auth-library-oauth2-http:0.24.1
com.google.auth:google-auth-library-credentials:0.26.0
com.google.auth:google-auth-library-oauth2-http:0.26.0
com.google.auto.service:auto-service-annotations:1.0-rc7
com.google.auto.value:auto-value-annotations:1.7.4
com.google.auto.value:auto-value-annotations:1.8.1
com.google.auto.value:auto-value:1.7.4
com.google.cloud.bigdataoss:gcsio:2.1.6
com.google.cloud.bigdataoss:util:2.1.6
@@ -96,29 +99,32 @@ com.google.cloud:google-cloud-bigquery:1.122.2
com.google.cloud:google-cloud-bigquerystorage:1.5.5
com.google.cloud:google-cloud-bigtable:1.14.0
com.google.cloud:google-cloud-core-grpc:1.93.9
com.google.cloud:google-cloud-core-http:1.93.9
com.google.cloud:google-cloud-core:1.93.9
com.google.cloud:google-cloud-core-http:1.94.1
com.google.cloud:google-cloud-core:1.94.3
com.google.cloud:google-cloud-pubsub:1.110.0
com.google.cloud:google-cloud-pubsublite:0.7.0
com.google.cloud:google-cloud-secretmanager:1.4.0
com.google.cloud:google-cloud-spanner:2.0.2
com.google.cloud:google-cloud-storage:1.113.12
com.google.cloud:google-cloud-tasks:1.33.2
com.google.code.findbugs:jsr305:3.0.2
com.google.code.gson:gson:2.8.6
com.google.common.html.types:types:1.0.4
com.google.code.gson:gson:2.8.7
com.google.common.html.types:types:1.0.6
com.google.dagger:dagger:2.33
com.google.errorprone:error_prone_annotations:2.5.1
com.google.errorprone:error_prone_annotations:2.7.1
com.google.escapevelocity:escapevelocity:0.9.1
com.google.flogger:flogger:0.5.1
com.google.flogger:google-extensions:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.gwt:gwt-user:2.9.0
com.google.http-client:google-http-client-apache-v2:1.39.0
com.google.http-client:google-http-client-appengine:1.39.0
com.google.http-client:google-http-client-gson:1.39.0
com.google.http-client:google-http-client-gson:1.39.2
com.google.http-client:google-http-client-jackson2:1.39.0
com.google.http-client:google-http-client-protobuf:1.33.0
com.google.http-client:google-http-client:1.39.0
com.google.http-client:google-http-client:1.39.2
com.google.inject.extensions:guice-multibindings:4.1.0
com.google.inject:guice:4.1.0
com.google.j2objc:j2objc-annotations:1.3
@@ -130,10 +136,10 @@ com.google.oauth-client:google-oauth-client-java6:1.31.4
com.google.oauth-client:google-oauth-client-jetty:1.31.4
com.google.oauth-client:google-oauth-client-servlet:1.31.4
com.google.oauth-client:google-oauth-client:1.31.4
com.google.protobuf:protobuf-java-util:3.14.0
com.google.protobuf:protobuf-java:3.15.2
com.google.protobuf:protobuf-java-util:3.15.3
com.google.protobuf:protobuf-java:3.17.3
com.google.re2j:re2j:1.6
com.google.template:soy:2018-03-14
com.google.template:soy:2021-02-01
com.googlecode.charts4j:charts4j:1.3
com.googlecode.json-simple:json-simple:1.1.1
com.ibm.icu:icu4j:68.2
@@ -147,17 +153,17 @@ commons-logging:commons-logging:1.2
dnsjava:dnsjava:3.3.1
io.dropwizard.metrics:metrics-core:3.2.6
io.github.classgraph:classgraph:4.8.65
io.grpc:grpc-alts:1.36.0
io.grpc:grpc-api:1.36.0
io.grpc:grpc-auth:1.36.0
io.grpc:grpc-context:1.36.0
io.grpc:grpc-core:1.36.0
io.grpc:grpc-grpclb:1.36.0
io.grpc:grpc-netty-shaded:1.36.0
io.grpc:grpc-alts:1.39.0
io.grpc:grpc-api:1.39.0
io.grpc:grpc-auth:1.39.0
io.grpc:grpc-context:1.39.0
io.grpc:grpc-core:1.39.0
io.grpc:grpc-grpclb:1.39.0
io.grpc:grpc-netty-shaded:1.39.0
io.grpc:grpc-netty:1.32.2
io.grpc:grpc-protobuf-lite:1.36.0
io.grpc:grpc-protobuf:1.36.0
io.grpc:grpc-stub:1.36.0
io.grpc:grpc-protobuf-lite:1.39.0
io.grpc:grpc-protobuf:1.39.0
io.grpc:grpc-stub:1.39.0
io.netty:netty-buffer:4.1.51.Final
io.netty:netty-codec-http2:4.1.51.Final
io.netty:netty-codec-http:4.1.51.Final
@@ -214,7 +220,7 @@ org.bouncycastle:bcpg-jdk15on:1.61
org.bouncycastle:bcpkix-jdk15on:1.61
org.bouncycastle:bcprov-jdk15on:1.61
org.checkerframework:checker-compat-qual:2.5.5
org.checkerframework:checker-qual:3.7.0
org.checkerframework:checker-qual:3.8.0
org.codehaus.jackson:jackson-core-asl:1.9.13
org.codehaus.jackson:jackson-mapper-asl:1.9.13
org.conscrypt:conscrypt-openjdk-uber:2.5.1
@@ -246,11 +252,11 @@ org.postgresql:postgresql:42.2.18
org.rnorth.duct-tape:duct-tape:1.0.8
org.rnorth.visible-assertions:visible-assertions:2.1.2
org.slf4j:slf4j-api:1.7.30
org.testcontainers:database-commons:1.15.1
org.testcontainers:jdbc:1.15.1
org.testcontainers:postgresql:1.15.1
org.testcontainers:testcontainers:1.15.1
org.threeten:threetenbp:1.5.0
org.testcontainers:database-commons:1.15.2
org.testcontainers:jdbc:1.15.2
org.testcontainers:postgresql:1.15.2
org.testcontainers:testcontainers:1.15.2
org.threeten:threetenbp:1.5.1
org.tukaani:xz:1.5
org.w3c.css:sac:1.3
org.xerial.snappy:snappy-java:1.1.4


@@ -58,12 +58,15 @@ com.google.api.grpc:proto-google-cloud-secretmanager-v1beta1:1.4.0
com.google.api.grpc:proto-google-cloud-spanner-admin-database-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-admin-instance-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-v1:2.0.2
com.google.api.grpc:proto-google-common-protos:2.1.0
com.google.api.grpc:proto-google-iam-v1:1.0.9
com.google.api:api-common:1.10.1
com.google.api:gax-grpc:1.62.0
com.google.api:gax-httpjson:0.76.1
com.google.api:gax:1.62.0
com.google.api.grpc:proto-google-cloud-tasks-v2:1.33.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta2:0.89.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta3:0.89.2
com.google.api.grpc:proto-google-common-protos:2.3.2
com.google.api.grpc:proto-google-iam-v1:1.0.14
com.google.api:api-common:1.10.4
com.google.api:gax-grpc:1.66.0
com.google.api:gax-httpjson:0.79.0
com.google.api:gax:1.66.0
com.google.apis:google-api-services-admin-directory:directory_v1-rev118-1.25.0
com.google.apis:google-api-services-appengine:v1-rev130-1.25.0
com.google.apis:google-api-services-bigquery:v2-rev20200916-1.30.10
@@ -80,17 +83,17 @@ com.google.apis:google-api-services-monitoring:v3-rev540-1.25.0
com.google.apis:google-api-services-pubsub:v1-rev20200713-1.30.10
com.google.apis:google-api-services-sheets:v4-rev612-1.25.0
com.google.apis:google-api-services-sqladmin:v1beta4-rev20210119-1.31.0
com.google.apis:google-api-services-storage:v1-rev20200927-1.30.10
com.google.apis:google-api-services-storage:v1-rev20210127-1.31.0
com.google.appengine.tools:appengine-gcs-client:0.8.1
com.google.appengine.tools:appengine-mapreduce:0.9
com.google.appengine.tools:appengine-pipeline:0.2.13
com.google.appengine:appengine-api-1.0-sdk:1.9.86
com.google.appengine:appengine-remote-api:1.9.86
com.google.appengine:appengine-testing:1.9.86
com.google.auth:google-auth-library-credentials:0.24.1
com.google.auth:google-auth-library-oauth2-http:0.24.1
com.google.auth:google-auth-library-credentials:0.26.0
com.google.auth:google-auth-library-oauth2-http:0.26.0
com.google.auto.service:auto-service-annotations:1.0-rc7
com.google.auto.value:auto-value-annotations:1.7.4
com.google.auto.value:auto-value-annotations:1.8.1
com.google.auto.value:auto-value:1.7.4
com.google.cloud.bigdataoss:gcsio:2.1.6
com.google.cloud.bigdataoss:util:2.1.6
@@ -102,30 +105,33 @@ com.google.cloud:google-cloud-bigquery:1.122.2
com.google.cloud:google-cloud-bigquerystorage:1.5.5
com.google.cloud:google-cloud-bigtable:1.14.0
com.google.cloud:google-cloud-core-grpc:1.93.9
com.google.cloud:google-cloud-core-http:1.93.9
com.google.cloud:google-cloud-core:1.93.9
com.google.cloud:google-cloud-core-http:1.94.1
com.google.cloud:google-cloud-core:1.94.3
com.google.cloud:google-cloud-pubsub:1.110.0
com.google.cloud:google-cloud-pubsublite:0.7.0
com.google.cloud:google-cloud-secretmanager:1.4.0
com.google.cloud:google-cloud-spanner:2.0.2
com.google.cloud:google-cloud-storage:1.113.12
com.google.cloud:google-cloud-tasks:1.33.2
com.google.code.findbugs:jsr305:3.0.2
com.google.code.gson:gson:2.8.6
com.google.common.html.types:types:1.0.4
com.google.code.gson:gson:2.8.7
com.google.common.html.types:types:1.0.6
com.google.dagger:dagger:2.33
com.google.errorprone:error_prone_annotations:2.5.1
com.google.errorprone:error_prone_annotations:2.7.1
com.google.escapevelocity:escapevelocity:0.9.1
com.google.flogger:flogger-system-backend:0.5.1
com.google.flogger:flogger:0.5.1
com.google.flogger:google-extensions:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.gwt:gwt-user:2.9.0
com.google.http-client:google-http-client-apache-v2:1.39.0
com.google.http-client:google-http-client-appengine:1.39.0
com.google.http-client:google-http-client-gson:1.39.0
com.google.http-client:google-http-client-gson:1.39.2
com.google.http-client:google-http-client-jackson2:1.39.0
com.google.http-client:google-http-client-protobuf:1.33.0
com.google.http-client:google-http-client:1.39.0
com.google.http-client:google-http-client:1.39.2
com.google.inject.extensions:guice-multibindings:4.1.0
com.google.inject:guice:4.1.0
com.google.j2objc:j2objc-annotations:1.3
@@ -137,10 +143,10 @@ com.google.oauth-client:google-oauth-client-java6:1.31.4
com.google.oauth-client:google-oauth-client-jetty:1.31.4
com.google.oauth-client:google-oauth-client-servlet:1.31.4
com.google.oauth-client:google-oauth-client:1.31.4
com.google.protobuf:protobuf-java-util:3.15.2
com.google.protobuf:protobuf-java:3.15.2
com.google.protobuf:protobuf-java-util:3.17.3
com.google.protobuf:protobuf-java:3.17.3
com.google.re2j:re2j:1.6
com.google.template:soy:2018-03-14
com.google.template:soy:2021-02-01
com.googlecode.charts4j:charts4j:1.3
com.googlecode.json-simple:json-simple:1.1.1
com.ibm.icu:icu4j:68.2
@@ -157,17 +163,17 @@ guru.nidi:graphviz-java-all-j2v8:0.17.0
guru.nidi:graphviz-java:0.17.0
io.dropwizard.metrics:metrics-core:3.2.6
io.github.classgraph:classgraph:4.8.65
io.grpc:grpc-alts:1.36.0
io.grpc:grpc-api:1.36.0
io.grpc:grpc-auth:1.36.0
io.grpc:grpc-context:1.36.0
io.grpc:grpc-core:1.36.0
io.grpc:grpc-grpclb:1.36.0
io.grpc:grpc-netty-shaded:1.36.0
io.grpc:grpc-alts:1.39.0
io.grpc:grpc-api:1.39.0
io.grpc:grpc-auth:1.39.0
io.grpc:grpc-context:1.39.0
io.grpc:grpc-core:1.39.0
io.grpc:grpc-grpclb:1.39.0
io.grpc:grpc-netty-shaded:1.39.0
io.grpc:grpc-netty:1.32.2
io.grpc:grpc-protobuf-lite:1.36.0
io.grpc:grpc-protobuf:1.36.0
io.grpc:grpc-stub:1.36.0
io.grpc:grpc-protobuf-lite:1.39.0
io.grpc:grpc-protobuf:1.39.0
io.grpc:grpc-stub:1.39.0
io.netty:netty-buffer:4.1.51.Final
io.netty:netty-codec-http2:4.1.51.Final
io.netty:netty-codec-http:4.1.51.Final
@@ -231,7 +237,7 @@ org.bouncycastle:bcpg-jdk15on:1.61
org.bouncycastle:bcpkix-jdk15on:1.61
org.bouncycastle:bcprov-jdk15on:1.61
org.checkerframework:checker-compat-qual:2.5.5
org.checkerframework:checker-qual:3.7.0
org.checkerframework:checker-qual:3.8.0
org.codehaus.jackson:jackson-core-asl:1.9.13
org.codehaus.jackson:jackson-mapper-asl:1.9.13
org.codehaus.mojo:animal-sniffer-annotations:1.20
@@ -267,11 +273,11 @@ org.slf4j:jcl-over-slf4j:1.7.30
org.slf4j:jul-to-slf4j:1.7.30
org.slf4j:slf4j-api:1.7.30
org.slf4j:slf4j-jdk14:1.7.28
org.testcontainers:database-commons:1.15.1
org.testcontainers:jdbc:1.15.1
org.testcontainers:postgresql:1.15.1
org.testcontainers:testcontainers:1.15.1
org.threeten:threetenbp:1.5.0
org.testcontainers:database-commons:1.15.2
org.testcontainers:jdbc:1.15.2
org.testcontainers:postgresql:1.15.2
org.testcontainers:testcontainers:1.15.2
org.threeten:threetenbp:1.5.1
org.tukaani:xz:1.5
org.w3c.css:sac:1.3
org.webjars.npm:viz.js-for-graphviz-java:2.1.3


@@ -58,12 +58,15 @@ com.google.api.grpc:proto-google-cloud-secretmanager-v1beta1:1.4.0
com.google.api.grpc:proto-google-cloud-spanner-admin-database-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-admin-instance-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-v1:2.0.2
com.google.api.grpc:proto-google-common-protos:2.1.0
com.google.api.grpc:proto-google-iam-v1:1.0.9
com.google.api:api-common:1.10.1
com.google.api:gax-grpc:1.62.0
com.google.api:gax-httpjson:0.76.1
com.google.api:gax:1.62.0
com.google.api.grpc:proto-google-cloud-tasks-v2:1.33.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta2:0.89.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta3:0.89.2
com.google.api.grpc:proto-google-common-protos:2.3.2
com.google.api.grpc:proto-google-iam-v1:1.0.14
com.google.api:api-common:1.10.4
com.google.api:gax-grpc:1.66.0
com.google.api:gax-httpjson:0.79.0
com.google.api:gax:1.66.0
com.google.apis:google-api-services-admin-directory:directory_v1-rev118-1.25.0
com.google.apis:google-api-services-appengine:v1-rev130-1.25.0
com.google.apis:google-api-services-bigquery:v2-rev20200916-1.30.10
@@ -80,17 +83,17 @@ com.google.apis:google-api-services-monitoring:v3-rev540-1.25.0
com.google.apis:google-api-services-pubsub:v1-rev20200713-1.30.10
com.google.apis:google-api-services-sheets:v4-rev612-1.25.0
com.google.apis:google-api-services-sqladmin:v1beta4-rev20210119-1.31.0
com.google.apis:google-api-services-storage:v1-rev20200927-1.30.10
com.google.apis:google-api-services-storage:v1-rev20210127-1.31.0
com.google.appengine.tools:appengine-gcs-client:0.8.1
com.google.appengine.tools:appengine-mapreduce:0.9
com.google.appengine.tools:appengine-pipeline:0.2.13
com.google.appengine:appengine-api-1.0-sdk:1.9.86
com.google.appengine:appengine-remote-api:1.9.86
com.google.appengine:appengine-testing:1.9.86
com.google.auth:google-auth-library-credentials:0.24.1
com.google.auth:google-auth-library-oauth2-http:0.24.1
com.google.auth:google-auth-library-credentials:0.26.0
com.google.auth:google-auth-library-oauth2-http:0.26.0
com.google.auto.service:auto-service-annotations:1.0-rc7
com.google.auto.value:auto-value-annotations:1.7.4
com.google.auto.value:auto-value-annotations:1.8.1
com.google.auto.value:auto-value:1.7.4
com.google.cloud.bigdataoss:gcsio:2.1.6
com.google.cloud.bigdataoss:util:2.1.6
@@ -102,30 +105,33 @@ com.google.cloud:google-cloud-bigquery:1.122.2
com.google.cloud:google-cloud-bigquerystorage:1.5.5
com.google.cloud:google-cloud-bigtable:1.14.0
com.google.cloud:google-cloud-core-grpc:1.93.9
com.google.cloud:google-cloud-core-http:1.93.9
com.google.cloud:google-cloud-core:1.93.9
com.google.cloud:google-cloud-core-http:1.94.1
com.google.cloud:google-cloud-core:1.94.3
com.google.cloud:google-cloud-pubsub:1.110.0
com.google.cloud:google-cloud-pubsublite:0.7.0
com.google.cloud:google-cloud-secretmanager:1.4.0
com.google.cloud:google-cloud-spanner:2.0.2
com.google.cloud:google-cloud-storage:1.113.12
com.google.cloud:google-cloud-tasks:1.33.2
com.google.code.findbugs:jsr305:3.0.2
com.google.code.gson:gson:2.8.6
com.google.common.html.types:types:1.0.4
com.google.code.gson:gson:2.8.7
com.google.common.html.types:types:1.0.6
com.google.dagger:dagger:2.33
com.google.errorprone:error_prone_annotations:2.5.1
com.google.errorprone:error_prone_annotations:2.7.1
com.google.escapevelocity:escapevelocity:0.9.1
com.google.flogger:flogger-system-backend:0.5.1
com.google.flogger:flogger:0.5.1
com.google.flogger:google-extensions:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.gwt:gwt-user:2.9.0
com.google.http-client:google-http-client-apache-v2:1.39.0
com.google.http-client:google-http-client-appengine:1.39.0
com.google.http-client:google-http-client-gson:1.39.0
com.google.http-client:google-http-client-gson:1.39.2
com.google.http-client:google-http-client-jackson2:1.39.0
com.google.http-client:google-http-client-protobuf:1.33.0
com.google.http-client:google-http-client:1.39.0
com.google.http-client:google-http-client:1.39.2
com.google.inject.extensions:guice-multibindings:4.1.0
com.google.inject:guice:4.1.0
com.google.j2objc:j2objc-annotations:1.3
@@ -137,10 +143,10 @@ com.google.oauth-client:google-oauth-client-java6:1.31.4
com.google.oauth-client:google-oauth-client-jetty:1.31.4
com.google.oauth-client:google-oauth-client-servlet:1.31.4
com.google.oauth-client:google-oauth-client:1.31.4
com.google.protobuf:protobuf-java-util:3.15.2
com.google.protobuf:protobuf-java:3.15.2
com.google.protobuf:protobuf-java-util:3.17.3
com.google.protobuf:protobuf-java:3.17.3
com.google.re2j:re2j:1.6
com.google.template:soy:2018-03-14
com.google.template:soy:2021-02-01
com.googlecode.charts4j:charts4j:1.3
com.googlecode.json-simple:json-simple:1.1.1
com.ibm.icu:icu4j:68.2
@@ -157,17 +163,17 @@ guru.nidi:graphviz-java-all-j2v8:0.17.0
guru.nidi:graphviz-java:0.17.0
io.dropwizard.metrics:metrics-core:3.2.6
io.github.classgraph:classgraph:4.8.65
io.grpc:grpc-alts:1.36.0
io.grpc:grpc-api:1.36.0
io.grpc:grpc-auth:1.36.0
io.grpc:grpc-context:1.36.0
io.grpc:grpc-core:1.36.0
io.grpc:grpc-grpclb:1.36.0
io.grpc:grpc-netty-shaded:1.36.0
io.grpc:grpc-alts:1.39.0
io.grpc:grpc-api:1.39.0
io.grpc:grpc-auth:1.39.0
io.grpc:grpc-context:1.39.0
io.grpc:grpc-core:1.39.0
io.grpc:grpc-grpclb:1.39.0
io.grpc:grpc-netty-shaded:1.39.0
io.grpc:grpc-netty:1.32.2
io.grpc:grpc-protobuf-lite:1.36.0
io.grpc:grpc-protobuf:1.36.0
io.grpc:grpc-stub:1.36.0
io.grpc:grpc-protobuf-lite:1.39.0
io.grpc:grpc-protobuf:1.39.0
io.grpc:grpc-stub:1.39.0
io.netty:netty-buffer:4.1.51.Final
io.netty:netty-codec-http2:4.1.51.Final
io.netty:netty-codec-http:4.1.51.Final
@@ -230,7 +236,7 @@ org.bouncycastle:bcpg-jdk15on:1.61
org.bouncycastle:bcpkix-jdk15on:1.61
org.bouncycastle:bcprov-jdk15on:1.61
org.checkerframework:checker-compat-qual:2.5.5
org.checkerframework:checker-qual:3.7.0
org.checkerframework:checker-qual:3.8.0
org.codehaus.jackson:jackson-core-asl:1.9.13
org.codehaus.jackson:jackson-mapper-asl:1.9.13
org.codehaus.mojo:animal-sniffer-annotations:1.20
@@ -266,11 +272,11 @@ org.slf4j:jcl-over-slf4j:1.7.30
org.slf4j:jul-to-slf4j:1.7.30
org.slf4j:slf4j-api:1.7.30
org.slf4j:slf4j-jdk14:1.7.28
org.testcontainers:database-commons:1.15.1
org.testcontainers:jdbc:1.15.1
org.testcontainers:postgresql:1.15.1
org.testcontainers:testcontainers:1.15.1
org.threeten:threetenbp:1.5.0
org.testcontainers:database-commons:1.15.2
org.testcontainers:jdbc:1.15.2
org.testcontainers:postgresql:1.15.2
org.testcontainers:testcontainers:1.15.2
org.threeten:threetenbp:1.5.1
org.tukaani:xz:1.5
org.w3c.css:sac:1.3
org.webjars.npm:viz.js-for-graphviz-java:2.1.3


@@ -54,12 +54,15 @@ com.google.api.grpc:proto-google-cloud-secretmanager-v1beta1:1.4.0
com.google.api.grpc:proto-google-cloud-spanner-admin-database-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-admin-instance-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-v1:2.0.2
com.google.api.grpc:proto-google-common-protos:2.1.0
com.google.api.grpc:proto-google-iam-v1:1.0.9
com.google.api:api-common:1.10.1
com.google.api:gax-grpc:1.62.0
com.google.api:gax-httpjson:0.76.1
com.google.api:gax:1.62.0
com.google.api.grpc:proto-google-cloud-tasks-v2:1.33.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta2:0.89.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta3:0.89.2
com.google.api.grpc:proto-google-common-protos:2.3.2
com.google.api.grpc:proto-google-iam-v1:1.0.14
com.google.api:api-common:1.10.4
com.google.api:gax-grpc:1.66.0
com.google.api:gax-httpjson:0.79.0
com.google.api:gax:1.66.0
com.google.apis:google-api-services-admin-directory:directory_v1-rev118-1.25.0
com.google.apis:google-api-services-appengine:v1-rev130-1.25.0
com.google.apis:google-api-services-bigquery:v2-rev20200916-1.30.10
@@ -76,17 +79,17 @@ com.google.apis:google-api-services-monitoring:v3-rev540-1.25.0
com.google.apis:google-api-services-pubsub:v1-rev20200713-1.30.10
com.google.apis:google-api-services-sheets:v4-rev612-1.25.0
com.google.apis:google-api-services-sqladmin:v1beta4-rev20210119-1.31.0
com.google.apis:google-api-services-storage:v1-rev20200927-1.30.10
com.google.apis:google-api-services-storage:v1-rev20210127-1.31.0
com.google.appengine.tools:appengine-gcs-client:0.8.1
com.google.appengine.tools:appengine-mapreduce:0.9
com.google.appengine.tools:appengine-pipeline:0.2.13
com.google.appengine:appengine-api-1.0-sdk:1.9.86
com.google.appengine:appengine-remote-api:1.9.86
com.google.appengine:appengine-testing:1.9.86
com.google.auth:google-auth-library-credentials:0.24.1
com.google.auth:google-auth-library-oauth2-http:0.24.1
com.google.auth:google-auth-library-credentials:0.26.0
com.google.auth:google-auth-library-oauth2-http:0.26.0
com.google.auto.service:auto-service-annotations:1.0-rc7
com.google.auto.value:auto-value-annotations:1.7.4
com.google.auto.value:auto-value-annotations:1.8.1
com.google.auto.value:auto-value:1.7.4
com.google.cloud.bigdataoss:gcsio:2.1.6
com.google.cloud.bigdataoss:util:2.1.6
@@ -97,30 +100,33 @@ com.google.cloud:google-cloud-bigquery:1.122.2
com.google.cloud:google-cloud-bigquerystorage:1.5.5
com.google.cloud:google-cloud-bigtable:1.14.0
com.google.cloud:google-cloud-core-grpc:1.93.9
com.google.cloud:google-cloud-core-http:1.93.9
com.google.cloud:google-cloud-core:1.93.9
com.google.cloud:google-cloud-core-http:1.94.1
com.google.cloud:google-cloud-core:1.94.3
com.google.cloud:google-cloud-pubsub:1.110.0
com.google.cloud:google-cloud-pubsublite:0.7.0
com.google.cloud:google-cloud-secretmanager:1.4.0
com.google.cloud:google-cloud-spanner:2.0.2
com.google.cloud:google-cloud-storage:1.113.12
com.google.cloud:google-cloud-tasks:1.33.2
com.google.code.findbugs:jsr305:3.0.2
com.google.code.gson:gson:2.8.6
com.google.common.html.types:types:1.0.4
com.google.code.gson:gson:2.8.7
com.google.common.html.types:types:1.0.6
com.google.dagger:dagger:2.33
com.google.errorprone:error_prone_annotations:2.5.1
com.google.errorprone:error_prone_annotations:2.7.1
com.google.escapevelocity:escapevelocity:0.9.1
com.google.flogger:flogger-system-backend:0.5.1
com.google.flogger:flogger:0.5.1
com.google.flogger:google-extensions:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.gwt:gwt-user:2.9.0
com.google.http-client:google-http-client-apache-v2:1.39.0
com.google.http-client:google-http-client-appengine:1.39.0
com.google.http-client:google-http-client-gson:1.39.0
com.google.http-client:google-http-client-gson:1.39.2
com.google.http-client:google-http-client-jackson2:1.39.0
com.google.http-client:google-http-client-protobuf:1.33.0
com.google.http-client:google-http-client:1.39.0
com.google.http-client:google-http-client:1.39.2
com.google.inject.extensions:guice-multibindings:4.1.0
com.google.inject:guice:4.1.0
com.google.j2objc:j2objc-annotations:1.3
@@ -132,10 +138,10 @@ com.google.oauth-client:google-oauth-client-java6:1.31.4
com.google.oauth-client:google-oauth-client-jetty:1.31.4
com.google.oauth-client:google-oauth-client-servlet:1.31.4
com.google.oauth-client:google-oauth-client:1.31.4
com.google.protobuf:protobuf-java-util:3.15.2
com.google.protobuf:protobuf-java:3.15.2
com.google.protobuf:protobuf-java-util:3.17.3
com.google.protobuf:protobuf-java:3.17.3
com.google.re2j:re2j:1.6
com.google.template:soy:2018-03-14
com.google.template:soy:2021-02-01
com.googlecode.charts4j:charts4j:1.3
com.googlecode.json-simple:json-simple:1.1.1
com.ibm.icu:icu4j:68.2
@@ -149,17 +155,17 @@ commons-logging:commons-logging:1.2
dnsjava:dnsjava:3.3.1
io.dropwizard.metrics:metrics-core:3.2.6
io.github.classgraph:classgraph:4.8.65
io.grpc:grpc-alts:1.36.0
io.grpc:grpc-api:1.36.0
io.grpc:grpc-auth:1.36.0
io.grpc:grpc-context:1.36.0
io.grpc:grpc-core:1.36.0
io.grpc:grpc-grpclb:1.36.0
io.grpc:grpc-netty-shaded:1.36.0
io.grpc:grpc-alts:1.39.0
io.grpc:grpc-api:1.39.0
io.grpc:grpc-auth:1.39.0
io.grpc:grpc-context:1.39.0
io.grpc:grpc-core:1.39.0
io.grpc:grpc-grpclb:1.39.0
io.grpc:grpc-netty-shaded:1.39.0
io.grpc:grpc-netty:1.32.2
io.grpc:grpc-protobuf-lite:1.36.0
io.grpc:grpc-protobuf:1.36.0
io.grpc:grpc-stub:1.36.0
io.grpc:grpc-protobuf-lite:1.39.0
io.grpc:grpc-protobuf:1.39.0
io.grpc:grpc-stub:1.39.0
io.netty:netty-buffer:4.1.51.Final
io.netty:netty-codec-http2:4.1.51.Final
io.netty:netty-codec-http:4.1.51.Final
@@ -220,7 +226,7 @@ org.bouncycastle:bcpg-jdk15on:1.61
org.bouncycastle:bcpkix-jdk15on:1.61
org.bouncycastle:bcprov-jdk15on:1.61
org.checkerframework:checker-compat-qual:2.5.5
org.checkerframework:checker-qual:3.7.0
org.checkerframework:checker-qual:3.8.0
org.codehaus.jackson:jackson-core-asl:1.9.13
org.codehaus.jackson:jackson-mapper-asl:1.9.13
org.codehaus.mojo:animal-sniffer-annotations:1.20
@@ -253,11 +259,11 @@ org.postgresql:postgresql:42.2.18
org.rnorth.duct-tape:duct-tape:1.0.8
org.rnorth.visible-assertions:visible-assertions:2.1.2
org.slf4j:slf4j-api:1.7.30
org.testcontainers:database-commons:1.15.1
org.testcontainers:jdbc:1.15.1
org.testcontainers:postgresql:1.15.1
org.testcontainers:testcontainers:1.15.1
org.threeten:threetenbp:1.5.0
org.testcontainers:database-commons:1.15.2
org.testcontainers:jdbc:1.15.2
org.testcontainers:postgresql:1.15.2
org.testcontainers:testcontainers:1.15.2
org.threeten:threetenbp:1.5.1
org.tukaani:xz:1.5
org.w3c.css:sac:1.3
org.xerial.snappy:snappy-java:1.1.4


@@ -53,12 +53,15 @@ com.google.api.grpc:proto-google-cloud-secretmanager-v1beta1:1.4.0
com.google.api.grpc:proto-google-cloud-spanner-admin-database-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-admin-instance-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-v1:2.0.2
com.google.api.grpc:proto-google-common-protos:2.1.0
com.google.api.grpc:proto-google-iam-v1:1.0.9
com.google.api:api-common:1.10.1
com.google.api:gax-grpc:1.62.0
com.google.api:gax-httpjson:0.76.1
com.google.api:gax:1.62.0
com.google.api.grpc:proto-google-cloud-tasks-v2:1.33.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta2:0.89.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta3:0.89.2
com.google.api.grpc:proto-google-common-protos:2.3.2
com.google.api.grpc:proto-google-iam-v1:1.0.14
com.google.api:api-common:1.10.4
com.google.api:gax-grpc:1.66.0
com.google.api:gax-httpjson:0.79.0
com.google.api:gax:1.66.0
com.google.apis:google-api-services-admin-directory:directory_v1-rev118-1.25.0
com.google.apis:google-api-services-appengine:v1-rev130-1.25.0
com.google.apis:google-api-services-bigquery:v2-rev20200916-1.30.10
@@ -75,17 +78,17 @@ com.google.apis:google-api-services-monitoring:v3-rev540-1.25.0
com.google.apis:google-api-services-pubsub:v1-rev20200713-1.30.10
com.google.apis:google-api-services-sheets:v4-rev612-1.25.0
com.google.apis:google-api-services-sqladmin:v1beta4-rev20210119-1.31.0
com.google.apis:google-api-services-storage:v1-rev20200927-1.30.10
com.google.apis:google-api-services-storage:v1-rev20210127-1.31.0
com.google.appengine.tools:appengine-gcs-client:0.8.1
com.google.appengine.tools:appengine-mapreduce:0.9
com.google.appengine.tools:appengine-pipeline:0.2.13
com.google.appengine:appengine-api-1.0-sdk:1.9.86
com.google.appengine:appengine-remote-api:1.9.86
com.google.appengine:appengine-testing:1.9.86
com.google.auth:google-auth-library-credentials:0.24.1
com.google.auth:google-auth-library-oauth2-http:0.24.1
com.google.auth:google-auth-library-credentials:0.26.0
com.google.auth:google-auth-library-oauth2-http:0.26.0
com.google.auto.service:auto-service-annotations:1.0-rc7
com.google.auto.value:auto-value-annotations:1.7.4
com.google.auto.value:auto-value-annotations:1.8.1
com.google.auto.value:auto-value:1.7.4
com.google.cloud.bigdataoss:gcsio:2.1.6
com.google.cloud.bigdataoss:util:2.1.6
@@ -96,29 +99,32 @@ com.google.cloud:google-cloud-bigquery:1.122.2
com.google.cloud:google-cloud-bigquerystorage:1.5.5
com.google.cloud:google-cloud-bigtable:1.14.0
com.google.cloud:google-cloud-core-grpc:1.93.9
com.google.cloud:google-cloud-core-http:1.93.9
com.google.cloud:google-cloud-core:1.93.9
com.google.cloud:google-cloud-core-http:1.94.1
com.google.cloud:google-cloud-core:1.94.3
com.google.cloud:google-cloud-pubsub:1.110.0
com.google.cloud:google-cloud-pubsublite:0.7.0
com.google.cloud:google-cloud-secretmanager:1.4.0
com.google.cloud:google-cloud-spanner:2.0.2
com.google.cloud:google-cloud-storage:1.113.12
com.google.cloud:google-cloud-tasks:1.33.2
com.google.code.findbugs:jsr305:3.0.2
com.google.code.gson:gson:2.8.6
com.google.common.html.types:types:1.0.4
com.google.code.gson:gson:2.8.7
com.google.common.html.types:types:1.0.6
com.google.dagger:dagger:2.33
com.google.errorprone:error_prone_annotations:2.5.1
com.google.errorprone:error_prone_annotations:2.7.1
com.google.escapevelocity:escapevelocity:0.9.1
com.google.flogger:flogger:0.5.1
com.google.flogger:google-extensions:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.gwt:gwt-user:2.9.0
com.google.http-client:google-http-client-apache-v2:1.39.0
com.google.http-client:google-http-client-appengine:1.39.0
com.google.http-client:google-http-client-gson:1.39.0
com.google.http-client:google-http-client-gson:1.39.2
com.google.http-client:google-http-client-jackson2:1.39.0
com.google.http-client:google-http-client-protobuf:1.33.0
com.google.http-client:google-http-client:1.39.0
com.google.http-client:google-http-client:1.39.2
com.google.inject.extensions:guice-multibindings:4.1.0
com.google.inject:guice:4.1.0
com.google.j2objc:j2objc-annotations:1.3
@@ -130,10 +136,10 @@ com.google.oauth-client:google-oauth-client-java6:1.31.4
com.google.oauth-client:google-oauth-client-jetty:1.31.4
com.google.oauth-client:google-oauth-client-servlet:1.31.4
com.google.oauth-client:google-oauth-client:1.31.4
com.google.protobuf:protobuf-java-util:3.14.0
com.google.protobuf:protobuf-java:3.15.2
com.google.protobuf:protobuf-java-util:3.15.3
com.google.protobuf:protobuf-java:3.17.3
com.google.re2j:re2j:1.6
com.google.template:soy:2018-03-14
com.google.template:soy:2021-02-01
com.googlecode.charts4j:charts4j:1.3
com.googlecode.json-simple:json-simple:1.1.1
com.ibm.icu:icu4j:68.2
@@ -147,17 +153,17 @@ commons-logging:commons-logging:1.2
dnsjava:dnsjava:3.3.1
io.dropwizard.metrics:metrics-core:3.2.6
io.github.classgraph:classgraph:4.8.65
io.grpc:grpc-alts:1.36.0
io.grpc:grpc-api:1.36.0
io.grpc:grpc-auth:1.36.0
io.grpc:grpc-context:1.36.0
io.grpc:grpc-core:1.36.0
io.grpc:grpc-grpclb:1.36.0
io.grpc:grpc-netty-shaded:1.36.0
io.grpc:grpc-alts:1.39.0
io.grpc:grpc-api:1.39.0
io.grpc:grpc-auth:1.39.0
io.grpc:grpc-context:1.39.0
io.grpc:grpc-core:1.39.0
io.grpc:grpc-grpclb:1.39.0
io.grpc:grpc-netty-shaded:1.39.0
io.grpc:grpc-netty:1.32.2
io.grpc:grpc-protobuf-lite:1.36.0
io.grpc:grpc-protobuf:1.36.0
io.grpc:grpc-stub:1.36.0
io.grpc:grpc-protobuf-lite:1.39.0
io.grpc:grpc-protobuf:1.39.0
io.grpc:grpc-stub:1.39.0
io.netty:netty-buffer:4.1.51.Final
io.netty:netty-codec-http2:4.1.51.Final
io.netty:netty-codec-http:4.1.51.Final
@@ -215,7 +221,7 @@ org.bouncycastle:bcpg-jdk15on:1.61
org.bouncycastle:bcpkix-jdk15on:1.61
org.bouncycastle:bcprov-jdk15on:1.61
org.checkerframework:checker-compat-qual:2.5.5
org.checkerframework:checker-qual:3.7.0
org.checkerframework:checker-qual:3.8.0
org.codehaus.jackson:jackson-core-asl:1.9.13
org.codehaus.jackson:jackson-mapper-asl:1.9.13
org.conscrypt:conscrypt-openjdk-uber:2.5.1
@@ -247,11 +253,11 @@ org.postgresql:postgresql:42.2.18
org.rnorth.duct-tape:duct-tape:1.0.8
org.rnorth.visible-assertions:visible-assertions:2.1.2
org.slf4j:slf4j-api:1.7.30
org.testcontainers:database-commons:1.15.1
org.testcontainers:jdbc:1.15.1
org.testcontainers:postgresql:1.15.1
org.testcontainers:testcontainers:1.15.1
org.threeten:threetenbp:1.5.0
org.testcontainers:database-commons:1.15.2
org.testcontainers:jdbc:1.15.2
org.testcontainers:postgresql:1.15.2
org.testcontainers:testcontainers:1.15.2
org.threeten:threetenbp:1.5.1
org.tukaani:xz:1.5
org.w3c.css:sac:1.3
org.xerial.snappy:snappy-java:1.1.4

@@ -58,12 +58,15 @@ com.google.api.grpc:proto-google-cloud-secretmanager-v1beta1:1.4.0
com.google.api.grpc:proto-google-cloud-spanner-admin-database-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-admin-instance-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-v1:2.0.2
com.google.api.grpc:proto-google-common-protos:2.1.0
com.google.api.grpc:proto-google-iam-v1:1.0.9
com.google.api:api-common:1.10.1
com.google.api:gax-grpc:1.62.0
com.google.api:gax-httpjson:0.76.1
com.google.api:gax:1.62.0
com.google.api.grpc:proto-google-cloud-tasks-v2:1.33.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta2:0.89.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta3:0.89.2
com.google.api.grpc:proto-google-common-protos:2.3.2
com.google.api.grpc:proto-google-iam-v1:1.0.14
com.google.api:api-common:1.10.4
com.google.api:gax-grpc:1.66.0
com.google.api:gax-httpjson:0.79.0
com.google.api:gax:1.66.0
com.google.apis:google-api-services-admin-directory:directory_v1-rev118-1.25.0
com.google.apis:google-api-services-appengine:v1-rev130-1.25.0
com.google.apis:google-api-services-bigquery:v2-rev20200916-1.30.10
@@ -80,17 +83,17 @@ com.google.apis:google-api-services-monitoring:v3-rev540-1.25.0
com.google.apis:google-api-services-pubsub:v1-rev20200713-1.30.10
com.google.apis:google-api-services-sheets:v4-rev612-1.25.0
com.google.apis:google-api-services-sqladmin:v1beta4-rev20210119-1.31.0
com.google.apis:google-api-services-storage:v1-rev20200927-1.30.10
com.google.apis:google-api-services-storage:v1-rev20210127-1.31.0
com.google.appengine.tools:appengine-gcs-client:0.8.1
com.google.appengine.tools:appengine-mapreduce:0.9
com.google.appengine.tools:appengine-pipeline:0.2.13
com.google.appengine:appengine-api-1.0-sdk:1.9.86
com.google.appengine:appengine-remote-api:1.9.86
com.google.appengine:appengine-testing:1.9.86
com.google.auth:google-auth-library-credentials:0.24.1
com.google.auth:google-auth-library-oauth2-http:0.24.1
com.google.auth:google-auth-library-credentials:0.26.0
com.google.auth:google-auth-library-oauth2-http:0.26.0
com.google.auto.service:auto-service-annotations:1.0-rc7
com.google.auto.value:auto-value-annotations:1.7.4
com.google.auto.value:auto-value-annotations:1.8.1
com.google.auto.value:auto-value:1.7.4
com.google.cloud.bigdataoss:gcsio:2.1.6
com.google.cloud.bigdataoss:util:2.1.6
@@ -101,30 +104,33 @@ com.google.cloud:google-cloud-bigquery:1.122.2
com.google.cloud:google-cloud-bigquerystorage:1.5.5
com.google.cloud:google-cloud-bigtable:1.14.0
com.google.cloud:google-cloud-core-grpc:1.93.9
com.google.cloud:google-cloud-core-http:1.93.9
com.google.cloud:google-cloud-core:1.93.9
com.google.cloud:google-cloud-core-http:1.94.1
com.google.cloud:google-cloud-core:1.94.3
com.google.cloud:google-cloud-pubsub:1.110.0
com.google.cloud:google-cloud-pubsublite:0.7.0
com.google.cloud:google-cloud-secretmanager:1.4.0
com.google.cloud:google-cloud-spanner:2.0.2
com.google.cloud:google-cloud-storage:1.113.12
com.google.cloud:google-cloud-tasks:1.33.2
com.google.code.findbugs:jsr305:3.0.2
com.google.code.gson:gson:2.8.6
com.google.common.html.types:types:1.0.4
com.google.code.gson:gson:2.8.7
com.google.common.html.types:types:1.0.6
com.google.dagger:dagger:2.33
com.google.errorprone:error_prone_annotations:2.5.1
com.google.errorprone:error_prone_annotations:2.7.1
com.google.escapevelocity:escapevelocity:0.9.1
com.google.flogger:flogger-system-backend:0.5.1
com.google.flogger:flogger:0.5.1
com.google.flogger:google-extensions:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.gwt:gwt-user:2.9.0
com.google.http-client:google-http-client-apache-v2:1.39.0
com.google.http-client:google-http-client-appengine:1.39.0
com.google.http-client:google-http-client-gson:1.39.0
com.google.http-client:google-http-client-gson:1.39.2
com.google.http-client:google-http-client-jackson2:1.39.0
com.google.http-client:google-http-client-protobuf:1.33.0
com.google.http-client:google-http-client:1.39.0
com.google.http-client:google-http-client:1.39.2
com.google.inject.extensions:guice-multibindings:4.1.0
com.google.inject:guice:4.1.0
com.google.j2objc:j2objc-annotations:1.3
@@ -136,10 +142,10 @@ com.google.oauth-client:google-oauth-client-java6:1.31.4
com.google.oauth-client:google-oauth-client-jetty:1.31.4
com.google.oauth-client:google-oauth-client-servlet:1.31.4
com.google.oauth-client:google-oauth-client:1.31.4
com.google.protobuf:protobuf-java-util:3.15.2
com.google.protobuf:protobuf-java:3.15.2
com.google.protobuf:protobuf-java-util:3.17.3
com.google.protobuf:protobuf-java:3.17.3
com.google.re2j:re2j:1.6
com.google.template:soy:2018-03-14
com.google.template:soy:2021-02-01
com.googlecode.charts4j:charts4j:1.3
com.googlecode.json-simple:json-simple:1.1.1
com.ibm.icu:icu4j:68.2
@@ -156,17 +162,17 @@ guru.nidi:graphviz-java-all-j2v8:0.17.0
guru.nidi:graphviz-java:0.17.0
io.dropwizard.metrics:metrics-core:3.2.6
io.github.classgraph:classgraph:4.8.65
io.grpc:grpc-alts:1.36.0
io.grpc:grpc-api:1.36.0
io.grpc:grpc-auth:1.36.0
io.grpc:grpc-context:1.36.0
io.grpc:grpc-core:1.36.0
io.grpc:grpc-grpclb:1.36.0
io.grpc:grpc-netty-shaded:1.36.0
io.grpc:grpc-alts:1.39.0
io.grpc:grpc-api:1.39.0
io.grpc:grpc-auth:1.39.0
io.grpc:grpc-context:1.39.0
io.grpc:grpc-core:1.39.0
io.grpc:grpc-grpclb:1.39.0
io.grpc:grpc-netty-shaded:1.39.0
io.grpc:grpc-netty:1.32.2
io.grpc:grpc-protobuf-lite:1.36.0
io.grpc:grpc-protobuf:1.36.0
io.grpc:grpc-stub:1.36.0
io.grpc:grpc-protobuf-lite:1.39.0
io.grpc:grpc-protobuf:1.39.0
io.grpc:grpc-stub:1.39.0
io.netty:netty-buffer:4.1.51.Final
io.netty:netty-codec-http2:4.1.51.Final
io.netty:netty-codec-http:4.1.51.Final
@@ -230,7 +236,7 @@ org.bouncycastle:bcpg-jdk15on:1.61
org.bouncycastle:bcpkix-jdk15on:1.61
org.bouncycastle:bcprov-jdk15on:1.61
org.checkerframework:checker-compat-qual:2.5.5
org.checkerframework:checker-qual:3.7.0
org.checkerframework:checker-qual:3.8.0
org.codehaus.jackson:jackson-core-asl:1.9.13
org.codehaus.jackson:jackson-mapper-asl:1.9.13
org.codehaus.mojo:animal-sniffer-annotations:1.20
@@ -265,11 +271,11 @@ org.rnorth.visible-assertions:visible-assertions:2.1.2
org.slf4j:jcl-over-slf4j:1.7.30
org.slf4j:jul-to-slf4j:1.7.30
org.slf4j:slf4j-api:1.7.30
org.testcontainers:database-commons:1.15.1
org.testcontainers:jdbc:1.15.1
org.testcontainers:postgresql:1.15.1
org.testcontainers:testcontainers:1.15.1
org.threeten:threetenbp:1.5.0
org.testcontainers:database-commons:1.15.2
org.testcontainers:jdbc:1.15.2
org.testcontainers:postgresql:1.15.2
org.testcontainers:testcontainers:1.15.2
org.threeten:threetenbp:1.5.1
org.tukaani:xz:1.5
org.w3c.css:sac:1.3
org.webjars.npm:viz.js-for-graphviz-java:2.1.3

@@ -58,12 +58,15 @@ com.google.api.grpc:proto-google-cloud-secretmanager-v1beta1:1.4.0
com.google.api.grpc:proto-google-cloud-spanner-admin-database-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-admin-instance-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-v1:2.0.2
com.google.api.grpc:proto-google-common-protos:2.1.0
com.google.api.grpc:proto-google-iam-v1:1.0.9
com.google.api:api-common:1.10.1
com.google.api:gax-grpc:1.62.0
com.google.api:gax-httpjson:0.76.1
com.google.api:gax:1.62.0
com.google.api.grpc:proto-google-cloud-tasks-v2:1.33.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta2:0.89.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta3:0.89.2
com.google.api.grpc:proto-google-common-protos:2.3.2
com.google.api.grpc:proto-google-iam-v1:1.0.14
com.google.api:api-common:1.10.4
com.google.api:gax-grpc:1.66.0
com.google.api:gax-httpjson:0.79.0
com.google.api:gax:1.66.0
com.google.apis:google-api-services-admin-directory:directory_v1-rev118-1.25.0
com.google.apis:google-api-services-appengine:v1-rev130-1.25.0
com.google.apis:google-api-services-bigquery:v2-rev20200916-1.30.10
@@ -80,17 +83,17 @@ com.google.apis:google-api-services-monitoring:v3-rev540-1.25.0
com.google.apis:google-api-services-pubsub:v1-rev20200713-1.30.10
com.google.apis:google-api-services-sheets:v4-rev612-1.25.0
com.google.apis:google-api-services-sqladmin:v1beta4-rev20210119-1.31.0
com.google.apis:google-api-services-storage:v1-rev20200927-1.30.10
com.google.apis:google-api-services-storage:v1-rev20210127-1.31.0
com.google.appengine.tools:appengine-gcs-client:0.8.1
com.google.appengine.tools:appengine-mapreduce:0.9
com.google.appengine.tools:appengine-pipeline:0.2.13
com.google.appengine:appengine-api-1.0-sdk:1.9.86
com.google.appengine:appengine-remote-api:1.9.86
com.google.appengine:appengine-testing:1.9.86
com.google.auth:google-auth-library-credentials:0.24.1
com.google.auth:google-auth-library-oauth2-http:0.24.1
com.google.auth:google-auth-library-credentials:0.26.0
com.google.auth:google-auth-library-oauth2-http:0.26.0
com.google.auto.service:auto-service-annotations:1.0-rc7
com.google.auto.value:auto-value-annotations:1.7.4
com.google.auto.value:auto-value-annotations:1.8.1
com.google.auto.value:auto-value:1.7.4
com.google.cloud.bigdataoss:gcsio:2.1.6
com.google.cloud.bigdataoss:util:2.1.6
@@ -101,30 +104,33 @@ com.google.cloud:google-cloud-bigquery:1.122.2
com.google.cloud:google-cloud-bigquerystorage:1.5.5
com.google.cloud:google-cloud-bigtable:1.14.0
com.google.cloud:google-cloud-core-grpc:1.93.9
com.google.cloud:google-cloud-core-http:1.93.9
com.google.cloud:google-cloud-core:1.93.9
com.google.cloud:google-cloud-core-http:1.94.1
com.google.cloud:google-cloud-core:1.94.3
com.google.cloud:google-cloud-pubsub:1.110.0
com.google.cloud:google-cloud-pubsublite:0.7.0
com.google.cloud:google-cloud-secretmanager:1.4.0
com.google.cloud:google-cloud-spanner:2.0.2
com.google.cloud:google-cloud-storage:1.113.12
com.google.cloud:google-cloud-tasks:1.33.2
com.google.code.findbugs:jsr305:3.0.2
com.google.code.gson:gson:2.8.6
com.google.common.html.types:types:1.0.4
com.google.code.gson:gson:2.8.7
com.google.common.html.types:types:1.0.6
com.google.dagger:dagger:2.33
com.google.errorprone:error_prone_annotations:2.5.1
com.google.errorprone:error_prone_annotations:2.7.1
com.google.escapevelocity:escapevelocity:0.9.1
com.google.flogger:flogger-system-backend:0.5.1
com.google.flogger:flogger:0.5.1
com.google.flogger:google-extensions:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.gwt:gwt-user:2.9.0
com.google.http-client:google-http-client-apache-v2:1.39.0
com.google.http-client:google-http-client-appengine:1.39.0
com.google.http-client:google-http-client-gson:1.39.0
com.google.http-client:google-http-client-gson:1.39.2
com.google.http-client:google-http-client-jackson2:1.39.0
com.google.http-client:google-http-client-protobuf:1.33.0
com.google.http-client:google-http-client:1.39.0
com.google.http-client:google-http-client:1.39.2
com.google.inject.extensions:guice-multibindings:4.1.0
com.google.inject:guice:4.1.0
com.google.j2objc:j2objc-annotations:1.3
@@ -136,10 +142,10 @@ com.google.oauth-client:google-oauth-client-java6:1.31.4
com.google.oauth-client:google-oauth-client-jetty:1.31.4
com.google.oauth-client:google-oauth-client-servlet:1.31.4
com.google.oauth-client:google-oauth-client:1.31.4
com.google.protobuf:protobuf-java-util:3.15.2
com.google.protobuf:protobuf-java:3.15.2
com.google.protobuf:protobuf-java-util:3.17.3
com.google.protobuf:protobuf-java:3.17.3
com.google.re2j:re2j:1.6
com.google.template:soy:2018-03-14
com.google.template:soy:2021-02-01
com.googlecode.charts4j:charts4j:1.3
com.googlecode.json-simple:json-simple:1.1.1
com.ibm.icu:icu4j:68.2
@@ -156,17 +162,17 @@ guru.nidi:graphviz-java-all-j2v8:0.17.0
guru.nidi:graphviz-java:0.17.0
io.dropwizard.metrics:metrics-core:3.2.6
io.github.classgraph:classgraph:4.8.65
io.grpc:grpc-alts:1.36.0
io.grpc:grpc-api:1.36.0
io.grpc:grpc-auth:1.36.0
io.grpc:grpc-context:1.36.0
io.grpc:grpc-core:1.36.0
io.grpc:grpc-grpclb:1.36.0
io.grpc:grpc-netty-shaded:1.36.0
io.grpc:grpc-alts:1.39.0
io.grpc:grpc-api:1.39.0
io.grpc:grpc-auth:1.39.0
io.grpc:grpc-context:1.39.0
io.grpc:grpc-core:1.39.0
io.grpc:grpc-grpclb:1.39.0
io.grpc:grpc-netty-shaded:1.39.0
io.grpc:grpc-netty:1.32.2
io.grpc:grpc-protobuf-lite:1.36.0
io.grpc:grpc-protobuf:1.36.0
io.grpc:grpc-stub:1.36.0
io.grpc:grpc-protobuf-lite:1.39.0
io.grpc:grpc-protobuf:1.39.0
io.grpc:grpc-stub:1.39.0
io.netty:netty-buffer:4.1.51.Final
io.netty:netty-codec-http2:4.1.51.Final
io.netty:netty-codec-http:4.1.51.Final
@@ -230,7 +236,7 @@ org.bouncycastle:bcpg-jdk15on:1.61
org.bouncycastle:bcpkix-jdk15on:1.61
org.bouncycastle:bcprov-jdk15on:1.61
org.checkerframework:checker-compat-qual:2.5.5
org.checkerframework:checker-qual:3.7.0
org.checkerframework:checker-qual:3.8.0
org.codehaus.jackson:jackson-core-asl:1.9.13
org.codehaus.jackson:jackson-mapper-asl:1.9.13
org.codehaus.mojo:animal-sniffer-annotations:1.20
@@ -265,11 +271,11 @@ org.rnorth.visible-assertions:visible-assertions:2.1.2
org.slf4j:jcl-over-slf4j:1.7.30
org.slf4j:jul-to-slf4j:1.7.30
org.slf4j:slf4j-api:1.7.30
org.testcontainers:database-commons:1.15.1
org.testcontainers:jdbc:1.15.1
org.testcontainers:postgresql:1.15.1
org.testcontainers:testcontainers:1.15.1
org.threeten:threetenbp:1.5.0
org.testcontainers:database-commons:1.15.2
org.testcontainers:jdbc:1.15.2
org.testcontainers:postgresql:1.15.2
org.testcontainers:testcontainers:1.15.2
org.threeten:threetenbp:1.5.1
org.tukaani:xz:1.5
org.w3c.css:sac:1.3
org.webjars.npm:viz.js-for-graphviz-java:2.1.3

@@ -58,12 +58,15 @@ com.google.api.grpc:proto-google-cloud-secretmanager-v1beta1:1.4.0
com.google.api.grpc:proto-google-cloud-spanner-admin-database-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-admin-instance-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-v1:2.0.2
com.google.api.grpc:proto-google-common-protos:2.1.0
com.google.api.grpc:proto-google-iam-v1:1.0.9
com.google.api:api-common:1.10.1
com.google.api:gax-grpc:1.62.0
com.google.api:gax-httpjson:0.76.1
com.google.api:gax:1.62.0
com.google.api.grpc:proto-google-cloud-tasks-v2:1.33.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta2:0.89.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta3:0.89.2
com.google.api.grpc:proto-google-common-protos:2.3.2
com.google.api.grpc:proto-google-iam-v1:1.0.14
com.google.api:api-common:1.10.4
com.google.api:gax-grpc:1.66.0
com.google.api:gax-httpjson:0.79.0
com.google.api:gax:1.66.0
com.google.apis:google-api-services-admin-directory:directory_v1-rev118-1.25.0
com.google.apis:google-api-services-appengine:v1-rev130-1.25.0
com.google.apis:google-api-services-bigquery:v2-rev20200916-1.30.10
@@ -80,17 +83,17 @@ com.google.apis:google-api-services-monitoring:v3-rev540-1.25.0
com.google.apis:google-api-services-pubsub:v1-rev20200713-1.30.10
com.google.apis:google-api-services-sheets:v4-rev612-1.25.0
com.google.apis:google-api-services-sqladmin:v1beta4-rev20210119-1.31.0
com.google.apis:google-api-services-storage:v1-rev20200927-1.30.10
com.google.apis:google-api-services-storage:v1-rev20210127-1.31.0
com.google.appengine.tools:appengine-gcs-client:0.8.1
com.google.appengine.tools:appengine-mapreduce:0.9
com.google.appengine.tools:appengine-pipeline:0.2.13
com.google.appengine:appengine-api-1.0-sdk:1.9.86
com.google.appengine:appengine-remote-api:1.9.86
com.google.appengine:appengine-testing:1.9.86
com.google.auth:google-auth-library-credentials:0.24.1
com.google.auth:google-auth-library-oauth2-http:0.24.1
com.google.auth:google-auth-library-credentials:0.26.0
com.google.auth:google-auth-library-oauth2-http:0.26.0
com.google.auto.service:auto-service-annotations:1.0-rc7
com.google.auto.value:auto-value-annotations:1.7.4
com.google.auto.value:auto-value-annotations:1.8.1
com.google.auto.value:auto-value:1.7.4
com.google.cloud.bigdataoss:gcsio:2.1.6
com.google.cloud.bigdataoss:util:2.1.6
@@ -101,30 +104,33 @@ com.google.cloud:google-cloud-bigquery:1.122.2
com.google.cloud:google-cloud-bigquerystorage:1.5.5
com.google.cloud:google-cloud-bigtable:1.14.0
com.google.cloud:google-cloud-core-grpc:1.93.9
com.google.cloud:google-cloud-core-http:1.93.9
com.google.cloud:google-cloud-core:1.93.9
com.google.cloud:google-cloud-core-http:1.94.1
com.google.cloud:google-cloud-core:1.94.3
com.google.cloud:google-cloud-pubsub:1.110.0
com.google.cloud:google-cloud-pubsublite:0.7.0
com.google.cloud:google-cloud-secretmanager:1.4.0
com.google.cloud:google-cloud-spanner:2.0.2
com.google.cloud:google-cloud-storage:1.113.12
com.google.cloud:google-cloud-tasks:1.33.2
com.google.code.findbugs:jsr305:3.0.2
com.google.code.gson:gson:2.8.6
com.google.common.html.types:types:1.0.4
com.google.code.gson:gson:2.8.7
com.google.common.html.types:types:1.0.6
com.google.dagger:dagger:2.33
com.google.errorprone:error_prone_annotations:2.5.1
com.google.errorprone:error_prone_annotations:2.7.1
com.google.escapevelocity:escapevelocity:0.9.1
com.google.flogger:flogger-system-backend:0.5.1
com.google.flogger:flogger:0.5.1
com.google.flogger:google-extensions:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.gwt:gwt-user:2.9.0
com.google.http-client:google-http-client-apache-v2:1.39.0
com.google.http-client:google-http-client-appengine:1.39.0
com.google.http-client:google-http-client-gson:1.39.0
com.google.http-client:google-http-client-gson:1.39.2
com.google.http-client:google-http-client-jackson2:1.39.0
com.google.http-client:google-http-client-protobuf:1.33.0
com.google.http-client:google-http-client:1.39.0
com.google.http-client:google-http-client:1.39.2
com.google.inject.extensions:guice-multibindings:4.1.0
com.google.inject:guice:4.1.0
com.google.j2objc:j2objc-annotations:1.3
@@ -136,10 +142,10 @@ com.google.oauth-client:google-oauth-client-java6:1.31.4
com.google.oauth-client:google-oauth-client-jetty:1.31.4
com.google.oauth-client:google-oauth-client-servlet:1.31.4
com.google.oauth-client:google-oauth-client:1.31.4
com.google.protobuf:protobuf-java-util:3.15.2
com.google.protobuf:protobuf-java:3.15.2
com.google.protobuf:protobuf-java-util:3.17.3
com.google.protobuf:protobuf-java:3.17.3
com.google.re2j:re2j:1.6
com.google.template:soy:2018-03-14
com.google.template:soy:2021-02-01
com.googlecode.charts4j:charts4j:1.3
com.googlecode.json-simple:json-simple:1.1.1
com.ibm.icu:icu4j:68.2
@@ -156,17 +162,17 @@ guru.nidi:graphviz-java-all-j2v8:0.17.0
guru.nidi:graphviz-java:0.17.0
io.dropwizard.metrics:metrics-core:3.2.6
io.github.classgraph:classgraph:4.8.65
io.grpc:grpc-alts:1.36.0
io.grpc:grpc-api:1.36.0
io.grpc:grpc-auth:1.36.0
io.grpc:grpc-context:1.36.0
io.grpc:grpc-core:1.36.0
io.grpc:grpc-grpclb:1.36.0
io.grpc:grpc-netty-shaded:1.36.0
io.grpc:grpc-alts:1.39.0
io.grpc:grpc-api:1.39.0
io.grpc:grpc-auth:1.39.0
io.grpc:grpc-context:1.39.0
io.grpc:grpc-core:1.39.0
io.grpc:grpc-grpclb:1.39.0
io.grpc:grpc-netty-shaded:1.39.0
io.grpc:grpc-netty:1.32.2
io.grpc:grpc-protobuf-lite:1.36.0
io.grpc:grpc-protobuf:1.36.0
io.grpc:grpc-stub:1.36.0
io.grpc:grpc-protobuf-lite:1.39.0
io.grpc:grpc-protobuf:1.39.0
io.grpc:grpc-stub:1.39.0
io.netty:netty-buffer:4.1.51.Final
io.netty:netty-codec-http2:4.1.51.Final
io.netty:netty-codec-http:4.1.51.Final
@@ -230,7 +236,7 @@ org.bouncycastle:bcpg-jdk15on:1.61
org.bouncycastle:bcpkix-jdk15on:1.61
org.bouncycastle:bcprov-jdk15on:1.61
org.checkerframework:checker-compat-qual:2.5.5
org.checkerframework:checker-qual:3.7.0
org.checkerframework:checker-qual:3.8.0
org.codehaus.jackson:jackson-core-asl:1.9.13
org.codehaus.jackson:jackson-mapper-asl:1.9.13
org.codehaus.mojo:animal-sniffer-annotations:1.20
@@ -265,11 +271,11 @@ org.rnorth.visible-assertions:visible-assertions:2.1.2
org.slf4j:jcl-over-slf4j:1.7.30
org.slf4j:jul-to-slf4j:1.7.30
org.slf4j:slf4j-api:1.7.30
org.testcontainers:database-commons:1.15.1
org.testcontainers:jdbc:1.15.1
org.testcontainers:postgresql:1.15.1
org.testcontainers:testcontainers:1.15.1
org.threeten:threetenbp:1.5.0
org.testcontainers:database-commons:1.15.2
org.testcontainers:jdbc:1.15.2
org.testcontainers:postgresql:1.15.2
org.testcontainers:testcontainers:1.15.2
org.threeten:threetenbp:1.5.1
org.tukaani:xz:1.5
org.w3c.css:sac:1.3
org.webjars.npm:viz.js-for-graphviz-java:2.1.3

@@ -58,12 +58,15 @@ com.google.api.grpc:proto-google-cloud-secretmanager-v1beta1:1.4.0
com.google.api.grpc:proto-google-cloud-spanner-admin-database-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-admin-instance-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-v1:2.0.2
com.google.api.grpc:proto-google-common-protos:2.1.0
com.google.api.grpc:proto-google-iam-v1:1.0.9
com.google.api:api-common:1.10.1
com.google.api:gax-grpc:1.62.0
com.google.api:gax-httpjson:0.76.1
com.google.api:gax:1.62.0
com.google.api.grpc:proto-google-cloud-tasks-v2:1.33.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta2:0.89.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta3:0.89.2
com.google.api.grpc:proto-google-common-protos:2.3.2
com.google.api.grpc:proto-google-iam-v1:1.0.14
com.google.api:api-common:1.10.4
com.google.api:gax-grpc:1.66.0
com.google.api:gax-httpjson:0.79.0
com.google.api:gax:1.66.0
com.google.apis:google-api-services-admin-directory:directory_v1-rev118-1.25.0
com.google.apis:google-api-services-appengine:v1-rev130-1.25.0
com.google.apis:google-api-services-bigquery:v2-rev20200916-1.30.10
@@ -80,17 +83,17 @@ com.google.apis:google-api-services-monitoring:v3-rev540-1.25.0
com.google.apis:google-api-services-pubsub:v1-rev20200713-1.30.10
com.google.apis:google-api-services-sheets:v4-rev612-1.25.0
com.google.apis:google-api-services-sqladmin:v1beta4-rev20210119-1.31.0
com.google.apis:google-api-services-storage:v1-rev20200927-1.30.10
com.google.apis:google-api-services-storage:v1-rev20210127-1.31.0
com.google.appengine.tools:appengine-gcs-client:0.8.1
com.google.appengine.tools:appengine-mapreduce:0.9
com.google.appengine.tools:appengine-pipeline:0.2.13
com.google.appengine:appengine-api-1.0-sdk:1.9.86
com.google.appengine:appengine-remote-api:1.9.86
com.google.appengine:appengine-testing:1.9.86
com.google.auth:google-auth-library-credentials:0.24.1
com.google.auth:google-auth-library-oauth2-http:0.24.1
com.google.auth:google-auth-library-credentials:0.26.0
com.google.auth:google-auth-library-oauth2-http:0.26.0
com.google.auto.service:auto-service-annotations:1.0-rc7
com.google.auto.value:auto-value-annotations:1.7.4
com.google.auto.value:auto-value-annotations:1.8.1
com.google.auto.value:auto-value:1.7.4
com.google.cloud.bigdataoss:gcsio:2.1.6
com.google.cloud.bigdataoss:util:2.1.6
@@ -102,30 +105,33 @@ com.google.cloud:google-cloud-bigquery:1.122.2
com.google.cloud:google-cloud-bigquerystorage:1.5.5
com.google.cloud:google-cloud-bigtable:1.14.0
com.google.cloud:google-cloud-core-grpc:1.93.9
com.google.cloud:google-cloud-core-http:1.93.9
com.google.cloud:google-cloud-core:1.93.9
com.google.cloud:google-cloud-core-http:1.94.1
com.google.cloud:google-cloud-core:1.94.3
com.google.cloud:google-cloud-pubsub:1.110.0
com.google.cloud:google-cloud-pubsublite:0.7.0
com.google.cloud:google-cloud-secretmanager:1.4.0
com.google.cloud:google-cloud-spanner:2.0.2
com.google.cloud:google-cloud-storage:1.113.12
com.google.cloud:google-cloud-tasks:1.33.2
com.google.code.findbugs:jsr305:3.0.2
com.google.code.gson:gson:2.8.6
com.google.common.html.types:types:1.0.4
com.google.code.gson:gson:2.8.7
com.google.common.html.types:types:1.0.6
com.google.dagger:dagger:2.33
com.google.errorprone:error_prone_annotations:2.5.1
com.google.errorprone:error_prone_annotations:2.7.1
com.google.escapevelocity:escapevelocity:0.9.1
com.google.flogger:flogger-system-backend:0.5.1
com.google.flogger:flogger:0.5.1
com.google.flogger:google-extensions:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.gwt:gwt-user:2.9.0
com.google.http-client:google-http-client-apache-v2:1.39.0
com.google.http-client:google-http-client-appengine:1.39.0
com.google.http-client:google-http-client-gson:1.39.0
com.google.http-client:google-http-client-gson:1.39.2
com.google.http-client:google-http-client-jackson2:1.39.0
com.google.http-client:google-http-client-protobuf:1.33.0
com.google.http-client:google-http-client:1.39.0
com.google.http-client:google-http-client:1.39.2
com.google.inject.extensions:guice-multibindings:4.1.0
com.google.inject:guice:4.1.0
com.google.j2objc:j2objc-annotations:1.3
@@ -137,10 +143,10 @@ com.google.oauth-client:google-oauth-client-java6:1.31.4
com.google.oauth-client:google-oauth-client-jetty:1.31.4
com.google.oauth-client:google-oauth-client-servlet:1.31.4
com.google.oauth-client:google-oauth-client:1.31.4
com.google.protobuf:protobuf-java-util:3.15.2
com.google.protobuf:protobuf-java:3.15.2
com.google.protobuf:protobuf-java-util:3.17.3
com.google.protobuf:protobuf-java:3.17.3
com.google.re2j:re2j:1.6
com.google.template:soy:2018-03-14
com.google.template:soy:2021-02-01
com.googlecode.charts4j:charts4j:1.3
com.googlecode.json-simple:json-simple:1.1.1
com.ibm.icu:icu4j:68.2
@@ -157,17 +163,17 @@ guru.nidi:graphviz-java-all-j2v8:0.17.0
guru.nidi:graphviz-java:0.17.0
io.dropwizard.metrics:metrics-core:3.2.6
io.github.classgraph:classgraph:4.8.65
io.grpc:grpc-alts:1.36.0
io.grpc:grpc-api:1.36.0
io.grpc:grpc-auth:1.36.0
io.grpc:grpc-context:1.36.0
io.grpc:grpc-core:1.36.0
io.grpc:grpc-grpclb:1.36.0
io.grpc:grpc-netty-shaded:1.36.0
io.grpc:grpc-alts:1.39.0
io.grpc:grpc-api:1.39.0
io.grpc:grpc-auth:1.39.0
io.grpc:grpc-context:1.39.0
io.grpc:grpc-core:1.39.0
io.grpc:grpc-grpclb:1.39.0
io.grpc:grpc-netty-shaded:1.39.0
io.grpc:grpc-netty:1.32.2
io.grpc:grpc-protobuf-lite:1.36.0
io.grpc:grpc-protobuf:1.36.0
io.grpc:grpc-stub:1.36.0
io.grpc:grpc-protobuf-lite:1.39.0
io.grpc:grpc-protobuf:1.39.0
io.grpc:grpc-stub:1.39.0
io.netty:netty-buffer:4.1.51.Final
io.netty:netty-codec-http2:4.1.51.Final
io.netty:netty-codec-http:4.1.51.Final
@@ -230,7 +236,7 @@ org.bouncycastle:bcpg-jdk15on:1.61
org.bouncycastle:bcpkix-jdk15on:1.61
org.bouncycastle:bcprov-jdk15on:1.61
org.checkerframework:checker-compat-qual:2.5.5
org.checkerframework:checker-qual:3.7.0
org.checkerframework:checker-qual:3.8.0
org.codehaus.jackson:jackson-core-asl:1.9.13
org.codehaus.jackson:jackson-mapper-asl:1.9.13
org.codehaus.mojo:animal-sniffer-annotations:1.20
@@ -266,11 +272,11 @@ org.slf4j:jcl-over-slf4j:1.7.30
org.slf4j:jul-to-slf4j:1.7.30
org.slf4j:slf4j-api:1.7.30
org.slf4j:slf4j-jdk14:1.7.28
org.testcontainers:database-commons:1.15.1
org.testcontainers:jdbc:1.15.1
org.testcontainers:postgresql:1.15.1
org.testcontainers:testcontainers:1.15.1
org.threeten:threetenbp:1.5.0
org.testcontainers:database-commons:1.15.2
org.testcontainers:jdbc:1.15.2
org.testcontainers:postgresql:1.15.2
org.testcontainers:testcontainers:1.15.2
org.threeten:threetenbp:1.5.1
org.tukaani:xz:1.5
org.w3c.css:sac:1.3
org.webjars.npm:viz.js-for-graphviz-java:2.1.3

@@ -5,25 +5,25 @@ aopalliance:aopalliance:1.0
args4j:args4j:2.0.23
com.google.code.findbugs:jsr305:3.0.2
com.google.code.gson:gson:2.7
com.google.common.html.types:types:1.0.4
com.google.common.html.types:types:1.0.6
com.google.errorprone:error_prone_annotations:2.3.4
com.google.escapevelocity:escapevelocity:0.9.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.gwt:gwt-user:2.8.0-beta1
com.google.inject.extensions:guice-multibindings:4.1.0
com.google.inject:guice:5.0.1
com.google.j2objc:j2objc-annotations:1.3
com.google.jsinterop:jsinterop-annotations:1.0.1
com.google.protobuf:protobuf-java:3.13.0
com.google.template:soy:2018-03-14
com.google.template:soy:2021-02-01
com.ibm.icu:icu4j:57.1
javax.annotation:jsr250-api:1.0
javax.inject:javax.inject:1
javax.validation:validation-api:1.0.0.GA
org.checkerframework:checker-qual:3.5.0
org.json:json:20160212
org.ow2.asm:asm-analysis:6.0
org.ow2.asm:asm-commons:6.0
org.ow2.asm:asm-tree:6.0
org.ow2.asm:asm-util:6.0
org.ow2.asm:asm:6.0
org.ow2.asm:asm-analysis:7.0
org.ow2.asm:asm-commons:7.0
org.ow2.asm:asm-tree:7.0
org.ow2.asm:asm-util:7.0
org.ow2.asm:asm:7.0

@@ -12,14 +12,14 @@ com.google.dagger:dagger-producers:2.33
com.google.dagger:dagger-spi:2.33
com.google.dagger:dagger:2.33
com.google.errorprone:error_prone_annotation:2.3.4
com.google.errorprone:error_prone_annotations:2.3.4
com.google.errorprone:error_prone_annotations:2.5.1
com.google.errorprone:error_prone_check_api:2.3.4
com.google.errorprone:error_prone_core:2.3.4
com.google.errorprone:error_prone_type_annotations:2.3.4
com.google.errorprone:javac-shaded:9-dev-r4023-3
com.google.googlejavaformat:google-java-format:1.5
com.google.guava:failureaccess:1.0.1
com.google.guava:guava:30.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.j2objc:j2objc-annotations:1.3
com.google.protobuf:protobuf-java:3.4.0
@@ -30,7 +30,7 @@ javax.inject:javax.inject:1
javax.persistence:javax.persistence-api:2.2
net.ltgt.gradle.incap:incap:0.2
org.checkerframework:checker-compat-qual:2.5.3
org.checkerframework:checker-qual:3.5.0
org.checkerframework:checker-qual:3.8.0
org.checkerframework:dataflow:3.0.0
org.checkerframework:javacutil:3.0.0
org.jetbrains.kotlin:kotlin-stdlib-common:1.4.20

@@ -6,10 +6,10 @@ aopalliance:aopalliance:1.0
args4j:args4j:2.0.23
cglib:cglib-nodep:2.2
com.beust:jcommander:1.60
com.fasterxml.jackson.core:jackson-annotations:2.12.1
com.fasterxml.jackson.core:jackson-core:2.12.1
com.fasterxml.jackson.core:jackson-databind:2.12.1
com.fasterxml.jackson:jackson-bom:2.12.1
com.fasterxml.jackson.core:jackson-annotations:2.12.3
com.fasterxml.jackson.core:jackson-core:2.12.3
com.fasterxml.jackson.core:jackson-databind:2.12.3
com.fasterxml.jackson:jackson-bom:2.12.3
com.fasterxml:classmate:1.5.1
com.github.docker-java:docker-java-api:3.2.7
com.github.docker-java:docker-java-transport-zerodep:3.2.7
@@ -28,7 +28,7 @@ com.google.api-client:google-api-client-appengine:1.31.3
com.google.api-client:google-api-client-jackson2:1.30.10
com.google.api-client:google-api-client-java6:1.31.3
com.google.api-client:google-api-client-servlet:1.31.3
com.google.api-client:google-api-client:1.31.3
com.google.api-client:google-api-client:1.32.1
com.google.api.grpc:grpc-google-cloud-bigquerystorage-v1:1.5.5
com.google.api.grpc:grpc-google-cloud-bigquerystorage-v1beta1:0.105.5
com.google.api.grpc:grpc-google-cloud-bigquerystorage-v1beta2:0.105.5
@@ -54,12 +54,15 @@ com.google.api.grpc:proto-google-cloud-secretmanager-v1beta1:1.4.0
com.google.api.grpc:proto-google-cloud-spanner-admin-database-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-admin-instance-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-v1:2.0.2
com.google.api.grpc:proto-google-common-protos:2.1.0
com.google.api.grpc:proto-google-iam-v1:1.0.9
com.google.api:api-common:1.10.1
com.google.api:gax-grpc:1.62.0
com.google.api:gax-httpjson:0.76.1
com.google.api:gax:1.62.0
com.google.api.grpc:proto-google-cloud-tasks-v2:1.33.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta2:0.89.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta3:0.89.2
com.google.api.grpc:proto-google-common-protos:2.3.2
com.google.api.grpc:proto-google-iam-v1:1.0.14
com.google.api:api-common:1.10.4
com.google.api:gax-grpc:1.66.0
com.google.api:gax-httpjson:0.83.0
com.google.api:gax:1.66.0
com.google.apis:google-api-services-admin-directory:directory_v1-rev118-1.25.0
com.google.apis:google-api-services-appengine:v1-rev130-1.25.0
com.google.apis:google-api-services-bigquery:v2-rev20200916-1.30.10
@@ -76,7 +79,7 @@ com.google.apis:google-api-services-monitoring:v3-rev540-1.25.0
com.google.apis:google-api-services-pubsub:v1-rev20200713-1.30.10
com.google.apis:google-api-services-sheets:v4-rev612-1.25.0
com.google.apis:google-api-services-sqladmin:v1beta4-rev20210119-1.31.0
com.google.apis:google-api-services-storage:v1-rev20200927-1.30.10
com.google.apis:google-api-services-storage:v1-rev20210127-1.32.1
com.google.appengine.tools:appengine-gcs-client:0.8.1
com.google.appengine.tools:appengine-mapreduce:0.9
com.google.appengine.tools:appengine-pipeline:0.2.13
@@ -84,10 +87,10 @@ com.google.appengine:appengine-api-1.0-sdk:1.9.86
com.google.appengine:appengine-api-stubs:1.9.86
com.google.appengine:appengine-remote-api:1.9.86
com.google.appengine:appengine-testing:1.9.86
com.google.auth:google-auth-library-credentials:0.24.1
com.google.auth:google-auth-library-oauth2-http:0.24.1
com.google.auth:google-auth-library-credentials:0.26.0
com.google.auth:google-auth-library-oauth2-http:0.26.0
com.google.auto.service:auto-service-annotations:1.0-rc7
com.google.auto.value:auto-value-annotations:1.7.4
com.google.auto.value:auto-value-annotations:1.8.1
com.google.auto.value:auto-value:1.7.4
com.google.cloud.bigdataoss:gcsio:2.1.6
com.google.cloud.bigdataoss:util:2.1.6
@@ -98,31 +101,35 @@ com.google.cloud:google-cloud-bigquery:1.122.2
com.google.cloud:google-cloud-bigquerystorage:1.5.5
com.google.cloud:google-cloud-bigtable:1.14.0
com.google.cloud:google-cloud-core-grpc:1.93.9
com.google.cloud:google-cloud-core-http:1.93.9
com.google.cloud:google-cloud-core:1.93.9
com.google.cloud:google-cloud-core-http:1.95.4
com.google.cloud:google-cloud-core:1.95.4
com.google.cloud:google-cloud-nio:0.123.4
com.google.cloud:google-cloud-pubsub:1.110.0
com.google.cloud:google-cloud-pubsublite:0.7.0
com.google.cloud:google-cloud-secretmanager:1.4.0
com.google.cloud:google-cloud-spanner:2.0.2
com.google.cloud:google-cloud-storage:1.118.0
com.google.cloud:google-cloud-tasks:1.33.2
com.google.code.findbugs:jsr305:3.0.2
com.google.code.gson:gson:2.8.6
com.google.common.html.types:types:1.0.4
com.google.code.gson:gson:2.8.7
com.google.common.html.types:types:1.0.6
com.google.dagger:dagger:2.33
com.google.errorprone:error_prone_annotations:2.5.1
com.google.errorprone:error_prone_annotations:2.7.1
com.google.escapevelocity:escapevelocity:0.9.1
com.google.flogger:flogger-system-backend:0.5.1
com.google.flogger:flogger:0.5.1
com.google.flogger:google-extensions:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava-testlib:30.1-jre
com.google.guava:guava:30.1-jre
com.google.guava:guava-testlib:30.1.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.gwt:gwt-user:2.9.0
com.google.http-client:google-http-client-apache-v2:1.39.0
com.google.http-client:google-http-client-appengine:1.39.0
com.google.http-client:google-http-client-gson:1.39.0
com.google.http-client:google-http-client-jackson2:1.39.0
com.google.http-client:google-http-client-apache-v2:1.39.2
com.google.http-client:google-http-client-appengine:1.39.2
com.google.http-client:google-http-client-gson:1.39.2
com.google.http-client:google-http-client-jackson2:1.39.2
com.google.http-client:google-http-client-protobuf:1.33.0
com.google.http-client:google-http-client:1.39.0
com.google.http-client:google-http-client:1.39.2
com.google.inject.extensions:guice-multibindings:4.1.0
com.google.inject:guice:4.1.0
com.google.j2objc:j2objc-annotations:1.3
@@ -134,11 +141,11 @@ com.google.oauth-client:google-oauth-client-appengine:1.31.4
com.google.oauth-client:google-oauth-client-java6:1.31.4
com.google.oauth-client:google-oauth-client-jetty:1.31.4
com.google.oauth-client:google-oauth-client-servlet:1.31.4
com.google.oauth-client:google-oauth-client:1.31.4
com.google.protobuf:protobuf-java-util:3.15.2
com.google.protobuf:protobuf-java:3.15.2
com.google.oauth-client:google-oauth-client:1.31.5
com.google.protobuf:protobuf-java-util:3.17.3
com.google.protobuf:protobuf-java:3.17.3
com.google.re2j:re2j:1.6
com.google.template:soy:2018-03-14
com.google.template:soy:2021-02-01
com.google.truth.extensions:truth-java8-extension:1.1.2
com.google.truth:truth:1.1.2
com.googlecode.charts4j:charts4j:1.3
@@ -157,17 +164,17 @@ commons-logging:commons-logging:1.2
dnsjava:dnsjava:3.3.1
io.dropwizard.metrics:metrics-core:3.2.6
io.github.classgraph:classgraph:4.8.102
io.grpc:grpc-alts:1.36.0
io.grpc:grpc-api:1.36.0
io.grpc:grpc-auth:1.36.0
io.grpc:grpc-context:1.36.0
io.grpc:grpc-core:1.36.0
io.grpc:grpc-grpclb:1.36.0
io.grpc:grpc-netty-shaded:1.36.0
io.grpc:grpc-alts:1.39.0
io.grpc:grpc-api:1.39.0
io.grpc:grpc-auth:1.39.0
io.grpc:grpc-context:1.39.0
io.grpc:grpc-core:1.39.0
io.grpc:grpc-grpclb:1.39.0
io.grpc:grpc-netty-shaded:1.39.0
io.grpc:grpc-netty:1.32.2
io.grpc:grpc-protobuf-lite:1.36.0
io.grpc:grpc-protobuf:1.36.0
io.grpc:grpc-stub:1.36.0
io.grpc:grpc-protobuf-lite:1.39.0
io.grpc:grpc-protobuf:1.39.0
io.grpc:grpc-stub:1.39.0
io.netty:netty-buffer:4.1.51.Final
io.netty:netty-codec-http2:4.1.51.Final
io.netty:netty-codec-http:4.1.51.Final
@@ -198,7 +205,7 @@ javax.validation:validation-api:1.0.0.GA
javax.xml.bind:jaxb-api:2.3.1
jline:jline:1.0
joda-time:joda-time:2.10.5
junit:junit:4.13.1
junit:junit:4.13.2
net.bytebuddy:byte-buddy-agent:1.10.19
net.bytebuddy:byte-buddy:1.10.19
net.java.dev.jna:jna:5.5.0
@@ -300,13 +307,13 @@ org.seleniumhq.selenium:selenium-remote-driver:3.141.59
org.seleniumhq.selenium:selenium-safari-driver:3.141.59
org.seleniumhq.selenium:selenium-support:3.141.59
org.slf4j:slf4j-api:1.7.30
org.testcontainers:database-commons:1.15.1
org.testcontainers:jdbc:1.15.1
org.testcontainers:junit-jupiter:1.15.1
org.testcontainers:postgresql:1.15.1
org.testcontainers:selenium:1.15.1
org.testcontainers:testcontainers:1.15.1
org.threeten:threetenbp:1.5.0
org.testcontainers:database-commons:1.15.2
org.testcontainers:jdbc:1.15.2
org.testcontainers:junit-jupiter:1.15.2
org.testcontainers:postgresql:1.15.2
org.testcontainers:selenium:1.15.2
org.testcontainers:testcontainers:1.15.2
org.threeten:threetenbp:1.5.1
org.tukaani:xz:1.5
org.w3c.css:sac:1.3
org.xerial.snappy:snappy-java:1.1.4

@@ -6,10 +6,10 @@ aopalliance:aopalliance:1.0
args4j:args4j:2.0.23
cglib:cglib-nodep:2.2
com.beust:jcommander:1.60
com.fasterxml.jackson.core:jackson-annotations:2.12.1
com.fasterxml.jackson.core:jackson-core:2.12.1
com.fasterxml.jackson.core:jackson-databind:2.12.1
com.fasterxml.jackson:jackson-bom:2.12.1
com.fasterxml.jackson.core:jackson-annotations:2.12.3
com.fasterxml.jackson.core:jackson-core:2.12.3
com.fasterxml.jackson.core:jackson-databind:2.12.3
com.fasterxml.jackson:jackson-bom:2.12.3
com.fasterxml:classmate:1.5.1
com.github.docker-java:docker-java-api:3.2.7
com.github.docker-java:docker-java-transport-zerodep:3.2.7
@@ -27,7 +27,7 @@ com.google.api-client:google-api-client-appengine:1.31.3
com.google.api-client:google-api-client-jackson2:1.30.10
com.google.api-client:google-api-client-java6:1.31.3
com.google.api-client:google-api-client-servlet:1.31.3
com.google.api-client:google-api-client:1.31.3
com.google.api-client:google-api-client:1.32.1
com.google.api.grpc:grpc-google-cloud-bigquerystorage-v1:1.5.5
com.google.api.grpc:grpc-google-cloud-bigquerystorage-v1beta1:0.105.5
com.google.api.grpc:grpc-google-cloud-bigquerystorage-v1beta2:0.105.5
@@ -53,12 +53,15 @@ com.google.api.grpc:proto-google-cloud-secretmanager-v1beta1:1.4.0
com.google.api.grpc:proto-google-cloud-spanner-admin-database-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-admin-instance-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-v1:2.0.2
com.google.api.grpc:proto-google-common-protos:2.1.0
com.google.api.grpc:proto-google-iam-v1:1.0.9
com.google.api:api-common:1.10.1
com.google.api:gax-grpc:1.62.0
com.google.api:gax-httpjson:0.76.1
com.google.api:gax:1.62.0
com.google.api.grpc:proto-google-cloud-tasks-v2:1.33.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta2:0.89.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta3:0.89.2
com.google.api.grpc:proto-google-common-protos:2.3.2
com.google.api.grpc:proto-google-iam-v1:1.0.14
com.google.api:api-common:1.10.4
com.google.api:gax-grpc:1.66.0
com.google.api:gax-httpjson:0.83.0
com.google.api:gax:1.66.0
com.google.apis:google-api-services-admin-directory:directory_v1-rev118-1.25.0
com.google.apis:google-api-services-appengine:v1-rev130-1.25.0
com.google.apis:google-api-services-bigquery:v2-rev20200916-1.30.10
@@ -75,7 +78,7 @@ com.google.apis:google-api-services-monitoring:v3-rev540-1.25.0
com.google.apis:google-api-services-pubsub:v1-rev20200713-1.30.10
com.google.apis:google-api-services-sheets:v4-rev612-1.25.0
com.google.apis:google-api-services-sqladmin:v1beta4-rev20210119-1.31.0
com.google.apis:google-api-services-storage:v1-rev20200927-1.30.10
com.google.apis:google-api-services-storage:v1-rev20210127-1.32.1
com.google.appengine.tools:appengine-gcs-client:0.8.1
com.google.appengine.tools:appengine-mapreduce:0.9
com.google.appengine.tools:appengine-pipeline:0.2.13
@@ -83,10 +86,10 @@ com.google.appengine:appengine-api-1.0-sdk:1.9.86
com.google.appengine:appengine-api-stubs:1.9.86
com.google.appengine:appengine-remote-api:1.9.86
com.google.appengine:appengine-testing:1.9.86
com.google.auth:google-auth-library-credentials:0.24.1
com.google.auth:google-auth-library-oauth2-http:0.24.1
com.google.auth:google-auth-library-credentials:0.26.0
com.google.auth:google-auth-library-oauth2-http:0.26.0
com.google.auto.service:auto-service-annotations:1.0-rc7
com.google.auto.value:auto-value-annotations:1.7.4
com.google.auto.value:auto-value-annotations:1.8.1
com.google.auto.value:auto-value:1.7.4
com.google.cloud.bigdataoss:gcsio:2.1.6
com.google.cloud.bigdataoss:util:2.1.6
@@ -97,30 +100,34 @@ com.google.cloud:google-cloud-bigquery:1.122.2
com.google.cloud:google-cloud-bigquerystorage:1.5.5
com.google.cloud:google-cloud-bigtable:1.14.0
com.google.cloud:google-cloud-core-grpc:1.93.9
com.google.cloud:google-cloud-core-http:1.93.9
com.google.cloud:google-cloud-core:1.93.9
com.google.cloud:google-cloud-core-http:1.95.4
com.google.cloud:google-cloud-core:1.95.4
com.google.cloud:google-cloud-nio:0.123.4
com.google.cloud:google-cloud-pubsub:1.110.0
com.google.cloud:google-cloud-pubsublite:0.7.0
com.google.cloud:google-cloud-secretmanager:1.4.0
com.google.cloud:google-cloud-spanner:2.0.2
com.google.cloud:google-cloud-storage:1.118.0
com.google.cloud:google-cloud-tasks:1.33.2
com.google.code.findbugs:jsr305:3.0.2
com.google.code.gson:gson:2.8.6
com.google.common.html.types:types:1.0.4
com.google.code.gson:gson:2.8.7
com.google.common.html.types:types:1.0.6
com.google.dagger:dagger:2.33
com.google.errorprone:error_prone_annotations:2.5.1
com.google.errorprone:error_prone_annotations:2.7.1
com.google.escapevelocity:escapevelocity:0.9.1
com.google.flogger:flogger:0.5.1
com.google.flogger:google-extensions:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava-testlib:30.1-jre
com.google.guava:guava:30.1-jre
com.google.guava:guava-testlib:30.1.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.gwt:gwt-user:2.9.0
com.google.http-client:google-http-client-apache-v2:1.39.0
com.google.http-client:google-http-client-appengine:1.39.0
com.google.http-client:google-http-client-gson:1.39.0
com.google.http-client:google-http-client-jackson2:1.39.0
com.google.http-client:google-http-client-apache-v2:1.39.2
com.google.http-client:google-http-client-appengine:1.39.2
com.google.http-client:google-http-client-gson:1.39.2
com.google.http-client:google-http-client-jackson2:1.39.2
com.google.http-client:google-http-client-protobuf:1.33.0
com.google.http-client:google-http-client:1.39.0
com.google.http-client:google-http-client:1.39.2
com.google.inject.extensions:guice-multibindings:4.1.0
com.google.inject:guice:4.1.0
com.google.j2objc:j2objc-annotations:1.3
@@ -132,11 +139,11 @@ com.google.oauth-client:google-oauth-client-appengine:1.31.4
com.google.oauth-client:google-oauth-client-java6:1.31.4
com.google.oauth-client:google-oauth-client-jetty:1.31.4
com.google.oauth-client:google-oauth-client-servlet:1.31.4
com.google.oauth-client:google-oauth-client:1.31.4
com.google.protobuf:protobuf-java-util:3.14.0
com.google.protobuf:protobuf-java:3.15.2
com.google.oauth-client:google-oauth-client:1.31.5
com.google.protobuf:protobuf-java-util:3.17.3
com.google.protobuf:protobuf-java:3.17.3
com.google.re2j:re2j:1.6
com.google.template:soy:2018-03-14
com.google.template:soy:2021-02-01
com.google.truth.extensions:truth-java8-extension:1.1.2
com.google.truth:truth:1.1.2
com.googlecode.charts4j:charts4j:1.3
@@ -155,17 +162,17 @@ commons-logging:commons-logging:1.2
dnsjava:dnsjava:3.3.1
io.dropwizard.metrics:metrics-core:3.2.6
io.github.classgraph:classgraph:4.8.102
io.grpc:grpc-alts:1.36.0
io.grpc:grpc-api:1.36.0
io.grpc:grpc-auth:1.36.0
io.grpc:grpc-context:1.36.0
io.grpc:grpc-core:1.36.0
io.grpc:grpc-grpclb:1.36.0
io.grpc:grpc-netty-shaded:1.36.0
io.grpc:grpc-alts:1.39.0
io.grpc:grpc-api:1.39.0
io.grpc:grpc-auth:1.39.0
io.grpc:grpc-context:1.39.0
io.grpc:grpc-core:1.39.0
io.grpc:grpc-grpclb:1.39.0
io.grpc:grpc-netty-shaded:1.39.0
io.grpc:grpc-netty:1.32.2
io.grpc:grpc-protobuf-lite:1.36.0
io.grpc:grpc-protobuf:1.36.0
io.grpc:grpc-stub:1.36.0
io.grpc:grpc-protobuf-lite:1.39.0
io.grpc:grpc-protobuf:1.39.0
io.grpc:grpc-stub:1.39.0
io.netty:netty-buffer:4.1.51.Final
io.netty:netty-codec-http2:4.1.51.Final
io.netty:netty-codec-http:4.1.51.Final
@@ -193,7 +200,7 @@ javax.validation:validation-api:1.0.0.GA
javax.xml.bind:jaxb-api:2.3.1
jline:jline:1.0
joda-time:joda-time:2.10.5
junit:junit:4.13.1
junit:junit:4.13.2
net.bytebuddy:byte-buddy-agent:1.10.19
net.bytebuddy:byte-buddy:1.10.19
net.java.dev.jna:jna:5.5.0
@@ -294,13 +301,13 @@ org.seleniumhq.selenium:selenium-remote-driver:3.141.59
org.seleniumhq.selenium:selenium-safari-driver:3.141.59
org.seleniumhq.selenium:selenium-support:3.141.59
org.slf4j:slf4j-api:1.7.30
org.testcontainers:database-commons:1.15.1
org.testcontainers:jdbc:1.15.1
org.testcontainers:junit-jupiter:1.15.1
org.testcontainers:postgresql:1.15.1
org.testcontainers:selenium:1.15.1
org.testcontainers:testcontainers:1.15.1
org.threeten:threetenbp:1.5.0
org.testcontainers:database-commons:1.15.2
org.testcontainers:jdbc:1.15.2
org.testcontainers:junit-jupiter:1.15.2
org.testcontainers:postgresql:1.15.2
org.testcontainers:selenium:1.15.2
org.testcontainers:testcontainers:1.15.2
org.threeten:threetenbp:1.5.1
org.tukaani:xz:1.5
org.w3c.css:sac:1.3
org.xerial.snappy:snappy-java:1.1.4

@@ -10,10 +10,10 @@ com.eclipsesource.j2v8:j2v8_linux_x86_64:4.6.0
com.eclipsesource.j2v8:j2v8_macosx_x86_64:4.6.0
com.eclipsesource.j2v8:j2v8_win32_x86:4.6.0
com.eclipsesource.j2v8:j2v8_win32_x86_64:4.6.0
com.fasterxml.jackson.core:jackson-annotations:2.12.1
com.fasterxml.jackson.core:jackson-core:2.12.1
com.fasterxml.jackson.core:jackson-databind:2.12.1
com.fasterxml.jackson:jackson-bom:2.12.1
com.fasterxml.jackson.core:jackson-annotations:2.12.3
com.fasterxml.jackson.core:jackson-core:2.12.3
com.fasterxml.jackson.core:jackson-databind:2.12.3
com.fasterxml.jackson:jackson-bom:2.12.3
com.fasterxml:classmate:1.5.1
com.github.docker-java:docker-java-api:3.2.7
com.github.docker-java:docker-java-transport-zerodep:3.2.7
@@ -32,7 +32,7 @@ com.google.api-client:google-api-client-appengine:1.31.3
com.google.api-client:google-api-client-jackson2:1.30.10
com.google.api-client:google-api-client-java6:1.31.3
com.google.api-client:google-api-client-servlet:1.31.3
com.google.api-client:google-api-client:1.31.3
com.google.api-client:google-api-client:1.32.1
com.google.api.grpc:grpc-google-cloud-bigquerystorage-v1:1.5.5
com.google.api.grpc:grpc-google-cloud-bigquerystorage-v1beta1:0.105.5
com.google.api.grpc:grpc-google-cloud-bigquerystorage-v1beta2:0.105.5
@@ -58,12 +58,15 @@ com.google.api.grpc:proto-google-cloud-secretmanager-v1beta1:1.4.0
com.google.api.grpc:proto-google-cloud-spanner-admin-database-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-admin-instance-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-v1:2.0.2
com.google.api.grpc:proto-google-common-protos:2.1.0
com.google.api.grpc:proto-google-iam-v1:1.0.9
com.google.api:api-common:1.10.1
com.google.api:gax-grpc:1.62.0
com.google.api:gax-httpjson:0.76.1
com.google.api:gax:1.62.0
com.google.api.grpc:proto-google-cloud-tasks-v2:1.33.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta2:0.89.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta3:0.89.2
com.google.api.grpc:proto-google-common-protos:2.3.2
com.google.api.grpc:proto-google-iam-v1:1.0.14
com.google.api:api-common:1.10.4
com.google.api:gax-grpc:1.66.0
com.google.api:gax-httpjson:0.83.0
com.google.api:gax:1.66.0
com.google.apis:google-api-services-admin-directory:directory_v1-rev118-1.25.0
com.google.apis:google-api-services-appengine:v1-rev130-1.25.0
com.google.apis:google-api-services-bigquery:v2-rev20200916-1.30.10
@@ -80,7 +83,7 @@ com.google.apis:google-api-services-monitoring:v3-rev540-1.25.0
com.google.apis:google-api-services-pubsub:v1-rev20200713-1.30.10
com.google.apis:google-api-services-sheets:v4-rev612-1.25.0
com.google.apis:google-api-services-sqladmin:v1beta4-rev20210119-1.31.0
com.google.apis:google-api-services-storage:v1-rev20200927-1.30.10
com.google.apis:google-api-services-storage:v1-rev20210127-1.32.1
com.google.appengine.tools:appengine-gcs-client:0.8.1
com.google.appengine.tools:appengine-mapreduce:0.9
com.google.appengine.tools:appengine-pipeline:0.2.13
@@ -88,10 +91,10 @@ com.google.appengine:appengine-api-1.0-sdk:1.9.86
com.google.appengine:appengine-api-stubs:1.9.86
com.google.appengine:appengine-remote-api:1.9.86
com.google.appengine:appengine-testing:1.9.86
com.google.auth:google-auth-library-credentials:0.24.1
com.google.auth:google-auth-library-oauth2-http:0.24.1
com.google.auth:google-auth-library-credentials:0.26.0
com.google.auth:google-auth-library-oauth2-http:0.26.0
com.google.auto.service:auto-service-annotations:1.0-rc7
com.google.auto.value:auto-value-annotations:1.7.4
com.google.auto.value:auto-value-annotations:1.8.1
com.google.auto.value:auto-value:1.7.4
com.google.cloud.bigdataoss:gcsio:2.1.6
com.google.cloud.bigdataoss:util:2.1.6
@@ -103,31 +106,35 @@ com.google.cloud:google-cloud-bigquery:1.122.2
com.google.cloud:google-cloud-bigquerystorage:1.5.5
com.google.cloud:google-cloud-bigtable:1.14.0
com.google.cloud:google-cloud-core-grpc:1.93.9
com.google.cloud:google-cloud-core-http:1.93.9
com.google.cloud:google-cloud-core:1.93.9
com.google.cloud:google-cloud-core-http:1.95.4
com.google.cloud:google-cloud-core:1.95.4
com.google.cloud:google-cloud-nio:0.123.4
com.google.cloud:google-cloud-pubsub:1.110.0
com.google.cloud:google-cloud-pubsublite:0.7.0
com.google.cloud:google-cloud-secretmanager:1.4.0
com.google.cloud:google-cloud-spanner:2.0.2
com.google.cloud:google-cloud-storage:1.118.0
com.google.cloud:google-cloud-tasks:1.33.2
com.google.code.findbugs:jsr305:3.0.2
com.google.code.gson:gson:2.8.6
com.google.common.html.types:types:1.0.4
com.google.code.gson:gson:2.8.7
com.google.common.html.types:types:1.0.6
com.google.dagger:dagger:2.33
com.google.errorprone:error_prone_annotations:2.5.1
com.google.errorprone:error_prone_annotations:2.7.1
com.google.escapevelocity:escapevelocity:0.9.1
com.google.flogger:flogger-system-backend:0.5.1
com.google.flogger:flogger:0.5.1
com.google.flogger:google-extensions:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava-testlib:30.1-jre
com.google.guava:guava:30.1-jre
com.google.guava:guava-testlib:30.1.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.gwt:gwt-user:2.9.0
com.google.http-client:google-http-client-apache-v2:1.39.0
com.google.http-client:google-http-client-appengine:1.39.0
com.google.http-client:google-http-client-gson:1.39.0
com.google.http-client:google-http-client-jackson2:1.39.0
com.google.http-client:google-http-client-apache-v2:1.39.2
com.google.http-client:google-http-client-appengine:1.39.2
com.google.http-client:google-http-client-gson:1.39.2
com.google.http-client:google-http-client-jackson2:1.39.2
com.google.http-client:google-http-client-protobuf:1.33.0
com.google.http-client:google-http-client:1.39.0
com.google.http-client:google-http-client:1.39.2
com.google.inject.extensions:guice-multibindings:4.1.0
com.google.inject:guice:4.1.0
com.google.j2objc:j2objc-annotations:1.3
@@ -139,11 +146,11 @@ com.google.oauth-client:google-oauth-client-appengine:1.31.4
com.google.oauth-client:google-oauth-client-java6:1.31.4
com.google.oauth-client:google-oauth-client-jetty:1.31.4
com.google.oauth-client:google-oauth-client-servlet:1.31.4
com.google.oauth-client:google-oauth-client:1.31.4
com.google.protobuf:protobuf-java-util:3.15.2
com.google.protobuf:protobuf-java:3.15.2
com.google.oauth-client:google-oauth-client:1.31.5
com.google.protobuf:protobuf-java-util:3.17.3
com.google.protobuf:protobuf-java:3.17.3
com.google.re2j:re2j:1.6
com.google.template:soy:2018-03-14
com.google.template:soy:2021-02-01
com.google.truth.extensions:truth-java8-extension:1.1.2
com.google.truth:truth:1.1.2
com.googlecode.charts4j:charts4j:1.3
@@ -166,17 +173,17 @@ guru.nidi:graphviz-java:0.17.0
io.dropwizard.metrics:metrics-core:3.2.6
io.github.classgraph:classgraph:4.8.102
io.github.java-diff-utils:java-diff-utils:4.9
io.grpc:grpc-alts:1.36.0
io.grpc:grpc-api:1.36.0
io.grpc:grpc-auth:1.36.0
io.grpc:grpc-context:1.36.0
io.grpc:grpc-core:1.36.0
io.grpc:grpc-grpclb:1.36.0
io.grpc:grpc-netty-shaded:1.36.0
io.grpc:grpc-alts:1.39.0
io.grpc:grpc-api:1.39.0
io.grpc:grpc-auth:1.39.0
io.grpc:grpc-context:1.39.0
io.grpc:grpc-core:1.39.0
io.grpc:grpc-grpclb:1.39.0
io.grpc:grpc-netty-shaded:1.39.0
io.grpc:grpc-netty:1.32.2
io.grpc:grpc-protobuf-lite:1.36.0
io.grpc:grpc-protobuf:1.36.0
io.grpc:grpc-stub:1.36.0
io.grpc:grpc-protobuf-lite:1.39.0
io.grpc:grpc-protobuf:1.39.0
io.grpc:grpc-stub:1.39.0
io.netty:netty-buffer:4.1.51.Final
io.netty:netty-codec-http2:4.1.51.Final
io.netty:netty-codec-http:4.1.51.Final
@@ -207,7 +214,7 @@ javax.validation:validation-api:1.0.0.GA
javax.xml.bind:jaxb-api:2.3.1
jline:jline:1.0
joda-time:joda-time:2.10.5
junit:junit:4.13.1
junit:junit:4.13.2
net.arnx:nashorn-promise:0.1.1
net.bytebuddy:byte-buddy-agent:1.10.19
net.bytebuddy:byte-buddy:1.10.19
@@ -313,13 +320,13 @@ org.seleniumhq.selenium:selenium-support:3.141.59
org.slf4j:jcl-over-slf4j:1.7.30
org.slf4j:jul-to-slf4j:1.7.30
org.slf4j:slf4j-api:1.7.30
org.testcontainers:database-commons:1.15.1
org.testcontainers:jdbc:1.15.1
org.testcontainers:junit-jupiter:1.15.1
org.testcontainers:postgresql:1.15.1
org.testcontainers:selenium:1.15.1
org.testcontainers:testcontainers:1.15.1
org.threeten:threetenbp:1.5.0
org.testcontainers:database-commons:1.15.2
org.testcontainers:jdbc:1.15.2
org.testcontainers:junit-jupiter:1.15.2
org.testcontainers:postgresql:1.15.2
org.testcontainers:selenium:1.15.2
org.testcontainers:testcontainers:1.15.2
org.threeten:threetenbp:1.5.1
org.tukaani:xz:1.5
org.w3c.css:sac:1.3
org.webjars.npm:viz.js-for-graphviz-java:2.1.3


@@ -10,10 +10,10 @@ com.eclipsesource.j2v8:j2v8_linux_x86_64:4.6.0
com.eclipsesource.j2v8:j2v8_macosx_x86_64:4.6.0
com.eclipsesource.j2v8:j2v8_win32_x86:4.6.0
com.eclipsesource.j2v8:j2v8_win32_x86_64:4.6.0
com.fasterxml.jackson.core:jackson-annotations:2.12.1
com.fasterxml.jackson.core:jackson-core:2.12.1
com.fasterxml.jackson.core:jackson-databind:2.12.1
com.fasterxml.jackson:jackson-bom:2.12.1
com.fasterxml.jackson.core:jackson-annotations:2.12.3
com.fasterxml.jackson.core:jackson-core:2.12.3
com.fasterxml.jackson.core:jackson-databind:2.12.3
com.fasterxml.jackson:jackson-bom:2.12.3
com.fasterxml:classmate:1.5.1
com.github.docker-java:docker-java-api:3.2.7
com.github.docker-java:docker-java-transport-zerodep:3.2.7
@@ -32,7 +32,7 @@ com.google.api-client:google-api-client-appengine:1.31.3
com.google.api-client:google-api-client-jackson2:1.30.10
com.google.api-client:google-api-client-java6:1.31.3
com.google.api-client:google-api-client-servlet:1.31.3
com.google.api-client:google-api-client:1.31.3
com.google.api-client:google-api-client:1.32.1
com.google.api.grpc:grpc-google-cloud-bigquerystorage-v1:1.5.5
com.google.api.grpc:grpc-google-cloud-bigquerystorage-v1beta1:0.105.5
com.google.api.grpc:grpc-google-cloud-bigquerystorage-v1beta2:0.105.5
@@ -58,12 +58,15 @@ com.google.api.grpc:proto-google-cloud-secretmanager-v1beta1:1.4.0
com.google.api.grpc:proto-google-cloud-spanner-admin-database-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-admin-instance-v1:2.0.2
com.google.api.grpc:proto-google-cloud-spanner-v1:2.0.2
com.google.api.grpc:proto-google-common-protos:2.1.0
com.google.api.grpc:proto-google-iam-v1:1.0.9
com.google.api:api-common:1.10.1
com.google.api:gax-grpc:1.62.0
com.google.api:gax-httpjson:0.76.1
com.google.api:gax:1.62.0
com.google.api.grpc:proto-google-cloud-tasks-v2:1.33.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta2:0.89.2
com.google.api.grpc:proto-google-cloud-tasks-v2beta3:0.89.2
com.google.api.grpc:proto-google-common-protos:2.3.2
com.google.api.grpc:proto-google-iam-v1:1.0.14
com.google.api:api-common:1.10.4
com.google.api:gax-grpc:1.66.0
com.google.api:gax-httpjson:0.83.0
com.google.api:gax:1.66.0
com.google.apis:google-api-services-admin-directory:directory_v1-rev118-1.25.0
com.google.apis:google-api-services-appengine:v1-rev130-1.25.0
com.google.apis:google-api-services-bigquery:v2-rev20200916-1.30.10
@@ -80,7 +83,7 @@ com.google.apis:google-api-services-monitoring:v3-rev540-1.25.0
com.google.apis:google-api-services-pubsub:v1-rev20200713-1.30.10
com.google.apis:google-api-services-sheets:v4-rev612-1.25.0
com.google.apis:google-api-services-sqladmin:v1beta4-rev20210119-1.31.0
com.google.apis:google-api-services-storage:v1-rev20200927-1.30.10
com.google.apis:google-api-services-storage:v1-rev20210127-1.32.1
com.google.appengine.tools:appengine-gcs-client:0.8.1
com.google.appengine.tools:appengine-mapreduce:0.9
com.google.appengine.tools:appengine-pipeline:0.2.13
@@ -88,10 +91,10 @@ com.google.appengine:appengine-api-1.0-sdk:1.9.86
com.google.appengine:appengine-api-stubs:1.9.86
com.google.appengine:appengine-remote-api:1.9.86
com.google.appengine:appengine-testing:1.9.86
com.google.auth:google-auth-library-credentials:0.24.1
com.google.auth:google-auth-library-oauth2-http:0.24.1
com.google.auth:google-auth-library-credentials:0.26.0
com.google.auth:google-auth-library-oauth2-http:0.26.0
com.google.auto.service:auto-service-annotations:1.0-rc7
com.google.auto.value:auto-value-annotations:1.7.4
com.google.auto.value:auto-value-annotations:1.8.1
com.google.auto.value:auto-value:1.7.4
com.google.cloud.bigdataoss:gcsio:2.1.6
com.google.cloud.bigdataoss:util:2.1.6
@@ -103,31 +106,35 @@ com.google.cloud:google-cloud-bigquery:1.122.2
com.google.cloud:google-cloud-bigquerystorage:1.5.5
com.google.cloud:google-cloud-bigtable:1.14.0
com.google.cloud:google-cloud-core-grpc:1.93.9
com.google.cloud:google-cloud-core-http:1.93.9
com.google.cloud:google-cloud-core:1.93.9
com.google.cloud:google-cloud-core-http:1.95.4
com.google.cloud:google-cloud-core:1.95.4
com.google.cloud:google-cloud-nio:0.123.4
com.google.cloud:google-cloud-pubsub:1.110.0
com.google.cloud:google-cloud-pubsublite:0.7.0
com.google.cloud:google-cloud-secretmanager:1.4.0
com.google.cloud:google-cloud-spanner:2.0.2
com.google.cloud:google-cloud-storage:1.118.0
com.google.cloud:google-cloud-tasks:1.33.2
com.google.code.findbugs:jsr305:3.0.2
com.google.code.gson:gson:2.8.6
com.google.common.html.types:types:1.0.4
com.google.code.gson:gson:2.8.7
com.google.common.html.types:types:1.0.6
com.google.dagger:dagger:2.33
com.google.errorprone:error_prone_annotations:2.5.1
com.google.errorprone:error_prone_annotations:2.7.1
com.google.escapevelocity:escapevelocity:0.9.1
com.google.flogger:flogger-system-backend:0.5.1
com.google.flogger:flogger:0.5.1
com.google.flogger:google-extensions:0.5.1
com.google.guava:failureaccess:1.0.1
com.google.guava:guava-testlib:30.1-jre
com.google.guava:guava:30.1-jre
com.google.guava:guava-testlib:30.1.1-jre
com.google.guava:guava:30.1.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
com.google.gwt:gwt-user:2.9.0
com.google.http-client:google-http-client-apache-v2:1.39.0
com.google.http-client:google-http-client-appengine:1.39.0
com.google.http-client:google-http-client-gson:1.39.0
com.google.http-client:google-http-client-jackson2:1.39.0
com.google.http-client:google-http-client-apache-v2:1.39.2
com.google.http-client:google-http-client-appengine:1.39.2
com.google.http-client:google-http-client-gson:1.39.2
com.google.http-client:google-http-client-jackson2:1.39.2
com.google.http-client:google-http-client-protobuf:1.33.0
com.google.http-client:google-http-client:1.39.0
com.google.http-client:google-http-client:1.39.2
com.google.inject.extensions:guice-multibindings:4.1.0
com.google.inject:guice:4.1.0
com.google.j2objc:j2objc-annotations:1.3
@@ -139,11 +146,11 @@ com.google.oauth-client:google-oauth-client-appengine:1.31.4
com.google.oauth-client:google-oauth-client-java6:1.31.4
com.google.oauth-client:google-oauth-client-jetty:1.31.4
com.google.oauth-client:google-oauth-client-servlet:1.31.4
com.google.oauth-client:google-oauth-client:1.31.4
com.google.protobuf:protobuf-java-util:3.15.2
com.google.protobuf:protobuf-java:3.15.2
com.google.oauth-client:google-oauth-client:1.31.5
com.google.protobuf:protobuf-java-util:3.17.3
com.google.protobuf:protobuf-java:3.17.3
com.google.re2j:re2j:1.6
com.google.template:soy:2018-03-14
com.google.template:soy:2021-02-01
com.google.truth.extensions:truth-java8-extension:1.1.2
com.google.truth:truth:1.1.2
com.googlecode.charts4j:charts4j:1.3
@@ -166,17 +173,17 @@ guru.nidi:graphviz-java:0.17.0
io.dropwizard.metrics:metrics-core:3.2.6
io.github.classgraph:classgraph:4.8.102
io.github.java-diff-utils:java-diff-utils:4.9
io.grpc:grpc-alts:1.36.0
io.grpc:grpc-api:1.36.0
io.grpc:grpc-auth:1.36.0
io.grpc:grpc-context:1.36.0
io.grpc:grpc-core:1.36.0
io.grpc:grpc-grpclb:1.36.0
io.grpc:grpc-netty-shaded:1.36.0
io.grpc:grpc-alts:1.39.0
io.grpc:grpc-api:1.39.0
io.grpc:grpc-auth:1.39.0
io.grpc:grpc-context:1.39.0
io.grpc:grpc-core:1.39.0
io.grpc:grpc-grpclb:1.39.0
io.grpc:grpc-netty-shaded:1.39.0
io.grpc:grpc-netty:1.32.2
io.grpc:grpc-protobuf-lite:1.36.0
io.grpc:grpc-protobuf:1.36.0
io.grpc:grpc-stub:1.36.0
io.grpc:grpc-protobuf-lite:1.39.0
io.grpc:grpc-protobuf:1.39.0
io.grpc:grpc-stub:1.39.0
io.netty:netty-buffer:4.1.51.Final
io.netty:netty-codec-http2:4.1.51.Final
io.netty:netty-codec-http:4.1.51.Final
@@ -207,7 +214,7 @@ javax.validation:validation-api:1.0.0.GA
javax.xml.bind:jaxb-api:2.3.1
jline:jline:1.0
joda-time:joda-time:2.10.5
junit:junit:4.13.1
junit:junit:4.13.2
net.arnx:nashorn-promise:0.1.1
net.bytebuddy:byte-buddy-agent:1.10.19
net.bytebuddy:byte-buddy:1.10.19
@@ -314,13 +321,13 @@ org.slf4j:jcl-over-slf4j:1.7.30
org.slf4j:jul-to-slf4j:1.7.30
org.slf4j:slf4j-api:1.7.30
org.slf4j:slf4j-jdk14:1.7.28
org.testcontainers:database-commons:1.15.1
org.testcontainers:jdbc:1.15.1
org.testcontainers:junit-jupiter:1.15.1
org.testcontainers:postgresql:1.15.1
org.testcontainers:selenium:1.15.1
org.testcontainers:testcontainers:1.15.1
org.threeten:threetenbp:1.5.0
org.testcontainers:database-commons:1.15.2
org.testcontainers:jdbc:1.15.2
org.testcontainers:junit-jupiter:1.15.2
org.testcontainers:postgresql:1.15.2
org.testcontainers:selenium:1.15.2
org.testcontainers:testcontainers:1.15.2
org.threeten:threetenbp:1.5.1
org.tukaani:xz:1.5
org.w3c.css:sac:1.3
org.webjars.npm:viz.js-for-graphviz-java:2.1.3


@@ -1,75 +0,0 @@
// Copyright 2019 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
process.env.CHROME_BIN = require('puppeteer').executablePath()
module.exports = function(config) {
config.set({
basePath: '..',
browsers: ['ChromeHeadlessNoSandbox'],
customLaunchers: {
ChromeHeadlessNoSandbox: {
base: 'ChromeHeadless',
flags: ['--no-sandbox']
}
},
frameworks: ['jasmine', 'closure'],
singleRun: true,
autoWatch: false,
files: [
'node_modules/google-closure-library/closure/goog/base.js',
'core/src/test/javascript/**/*_test.js',
{
pattern: 'core/src/test/javascript/**/!(*_test).js',
included: false
},
{
pattern: 'core/src/main/javascript/**/*.js',
included: false
},
{
pattern: 'core/build/generated/sources/custom/java/main/**/*.soy.js',
included: false
},
{
pattern: 'node_modules/google-closure-library/closure/goog/deps.js',
included: false,
served: false
},
{
pattern: 'node_modules/google-closure-library/closure/goog/**/*.js',
included: false
},
{
pattern: 'core/build/resources/main/google/registry/ui/assets/images/*.png',
included: false
},
{
pattern: 'core/build/resources/main/google/registry/ui/assets/images/icons/svg/*.svg',
included: false
}
],
preprocessors: {
'node_modules/google-closure-library/closure/goog/deps.js': ['closure', 'closure-deps'],
'node_modules/google-closure-library/closure/goog/base.js': ['closure'],
'node_modules/google-closure-library/closure/**/*.js': ['closure'],
'core/src/*/javascript/**/*.js': ['closure'],
'core/build/generated/sources/custom/java/main/**/*.soy.js': ['closure'],
},
proxies: {
"/assets/": "/base/core/build/resources/main/google/registry/ui/assets/"
}
});
};


@@ -18,8 +18,10 @@ import static com.google.appengine.api.ThreadManager.currentRequestThreadFactory
import static com.google.common.util.concurrent.MoreExecutors.listeningDecorator;
import static google.registry.backup.ExportCommitLogDiffAction.LOWER_CHECKPOINT_TIME_PARAM;
import static google.registry.backup.ExportCommitLogDiffAction.UPPER_CHECKPOINT_TIME_PARAM;
import static google.registry.backup.RestoreCommitLogsAction.BUCKET_OVERRIDE_PARAM;
import static google.registry.backup.RestoreCommitLogsAction.FROM_TIME_PARAM;
import static google.registry.backup.RestoreCommitLogsAction.TO_TIME_PARAM;
import static google.registry.request.RequestParameters.extractOptionalParameter;
import static google.registry.request.RequestParameters.extractRequiredDatetimeParameter;
import static google.registry.request.RequestParameters.extractRequiredParameter;
import static java.util.concurrent.Executors.newFixedThreadPool;
@@ -32,6 +34,9 @@ import google.registry.cron.CommitLogFanoutAction;
import google.registry.request.HttpException.BadRequestException;
import google.registry.request.Parameter;
import java.lang.annotation.Documented;
import java.util.Optional;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import javax.inject.Qualifier;
import javax.servlet.http.HttpServletRequest;
import org.joda.time.DateTime;
@@ -75,6 +80,12 @@ public final class BackupModule {
return extractRequiredDatetimeParameter(req, UPPER_CHECKPOINT_TIME_PARAM);
}
@Provides
@Parameter(BUCKET_OVERRIDE_PARAM)
static Optional<String> provideBucketOverride(HttpServletRequest req) {
return extractOptionalParameter(req, BUCKET_OVERRIDE_PARAM);
}
@Provides
@Parameter(FROM_TIME_PARAM)
static DateTime provideFromTime(HttpServletRequest req) {
@@ -92,4 +103,9 @@ public final class BackupModule {
static ListeningExecutorService provideListeningExecutorService() {
return listeningDecorator(newFixedThreadPool(NUM_THREADS, currentRequestThreadFactory()));
}
@Provides
static ScheduledExecutorService provideScheduledExecutorService() {
return Executors.newSingleThreadScheduledExecutor();
}
}


@@ -14,7 +14,7 @@
package google.registry.backup;
import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import com.google.appengine.api.datastore.EntityTranslator;
import com.google.common.collect.AbstractIterator;
@@ -45,7 +45,7 @@ public class BackupUtils {
* {@link OutputStream} in delimited protocol buffer format.
*/
static void serializeEntity(ImmutableObject entity, OutputStream stream) throws IOException {
EntityTranslator.convertToPb(ofy().save().toEntity(entity)).writeDelimitedTo(stream);
EntityTranslator.convertToPb(auditedOfy().save().toEntity(entity)).writeDelimitedTo(stream);
}
/**
@@ -56,19 +56,25 @@ public class BackupUtils {
*
* <p>The iterator reads from the stream on demand, and as such will fail if the stream is closed.
*/
public static Iterator<ImmutableObject> createDeserializingIterator(final InputStream input) {
public static Iterator<ImmutableObject> createDeserializingIterator(
final InputStream input, boolean withAppIdOverride) {
return new AbstractIterator<ImmutableObject>() {
@Override
protected ImmutableObject computeNext() {
EntityProto proto = new EntityProto();
if (proto.parseDelimitedFrom(input)) { // False means end of stream; other errors throw.
return ofy().load().fromEntity(EntityTranslator.createFromPb(proto));
if (proto.parseDelimitedFrom(input)) { // False means end of stream; other errors throw.
if (withAppIdOverride) {
proto = EntityImports.fixEntity(proto);
}
return auditedOfy().load().fromEntity(EntityTranslator.createFromPb(proto));
}
return endOfData();
}};
}
};
}
public static ImmutableList<ImmutableObject> deserializeEntities(byte[] bytes) {
return ImmutableList.copyOf(createDeserializingIterator(new ByteArrayInputStream(bytes)));
return ImmutableList.copyOf(
createDeserializingIterator(new ByteArrayInputStream(bytes), false));
}
}
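As a usage sketch of the updated BackupUtils API above: the helper below (a hypothetical roundTrip method, not part of the change) writes one entity as a length-delimited EntityProto and reads it back with the new withAppIdOverride flag set to false. It assumes an initialized Objectify/App Engine test environment and, because serializeEntity is package-private, placement in the google.registry.backup package.

import google.registry.model.ImmutableObject;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Iterator;

// Sketch only: assumes Objectify is initialized and "entity" is a registered ImmutableObject.
static ImmutableObject roundTrip(ImmutableObject entity) throws IOException {
  ByteArrayOutputStream out = new ByteArrayOutputStream();
  BackupUtils.serializeEntity(entity, out); // one length-delimited EntityProto
  Iterator<ImmutableObject> entities =
      BackupUtils.createDeserializingIterator(
          new ByteArrayInputStream(out.toByteArray()), /* withAppIdOverride= */ false);
  return entities.next(); // reads lazily from the still-open stream
}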


@@ -18,7 +18,7 @@ import static com.google.appengine.api.taskqueue.QueueFactory.getQueue;
import static com.google.appengine.api.taskqueue.TaskOptions.Builder.withUrl;
import static google.registry.backup.ExportCommitLogDiffAction.LOWER_CHECKPOINT_TIME_PARAM;
import static google.registry.backup.ExportCommitLogDiffAction.UPPER_CHECKPOINT_TIME_PARAM;
import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.util.DateTimeUtils.isBeforeOrAt;
@@ -64,8 +64,7 @@ public final class CommitLogCheckpointAction implements Runnable {
final CommitLogCheckpoint checkpoint = strategy.computeCheckpoint();
logger.atInfo().log(
"Generated candidate checkpoint for time: %s", checkpoint.getCheckpointTime());
tm()
.transact(
tm().transact(
() -> {
DateTime lastWrittenTime = CommitLogCheckpointRoot.loadRoot().getLastWrittenTime();
if (isBeforeOrAt(checkpoint.getCheckpointTime(), lastWrittenTime)) {
@@ -73,7 +72,7 @@ public final class CommitLogCheckpointAction implements Runnable {
"Newer checkpoint already written at time: %s", lastWrittenTime);
return;
}
ofy()
auditedOfy()
.saveWithoutBackup()
.entities(
checkpoint, CommitLogCheckpointRoot.create(checkpoint.getCheckpointTime()));


@@ -59,7 +59,7 @@ public final class CommitLogImports {
InputStream inputStream) {
try (AppEngineEnvironment appEngineEnvironment = new AppEngineEnvironment();
InputStream input = new BufferedInputStream(inputStream)) {
Iterator<ImmutableObject> commitLogs = createDeserializingIterator(input);
Iterator<ImmutableObject> commitLogs = createDeserializingIterator(input, false);
checkState(commitLogs.hasNext());
checkState(commitLogs.next() instanceof CommitLogCheckpoint);


@@ -17,7 +17,7 @@ package google.registry.backup;
import static com.google.common.base.Preconditions.checkNotNull;
import static com.google.common.base.Preconditions.checkState;
import static google.registry.mapreduce.MapreduceRunner.PARAM_DRY_RUN;
import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static java.lang.Boolean.FALSE;
import static java.lang.Boolean.TRUE;
@@ -66,6 +66,8 @@ import org.joda.time.Duration;
service = Action.Service.BACKEND,
path = "/_dr/task/deleteOldCommitLogs",
auth = Auth.AUTH_INTERNAL_OR_ADMIN)
// No longer needed in SQL. Subject to future removal.
@Deprecated
public final class DeleteOldCommitLogsAction implements Runnable {
private static final int NUM_MAP_SHARDS = 20;
@@ -75,9 +77,17 @@ public final class DeleteOldCommitLogsAction implements Runnable {
@Inject MapreduceRunner mrRunner;
@Inject Response response;
@Inject Clock clock;
@Inject @Config("commitLogDatastoreRetention") Duration maxAge;
@Inject @Parameter(PARAM_DRY_RUN) boolean isDryRun;
@Inject DeleteOldCommitLogsAction() {}
@Inject
@Config("commitLogDatastoreRetention")
Duration maxAge;
@Inject
@Parameter(PARAM_DRY_RUN)
boolean isDryRun;
@Inject
DeleteOldCommitLogsAction() {}
@Override
public void run() {
@@ -138,12 +148,12 @@ public final class DeleteOldCommitLogsAction implements Runnable {
// If it isn't a Key<CommitLogManifest> then it should be an EppResource, which we need to
// load to emit the revisions.
//
Object object = ofy().load().key(key).now();
Object object = auditedOfy().load().key(key).now();
checkNotNull(object, "Received a key to a missing object. key: %s", key);
checkState(
object instanceof EppResource,
"Received a key to an object that isn't EppResource nor CommitLogManifest."
+ " Key: %s object type: %s",
+ " Key: %s object type: %s",
key,
object.getClass().getName());
@@ -224,8 +234,7 @@ public final class DeleteOldCommitLogsAction implements Runnable {
* OK to delete this manifestKey. If even one source returns "false" (meaning "it's not OK to
* delete this manifest") then it won't be deleted.
*/
static class DeleteOldCommitLogsReducer
extends Reducer<Key<CommitLogManifest>, Boolean, Void> {
static class DeleteOldCommitLogsReducer extends Reducer<Key<CommitLogManifest>, Boolean, Void> {
private static final long serialVersionUID = -4918760187627937268L;
@@ -241,12 +250,12 @@ public final class DeleteOldCommitLogsAction implements Runnable {
}
public abstract Status status();
public abstract int numDeleted();
static DeletionResult create(Status status, int numDeleted) {
return
new AutoValue_DeleteOldCommitLogsAction_DeleteOldCommitLogsReducer_DeletionResult(
status, numDeleted);
return new AutoValue_DeleteOldCommitLogsAction_DeleteOldCommitLogsReducer_DeletionResult(
status, numDeleted);
}
}
@@ -257,8 +266,7 @@ public final class DeleteOldCommitLogsAction implements Runnable {
@Override
public void reduce(
final Key<CommitLogManifest> manifestKey,
ReducerInput<Boolean> canDeleteVerdicts) {
final Key<CommitLogManifest> manifestKey, ReducerInput<Boolean> canDeleteVerdicts) {
ImmutableMultiset<Boolean> canDeleteMultiset = ImmutableMultiset.copyOf(canDeleteVerdicts);
if (canDeleteMultiset.count(TRUE) > 1) {
getContext().incrementCounter("commit log manifests incorrectly mapped multiple times");
@@ -267,47 +275,54 @@ public final class DeleteOldCommitLogsAction implements Runnable {
getContext().incrementCounter("commit log manifests referenced multiple times");
}
if (canDeleteMultiset.contains(FALSE)) {
getContext().incrementCounter(
canDeleteMultiset.contains(TRUE)
? "old commit log manifests still referenced"
: "new (or nonexistent) commit log manifests referenced");
getContext().incrementCounter(
"EPP resource revisions handled",
canDeleteMultiset.count(FALSE));
getContext()
.incrementCounter(
canDeleteMultiset.contains(TRUE)
? "old commit log manifests still referenced"
: "new (or nonexistent) commit log manifests referenced");
getContext()
.incrementCounter("EPP resource revisions handled", canDeleteMultiset.count(FALSE));
return;
}
DeletionResult deletionResult = tm().transactNew(() -> {
CommitLogManifest manifest = ofy().load().key(manifestKey).now();
// It is possible that the same manifestKey was run twice, if a shard had to be restarted
// or some weird failure. If this happens, we want to exit immediately.
// Note that this can never happen in dryRun.
if (manifest == null) {
return DeletionResult.create(DeletionResult.Status.ALREADY_DELETED, 0);
}
// Doing a sanity check on the date. This is the only place we use the CommitLogManifest,
// so maybe removing this test will improve performance. However, unless it's proven that
// the performance boost is significant (and we've tested this enough to be sure it never
// happens)- the safty of "let's not delete stuff we need from prod" is more important.
if (manifest.getCommitTime().isAfter(deletionThreshold)) {
return DeletionResult.create(DeletionResult.Status.AFTER_THRESHOLD, 0);
}
Iterable<Key<CommitLogMutation>> commitLogMutationKeys = ofy().load()
.type(CommitLogMutation.class)
.ancestor(manifestKey)
.keys()
.iterable();
ImmutableList<Key<?>> keysToDelete = ImmutableList.<Key<?>>builder()
.addAll(commitLogMutationKeys)
.add(manifestKey)
.build();
// Normally in a dry run we would log the entities that would be deleted, but those can
// number in the millions so we skip the logging.
if (!isDryRun) {
ofy().deleteWithoutBackup().keys(keysToDelete);
}
return DeletionResult.create(DeletionResult.Status.SUCCESS, keysToDelete.size());
});
DeletionResult deletionResult =
tm().transactNew(
() -> {
CommitLogManifest manifest = auditedOfy().load().key(manifestKey).now();
// It is possible that the same manifestKey was run twice, if a shard had to be
// restarted or some weird failure. If this happens, we want to exit
// immediately. Note that this can never happen in dryRun.
if (manifest == null) {
return DeletionResult.create(DeletionResult.Status.ALREADY_DELETED, 0);
}
// Doing a sanity check on the date. This is the only place we use the
// CommitLogManifest, so maybe removing this test will improve performance.
// However, unless it's proven that the performance boost is significant (and
// we've tested this enough to be sure it never happens)- the safety of "let's
// not delete stuff we need from prod" is more important.
if (manifest.getCommitTime().isAfter(deletionThreshold)) {
return DeletionResult.create(DeletionResult.Status.AFTER_THRESHOLD, 0);
}
Iterable<Key<CommitLogMutation>> commitLogMutationKeys =
auditedOfy()
.load()
.type(CommitLogMutation.class)
.ancestor(manifestKey)
.keys()
.iterable();
ImmutableList<Key<?>> keysToDelete =
ImmutableList.<Key<?>>builder()
.addAll(commitLogMutationKeys)
.add(manifestKey)
.build();
// Normally in a dry run we would log the entities that would be deleted, but
// those can number in the millions so we skip the logging.
if (!isDryRun) {
auditedOfy().deleteWithoutBackup().keys(keysToDelete);
}
return DeletionResult.create(
DeletionResult.Status.SUCCESS, keysToDelete.size());
});
switch (deletionResult.status()) {
case SUCCESS:

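To make the veto rule in the DeleteOldCommitLogsReducer javadoc above concrete, here is a minimal standalone illustration (the class name and verdict values are hypothetical; only Guava is required): a manifest may be deleted only if no mapper emitted a false verdict for it.

import com.google.common.collect.ImmutableMultiset;

public class VerdictSketch {
  public static void main(String[] args) {
    // Two referrers say the manifest is old enough to drop; one still needs it.
    ImmutableMultiset<Boolean> verdicts = ImmutableMultiset.of(true, true, false);
    boolean okToDelete = !verdicts.contains(false);
    System.out.println(okToDelete); // prints "false": a single false verdict blocks deletion
  }
}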

@@ -0,0 +1,115 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.backup;
import com.google.apphosting.api.ApiProxy;
import com.google.storage.onestore.v3.OnestoreEntity;
import com.google.storage.onestore.v3.OnestoreEntity.EntityProto;
import com.google.storage.onestore.v3.OnestoreEntity.Path;
import com.google.storage.onestore.v3.OnestoreEntity.Property.Meaning;
import com.google.storage.onestore.v3.OnestoreEntity.PropertyValue.ReferenceValue;
import java.nio.charset.StandardCharsets;
import java.util.Objects;
/** Utilities for handling imported Datastore entities. */
public class EntityImports {
/**
* Transitively sets the {@code appId} of all keys in a foreign entity to that of the current
* system.
*/
public static EntityProto fixEntity(EntityProto entityProto) {
String currentAappId = ApiProxy.getCurrentEnvironment().getAppId();
if (Objects.equals(currentAappId, entityProto.getKey().getApp())) {
return entityProto;
}
return fixEntity(entityProto, currentAappId);
}
private static EntityProto fixEntity(EntityProto entityProto, String appId) {
if (entityProto.hasKey()) {
fixKey(entityProto, appId);
}
for (OnestoreEntity.Property property : entityProto.mutablePropertys()) {
fixProperty(property, appId);
}
for (OnestoreEntity.Property property : entityProto.mutableRawPropertys()) {
fixProperty(property, appId);
}
// CommitLogMutation embeds an entity as bytes, which needs additional fixes.
if (isCommitLogMutation(entityProto)) {
fixMutationEntityProtoBytes(entityProto, appId);
}
return entityProto;
}
private static boolean isCommitLogMutation(EntityProto entityProto) {
if (!entityProto.hasKey()) {
return false;
}
Path path = entityProto.getKey().getPath();
if (path.elementSize() == 0) {
return false;
}
return Objects.equals(
path.getElement(path.elementSize() - 1).getType(StandardCharsets.UTF_8),
"CommitLogMutation");
}
private static void fixMutationEntityProtoBytes(EntityProto entityProto, String appId) {
for (OnestoreEntity.Property property : entityProto.mutableRawPropertys()) {
if (Objects.equals(property.getName(), "entityProtoBytes")) {
OnestoreEntity.PropertyValue value = property.getValue();
EntityProto fixedProto =
fixEntity(bytesToEntityProto(value.getStringValueAsBytes()), appId);
value.setStringValueAsBytes(fixedProto.toByteArray());
return;
}
}
}
private static void fixKey(EntityProto entityProto, String appId) {
entityProto.getMutableKey().setApp(appId);
}
private static void fixKey(ReferenceValue referenceValue, String appId) {
referenceValue.setApp(appId);
}
private static void fixProperty(OnestoreEntity.Property property, String appId) {
OnestoreEntity.PropertyValue value = property.getMutableValue();
if (value.hasReferenceValue()) {
fixKey(value.getMutableReferenceValue(), appId);
return;
}
if (property.getMeaningEnum().equals(Meaning.ENTITY_PROTO)) {
EntityProto embeddedProto = bytesToEntityProto(value.getStringValueAsBytes());
fixEntity(embeddedProto, appId);
value.setStringValueAsBytes(embeddedProto.toByteArray());
}
}
private static EntityProto bytesToEntityProto(byte[] bytes) {
EntityProto entityProto = new EntityProto();
boolean isParsed = entityProto.parseFrom(bytes);
if (!isParsed) {
throw new IllegalStateException("Failed to parse raw bytes as EntityProto.");
}
return entityProto;
}
}
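A minimal sketch of how EntityImports is intended to be used (the importForeignEntity helper and its rawBytes input are hypothetical): given an EntityProto serialized by a different GCP project, fixEntity rewrites every appId it can reach before the proto is converted back into a Datastore Entity. An App Engine environment is assumed, since fixEntity reads the current appId from ApiProxy.

import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityTranslator;
import com.google.storage.onestore.v3.OnestoreEntity.EntityProto;
import google.registry.backup.EntityImports;

// Sketch only: rawBytes is a hypothetical EntityProto serialized by another project.
static Entity importForeignEntity(byte[] rawBytes) {
  EntityProto proto = new EntityProto();
  if (!proto.parseFrom(rawBytes)) {
    throw new IllegalStateException("Failed to parse raw bytes as EntityProto.");
  }
  // Rewrites the appId on the entity key, reference-valued properties, embedded entity
  // protos, and the nested entityProtoBytes of CommitLogMutation entities.
  return EntityTranslator.createFromPb(EntityImports.fixEntity(proto));
}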


@@ -25,21 +25,20 @@ import static google.registry.backup.BackupUtils.GcsMetadataKeys.NUM_TRANSACTION
import static google.registry.backup.BackupUtils.GcsMetadataKeys.UPPER_BOUND_CHECKPOINT;
import static google.registry.backup.BackupUtils.serializeEntity;
import static google.registry.model.ofy.CommitLogBucket.getBucketKey;
import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.util.DateTimeUtils.START_OF_TIME;
import static google.registry.util.DateTimeUtils.isAtOrAfter;
import static java.nio.channels.Channels.newOutputStream;
import static java.util.Comparator.comparingLong;
import com.google.appengine.tools.cloudstorage.GcsFileOptions;
import com.google.appengine.tools.cloudstorage.GcsFilename;
import com.google.appengine.tools.cloudstorage.GcsService;
import com.google.cloud.storage.BlobId;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Streams;
import com.google.common.flogger.FluentLogger;
import com.googlecode.objectify.Key;
import google.registry.config.RegistryConfig.Config;
import google.registry.gcs.GcsUtils;
import google.registry.model.ImmutableObject;
import google.registry.model.ofy.CommitLogBucket;
import google.registry.model.ofy.CommitLogCheckpoint;
@@ -74,7 +73,8 @@ public final class ExportCommitLogDiffAction implements Runnable {
public static final String DIFF_FILE_PREFIX = "commit_diff_until_";
@Inject GcsService gcsService;
@Inject GcsUtils gcsUtils;
@Inject @Config("commitLogGcsBucket") String gcsBucket;
@Inject @Config("commitLogDiffExportBatchSize") int batchSize;
@Inject @Parameter(LOWER_CHECKPOINT_TIME_PARAM) DateTime lowerCheckpointTime;
@@ -89,23 +89,26 @@ public final class ExportCommitLogDiffAction implements Runnable {
checkArgument(lowerCheckpointTime.isBefore(upperCheckpointTime));
// Load the boundary checkpoints - lower is exclusive and may not exist (on the first export,
// when lowerCheckpointTime is START_OF_TIME), whereas the upper is inclusive and must exist.
CommitLogCheckpoint lowerCheckpoint = lowerCheckpointTime.isAfter(START_OF_TIME)
? verifyNotNull(ofy().load().key(CommitLogCheckpoint.createKey(lowerCheckpointTime)).now())
: null;
CommitLogCheckpoint lowerCheckpoint =
lowerCheckpointTime.isAfter(START_OF_TIME)
? verifyNotNull(
auditedOfy().load().key(CommitLogCheckpoint.createKey(lowerCheckpointTime)).now())
: null;
CommitLogCheckpoint upperCheckpoint =
verifyNotNull(ofy().load().key(CommitLogCheckpoint.createKey(upperCheckpointTime)).now());
verifyNotNull(
auditedOfy().load().key(CommitLogCheckpoint.createKey(upperCheckpointTime)).now());
// Load the keys of all the manifests to include in this diff.
List<Key<CommitLogManifest>> sortedKeys = loadAllDiffKeys(lowerCheckpoint, upperCheckpoint);
logger.atInfo().log("Found %d manifests to export", sortedKeys.size());
// Open an output channel to GCS, wrapped in a stream for convenience.
try (OutputStream gcsStream = newOutputStream(gcsService.createOrReplace(
new GcsFilename(gcsBucket, DIFF_FILE_PREFIX + upperCheckpointTime),
new GcsFileOptions.Builder()
.addUserMetadata(LOWER_BOUND_CHECKPOINT, lowerCheckpointTime.toString())
.addUserMetadata(UPPER_BOUND_CHECKPOINT, upperCheckpointTime.toString())
.addUserMetadata(NUM_TRANSACTIONS, Integer.toString(sortedKeys.size()))
.build()))) {
try (OutputStream gcsStream =
gcsUtils.openOutputStream(
BlobId.of(gcsBucket, DIFF_FILE_PREFIX + upperCheckpointTime),
ImmutableMap.of(
LOWER_BOUND_CHECKPOINT, lowerCheckpointTime.toString(),
UPPER_BOUND_CHECKPOINT, upperCheckpointTime.toString(),
NUM_TRANSACTIONS, Integer.toString(sortedKeys.size())))) {
// Export the upper checkpoint itself.
serializeEntity(upperCheckpoint, gcsStream);
// If there are no manifests to export, stop early, now that we've written out the file with
@@ -117,7 +120,7 @@ public final class ExportCommitLogDiffAction implements Runnable {
// asynchronously load the entities for the next one.
List<List<Key<CommitLogManifest>>> keyChunks = partition(sortedKeys, batchSize);
// Objectify's map return type is asynchronous. Calling .values() will block until it loads.
Map<?, CommitLogManifest> nextChunkToExport = ofy().load().keys(keyChunks.get(0));
Map<?, CommitLogManifest> nextChunkToExport = auditedOfy().load().keys(keyChunks.get(0));
for (int i = 0; i < keyChunks.size(); i++) {
// Force the async load to finish.
Collection<CommitLogManifest> chunkValues = nextChunkToExport.values();
@@ -125,10 +128,10 @@ public final class ExportCommitLogDiffAction implements Runnable {
// Since there is no hard bound on how much data this might be, take care not to let the
// Objectify session cache fill up and potentially run out of memory. This is the only safe
// point to do this since at this point there is no async load in progress.
ofy().clearSessionCache();
auditedOfy().clearSessionCache();
// Kick off the next async load, which can happen in parallel to the current GCS export.
if (i + 1 < keyChunks.size()) {
nextChunkToExport = ofy().load().keys(keyChunks.get(i + 1));
nextChunkToExport = auditedOfy().load().keys(keyChunks.get(i + 1));
}
exportChunk(gcsStream, chunkValues);
logger.atInfo().log("Exported %d manifests", chunkValues.size());
@@ -192,7 +195,8 @@ public final class ExportCommitLogDiffAction implements Runnable {
return ImmutableSet.of();
}
Key<CommitLogBucket> bucketKey = getBucketKey(bucketNum);
return ofy().load()
return auditedOfy()
.load()
.type(CommitLogManifest.class)
.ancestor(bucketKey)
.filterKey(">=", CommitLogManifest.createKey(bucketKey, lowerBound))
@@ -208,7 +212,7 @@ public final class ExportCommitLogDiffAction implements Runnable {
new ImmutableList.Builder<>();
for (CommitLogManifest manifest : chunk) {
entities.add(ImmutableList.of(manifest));
entities.add(ofy().load().type(CommitLogMutation.class).ancestor(manifest));
entities.add(auditedOfy().load().type(CommitLogMutation.class).ancestor(manifest));
}
for (ImmutableObject entity : concat(entities.build())) {
serializeEntity(entity, gcsStream);

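The recurring pattern in these files swaps the deprecated appengine-gcs-client (GcsService) for the project's GcsUtils wrapper around the google-cloud-storage client. Below is a condensed sketch of the two calls exercised above; the bucket name, object name, metadata key, and the gcsRoundTrip helper are illustrative, and gcsUtils stands in for the injected GcsUtils field used by the action.

import com.google.cloud.storage.BlobId;
import com.google.common.collect.ImmutableMap;
import google.registry.gcs.GcsUtils;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch only; names are placeholders.
static void gcsRoundTrip(GcsUtils gcsUtils) throws IOException {
  BlobId blobId = BlobId.of("my-commit-log-bucket", "commit_diff_until_2021-05-11T06:48:00.070Z");
  try (OutputStream out =
      gcsUtils.openOutputStream(blobId, ImmutableMap.of("example-metadata-key", "example-value"))) {
    // ExportCommitLogDiffAction writes delimited entities here via serializeEntity.
  }
  try (InputStream in = gcsUtils.openInputStream(blobId)) {
    // ReplayCommitLogsToSqlAction later reads the same object back.
  }
}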

@@ -15,30 +15,32 @@
package google.registry.backup;
import static com.google.common.base.Preconditions.checkState;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.backup.BackupUtils.GcsMetadataKeys.LOWER_BOUND_CHECKPOINT;
import static google.registry.backup.ExportCommitLogDiffAction.DIFF_FILE_PREFIX;
import static google.registry.util.DateTimeUtils.START_OF_TIME;
import static google.registry.util.DateTimeUtils.isBeforeOrAt;
import static google.registry.util.DateTimeUtils.latestOf;
import com.google.appengine.tools.cloudstorage.GcsFileMetadata;
import com.google.appengine.tools.cloudstorage.GcsFilename;
import com.google.appengine.tools.cloudstorage.GcsService;
import com.google.appengine.tools.cloudstorage.ListItem;
import com.google.appengine.tools.cloudstorage.ListOptions;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableList;
import com.google.common.flogger.FluentLogger;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.UncheckedExecutionException;
import google.registry.backup.BackupModule.Backups;
import google.registry.config.RegistryConfig.Config;
import google.registry.gcs.GcsUtils;
import java.io.IOException;
import java.util.Iterator;
import java.time.Duration;
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ScheduledExecutorService;
import javax.annotation.Nullable;
import javax.inject.Inject;
import javax.inject.Provider;
import org.joda.time.DateTime;
/** Utility class to list commit logs diff files stored on GCS. */
@@ -46,33 +48,49 @@ class GcsDiffFileLister {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
@Inject GcsService gcsService;
@Inject @Config("commitLogGcsBucket") String gcsBucket;
@Inject @Backups ListeningExecutorService executor;
@Inject GcsDiffFileLister() {}
/** Timeout for retrieving per-file information from GCS. */
private static final Duration FILE_INFO_TIMEOUT_DURATION = Duration.ofMinutes(1);
@Inject GcsUtils gcsUtils;
@Inject @Backups Provider<ListeningExecutorService> executorProvider;
@Inject ScheduledExecutorService scheduledExecutorService;
@Inject
GcsDiffFileLister() {}
/**
* Traverses the sequence of diff files backwards from checkpointTime and inserts the file
* metadata into "sequence". Returns true if a complete sequence was discovered, false if one or
* metadata into "sequence". Returns true if a complete sequence was discovered, false if one or
* more files are missing.
*
* @throws UncheckedExecutionException wrapping a {@link java.util.concurrent.TimeoutException} if
* the GCS call fails to finish within one minute, or wrapping any other exception if
* something else goes wrong.
*/
private boolean constructDiffSequence(
Map<DateTime, ListenableFuture<GcsFileMetadata>> upperBoundTimesToMetadata,
String gcsBucket,
Map<DateTime, ListenableFuture<BlobInfo>> upperBoundTimesToBlobInfo,
DateTime fromTime,
DateTime lastTime,
TreeMap<DateTime, GcsFileMetadata> sequence) {
TreeMap<DateTime, BlobInfo> sequence) {
DateTime checkpointTime = lastTime;
while (isBeforeOrAt(fromTime, checkpointTime)) {
GcsFileMetadata metadata;
if (upperBoundTimesToMetadata.containsKey(checkpointTime)) {
metadata = Futures.getUnchecked(upperBoundTimesToMetadata.get(checkpointTime));
BlobInfo blobInfo;
if (upperBoundTimesToBlobInfo.containsKey(checkpointTime)) {
blobInfo =
Futures.getUnchecked(
Futures.withTimeout(
upperBoundTimesToBlobInfo.get(checkpointTime),
FILE_INFO_TIMEOUT_DURATION,
scheduledExecutorService));
} else {
String filename = DIFF_FILE_PREFIX + checkpointTime;
logger.atInfo().log("Patching GCS list; discovered file: %s", filename);
metadata = getMetadata(filename);
blobInfo = getBlobInfo(gcsBucket, filename);
// If we hit a gap, quit.
if (metadata == null) {
if (blobInfo == null) {
logger.atInfo().log(
"Gap discovered in sequence terminating at %s, missing file: %s",
sequence.lastKey(), filename);
@@ -80,14 +98,15 @@ class GcsDiffFileLister {
return false;
}
}
sequence.put(checkpointTime, metadata);
checkpointTime = getLowerBoundTime(metadata);
sequence.put(checkpointTime, blobInfo);
checkpointTime = getLowerBoundTime(blobInfo);
}
logger.atInfo().log("Found sequence from %s to %s", checkpointTime, lastTime);
return true;
}
ImmutableList<GcsFileMetadata> listDiffFiles(DateTime fromTime, @Nullable DateTime toTime) {
ImmutableList<BlobInfo> listDiffFiles(
String gcsBucket, DateTime fromTime, @Nullable DateTime toTime) {
logger.atInfo().log("Requested restore from time: %s", fromTime);
if (toTime != null) {
logger.atInfo().log(" Until time: %s", toTime);
@@ -95,66 +114,74 @@ class GcsDiffFileLister {
// List all of the diff files on GCS and build a map from each file's upper checkpoint time
// (extracted from the filename) to its asynchronously-loaded metadata, keeping only files with
// an upper checkpoint time > fromTime.
TreeMap<DateTime, ListenableFuture<GcsFileMetadata>> upperBoundTimesToMetadata
= new TreeMap<>();
Iterator<ListItem> listItems;
TreeMap<DateTime, ListenableFuture<BlobInfo>> upperBoundTimesToBlobInfo = new TreeMap<>();
String commitLogDiffPrefix = getCommitLogDiffPrefix(fromTime, toTime);
ImmutableList<String> filenames;
try {
// TODO(b/23554360): Use a smarter prefixing strategy to speed this up.
listItems = gcsService.list(
gcsBucket,
new ListOptions.Builder().setPrefix(DIFF_FILE_PREFIX).build());
filenames =
gcsUtils.listFolderObjects(gcsBucket, commitLogDiffPrefix).stream()
.map(s -> commitLogDiffPrefix + s)
.collect(toImmutableList());
} catch (IOException e) {
throw new RuntimeException(e);
}
DateTime lastUpperBoundTime = START_OF_TIME;
while (listItems.hasNext()) {
final String filename = listItems.next().getName();
DateTime upperBoundTime = DateTime.parse(filename.substring(DIFF_FILE_PREFIX.length()));
if (isInRange(upperBoundTime, fromTime, toTime)) {
upperBoundTimesToMetadata.put(upperBoundTime, executor.submit(() -> getMetadata(filename)));
lastUpperBoundTime = latestOf(upperBoundTime, lastUpperBoundTime);
}
}
if (upperBoundTimesToMetadata.isEmpty()) {
logger.atInfo().log("No files found");
return ImmutableList.of();
}
// Reconstruct the sequence of files by traversing backwards from "lastUpperBoundTime" (i.e. the
// last file that we found) and finding its previous file until we either run out of files or
// get to one that precedes "fromTime".
//
// GCS file listing is eventually consistent, so it's possible that we are missing a file. The
// metadata of a file is sufficient to identify the preceding file, so if we start from the
// last file and work backwards we can verify that we have no holes in our chain (although we
// may be missing files at the end).
TreeMap<DateTime, GcsFileMetadata> sequence = new TreeMap<>();
logger.atInfo().log("Restoring until: %s", lastUpperBoundTime);
boolean inconsistentFileSet = !constructDiffSequence(
upperBoundTimesToMetadata, fromTime, lastUpperBoundTime, sequence);
// Verify that all of the elements in the original set are represented in the sequence. If we
// find anything that's not represented, construct a sequence for it.
boolean checkForMoreExtraDiffs = true; // Always loop at least once.
while (checkForMoreExtraDiffs) {
checkForMoreExtraDiffs = false;
for (DateTime key : upperBoundTimesToMetadata.descendingKeySet()) {
if (!isInRange(key, fromTime, toTime)) {
break;
}
if (!sequence.containsKey(key)) {
constructDiffSequence(upperBoundTimesToMetadata, fromTime, key, sequence);
checkForMoreExtraDiffs = true;
inconsistentFileSet = true;
break;
TreeMap<DateTime, BlobInfo> sequence = new TreeMap<>();
ListeningExecutorService executor = executorProvider.get();
try {
for (String filename : filenames) {
String strippedFilename = filename.replaceFirst(DIFF_FILE_PREFIX, "");
DateTime upperBoundTime = DateTime.parse(strippedFilename);
if (isInRange(upperBoundTime, fromTime, toTime)) {
upperBoundTimesToBlobInfo.put(
upperBoundTime, executor.submit(() -> getBlobInfo(gcsBucket, filename)));
lastUpperBoundTime = latestOf(upperBoundTime, lastUpperBoundTime);
}
}
}
if (upperBoundTimesToBlobInfo.isEmpty()) {
logger.atInfo().log("No files found");
return ImmutableList.of();
}
checkState(
!inconsistentFileSet,
"Unable to compute commit diff history, there are either gaps or forks in the history "
+ "file set. Check log for details.");
// Reconstruct the sequence of files by traversing backwards from "lastUpperBoundTime" (i.e.
// the last file that we found) and finding its previous file until we either run out of files
// or get to one that precedes "fromTime".
//
// GCS file listing is eventually consistent, so it's possible that we are missing a file. The
// metadata of a file is sufficient to identify the preceding file, so if we start from the
// last file and work backwards we can verify that we have no holes in our chain (although we
// may be missing files at the end).
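// For example (illustrative timestamps): a file whose name ends at 06:48 and whose
// LOWER_BOUND_CHECKPOINT metadata reads 06:45 chains back to the 06:45 file, whose own
// metadata points at 06:42, and so on until the chain reaches or precedes fromTime.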
logger.atInfo().log("Restoring until: %s", lastUpperBoundTime);
boolean inconsistentFileSet =
!constructDiffSequence(
gcsBucket, upperBoundTimesToBlobInfo, fromTime, lastUpperBoundTime, sequence);
// Verify that all of the elements in the original set are represented in the sequence. If we
// find anything that's not represented, construct a sequence for it.
boolean checkForMoreExtraDiffs = true; // Always loop at least once.
while (checkForMoreExtraDiffs) {
checkForMoreExtraDiffs = false;
for (DateTime key : upperBoundTimesToBlobInfo.descendingKeySet()) {
if (!isInRange(key, fromTime, toTime)) {
break;
}
if (!sequence.containsKey(key)) {
constructDiffSequence(gcsBucket, upperBoundTimesToBlobInfo, fromTime, key, sequence);
checkForMoreExtraDiffs = true;
inconsistentFileSet = true;
break;
}
}
}
checkState(
!inconsistentFileSet,
"Unable to compute commit diff history, there are either gaps or forks in the history "
+ "file set. Check log for details.");
} finally {
executor.shutdown();
}
logger.atInfo().log(
"Actual restore from time: %s", getLowerBoundTime(sequence.firstEntry().getValue()));
@@ -171,15 +198,43 @@ class GcsDiffFileLister {
return isBeforeOrAt(start, time) && (end == null || isBeforeOrAt(time, end));
}
private DateTime getLowerBoundTime(GcsFileMetadata metadata) {
return DateTime.parse(metadata.getOptions().getUserMetadata().get(LOWER_BOUND_CHECKPOINT));
private DateTime getLowerBoundTime(BlobInfo blobInfo) {
return DateTime.parse(blobInfo.getMetadata().get(LOWER_BOUND_CHECKPOINT));
}
private GcsFileMetadata getMetadata(String filename) {
try {
return gcsService.getMetadata(new GcsFilename(gcsBucket, filename));
} catch (IOException e) {
throw new RuntimeException(e);
private BlobInfo getBlobInfo(String gcsBucket, String filename) {
return gcsUtils.getBlobInfo(BlobId.of(gcsBucket, filename));
}
/**
* Returns a prefix guaranteed to cover all commit log diff files in the given range.
*
* <p>The listObjects call can be fairly slow if we search over many thousands or tens of
* thousands of files, so we restrict the search space. The commit logs have a file format of
* "commit_diff_until_2021-05-11T06:48:00.070Z" so we can often filter down as far as the hour.
*
* <p>Here, we get the longest prefix possible based on which fields (year, month, day, hour) the
* times in question have in common.
*/
@VisibleForTesting
static String getCommitLogDiffPrefix(DateTime from, @Nullable DateTime to) {
StringBuilder result = new StringBuilder(DIFF_FILE_PREFIX);
if (to == null || from.getYear() != to.getYear()) {
return result.toString();
}
result.append(from.getYear()).append('-');
if (from.getMonthOfYear() != to.getMonthOfYear()) {
return result.toString();
}
result.append(String.format("%02d-", from.getMonthOfYear()));
if (from.getDayOfMonth() != to.getDayOfMonth()) {
return result.toString();
}
result.append(String.format("%02dT", from.getDayOfMonth()));
if (from.getHourOfDay() != to.getHourOfDay()) {
return result.toString();
}
result.append(String.format("%02d:", from.getHourOfDay()));
return result.toString();
}
}
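For concreteness, these are the prefixes the method yields for a few sample inputs, worked out from the logic above (the times are illustrative, and the call site must live in google.registry.backup because the method is package-private):

DateTime from = DateTime.parse("2021-05-11T06:48:00.070Z"); // org.joda.time.DateTime
GcsDiffFileLister.getCommitLogDiffPrefix(from, null);                 // "commit_diff_until_"
GcsDiffFileLister.getCommitLogDiffPrefix(from, from.plusMonths(2));   // "commit_diff_until_2021-"
GcsDiffFileLister.getCommitLogDiffPrefix(from, from.plusHours(3));    // "commit_diff_until_2021-05-11T"
GcsDiffFileLister.getCommitLogDiffPrefix(from, from.plusMinutes(10)); // "commit_diff_until_2021-05-11T06:"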


@@ -14,70 +14,97 @@
package google.registry.backup;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.backup.ExportCommitLogDiffAction.DIFF_FILE_PREFIX;
import static google.registry.backup.RestoreCommitLogsAction.DRY_RUN_PARAM;
import static google.registry.model.ofy.EntityWritePriorities.getEntityPriority;
import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.util.DateTimeUtils.isAtOrAfter;
import static google.registry.util.DateTimeUtils.isBeforeOrAt;
import static javax.servlet.http.HttpServletResponse.SC_NO_CONTENT;
import static javax.servlet.http.HttpServletResponse.SC_OK;
import static org.joda.time.Duration.standardHours;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.tools.cloudstorage.GcsFileMetadata;
import com.google.appengine.tools.cloudstorage.GcsService;
import com.google.cloud.storage.BlobInfo;
import com.google.common.collect.ImmutableList;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryConfig;
import google.registry.config.RegistryConfig.Config;
import google.registry.gcs.GcsUtils;
import google.registry.model.common.DatabaseMigrationStateSchedule;
import google.registry.model.common.DatabaseMigrationStateSchedule.MigrationState;
import google.registry.model.common.DatabaseMigrationStateSchedule.ReplayDirection;
import google.registry.model.replay.DatastoreEntity;
import google.registry.model.replay.DatastoreOnlyEntity;
import google.registry.model.replay.NonReplicatedEntity;
import google.registry.model.replay.ReplaySpecializer;
import google.registry.model.replay.SqlReplayCheckpoint;
import google.registry.model.server.Lock;
import google.registry.model.translators.VKeyTranslatorFactory;
import google.registry.persistence.VKey;
import google.registry.request.Action;
import google.registry.request.Action.Method;
import google.registry.request.Parameter;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.schema.replay.DatastoreEntity;
import google.registry.schema.replay.DatastoreOnlyEntity;
import google.registry.schema.replay.NonReplicatedEntity;
import google.registry.schema.replay.ReplaySpecializer;
import google.registry.schema.replay.SqlReplayCheckpoint;
import google.registry.util.Clock;
import google.registry.util.RequestStatusChecker;
import java.io.IOException;
import java.io.InputStream;
import java.nio.channels.Channels;
import java.util.Optional;
import javax.inject.Inject;
import javax.servlet.http.HttpServletResponse;
import org.joda.time.DateTime;
import org.joda.time.Duration;
import org.joda.time.Seconds;
/** Action that replays commit logs to Cloud SQL to keep it up to date. */
@Action(
service = Action.Service.BACKEND,
path = ReplayCommitLogsToSqlAction.PATH,
method = Action.Method.POST,
method = Method.POST,
automaticallyPrintOk = true,
auth = Auth.AUTH_INTERNAL_OR_ADMIN)
public class ReplayCommitLogsToSqlAction implements Runnable {
static final String PATH = "/_dr/task/replayCommitLogsToSql";
private static final int BLOCK_SIZE =
1024 * 1024; // Buffer 1mb at a time, for no particular reason.
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static final Duration LEASE_LENGTH = standardHours(1);
@Inject GcsService gcsService;
private static final Duration LEASE_LENGTH = standardHours(1);
// Stop / pause where we are if we've been replaying for more than five minutes to avoid GAE
// request timeouts
private static final Duration REPLAY_TIMEOUT_DURATION = Duration.standardMinutes(5);
@Inject GcsUtils gcsUtils;
@Inject Response response;
@Inject RequestStatusChecker requestStatusChecker;
@Inject GcsDiffFileLister diffLister;
@Inject Clock clock;
@Inject
@Config("commitLogGcsBucket")
String gcsBucket;
/** If true, will exit after logging the commit log files that would otherwise be replayed. */
@Inject
@Parameter(DRY_RUN_PARAM)
boolean dryRun;
@Inject
ReplayCommitLogsToSqlAction() {}
@Override
public void run() {
if (!RegistryConfig.getCloudSqlReplayCommitLogs()) {
String message = "ReplayCommitLogsToSqlAction was called but disabled in the config.";
logger.atWarning().log(message);
DateTime startTime = clock.nowUtc();
MigrationState state = DatabaseMigrationStateSchedule.getValueAtTime(startTime);
if (!state.getReplayDirection().equals(ReplayDirection.DATASTORE_TO_SQL)) {
String message =
String.format(
"Skipping ReplayCommitLogsToSqlAction because we are in migration phase %s.", state);
logger.atInfo().log(message);
// App Engine will retry on any non-2xx status code, which we don't want in this case.
response.setStatus(SC_NO_CONTENT);
response.setPayload(message);
@@ -96,44 +123,117 @@ public class ReplayCommitLogsToSqlAction implements Runnable {
return;
}
try {
replayFiles();
response.setStatus(HttpServletResponse.SC_OK);
logger.atInfo().log("ReplayCommitLogsToSqlAction completed successfully.");
logger.atInfo().log("Beginning replay of commit logs.");
String resultMessage;
if (dryRun) {
resultMessage = executeDryRun();
} else {
resultMessage = replayFiles(startTime);
}
response.setStatus(SC_OK);
response.setPayload(resultMessage);
logger.atInfo().log(resultMessage);
} catch (Throwable t) {
String message = "Errored out replaying files.";
logger.atSevere().withCause(t).log(message);
response.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
response.setPayload(message);
} finally {
lock.ifPresent(Lock::release);
}
}
private void replayFiles() {
private String executeDryRun() {
// Start at the first millisecond we haven't seen yet
DateTime fromTime = jpaTm().transact(() -> SqlReplayCheckpoint.get().plusMillis(1));
// If there's an inconsistent file set, this will throw IllegalStateException and the job
// will try later -- this is likely because an export hasn't finished yet.
ImmutableList<GcsFileMetadata> commitLogFiles =
diffLister.listDiffFiles(fromTime, /* current time */ null);
for (GcsFileMetadata metadata : commitLogFiles) {
// One transaction per GCS file
jpaTm().transact(() -> processFile(metadata));
}
logger.atInfo().log("Replayed %d commit log files to SQL successfully.", commitLogFiles.size());
DateTime searchStartTime = jpaTm().transact(() -> SqlReplayCheckpoint.get().plusMillis(1));
// Search through the end of the hour
DateTime searchEndTime =
searchStartTime.withMinuteOfHour(59).withSecondOfMinute(59).withMillisOfSecond(999);
ImmutableList<String> fileBatch =
diffLister.listDiffFiles(gcsBucket, searchStartTime, searchEndTime).stream()
.map(BlobInfo::getName)
.collect(toImmutableList());
return String.format(
"Running in dry-run mode, the first set of commit log files processed would be from "
+ "searching from %s to %s and would contain %d file(s). They are (limit 10): \n%s",
searchStartTime,
searchEndTime,
fileBatch.size(),
fileBatch.stream().limit(10).collect(toImmutableList()));
}
private void processFile(GcsFileMetadata metadata) {
try (InputStream input =
Channels.newInputStream(
gcsService.openPrefetchingReadChannel(metadata.getFilename(), 0, BLOCK_SIZE))) {
private String replayFiles(DateTime startTime) {
DateTime replayTimeoutTime = startTime.plus(REPLAY_TIMEOUT_DURATION);
DateTime searchStartTime = jpaTm().transact(() -> SqlReplayCheckpoint.get().plusMillis(1));
int filesProcessed = 0;
int transactionsProcessed = 0;
// Starting from one millisecond after the last file we processed, search for and import files
// one hour at a time until we catch up to the current time or we hit the replay timeout (in
// which case the next run will pick up from where we leave off).
//
// We use hour-long batches because GCS supports filename prefix-based searches.
while (true) {
if (isAtOrAfter(clock.nowUtc(), replayTimeoutTime)) {
return createResponseString(
"Reached max execution time", startTime, filesProcessed, transactionsProcessed);
}
if (isBeforeOrAt(clock.nowUtc(), searchStartTime)) {
return createResponseString(
"Caught up to current time", startTime, filesProcessed, transactionsProcessed);
}
// Search through the end of the hour
DateTime searchEndTime =
searchStartTime.withMinuteOfHour(59).withSecondOfMinute(59).withMillisOfSecond(999);
ImmutableList<BlobInfo> fileBatch =
diffLister.listDiffFiles(gcsBucket, searchStartTime, searchEndTime);
if (fileBatch.isEmpty()) {
logger.atInfo().log(
"No remaining files found in hour %s, continuing search in the next hour.",
searchStartTime.toString("yyyy-MM-dd HH"));
}
for (BlobInfo file : fileBatch) {
transactionsProcessed += processFile(file);
filesProcessed++;
if (clock.nowUtc().isAfter(replayTimeoutTime)) {
return createResponseString(
"Reached max execution time", startTime, filesProcessed, transactionsProcessed);
}
}
searchStartTime = searchEndTime.plusMillis(1);
}
}
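For orientation, the hour-window arithmetic that replayFiles() uses above can be pictured with this minimal, self-contained sketch. It only mirrors the window math (checkpoint plus one millisecond, through the end of that hour, then on to the next millisecond); the timestamps are placeholders and the real GCS listing and SqlReplayCheckpoint plumbing are omitted.
import org.joda.time.DateTime;
// Sketch only: walk hour-sized windows from a replay checkpoint up to "now",
// mirroring how replayFiles() above advances its GCS filename-prefix searches.
public class HourWindowSketch {
  public static void main(String[] args) {
    DateTime checkpoint = DateTime.parse("2021-08-01T00:12:34.567Z"); // placeholder checkpoint
    DateTime now = DateTime.parse("2021-08-01T03:00:00.000Z"); // placeholder "current" time
    DateTime searchStart = checkpoint.plusMillis(1);
    while (searchStart.isBefore(now)) {
      DateTime searchEnd =
          searchStart.withMinuteOfHour(59).withSecondOfMinute(59).withMillisOfSecond(999);
      System.out.printf("Would list commit log diff files in [%s, %s]%n", searchStart, searchEnd);
      searchStart = searchEnd.plusMillis(1);
    }
  }
}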
private String createResponseString(
String msg, DateTime startTime, int filesProcessed, int transactionsProcessed) {
double tps =
(double) transactionsProcessed
/ (double) Seconds.secondsBetween(startTime, clock.nowUtc()).getSeconds();
return String.format(
"%s after replaying %d file(s) containing %d total transaction(s) (%.2f tx/s).",
msg, filesProcessed, transactionsProcessed, tps);
}
/**
* Replays the commit logs in the given commit log file and returns the number of transactions
* committed.
*/
private int processFile(BlobInfo metadata) {
try (InputStream input = gcsUtils.openInputStream(metadata.getBlobId())) {
// Load and process the Datastore transactions one at a time
ImmutableList<ImmutableList<VersionedEntity>> allTransactions =
CommitLogImports.loadEntitiesByTransaction(input);
allTransactions.forEach(this::replayTransaction);
allTransactions.forEach(
transaction -> jpaTm().transact(() -> replayTransaction(transaction)));
// if we succeeded, set the last-seen time
DateTime checkpoint =
DateTime.parse(
metadata.getFilename().getObjectName().substring(DIFF_FILE_PREFIX.length()));
SqlReplayCheckpoint.set(checkpoint);
logger.atInfo().log("Replayed %d transactions from commit log file.", allTransactions.size());
DateTime checkpoint = DateTime.parse(metadata.getName().substring(DIFF_FILE_PREFIX.length()));
jpaTm().transact(() -> SqlReplayCheckpoint.set(checkpoint));
logger.atInfo().log(
"Replayed %d transactions from commit log file %s with size %d B.",
allTransactions.size(), metadata.getName(), metadata.getSize());
return allTransactions.size();
} catch (IOException e) {
throw new RuntimeException(e);
throw new RuntimeException(
"Errored out while replaying commit log file " + metadata.getName(), e);
}
}
@@ -151,15 +251,26 @@ public class ReplayCommitLogsToSqlAction implements Runnable {
}
private void handleEntityPut(Entity entity) {
Object ofyPojo = ofy().toPojo(entity);
if (ofyPojo instanceof DatastoreEntity) {
DatastoreEntity datastoreEntity = (DatastoreEntity) ofyPojo;
datastoreEntity.toSqlEntity().ifPresent(jpaTm()::put);
} else {
// this should never happen, but we shouldn't fail on it
logger.atSevere().log(
"%s does not implement DatastoreEntity, which is necessary for SQL replay.",
ofyPojo.getClass());
Object ofyPojo = auditedOfy().toPojo(entity);
try {
if (ofyPojo instanceof DatastoreEntity) {
DatastoreEntity datastoreEntity = (DatastoreEntity) ofyPojo;
datastoreEntity
.toSqlEntity()
.ifPresent(
sqlEntity -> {
sqlEntity.beforeSqlSaveOnReplay();
jpaTm().putIgnoringReadOnly(sqlEntity);
});
} else {
// this should never happen, but we shouldn't fail on it
logger.atSevere().log(
"%s does not implement DatastoreEntity, which is necessary for SQL replay.",
ofyPojo.getClass());
}
} catch (Throwable t) {
logger.atSevere().log("Error when replaying object %s", ofyPojo);
throw t;
}
}
@@ -175,13 +286,18 @@ public class ReplayCommitLogsToSqlAction implements Runnable {
"Skipping SQL delete for kind %s since it is not convertible.", key.getKind());
return;
}
Class<?> entityClass = entityVKey.getKind();
// Delete the key iff the class represents a JPA entity that is replicated
if (!NonReplicatedEntity.class.isAssignableFrom(entityClass)
&& !DatastoreOnlyEntity.class.isAssignableFrom(entityClass)
&& entityClass.getAnnotation(javax.persistence.Entity.class) != null) {
ReplaySpecializer.beforeSqlDelete(entityVKey);
jpaTm().delete(entityVKey);
try {
Class<?> entityClass = entityVKey.getKind();
// Delete the key iff the class represents a JPA entity that is replicated
if (!NonReplicatedEntity.class.isAssignableFrom(entityClass)
&& !DatastoreOnlyEntity.class.isAssignableFrom(entityClass)
&& entityClass.getAnnotation(javax.persistence.Entity.class) != null) {
ReplaySpecializer.beforeSqlDelete(entityVKey);
jpaTm().deleteIgnoringReadOnly(entityVKey);
}
} catch (Throwable t) {
logger.atSevere().log("Error when deleting key %s", entityVKey);
throw t;
}
}
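To make the filter above easier to read at a glance, the same rule can be written as a single predicate: a Datastore delete is replayed to SQL only for classes that are JPA entities and are not marked with the NonReplicatedEntity or DatastoreOnlyEntity interfaces referenced in this file. The helper name below is invented for illustration; it is not part of the action.
// Sketch only: the decision rule applied by the delete handler above.
private static boolean shouldReplicateDeleteToSql(Class<?> entityClass) {
  boolean isJpaEntity = entityClass.getAnnotation(javax.persistence.Entity.class) != null;
  boolean isDatastoreOnly = DatastoreOnlyEntity.class.isAssignableFrom(entityClass);
  boolean isNonReplicated = NonReplicatedEntity.class.isAssignableFrom(entityClass);
  return isJpaEntity && !isDatastoreOnly && !isNonReplicated;
}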


@@ -18,14 +18,14 @@ import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static com.google.common.collect.Iterators.peekingIterator;
import static google.registry.backup.BackupUtils.createDeserializingIterator;
import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityTranslator;
import com.google.appengine.tools.cloudstorage.GcsFileMetadata;
import com.google.appengine.tools.cloudstorage.GcsService;
import com.google.cloud.storage.BlobInfo;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Lists;
import com.google.common.collect.PeekingIterator;
import com.google.common.collect.Streams;
@@ -33,7 +33,9 @@ import com.google.common.flogger.FluentLogger;
import com.googlecode.objectify.Key;
import com.googlecode.objectify.Result;
import com.googlecode.objectify.util.ResultNow;
import google.registry.config.RegistryConfig.Config;
import google.registry.config.RegistryEnvironment;
import google.registry.gcs.GcsUtils;
import google.registry.model.ImmutableObject;
import google.registry.model.ofy.CommitLogBucket;
import google.registry.model.ofy.CommitLogCheckpoint;
@@ -46,10 +48,10 @@ import google.registry.request.auth.Auth;
import google.registry.util.Retrier;
import java.io.IOException;
import java.io.InputStream;
import java.nio.channels.Channels;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.stream.Stream;
import javax.inject.Inject;
@@ -66,45 +68,57 @@ public class RestoreCommitLogsAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
static final int BLOCK_SIZE = 1024 * 1024; // Buffer 1mb at a time, for no particular reason.
public static final String PATH = "/_dr/task/restoreCommitLogs";
static final String DRY_RUN_PARAM = "dryRun";
static final String FROM_TIME_PARAM = "fromTime";
static final String TO_TIME_PARAM = "toTime";
static final String BUCKET_OVERRIDE_PARAM = "gcsBucket";
private static final ImmutableSet<RegistryEnvironment> FORBIDDEN_ENVIRONMENTS =
ImmutableSet.of(RegistryEnvironment.PRODUCTION, RegistryEnvironment.SANDBOX);
@Inject GcsUtils gcsUtils;
@Inject GcsService gcsService;
@Inject @Parameter(DRY_RUN_PARAM) boolean dryRun;
@Inject @Parameter(FROM_TIME_PARAM) DateTime fromTime;
@Inject @Parameter(TO_TIME_PARAM) DateTime toTime;
@Inject
@Parameter(BUCKET_OVERRIDE_PARAM)
Optional<String> gcsBucketOverride;
@Inject DatastoreService datastoreService;
@Inject GcsDiffFileLister diffLister;
@Inject
@Config("commitLogGcsBucket")
String defaultGcsBucket;
@Inject Retrier retrier;
@Inject RestoreCommitLogsAction() {}
@Override
public void run() {
checkArgument(
RegistryEnvironment.get() == RegistryEnvironment.ALPHA
|| RegistryEnvironment.get() == RegistryEnvironment.CRASH
|| RegistryEnvironment.get() == RegistryEnvironment.UNITTEST,
"DO NOT RUN ANYWHERE ELSE EXCEPT ALPHA, CRASH OR TESTS.");
!FORBIDDEN_ENVIRONMENTS.contains(RegistryEnvironment.get()),
"DO NOT RUN IN PRODUCTION OR SANDBOX.");
if (dryRun) {
logger.atInfo().log("Running in dryRun mode");
}
List<GcsFileMetadata> diffFiles = diffLister.listDiffFiles(fromTime, toTime);
String gcsBucket = gcsBucketOverride.orElse(defaultGcsBucket);
logger.atInfo().log("Restoring from %s.", gcsBucket);
List<BlobInfo> diffFiles = diffLister.listDiffFiles(gcsBucket, fromTime, toTime);
if (diffFiles.isEmpty()) {
logger.atInfo().log("Nothing to restore");
return;
}
Map<Integer, DateTime> bucketTimestamps = new HashMap<>();
CommitLogCheckpoint lastCheckpoint = null;
for (GcsFileMetadata metadata : diffFiles) {
logger.atInfo().log("Restoring: %s", metadata.getFilename().getObjectName());
try (InputStream input = Channels.newInputStream(
gcsService.openPrefetchingReadChannel(metadata.getFilename(), 0, BLOCK_SIZE))) {
for (BlobInfo metadata : diffFiles) {
logger.atInfo().log("Restoring: %s", metadata.getName());
try (InputStream input = gcsUtils.openInputStream(metadata.getBlobId())) {
PeekingIterator<ImmutableObject> commitLogs =
peekingIterator(createDeserializingIterator(input));
peekingIterator(createDeserializingIterator(input, true));
lastCheckpoint = (CommitLogCheckpoint) commitLogs.next();
saveOfy(ImmutableList.of(lastCheckpoint)); // Save the checkpoint itself.
while (commitLogs.hasNext()) {
@@ -146,10 +160,10 @@ public class RestoreCommitLogsAction implements Runnable {
private CommitLogManifest restoreOneTransaction(PeekingIterator<ImmutableObject> commitLogs) {
final CommitLogManifest manifest = (CommitLogManifest) commitLogs.next();
Result<?> deleteResult = deleteAsync(manifest.getDeletions());
List<Entity> entitiesToSave = Lists.newArrayList(ofy().save().toEntity(manifest));
List<Entity> entitiesToSave = Lists.newArrayList(auditedOfy().save().toEntity(manifest));
while (commitLogs.hasNext() && commitLogs.peek() instanceof CommitLogMutation) {
CommitLogMutation mutation = (CommitLogMutation) commitLogs.next();
entitiesToSave.add(ofy().save().toEntity(mutation));
entitiesToSave.add(auditedOfy().save().toEntity(mutation));
entitiesToSave.add(EntityTranslator.createFromPbBytes(mutation.getEntityProtoBytes()));
}
saveRaw(entitiesToSave);
@@ -176,7 +190,8 @@ public class RestoreCommitLogsAction implements Runnable {
return;
}
retrier.callWithRetry(
() -> ofy().saveWithoutBackup().entities(objectsToSave).now(), RuntimeException.class);
() -> auditedOfy().saveWithoutBackup().entities(objectsToSave).now(),
RuntimeException.class);
}
private Result<?> deleteAsync(Set<Key<?>> keysToDelete) {
@@ -185,7 +200,7 @@ public class RestoreCommitLogsAction implements Runnable {
}
return dryRun || keysToDelete.isEmpty()
? new ResultNow<Void>(null)
: ofy().deleteWithoutBackup().keys(keysToDelete);
: auditedOfy().deleteWithoutBackup().keys(keysToDelete);
}
}


@@ -47,7 +47,7 @@ import javax.annotation.Nullable;
*
* <ul>
* <li>Convert an Objectify entity to a Datastore {@link Entity}: {@code
* ofy().save().toEntity(..)}
* auditedOfy().save().toEntity(..)}
* <li>Entity is serializable, but the more efficient approach is to convert an Entity to a
* ProtocolBuffer ({@link com.google.storage.onestore.v3.OnestoreEntity.EntityProto}) and then
* to raw bytes.


@@ -28,10 +28,10 @@ import com.googlecode.objectify.Key;
import google.registry.config.RegistryConfig.Config;
import google.registry.model.EppResource;
import google.registry.model.ImmutableObject;
import google.registry.model.domain.RegistryLock;
import google.registry.model.eppcommon.Trid;
import google.registry.model.host.HostResource;
import google.registry.persistence.VKey;
import google.registry.schema.domain.RegistryLock;
import google.registry.util.AppEngineServiceUtils;
import google.registry.util.Retrier;
import javax.inject.Inject;
@@ -57,8 +57,6 @@ public final class AsyncTaskEnqueuer {
public static final String QUEUE_ASYNC_DELETE = "async-delete-pull";
public static final String QUEUE_ASYNC_HOST_RENAME = "async-host-rename-pull";
public static final String PATH_RESAVE_ENTITY = "/_dr/task/resaveEntity";
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static final Duration MAX_ASYNC_ETA = Duration.standardDays(30);
@@ -112,7 +110,7 @@ public final class AsyncTaskEnqueuer {
logger.atInfo().log("Enqueuing async re-save of %s to run at %s.", entityKey, whenToResave);
String backendHostname = appEngineServiceUtils.getServiceHostname("backend");
TaskOptions task =
TaskOptions.Builder.withUrl(PATH_RESAVE_ENTITY)
TaskOptions.Builder.withUrl(ResaveEntityAction.PATH)
.method(Method.POST)
.header("Host", backendHostname)
.countdownMillis(etaDuration.getMillis())


@@ -34,7 +34,7 @@ import static google.registry.model.ResourceTransferUtils.denyPendingTransfer;
import static google.registry.model.ResourceTransferUtils.handlePendingTransferOnDelete;
import static google.registry.model.ResourceTransferUtils.updateForeignKeyIndexDeletionTime;
import static google.registry.model.eppcommon.StatusValue.PENDING_DELETE;
import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.model.reporting.HistoryEntry.Type.CONTACT_DELETE;
import static google.registry.model.reporting.HistoryEntry.Type.CONTACT_DELETE_FAILURE;
import static google.registry.model.reporting.HistoryEntry.Type.HOST_DELETE;
@@ -109,6 +109,7 @@ import org.joda.time.Duration;
* A mapreduce that processes batch asynchronous deletions of contact and host resources by mapping
* over all domains and checking for any references to the contacts/hosts in pending deletion.
*/
@Deprecated
@Action(
service = Action.Service.BACKEND,
path = "/_dr/task/deleteContactsAndHosts",
@@ -335,7 +336,7 @@ public class DeleteContactsAndHostsAction implements Runnable {
DeletionRequest deletionRequest, boolean hasNoActiveReferences) {
DateTime now = tm().getTransactionTime();
EppResource resource =
ofy().load().key(deletionRequest.key()).now().cloneProjectedAtTime(now);
auditedOfy().load().key(deletionRequest.key()).now().cloneProjectedAtTime(now);
// Double-check transactionally that the resource is still active and in PENDING_DELETE.
if (!doesResourceStateAllowDeletion(resource, now)) {
return DeletionResult.create(Type.ERRORED, "");
@@ -369,11 +370,10 @@ public class DeleteContactsAndHostsAction implements Runnable {
: "it was transferred prior to deletion");
HistoryEntry historyEntry =
new HistoryEntry.Builder()
HistoryEntry.createBuilderForResource(resource)
.setClientId(deletionRequest.requestingClientId())
.setModificationTime(now)
.setType(getHistoryEntryType(resource, deleteAllowed))
.setParent(deletionRequest.key())
.build();
PollMessage.OneTime pollMessage =
@@ -408,7 +408,9 @@ public class DeleteContactsAndHostsAction implements Runnable {
} else {
resourceToSave = resource.asBuilder().removeStatusValue(PENDING_DELETE).build();
}
ofy().save().<ImmutableObject>entities(resourceToSave, historyEntry, pollMessage);
auditedOfy()
.save()
.<ImmutableObject>entities(resourceToSave, historyEntry.asHistoryEntry(), pollMessage);
return DeletionResult.create(
deleteAllowed ? Type.DELETED : Type.NOT_DELETED, pollMessageText);
}
@@ -525,7 +527,8 @@ public class DeleteContactsAndHostsAction implements Runnable {
Key.create(
checkNotNull(params.get(PARAM_RESOURCE_KEY), "Resource to delete not specified"));
EppResource resource =
checkNotNull(ofy().load().key(resourceKey).now(), "Resource to delete doesn't exist");
checkNotNull(
auditedOfy().load().key(resourceKey).now(), "Resource to delete doesn't exist");
checkState(
resource instanceof ContactResource || resource instanceof HostResource,
"Cannot delete a %s via this action",


@@ -17,8 +17,8 @@ package google.registry.batch;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static com.google.common.net.MediaType.PLAIN_TEXT_UTF_8;
import static google.registry.flows.FlowUtils.marshalWithLenientRetry;
import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.persistence.transaction.TransactionManagerUtil.transactIfJpaTm;
import static google.registry.util.DateTimeUtils.END_OF_TIME;
import static google.registry.util.ResourceUtils.readResourceUtf8;
import static java.nio.charset.StandardCharsets.UTF_8;
@@ -36,6 +36,7 @@ import google.registry.flows.StatelessRequestSessionMetadata;
import google.registry.model.domain.DomainBase;
import google.registry.model.eppcommon.ProtocolDefinition;
import google.registry.model.eppoutput.EppOutput;
import google.registry.persistence.transaction.QueryComposer.Comparator;
import google.registry.request.Action;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
@@ -128,12 +129,15 @@ public class DeleteExpiredDomainsAction implements Runnable {
logger.atInfo().log(
"Deleting non-renewing domains with autorenew end times up through %s.", runTime);
// Note: This query is (and must be) non-transactional, and thus, is only eventually consistent.
// Note: in Datastore, this query is (and must be) non-transactional, and thus, is only
// eventually consistent.
ImmutableList<DomainBase> domainsToDelete =
ofy().load().type(DomainBase.class).filter("autorenewEndTime <=", runTime).list().stream()
// Datastore can't do two inequalities in one query, so the second happens in-memory.
.filter(d -> d.getDeletionTime().isEqual(END_OF_TIME))
.collect(toImmutableList());
transactIfJpaTm(
() ->
tm().createQueryComposer(DomainBase.class)
.where("autorenewEndTime", Comparator.LTE, runTime)
.where("deletionTime", Comparator.EQ, END_OF_TIME)
.list());
if (domainsToDelete.isEmpty()) {
logger.atInfo().log("Found 0 domains to delete.");
response.setPayload("Found 0 domains to delete.");


@@ -15,12 +15,14 @@
package google.registry.batch;
import static com.google.common.base.Preconditions.checkState;
import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static google.registry.config.RegistryEnvironment.PRODUCTION;
import static google.registry.mapreduce.MapreduceRunner.PARAM_DRY_RUN;
import static google.registry.mapreduce.inputs.EppResourceInputs.createEntityInput;
import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.request.Action.Method.POST;
import static google.registry.util.DateTimeUtils.END_OF_TIME;
import com.google.appengine.tools.mapreduce.Mapper;
import com.google.common.collect.ImmutableList;
@@ -28,16 +30,24 @@ import com.google.common.collect.ImmutableSet;
import com.google.common.flogger.FluentLogger;
import com.googlecode.objectify.Key;
import google.registry.config.RegistryEnvironment;
import google.registry.flows.poll.PollFlowUtils;
import google.registry.mapreduce.MapreduceRunner;
import google.registry.model.EppResource;
import google.registry.model.EppResourceUtils;
import google.registry.model.contact.ContactResource;
import google.registry.model.domain.DomainBase;
import google.registry.model.host.HostResource;
import google.registry.model.index.EppResourceIndex;
import google.registry.model.index.ForeignKeyIndex;
import google.registry.model.poll.PollMessage;
import google.registry.model.reporting.HistoryEntry;
import google.registry.model.reporting.HistoryEntryDao;
import google.registry.persistence.VKey;
import google.registry.request.Action;
import google.registry.request.Parameter;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import java.util.List;
import javax.inject.Inject;
@@ -46,8 +56,8 @@ import javax.inject.Inject;
* the associated ForeignKey and EppResourceIndex entities.
*
* <p>This only deletes contacts and hosts, NOT domains. To delete domains, use {@link
* DeleteLoadTestDataAction} and pass it the TLD(s) that the load test domains were created on. Note
* that DeleteLoadTestDataAction is safe enough to run in production whereas this mapreduce is not,
* DeleteProberDataAction} and pass it the TLD(s) that the load test domains were created on. Note
* that DeleteProberDataAction is safe enough to run in production whereas this mapreduce is not,
* but this one does not need to be runnable in production because load testing isn't run against
* production.
*/
@@ -68,15 +78,22 @@ public class DeleteLoadTestDataAction implements Runnable {
*/
private static final ImmutableSet<String> LOAD_TEST_REGISTRARS = ImmutableSet.of("proxy");
@Inject
@Parameter(PARAM_DRY_RUN)
boolean isDryRun;
@Inject MapreduceRunner mrRunner;
@Inject Response response;
private final boolean isDryRun;
private final MapreduceRunner mrRunner;
private final Response response;
private final Clock clock;
@Inject
DeleteLoadTestDataAction() {}
DeleteLoadTestDataAction(
@Parameter(PARAM_DRY_RUN) boolean isDryRun,
MapreduceRunner mrRunner,
Response response,
Clock clock) {
this.isDryRun = isDryRun;
this.mrRunner = mrRunner;
this.response = response;
this.clock = clock;
}
@Override
public void run() {
@@ -87,14 +104,85 @@ public class DeleteLoadTestDataAction implements Runnable {
!RegistryEnvironment.get().equals(PRODUCTION),
"This mapreduce is not safe to run on PRODUCTION.");
mrRunner
.setJobName("Delete load test data")
.setModuleName("backend")
.runMapOnly(
new DeleteLoadTestDataMapper(isDryRun),
ImmutableList.of(
createEntityInput(ContactResource.class), createEntityInput(HostResource.class)))
.sendLinkToMapreduceConsole(response);
if (tm().isOfy()) {
mrRunner
.setJobName("Delete load test data")
.setModuleName("backend")
.runMapOnly(
new DeleteLoadTestDataMapper(isDryRun),
ImmutableList.of(
createEntityInput(ContactResource.class), createEntityInput(HostResource.class)))
.sendLinkToMapreduceConsole(response);
} else {
tm().transact(
() -> {
LOAD_TEST_REGISTRARS.forEach(this::deletePollMessages);
tm().loadAllOfStream(ContactResource.class).forEach(this::deleteContact);
tm().loadAllOfStream(HostResource.class).forEach(this::deleteHost);
});
}
}
private void deletePollMessages(String registrarId) {
ImmutableList<PollMessage> pollMessages =
PollFlowUtils.createPollMessageQuery(registrarId, END_OF_TIME).list();
if (isDryRun) {
logger.atInfo().log(
"Would delete %d poll messages for registrar %s.", pollMessages.size(), registrarId);
} else {
pollMessages.forEach(tm()::delete);
}
}
private void deleteContact(ContactResource contact) {
if (!LOAD_TEST_REGISTRARS.contains(contact.getPersistedCurrentSponsorClientId())) {
return;
}
// We cannot remove contacts from domains in the general case, so we cannot delete contacts
// that are linked to domains (since it would break the foreign keys)
if (EppResourceUtils.isLinked(contact.createVKey(), clock.nowUtc())) {
logger.atWarning().log(
"Cannot delete contact with repo ID %s since it is referenced from a domain",
contact.getRepoId());
return;
}
deleteResource(contact);
}
private void deleteHost(HostResource host) {
if (!LOAD_TEST_REGISTRARS.contains(host.getPersistedCurrentSponsorClientId())) {
return;
}
VKey<HostResource> hostVKey = host.createVKey();
// We can remove hosts from linked domains, so we should do so and then delete the hosts
ImmutableSet<VKey<DomainBase>> linkedDomains =
EppResourceUtils.getLinkedDomainKeys(hostVKey, clock.nowUtc(), null);
tm().loadByKeys(linkedDomains)
.values()
.forEach(
domain -> {
ImmutableSet<VKey<HostResource>> remainingHosts =
domain.getNsHosts().stream()
.filter(vkey -> !vkey.equals(hostVKey))
.collect(toImmutableSet());
tm().put(domain.asBuilder().setNameservers(remainingHosts).build());
});
deleteResource(host);
}
private void deleteResource(EppResource eppResource) {
// In SQL, the only objects parented on the resource are poll messages (deleted above) and
// history objects.
ImmutableList<HistoryEntry> historyObjects =
HistoryEntryDao.loadHistoryObjectsForResource(eppResource.createVKey());
if (isDryRun) {
logger.atInfo().log(
"Would delete repo ID %s along with %d history objects",
eppResource.getRepoId(), historyObjects.size());
} else {
historyObjects.forEach(tm()::delete);
tm().delete(eppResource);
}
}
/** Provides the map method that runs for each existing contact and host entity. */
@@ -125,12 +213,11 @@ public class DeleteLoadTestDataAction implements Runnable {
Key.create(EppResourceIndex.create(Key.create(resource)));
final Key<? extends ForeignKeyIndex<?>> fki = ForeignKeyIndex.createKey(resource);
int numEntitiesDeleted =
tm()
.transact(
tm().transact(
() -> {
// This ancestor query selects all descendant entities.
List<Key<Object>> resourceAndDependentKeys =
ofy().load().ancestor(resource).keys().list();
auditedOfy().load().ancestor(resource).keys().list();
ImmutableSet<Key<?>> allKeys =
new ImmutableSet.Builder<Key<?>>()
.add(fki)
@@ -140,7 +227,7 @@ public class DeleteLoadTestDataAction implements Runnable {
if (isDryRun) {
logger.atInfo().log("Would hard-delete the following entities: %s", allKeys);
} else {
ofy().deleteWithoutBackup().keys(allKeys);
auditedOfy().deleteWithoutBackup().keys(allKeys);
}
return allKeys.size();
});


@@ -20,9 +20,10 @@ import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static google.registry.config.RegistryEnvironment.PRODUCTION;
import static google.registry.mapreduce.MapreduceRunner.PARAM_DRY_RUN;
import static google.registry.model.ResourceTransferUtils.updateForeignKeyIndexDeletionTime;
import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.model.registry.Registries.getTldsOfType;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.model.reporting.HistoryEntry.Type.DOMAIN_DELETE;
import static google.registry.model.tld.Registries.getTldsOfType;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.request.Action.Method.POST;
import static google.registry.request.RequestParameters.PARAM_TLDS;
@@ -42,27 +43,31 @@ import google.registry.config.RegistryEnvironment;
import google.registry.dns.DnsQueue;
import google.registry.mapreduce.MapreduceRunner;
import google.registry.mapreduce.inputs.EppResourceInputs;
import google.registry.model.CreateAutoTimestamp;
import google.registry.model.EppResourceUtils;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.DomainHistory;
import google.registry.model.index.EppResourceIndex;
import google.registry.model.index.ForeignKeyIndex;
import google.registry.model.registry.Registry;
import google.registry.model.registry.Registry.TldType;
import google.registry.model.reporting.HistoryEntry;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Registry.TldType;
import google.registry.request.Action;
import google.registry.request.Parameter;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import javax.inject.Inject;
import org.hibernate.CacheMode;
import org.hibernate.ScrollMode;
import org.hibernate.ScrollableResults;
import org.hibernate.query.Query;
import org.joda.time.DateTime;
import org.joda.time.Duration;
/**
* Deletes all prober DomainBases and their subordinate history entries, poll messages, and
* billing events, along with their ForeignKeyDomainIndex and EppResourceIndex entities.
*
* <p>See: https://www.youtube.com/watch?v=xuuv0syoHnM
* Deletes all prober DomainBases and their subordinate history entries, poll messages, and billing
* events, along with their ForeignKeyDomainIndex and EppResourceIndex entities.
*/
@Action(
service = Action.Service.BACKEND,
@@ -73,10 +78,51 @@ public class DeleteProberDataAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
/**
* The maximum amount of time we allow a prober domain to be in use.
*
* <p>In practice, the prober's connection will time out well before this duration. This includes
* a decent buffer.
*/
private static final Duration DOMAIN_USED_DURATION = Duration.standardHours(1);
/**
* The minimum amount of time we want a domain to be "soft deleted".
*
* <p>The domain has to remain soft deleted for at least enough time for the DNS task to run and
* remove it from DNS itself. This is probably on the order of minutes.
*/
private static final Duration SOFT_DELETE_DELAY = Duration.standardHours(1);
private static final DnsQueue dnsQueue = DnsQueue.create();
// Domains to delete must:
// 1. Be in one of the prober TLDs
// 2. Not be a nic domain
// 3. Have no subordinate hosts
// 4. Not still be used (within an hour of creation time)
// 5. Either be active (creationTime <= now < deletionTime) or have been deleted a while ago (this
// prevents accidental double-map with the same key from immediately deleting active domains)
//
// Note: creationTime must be compared to a Java object (CreateAutoTimestamp) but deletionTime can
// be compared directly to the SQL timestamp (it's a DateTime)
private static final String DOMAIN_QUERY_STRING =
"FROM Domain d WHERE d.tld IN :tlds AND d.fullyQualifiedDomainName NOT LIKE 'nic.%' AND"
+ " (d.subordinateHosts IS EMPTY OR d.subordinateHosts IS NULL) AND d.creationTime <"
+ " :creationTimeCutoff AND ((d.creationTime <= :nowAutoTimestamp AND d.deletionTime >"
+ " current_timestamp()) OR d.deletionTime < :nowMinusSoftDeleteDelay) ORDER BY d.repoId";
/** Number of domains to retrieve and delete per SQL transaction. */
private static final int BATCH_SIZE = 1000;
@Inject @Parameter(PARAM_DRY_RUN) boolean isDryRun;
/** List of TLDs to work on. If empty - will work on all TLDs that end with .test. */
@Inject @Parameter(PARAM_TLDS) ImmutableSet<String> tlds;
@Inject @Config("registryAdminClientId") String registryAdminClientId;
@Inject
@Config("registryAdminClientId")
String registryAdminRegistrarId;
@Inject MapreduceRunner mrRunner;
@Inject Response response;
@Inject DeleteProberDataAction() {}
@@ -84,25 +130,14 @@ public class DeleteProberDataAction implements Runnable {
@Override
public void run() {
checkState(
!Strings.isNullOrEmpty(registryAdminClientId),
!Strings.isNullOrEmpty(registryAdminRegistrarId),
"Registry admin client ID must be configured for prober data deletion to work");
mrRunner
.setJobName("Delete prober data")
.setModuleName("backend")
.runMapOnly(
new DeleteProberDataMapper(getProberRoidSuffixes(), isDryRun, registryAdminClientId),
ImmutableList.of(EppResourceInputs.createKeyInput(DomainBase.class)))
.sendLinkToMapreduceConsole(response);
}
private ImmutableSet<String> getProberRoidSuffixes() {
checkArgument(
!PRODUCTION.equals(RegistryEnvironment.get())
|| tlds.stream().allMatch(tld -> tld.endsWith(".test")),
"On production, can only work on TLDs that end with .test");
ImmutableSet<String> deletableTlds =
getTldsOfType(TldType.TEST)
.stream()
getTldsOfType(TldType.TEST).stream()
.filter(tld -> tlds.isEmpty() ? tld.endsWith(".test") : tlds.contains(tld))
.collect(toImmutableSet());
checkArgument(
@@ -110,10 +145,161 @@ public class DeleteProberDataAction implements Runnable {
"If tlds are given, they must all exist and be TEST tlds. Given: %s, not found: %s",
tlds,
Sets.difference(tlds, deletableTlds));
return deletableTlds
.stream()
.map(tld -> Registry.get(tld).getRoidSuffix())
.collect(toImmutableSet());
ImmutableSet<String> proberRoidSuffixes =
deletableTlds.stream()
.map(tld -> Registry.get(tld).getRoidSuffix())
.collect(toImmutableSet());
if (tm().isOfy()) {
mrRunner
.setJobName("Delete prober data")
.setModuleName("backend")
.runMapOnly(
new DeleteProberDataMapper(proberRoidSuffixes, isDryRun, registryAdminRegistrarId),
ImmutableList.of(EppResourceInputs.createKeyInput(DomainBase.class)))
.sendLinkToMapreduceConsole(response);
} else {
runSqlJob(deletableTlds);
}
}
private void runSqlJob(ImmutableSet<String> deletableTlds) {
AtomicInteger softDeletedDomains = new AtomicInteger();
AtomicInteger hardDeletedDomains = new AtomicInteger();
jpaTm().transact(() -> processDomains(deletableTlds, softDeletedDomains, hardDeletedDomains));
logger.atInfo().log(
"%s %d domains.",
isDryRun ? "Would have soft-deleted" : "Soft-deleted", softDeletedDomains.get());
logger.atInfo().log(
"%s %d domains.",
isDryRun ? "Would have hard-deleted" : "Hard-deleted", hardDeletedDomains.get());
}
private void processDomains(
ImmutableSet<String> deletableTlds,
AtomicInteger softDeletedDomains,
AtomicInteger hardDeletedDomains) {
DateTime now = tm().getTransactionTime();
// Scroll through domains, soft-deleting as necessary (very few will be soft-deleted) and
// keeping track of which domains to hard-delete (there can be many, so we batch them up)
ScrollableResults scrollableResult =
jpaTm()
.query(DOMAIN_QUERY_STRING, DomainBase.class)
.setParameter("tlds", deletableTlds)
.setParameter(
"creationTimeCutoff", CreateAutoTimestamp.create(now.minus(DOMAIN_USED_DURATION)))
.setParameter("nowMinusSoftDeleteDelay", now.minus(SOFT_DELETE_DELAY))
.setParameter("nowAutoTimestamp", CreateAutoTimestamp.create(now))
.unwrap(Query.class)
.setCacheMode(CacheMode.IGNORE)
.scroll(ScrollMode.FORWARD_ONLY);
ImmutableList.Builder<String> domainRepoIdsToHardDelete = new ImmutableList.Builder<>();
ImmutableList.Builder<String> hostNamesToHardDelete = new ImmutableList.Builder<>();
for (int i = 1; scrollableResult.next(); i = (i + 1) % BATCH_SIZE) {
DomainBase domain = (DomainBase) scrollableResult.get(0);
processDomain(
domain,
domainRepoIdsToHardDelete,
hostNamesToHardDelete,
softDeletedDomains,
hardDeletedDomains);
// Batch the deletion and DB flush + session clearing so we don't OOM
if (i == 0) {
hardDeleteDomainsAndHosts(domainRepoIdsToHardDelete.build(), hostNamesToHardDelete.build());
domainRepoIdsToHardDelete = new ImmutableList.Builder<>();
hostNamesToHardDelete = new ImmutableList.Builder<>();
jpaTm().getEntityManager().flush();
jpaTm().getEntityManager().clear();
}
}
// process the remainder
hardDeleteDomainsAndHosts(domainRepoIdsToHardDelete.build(), hostNamesToHardDelete.build());
}
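The loop in processDomains() above follows a common Hibernate pattern: scroll forward-only through a large result set and periodically flush and clear the session so the persistence context stays bounded. Below is a generic, hedged sketch of that pattern using the plain Hibernate Session API; the HQL, batch size, and class names are placeholders, not the registry's transaction manager.
import java.util.function.Consumer;
import org.hibernate.CacheMode;
import org.hibernate.ScrollMode;
import org.hibernate.ScrollableResults;
import org.hibernate.Session;
// Sketch only: iterate a large query result in fixed-size batches, flushing
// pending writes and clearing the persistence context between batches.
class ScrollInBatchesSketch {
  private static final int BATCH_SIZE = 1000; // example batch size

  static void forEachInBatches(Session session, String hql, Consumer<Object> process) {
    ScrollableResults results =
        session
            .createQuery(hql)
            .setCacheMode(CacheMode.IGNORE) // skip the second-level cache
            .scroll(ScrollMode.FORWARD_ONLY);
    int seen = 0;
    while (results.next()) {
      process.accept(results.get(0));
      if (++seen % BATCH_SIZE == 0) {
        session.flush(); // push accumulated writes to the database
        session.clear(); // detach managed entities so the session stays small
      }
    }
    results.close();
  }
}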
private void processDomain(
DomainBase domain,
ImmutableList.Builder<String> domainRepoIdsToHardDelete,
ImmutableList.Builder<String> hostNamesToHardDelete,
AtomicInteger softDeletedDomains,
AtomicInteger hardDeletedDomains) {
// If the domain is still active, that means that the prober encountered a failure and did not
// successfully soft-delete the domain (thus leaving its DNS entry published). We soft-delete
// it now so that the DNS entry can be handled. The domain will then be hard-deleted the next
// time the job is run.
if (EppResourceUtils.isActive(domain, tm().getTransactionTime())) {
if (isDryRun) {
logger.atInfo().log(
"Would soft-delete the active domain: %s (%s)",
domain.getDomainName(), domain.getRepoId());
} else {
softDeleteDomain(domain, registryAdminRegistrarId, dnsQueue);
}
softDeletedDomains.incrementAndGet();
} else {
if (isDryRun) {
logger.atInfo().log(
"Would hard-delete the non-active domain: %s (%s) and its dependents",
domain.getDomainName(), domain.getRepoId());
} else {
domainRepoIdsToHardDelete.add(domain.getRepoId());
hostNamesToHardDelete.addAll(domain.getSubordinateHosts());
}
hardDeletedDomains.incrementAndGet();
}
}
private void hardDeleteDomainsAndHosts(
ImmutableList<String> domainRepoIds, ImmutableList<String> hostNames) {
jpaTm()
.query("DELETE FROM Host WHERE fullyQualifiedHostName IN :hostNames")
.setParameter("hostNames", hostNames)
.executeUpdate();
jpaTm()
.query("DELETE FROM BillingEvent WHERE domainRepoId IN :repoIds")
.setParameter("repoIds", domainRepoIds)
.executeUpdate();
jpaTm()
.query("DELETE FROM BillingRecurrence WHERE domainRepoId IN :repoIds")
.setParameter("repoIds", domainRepoIds)
.executeUpdate();
jpaTm()
.query("DELETE FROM BillingCancellation WHERE domainRepoId IN :repoIds")
.setParameter("repoIds", domainRepoIds)
.executeUpdate();
jpaTm()
.query("DELETE FROM DomainHistory WHERE domainRepoId IN :repoIds")
.setParameter("repoIds", domainRepoIds)
.executeUpdate();
jpaTm()
.query("DELETE FROM PollMessage WHERE domainRepoId IN :repoIds")
.setParameter("repoIds", domainRepoIds)
.executeUpdate();
jpaTm()
.query("DELETE FROM Domain WHERE repoId IN :repoIds")
.setParameter("repoIds", domainRepoIds)
.executeUpdate();
}
// Take a DNS queue + admin registrar id as input so that it can be called from the mapper as well
private static void softDeleteDomain(
DomainBase domain, String registryAdminRegistrarId, DnsQueue localDnsQueue) {
DomainBase deletedDomain =
domain.asBuilder().setDeletionTime(tm().getTransactionTime()).setStatusValues(null).build();
DomainHistory historyEntry =
new DomainHistory.Builder()
.setDomain(domain)
.setType(DOMAIN_DELETE)
.setModificationTime(tm().getTransactionTime())
.setBySuperuser(true)
.setReason("Deletion of prober data")
.setClientId(registryAdminRegistrarId)
.build();
// Note that we don't bother handling grace periods, billing events, pending transfers, poll
// messages, or auto-renews because those will all be hard-deleted the next time the job runs
// anyway.
tm().putAllWithoutBackup(ImmutableList.of(deletedDomain, historyEntry));
// updating foreign keys is a no-op in SQL
updateForeignKeyIndexDeletionTime(deletedDomain);
localDnsQueue.addDomainRefreshTask(deletedDomain.getDomainName());
}
/** Provides the map method that runs for each existing DomainBase entity. */
@@ -122,32 +308,17 @@ public class DeleteProberDataAction implements Runnable {
private static final DnsQueue dnsQueue = DnsQueue.create();
private static final long serialVersionUID = -7724537393697576369L;
/**
* The maximum amount of time we allow a prober domain to be in use.
*
* In practice, the prober's connection will time out well before this duration. This includes a
* decent buffer.
*
*/
private static final Duration DOMAIN_USED_DURATION = Duration.standardHours(1);
/**
* The minimum amount of time we want a domain to be "soft deleted".
*
* The domain has to remain soft deleted for at least enough time for the DNS task to run and
* remove it from DNS itself. This is probably on the order of minutes.
*/
private static final Duration SOFT_DELETE_DELAY = Duration.standardHours(1);
private final ImmutableSet<String> proberRoidSuffixes;
private final Boolean isDryRun;
private final String registryAdminClientId;
private final String registryAdminRegistrarId;
public DeleteProberDataMapper(
ImmutableSet<String> proberRoidSuffixes, Boolean isDryRun, String registryAdminClientId) {
ImmutableSet<String> proberRoidSuffixes,
Boolean isDryRun,
String registryAdminRegistrarId) {
this.proberRoidSuffixes = proberRoidSuffixes;
this.isDryRun = isDryRun;
this.registryAdminClientId = registryAdminClientId;
this.registryAdminRegistrarId = registryAdminRegistrarId;
}
@Override
@@ -166,7 +337,7 @@ public class DeleteProberDataAction implements Runnable {
}
private void deleteDomain(final Key<DomainBase> domainKey) {
final DomainBase domain = ofy().load().key(domainKey).now();
final DomainBase domain = auditedOfy().load().key(domainKey).now();
DateTime now = DateTime.now(UTC);
@@ -203,7 +374,7 @@ public class DeleteProberDataAction implements Runnable {
logger.atInfo().log(
"Would soft-delete the active domain: %s (%s)", domainName, domainKey);
} else {
softDeleteDomain(domain);
tm().transact(() -> softDeleteDomain(domain, registryAdminRegistrarId, dnsQueue));
}
getContext().incrementCounter("domains soft-deleted");
return;
@@ -220,14 +391,12 @@ public class DeleteProberDataAction implements Runnable {
final Key<? extends ForeignKeyIndex<?>> fki = ForeignKeyIndex.createKey(domain);
int entitiesDeleted =
tm()
.transact(
tm().transact(
() -> {
// This ancestor query selects all descendant HistoryEntries, BillingEvents,
// PollMessages,
// and TLD-specific entities, as well as the domain itself.
// PollMessages, and TLD-specific entities, as well as the domain itself.
List<Key<Object>> domainAndDependentKeys =
ofy().load().ancestor(domainKey).keys().list();
auditedOfy().load().ancestor(domainKey).keys().list();
ImmutableSet<Key<?>> allKeys =
new ImmutableSet.Builder<Key<?>>()
.add(fki)
@@ -237,41 +406,12 @@ public class DeleteProberDataAction implements Runnable {
if (isDryRun) {
logger.atInfo().log("Would hard-delete the following entities: %s", allKeys);
} else {
ofy().deleteWithoutBackup().keys(allKeys);
auditedOfy().deleteWithoutBackup().keys(allKeys);
}
return allKeys.size();
});
getContext().incrementCounter("domains hard-deleted");
getContext().incrementCounter("total entities hard-deleted", entitiesDeleted);
}
private void softDeleteDomain(final DomainBase domain) {
tm().transactNew(
() -> {
DomainBase deletedDomain =
domain
.asBuilder()
.setDeletionTime(tm().getTransactionTime())
.setStatusValues(null)
.build();
HistoryEntry historyEntry =
new HistoryEntry.Builder()
.setParent(domain)
.setType(DOMAIN_DELETE)
.setModificationTime(tm().getTransactionTime())
.setBySuperuser(true)
.setReason("Deletion of prober data")
.setClientId(registryAdminClientId)
.build();
// Note that we don't bother handling grace periods, billing events, pending
// transfers,
// poll messages, or auto-renews because these will all be hard-deleted the next
// time the
// mapreduce runs anyway.
ofy().save().entities(deletedDomain, historyEntry);
updateForeignKeyIndexDeletionTime(deletedDomain);
dnsQueue.addDomainRefreshTask(deletedDomain.getDomainName());
});
}
}
}


@@ -21,9 +21,12 @@ import static google.registry.mapreduce.MapreduceRunner.PARAM_DRY_RUN;
import static google.registry.mapreduce.inputs.EppResourceInputs.createChildEntityInput;
import static google.registry.model.common.Cursor.CursorType.RECURRING_BILLING;
import static google.registry.model.domain.Period.Unit.YEARS;
import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.model.reporting.HistoryEntry.Type.DOMAIN_AUTORENEW;
import static google.registry.persistence.transaction.QueryComposer.Comparator.EQ;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.persistence.transaction.TransactionManagerUtil.transactIfJpaTm;
import static google.registry.pricing.PricingEngineProxy.getDomainRenewCost;
import static google.registry.util.CollectionUtils.union;
import static google.registry.util.DateTimeUtils.START_OF_TIME;
@@ -38,10 +41,8 @@ import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Range;
import com.google.common.collect.Streams;
import com.google.common.flogger.FluentLogger;
import com.googlecode.objectify.Key;
import google.registry.mapreduce.MapreduceRunner;
import google.registry.mapreduce.inputs.NullInput;
import google.registry.model.EppResource;
import google.registry.model.ImmutableObject;
import google.registry.model.billing.BillingEvent;
import google.registry.model.billing.BillingEvent.Flag;
@@ -49,11 +50,12 @@ import google.registry.model.billing.BillingEvent.OneTime;
import google.registry.model.billing.BillingEvent.Recurring;
import google.registry.model.common.Cursor;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.DomainHistory;
import google.registry.model.domain.Period;
import google.registry.model.registry.Registry;
import google.registry.model.reporting.DomainTransactionRecord;
import google.registry.model.reporting.DomainTransactionRecord.TransactionReportField;
import google.registry.model.reporting.HistoryEntry;
import google.registry.model.tld.Registry;
import google.registry.persistence.VKey;
import google.registry.request.Action;
import google.registry.request.Parameter;
import google.registry.request.Response;
@@ -91,30 +93,88 @@ public class ExpandRecurringBillingEventsAction implements Runnable {
@Override
public void run() {
Cursor cursor = ofy().load().key(Cursor.createGlobalKey(RECURRING_BILLING)).now();
DateTime executeTime = clock.nowUtc();
DateTime persistedCursorTime = (cursor == null ? START_OF_TIME : cursor.getCursorTime());
DateTime persistedCursorTime =
transactIfJpaTm(
() ->
tm().loadByKeyIfPresent(Cursor.createGlobalVKey(RECURRING_BILLING))
.orElse(Cursor.createGlobal(RECURRING_BILLING, START_OF_TIME))
.getCursorTime());
DateTime cursorTime = cursorTimeParam.orElse(persistedCursorTime);
checkArgument(
cursorTime.isBefore(executeTime),
"Cursor time must be earlier than execution time.");
cursorTime.isBefore(executeTime), "Cursor time must be earlier than execution time.");
logger.atInfo().log(
"Running Recurring billing event expansion for billing time range [%s, %s).",
cursorTime, executeTime);
mrRunner
.setJobName("Expand Recurring billing events into synthetic OneTime events.")
.setModuleName("backend")
.runMapreduce(
new ExpandRecurringBillingEventsMapper(isDryRun, cursorTime, clock.nowUtc()),
new ExpandRecurringBillingEventsReducer(isDryRun, persistedCursorTime),
// Add an extra shard that maps over a null recurring event (see the mapper for why).
ImmutableList.of(
new NullInput<>(),
createChildEntityInput(
ImmutableSet.of(DomainBase.class), ImmutableSet.of(Recurring.class))))
.sendLinkToMapreduceConsole(response);
}
if (tm().isOfy()) {
mrRunner
.setJobName("Expand Recurring billing events into synthetic OneTime events.")
.setModuleName("backend")
.runMapreduce(
new ExpandRecurringBillingEventsMapper(isDryRun, cursorTime, clock.nowUtc()),
new ExpandRecurringBillingEventsReducer(isDryRun, persistedCursorTime),
// Add an extra shard that maps over a null recurring event (see the mapper for why).
ImmutableList.of(
new NullInput<>(),
createChildEntityInput(
ImmutableSet.of(DomainBase.class), ImmutableSet.of(Recurring.class))))
.sendLinkToMapreduceConsole(response);
} else {
int numBillingEventsSaved =
jpaTm()
.transact(
() ->
jpaTm()
.query(
"FROM BillingRecurrence "
+ "WHERE eventTime <= :executeTime "
+ "AND eventTime < recurrenceEndTime "
+ "ORDER BY id ASC",
Recurring.class)
.setParameter("executeTime", executeTime)
// Need to get a list from the transaction and then convert it to a stream
// for further processing. If we get a stream directly, each element gets
// processed downstream eagerly but Hibernate returns a
// ScrollableResultsIterator that cannot be advanced outside the
// transaction, resulting in an exception.
.getResultList())
.stream()
.map(
recurring ->
jpaTm()
.transact(
() ->
expandBillingEvent(recurring, executeTime, cursorTime, isDryRun)))
.reduce(0, Integer::sum);
if (!isDryRun) {
logger.atInfo().log("Saved %d OneTime billing events.", numBillingEventsSaved);
} else {
logger.atInfo().log("Generated %d OneTime billing events (dry run).", numBillingEventsSaved);
}
logger.atInfo().log(
"Recurring event expansion %s complete for billing event range [%s, %s).",
isDryRun ? "(dry run) " : "", cursorTime, executeTime);
tm().transact(
() -> {
// Check for the unlikely scenario where the cursor has been altered during the
// expansion.
DateTime currentCursorTime =
tm().loadByKeyIfPresent(Cursor.createGlobalVKey(RECURRING_BILLING))
.orElse(Cursor.createGlobal(RECURRING_BILLING, START_OF_TIME))
.getCursorTime();
if (!currentCursorTime.equals(persistedCursorTime)) {
throw new IllegalStateException(
String.format(
"Current cursor position %s does not match persisted cursor position %s.",
currentCursorTime, persistedCursorTime));
}
if (!isDryRun) {
tm().put(Cursor.createGlobal(RECURRING_BILLING, executeTime));
}
});
}
}
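The comment in the SQL branch above points at a general JPA pitfall: a lazily-scrolled result cannot be consumed once its transaction has ended, so the results must be fully materialized first. Here is a hedged, generic sketch of the safe shape using the plain EntityManager API; the entity and query are placeholders, not the registry's jpaTm() wrapper.
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
// Sketch only: materialize results with getResultList() inside the transaction,
// then process them afterwards; a lazily-backed getResultStream() would fail
// once the transaction and EntityManager are closed.
class MaterializeThenProcessSketch {
  static List<String> loadDomainNames(EntityManagerFactory emf) {
    EntityManager em = emf.createEntityManager();
    try {
      em.getTransaction().begin();
      List<String> names =
          em.createQuery("SELECT d.domainName FROM Domain d", String.class) // placeholder JPQL
              .getResultList();
      em.getTransaction().commit();
      return names; // safe to stream or iterate outside the transaction
    } finally {
      em.close();
    }
  }
}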
/** Mapper to expand {@link Recurring} billing events into synthetic {@link OneTime} events. */
public static class ExpandRecurringBillingEventsMapper
extends Mapper<Recurring, DateTime, DateTime> {
@@ -153,98 +213,7 @@ public class ExpandRecurringBillingEventsAction implements Runnable {
try {
numBillingEventsSaved =
tm().transactNew(
() -> {
ImmutableSet.Builder<OneTime> syntheticOneTimesBuilder =
new ImmutableSet.Builder<>();
final Registry tld =
Registry.get(getTldFromDomainName(recurring.getTargetId()));
// Determine the complete set of times at which this recurring event should
// occur (up to and including the runtime of the mapreduce).
Iterable<DateTime> eventTimes =
recurring
.getRecurrenceTimeOfYear()
.getInstancesInRange(
Range.closed(
recurring.getEventTime(),
earliestOf(recurring.getRecurrenceEndTime(), executeTime)));
// Convert these event times to billing times
final ImmutableSet<DateTime> billingTimes =
getBillingTimesInScope(eventTimes, cursorTime, executeTime, tld);
Key<? extends EppResource> domainKey = recurring.getParentKey().getParent();
Iterable<OneTime> oneTimesForDomain =
ofy().load().type(OneTime.class).ancestor(domainKey);
// Determine the billing times that already have OneTime events persisted.
ImmutableSet<DateTime> existingBillingTimes =
getExistingBillingTimes(oneTimesForDomain, recurring);
ImmutableSet.Builder<HistoryEntry> historyEntriesBuilder =
new ImmutableSet.Builder<>();
// Create synthetic OneTime events for all billing times that do not yet have
// an event persisted.
for (DateTime billingTime : difference(billingTimes, existingBillingTimes)) {
// Construct a new HistoryEntry that parents over the OneTime
HistoryEntry historyEntry =
new HistoryEntry.Builder()
.setBySuperuser(false)
.setClientId(recurring.getClientId())
.setModificationTime(tm().getTransactionTime())
.setParent(domainKey)
.setPeriod(Period.create(1, YEARS))
.setReason(
"Domain autorenewal by ExpandRecurringBillingEventsAction")
.setRequestedByRegistrar(false)
.setType(DOMAIN_AUTORENEW)
// Don't write a domain transaction record if the recurrence was
// ended prior to the billing time (i.e. a domain was deleted
// during the autorenew grace period).
.setDomainTransactionRecords(
recurring.getRecurrenceEndTime().isBefore(billingTime)
? ImmutableSet.of()
: ImmutableSet.of(
DomainTransactionRecord.create(
tld.getTldStr(),
// We report this when the autorenew grace period
// ends
billingTime,
TransactionReportField.netRenewsFieldFromYears(1),
1)))
.build();
historyEntriesBuilder.add(historyEntry);
DateTime eventTime = billingTime.minus(tld.getAutoRenewGracePeriodLength());
// Determine the cost for a one-year renewal.
Money renewCost = getDomainRenewCost(recurring.getTargetId(), eventTime, 1);
syntheticOneTimesBuilder.add(
new OneTime.Builder()
.setBillingTime(billingTime)
.setClientId(recurring.getClientId())
.setCost(renewCost)
.setEventTime(eventTime)
.setFlags(union(recurring.getFlags(), Flag.SYNTHETIC))
.setParent(historyEntry)
.setPeriodYears(1)
.setReason(recurring.getReason())
.setSyntheticCreationTime(executeTime)
.setCancellationMatchingBillingEvent(recurring.createVKey())
.setTargetId(recurring.getTargetId())
.build());
}
Set<HistoryEntry> historyEntries = historyEntriesBuilder.build();
Set<OneTime> syntheticOneTimes = syntheticOneTimesBuilder.build();
if (!isDryRun) {
ImmutableSet<ImmutableObject> entitiesToSave =
new ImmutableSet.Builder<ImmutableObject>()
.addAll(historyEntries)
.addAll(syntheticOneTimes)
.build();
ofy().save().entities(entitiesToSave).now();
}
return syntheticOneTimes.size();
});
() -> expandBillingEvent(recurring, executeTime, cursorTime, isDryRun));
} catch (Throwable t) {
getContext().incrementCounter("error: " + t.getClass().getSimpleName());
getContext().incrementCounter(ERROR_COUNTER);
@@ -256,45 +225,12 @@ public class ExpandRecurringBillingEventsAction implements Runnable {
if (!isDryRun) {
getContext().incrementCounter("Saved OneTime billing events", numBillingEventsSaved);
} else {
getContext().incrementCounter(
"Generated OneTime billing events (dry run)", numBillingEventsSaved);
getContext()
.incrementCounter("Generated OneTime billing events (dry run)", numBillingEventsSaved);
}
}
/**
* Filters a set of {@link DateTime}s down to event times that are in scope for a particular
* mapreduce run, given the cursor time and the mapreduce execution time.
*/
private ImmutableSet<DateTime> getBillingTimesInScope(
Iterable<DateTime> eventTimes,
DateTime cursorTime,
DateTime executeTime,
final Registry tld) {
return Streams.stream(eventTimes)
.map(eventTime -> eventTime.plus(tld.getAutoRenewGracePeriodLength()))
.filter(Range.closedOpen(cursorTime, executeTime))
.collect(toImmutableSet());
}
/**
* Determines an {@link ImmutableSet} of {@link DateTime}s that have already been persisted
* for a given recurring billing event.
*/
private ImmutableSet<DateTime> getExistingBillingTimes(
Iterable<BillingEvent.OneTime> oneTimesForDomain,
final BillingEvent.Recurring recurringEvent) {
return Streams.stream(oneTimesForDomain)
.filter(
billingEvent ->
recurringEvent
.createVKey()
.equals(billingEvent.getCancellationMatchingBillingEvent()))
.map(OneTime::getBillingTime)
.collect(toImmutableSet());
}
}
/**
* "Reducer" to advance the cursor after all map jobs have been completed. The NullInput into the
* mapper will cause the mapper to emit one timestamp pair (current cursor and execution time),
@@ -327,7 +263,8 @@ public class ExpandRecurringBillingEventsAction implements Runnable {
isDryRun ? "(dry run) " : "", cursorTime, executionTime);
tm().transact(
() -> {
Cursor cursor = ofy().load().key(Cursor.createGlobalKey(RECURRING_BILLING)).now();
Cursor cursor =
auditedOfy().load().key(Cursor.createGlobalKey(RECURRING_BILLING)).now();
DateTime currentCursorTime =
(cursor == null ? START_OF_TIME : cursor.getCursorTime());
if (!currentCursorTime.equals(expectedPersistedCursorTime)) {
@@ -342,4 +279,135 @@ public class ExpandRecurringBillingEventsAction implements Runnable {
});
}
}
private static int expandBillingEvent(
Recurring recurring, DateTime executeTime, DateTime cursorTime, boolean isDryRun) {
ImmutableSet.Builder<OneTime> syntheticOneTimesBuilder = new ImmutableSet.Builder<>();
final Registry tld = Registry.get(getTldFromDomainName(recurring.getTargetId()));
// Determine the complete set of times at which this recurring event should
// occur (up to and including the runtime of the mapreduce).
Iterable<DateTime> eventTimes =
recurring
.getRecurrenceTimeOfYear()
.getInstancesInRange(
Range.closed(
recurring.getEventTime(),
earliestOf(recurring.getRecurrenceEndTime(), executeTime)));
// Convert these event times to billing times
final ImmutableSet<DateTime> billingTimes =
getBillingTimesInScope(eventTimes, cursorTime, executeTime, tld);
VKey<DomainBase> domainKey =
VKey.create(
DomainBase.class, recurring.getDomainRepoId(), recurring.getParentKey().getParent());
Iterable<OneTime> oneTimesForDomain;
if (tm().isOfy()) {
oneTimesForDomain = auditedOfy().load().type(OneTime.class).ancestor(domainKey.getOfyKey());
} else {
oneTimesForDomain =
tm().createQueryComposer(OneTime.class)
.where("domainRepoId", EQ, recurring.getDomainRepoId())
.list();
}
// Determine the billing times that already have OneTime events persisted.
ImmutableSet<DateTime> existingBillingTimes =
getExistingBillingTimes(oneTimesForDomain, recurring);
ImmutableSet.Builder<DomainHistory> historyEntriesBuilder = new ImmutableSet.Builder<>();
// Create synthetic OneTime events for all billing times that do not yet have
// an event persisted.
for (DateTime billingTime : difference(billingTimes, existingBillingTimes)) {
// Construct a new HistoryEntry that parents over the OneTime
DomainHistory historyEntry =
new DomainHistory.Builder()
.setBySuperuser(false)
.setClientId(recurring.getClientId())
.setModificationTime(tm().getTransactionTime())
.setDomain(tm().loadByKey(domainKey))
.setPeriod(Period.create(1, YEARS))
.setReason("Domain autorenewal by ExpandRecurringBillingEventsAction")
.setRequestedByRegistrar(false)
.setType(DOMAIN_AUTORENEW)
// Don't write a domain transaction record if the recurrence was
// ended prior to the billing time (i.e. a domain was deleted
// during the autorenew grace period).
.setDomainTransactionRecords(
recurring.getRecurrenceEndTime().isBefore(billingTime)
? ImmutableSet.of()
: ImmutableSet.of(
DomainTransactionRecord.create(
tld.getTldStr(),
// We report this when the autorenew grace period
// ends
billingTime,
TransactionReportField.netRenewsFieldFromYears(1),
1)))
.build();
historyEntriesBuilder.add(historyEntry);
DateTime eventTime = billingTime.minus(tld.getAutoRenewGracePeriodLength());
// Determine the cost for a one-year renewal.
Money renewCost = getDomainRenewCost(recurring.getTargetId(), eventTime, 1);
syntheticOneTimesBuilder.add(
new OneTime.Builder()
.setBillingTime(billingTime)
.setClientId(recurring.getClientId())
.setCost(renewCost)
.setEventTime(eventTime)
.setFlags(union(recurring.getFlags(), Flag.SYNTHETIC))
.setParent(historyEntry)
.setPeriodYears(1)
.setReason(recurring.getReason())
.setSyntheticCreationTime(executeTime)
.setCancellationMatchingBillingEvent(recurring.createVKey())
.setTargetId(recurring.getTargetId())
.build());
}
Set<DomainHistory> historyEntries = historyEntriesBuilder.build();
Set<OneTime> syntheticOneTimes = syntheticOneTimesBuilder.build();
if (!isDryRun) {
ImmutableSet<ImmutableObject> entitiesToSave =
new ImmutableSet.Builder<ImmutableObject>()
.addAll(historyEntries)
.addAll(syntheticOneTimes)
.build();
tm().putAll(entitiesToSave);
}
return syntheticOneTimes.size();
}
/**
* Filters a set of {@link DateTime}s down to event times that are in scope for a particular
* mapreduce run, given the cursor time and the mapreduce execution time.
*/
protected static ImmutableSet<DateTime> getBillingTimesInScope(
Iterable<DateTime> eventTimes,
DateTime cursorTime,
DateTime executeTime,
final Registry tld) {
return Streams.stream(eventTimes)
.map(eventTime -> eventTime.plus(tld.getAutoRenewGracePeriodLength()))
.filter(Range.closedOpen(cursorTime, executeTime))
.collect(toImmutableSet());
}
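As a quick illustration of the scoping logic above, here is a minimal sketch with hypothetical dates and a hypothetical 45-day autorenew grace period (org.joda.time.Duration assumed to be imported); it is an editor's sketch, not code from this change.
    // Editor's sketch: hypothetical values only.
    DateTime eventTime = DateTime.parse("2021-01-01T00:00:00Z");
    DateTime billingTime = eventTime.plus(Duration.standardDays(45)); // 2021-02-15T00:00:00Z
    DateTime cursorTime = DateTime.parse("2021-02-01T00:00:00Z");
    DateTime executeTime = DateTime.parse("2021-03-01T00:00:00Z");
    // In scope because cursorTime <= billingTime < executeTime (Range.closedOpen).
    boolean inScope = Range.closedOpen(cursorTime, executeTime).contains(billingTime); // true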
/**
* Determines an {@link ImmutableSet} of {@link DateTime}s that have already been persisted for a
* given recurring billing event.
*/
private static ImmutableSet<DateTime> getExistingBillingTimes(
Iterable<BillingEvent.OneTime> oneTimesForDomain,
final BillingEvent.Recurring recurringEvent) {
return Streams.stream(oneTimesForDomain)
.filter(
billingEvent ->
recurringEvent
.createVKey()
.equals(billingEvent.getCancellationMatchingBillingEvent()))
.map(OneTime::getBillingTime)
.collect(toImmutableSet());
}
}


@@ -23,9 +23,10 @@ import static google.registry.batch.AsyncTaskEnqueuer.PARAM_REQUESTED_TIME;
import static google.registry.batch.AsyncTaskEnqueuer.QUEUE_ASYNC_HOST_RENAME;
import static google.registry.batch.AsyncTaskMetrics.OperationType.DNS_REFRESH;
import static google.registry.mapreduce.inputs.EppResourceInputs.createEntityInput;
import static google.registry.model.EppResourceUtils.getLinkedDomainKeys;
import static google.registry.model.EppResourceUtils.isActive;
import static google.registry.model.EppResourceUtils.isDeleted;
import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.util.DateTimeUtils.latestOf;
import static java.util.concurrent.TimeUnit.DAYS;
import static java.util.concurrent.TimeUnit.SECONDS;
@@ -44,7 +45,6 @@ import com.google.auto.value.AutoValue;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import com.google.common.flogger.FluentLogger;
import com.googlecode.objectify.Key;
import google.registry.batch.AsyncTaskMetrics.OperationResult;
import google.registry.dns.DnsQueue;
import google.registry.mapreduce.MapreduceRunner;
@@ -64,11 +64,13 @@ import google.registry.util.SystemClock;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.Optional;
import java.util.logging.Level;
import javax.annotation.Nullable;
import javax.inject.Inject;
import javax.inject.Named;
import org.apache.http.HttpStatus;
import org.joda.time.DateTime;
import org.joda.time.Duration;
@@ -86,6 +88,8 @@ public class RefreshDnsOnHostRenameAction implements Runnable {
@Inject Clock clock;
@Inject MapreduceRunner mrRunner;
@Inject @Named(QUEUE_ASYNC_HOST_RENAME) Queue pullQueue;
@Inject DnsQueue dnsQueue;
@Inject RequestStatusChecker requestStatusChecker;
@Inject Response response;
@Inject Retrier retrier;
@@ -123,7 +127,7 @@ public class RefreshDnsOnHostRenameAction implements Runnable {
}
ImmutableList.Builder<DnsRefreshRequest> requestsBuilder = new ImmutableList.Builder<>();
ImmutableList.Builder<Key<HostResource>> hostKeys = new ImmutableList.Builder<>();
ImmutableList.Builder<VKey<HostResource>> hostKeys = new ImmutableList.Builder<>();
final List<DnsRefreshRequest> requestsToDelete = new ArrayList<>();
for (TaskHandle task : tasks) {
@@ -153,7 +157,39 @@ public class RefreshDnsOnHostRenameAction implements Runnable {
} else {
logger.atInfo().log(
"Processing asynchronous DNS refresh for renamed hosts: %s", hostKeys.build());
runMapreduce(refreshRequests, lock);
if (tm().isOfy()) {
runMapreduce(refreshRequests, lock);
} else {
try {
refreshRequests.stream()
.flatMap(
request ->
getLinkedDomainKeys(request.hostKey(), request.lastUpdateTime(), null)
.stream())
.distinct()
.map(domainKey -> tm().transact(() -> tm().loadByKey(domainKey).getDomainName()))
.forEach(
domainName -> {
retrier.callWithRetry(
() -> dnsQueue.addDomainRefreshTask(domainName),
TransientFailureException.class);
logger.atInfo().log("Enqueued DNS refresh for domain %s.", domainName);
});
deleteTasksWithRetry(
refreshRequests,
getQueue(QUEUE_ASYNC_HOST_RENAME),
asyncTaskMetrics,
retrier,
OperationResult.SUCCESS);
} catch (Throwable t) {
String message = "Error refreshing DNS on host rename.";
logger.atSevere().withCause(t).log(message);
response.setPayload(message);
response.setStatus(HttpStatus.SC_INTERNAL_SERVER_ERROR);
} finally {
lock.get().release();
}
}
}
}
@@ -204,10 +240,10 @@ public class RefreshDnsOnHostRenameAction implements Runnable {
emit(true, true);
return;
}
Key<HostResource> referencingHostKey = null;
VKey<HostResource> referencingHostKey = null;
for (DnsRefreshRequest request : refreshRequests) {
if (isActive(domain, request.lastUpdateTime())
&& domain.getNameservers().contains(VKey.from(request.hostKey()))) {
&& domain.getNameservers().contains(request.hostKey())) {
referencingHostKey = request.hostKey();
break;
}
@@ -293,7 +329,8 @@ public class RefreshDnsOnHostRenameAction implements Runnable {
private static final long serialVersionUID = 1772812852271288622L;
abstract Key<HostResource> hostKey();
abstract VKey<HostResource> hostKey();
abstract DateTime lastUpdateTime();
abstract DateTime requestedTime();
abstract boolean isRefreshNeeded();
@@ -301,7 +338,8 @@ public class RefreshDnsOnHostRenameAction implements Runnable {
@AutoValue.Builder
abstract static class Builder {
abstract Builder setHostKey(Key<HostResource> hostKey);
abstract Builder setHostKey(VKey<HostResource> hostKey);
abstract Builder setLastUpdateTime(DateTime lastUpdateTime);
abstract Builder setRequestedTime(DateTime requestedTime);
abstract Builder setIsRefreshNeeded(boolean isRefreshNeeded);
@@ -314,10 +352,12 @@ public class RefreshDnsOnHostRenameAction implements Runnable {
*/
static DnsRefreshRequest createFromTask(TaskHandle task, DateTime now) throws Exception {
ImmutableMap<String, String> params = ImmutableMap.copyOf(task.extractParams());
Key<HostResource> hostKey =
Key.create(checkNotNull(params.get(PARAM_HOST_KEY), "Host to refresh not specified"));
VKey<HostResource> hostKey =
VKey.fromWebsafeKey(
checkNotNull(params.get(PARAM_HOST_KEY), "Host to refresh not specified"));
HostResource host =
checkNotNull(ofy().load().key(hostKey).now(), "Host to refresh doesn't exist");
tm().transact(() -> tm().loadByKeyIfPresent(hostKey))
.orElseThrow(() -> new NoSuchElementException("Host to refresh doesn't exist"));
boolean isHostDeleted =
isDeleted(host, latestOf(now, host.getUpdateTimestamp().getTimestamp()));
if (isHostDeleted) {

@@ -16,7 +16,6 @@ package google.registry.batch;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.request.Action.Method.POST;
@@ -29,15 +28,16 @@ import com.google.common.flogger.FluentLogger;
import com.google.common.net.MediaType;
import google.registry.config.RegistryConfig.Config;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.RegistryLock;
import google.registry.model.eppcommon.StatusValue;
import google.registry.model.registrar.Registrar;
import google.registry.model.registrar.RegistrarContact;
import google.registry.model.registry.RegistryLockDao;
import google.registry.model.tld.RegistryLockDao;
import google.registry.persistence.VKey;
import google.registry.request.Action;
import google.registry.request.Parameter;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.schema.domain.RegistryLock;
import google.registry.tools.DomainLockUtils;
import google.registry.util.DateTimeUtils;
import google.registry.util.EmailMessage;
@@ -125,6 +125,7 @@ public class RelockDomainAction implements Runnable {
response.setContentType(MediaType.PLAIN_TEXT_UTF_8);
// nb: DomainLockUtils relies on the JPA transaction being the outermost transaction
// if we have Datastore as the primary DB (if SQL is the primary DB, it's irrelevant)
jpaTm().transact(() -> tm().transact(this::relockDomain));
}
@@ -139,12 +140,8 @@ public class RelockDomainAction implements Runnable {
new IllegalArgumentException(
String.format("Unknown revision ID %d", oldUnlockRevisionId)));
domain =
ofy()
.load()
.type(DomainBase.class)
.id(oldLock.getRepoId())
.now()
.cloneProjectedAtTime(jpaTm().getTransactionTime());
tm().loadByKey(VKey.create(DomainBase.class, oldLock.getRepoId()))
.cloneProjectedAtTime(tm().getTransactionTime());
} catch (Throwable t) {
handleTransientFailure(Optional.ofNullable(oldLock), t);
return;

@@ -15,7 +15,7 @@
package google.registry.batch;
import static google.registry.mapreduce.MapreduceRunner.PARAM_FAST;
import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import com.google.appengine.tools.mapreduce.Mapper;
@@ -54,6 +54,8 @@ import javax.inject.Inject;
service = Action.Service.BACKEND,
path = "/_dr/task/resaveAllEppResources",
auth = Auth.AUTH_INTERNAL_OR_ADMIN)
// No longer needed in SQL. Subject to future removal.
@Deprecated
public class ResaveAllEppResourcesAction implements Runnable {
@Inject MapreduceRunner mrRunner;
@@ -104,13 +106,13 @@ public class ResaveAllEppResourcesAction implements Runnable {
boolean resaved =
tm().transact(
() -> {
EppResource originalResource = ofy().load().key(resourceKey).now();
EppResource originalResource = auditedOfy().load().key(resourceKey).now();
EppResource projectedResource =
originalResource.cloneProjectedAtTime(tm().getTransactionTime());
if (isFast && originalResource.equals(projectedResource)) {
return false;
} else {
ofy().save().entity(projectedResource).now();
auditedOfy().save().entity(projectedResource).now();
return true;
}
});

@@ -17,7 +17,6 @@ package google.registry.batch;
import static google.registry.batch.AsyncTaskEnqueuer.PARAM_REQUESTED_TIME;
import static google.registry.batch.AsyncTaskEnqueuer.PARAM_RESAVE_TIMES;
import static google.registry.batch.AsyncTaskEnqueuer.PARAM_RESOURCE_KEY;
import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import com.google.common.collect.ImmutableSet;
@@ -26,6 +25,7 @@ import com.google.common.flogger.FluentLogger;
import com.googlecode.objectify.Key;
import google.registry.model.EppResource;
import google.registry.model.ImmutableObject;
import google.registry.persistence.VKey;
import google.registry.request.Action;
import google.registry.request.Action.Method;
import google.registry.request.Parameter;
@@ -74,16 +74,17 @@ public class ResaveEntityAction implements Runnable {
public void run() {
logger.atInfo().log(
"Re-saving entity %s which was enqueued at %s.", resourceKey, requestedTime);
tm().transact(() -> {
ImmutableObject entity = ofy().load().key(resourceKey).now();
ofy().save().entity(
(entity instanceof EppResource)
? ((EppResource) entity).cloneProjectedAtTime(tm().getTransactionTime()) : entity
);
if (!resaveTimes.isEmpty()) {
asyncTaskEnqueuer.enqueueAsyncResave(entity, requestedTime, resaveTimes);
}
});
tm().transact(
() -> {
ImmutableObject entity = tm().loadByKey(VKey.from(resourceKey));
tm().put(
(entity instanceof EppResource)
? ((EppResource) entity).cloneProjectedAtTime(tm().getTransactionTime())
: entity);
if (!resaveTimes.isEmpty()) {
asyncTaskEnqueuer.enqueueAsyncResave(entity, requestedTime, resaveTimes);
}
});
response.setPayload("Entity re-saved.");
}
}

@@ -0,0 +1,341 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.batch;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.util.PreconditionsUtils.checkArgumentNotNull;
import static org.apache.http.HttpStatus.SC_INTERNAL_SERVER_ERROR;
import static org.apache.http.HttpStatus.SC_OK;
import static org.joda.time.DateTimeZone.UTC;
import com.google.auto.value.AutoValue;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.ImmutableSortedSet;
import com.google.common.collect.Streams;
import com.google.common.flogger.FluentLogger;
import com.google.common.net.MediaType;
import google.registry.config.RegistryConfig.Config;
import google.registry.flows.certs.CertificateChecker;
import google.registry.model.registrar.Registrar;
import google.registry.model.registrar.RegistrarContact;
import google.registry.model.registrar.RegistrarContact.Type;
import google.registry.request.Action;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.EmailMessage;
import google.registry.util.SendEmailService;
import java.util.Date;
import java.util.Optional;
import javax.inject.Inject;
import javax.mail.internet.AddressException;
import javax.mail.internet.InternetAddress;
import org.joda.time.DateTime;
import org.joda.time.Duration;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;
/** An action that sends notification emails to registrars whose certificates are expiring soon. */
@Action(
service = Action.Service.BACKEND,
path = SendExpiringCertificateNotificationEmailAction.PATH,
auth = Auth.AUTH_INTERNAL_OR_ADMIN)
public class SendExpiringCertificateNotificationEmailAction implements Runnable {
public static final String PATH = "/_dr/task/sendExpiringCertificateNotificationEmail";
/**
* Used as an offset when storing the last notification email sent date.
*
* <p>This handles edge cases where the update happens right around the day boundary. For
* instance, if the job starts at 2:00 am every day and finishes at 2:03 am the same day, then
* the next day at 2:00 am the difference is slightly less than a full day, which would make the
* gap between two successive email sent dates come out to the expected email interval in days
* plus one.
*/
protected static final Duration UPDATE_TIME_OFFSET = Duration.standardMinutes(10);
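A small worked example of the offset with hypothetical timestamps (an org.joda.time.Days import is assumed for the last line); this is an editor's sketch, not part of the change.
  DateTime sentAt = DateTime.parse("2021-08-30T02:03:00Z");
  DateTime storedAt = sentAt.minus(UPDATE_TIME_OFFSET); // 2021-08-30T01:53:00Z
  DateTime nextRun = DateTime.parse("2021-08-31T02:00:00Z");
  int days = Days.daysBetween(storedAt, nextRun).getDays(); // 1; without the offset it would be 0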
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static final DateTimeFormatter DATE_FORMATTER = DateTimeFormat.forPattern("yyyy-MM-dd");
private final CertificateChecker certificateChecker;
private final String expirationWarningEmailBodyText;
private final SendEmailService sendEmailService;
private final String expirationWarningEmailSubjectText;
private final InternetAddress gSuiteOutgoingEmailAddress;
private final Response response;
@Inject
public SendExpiringCertificateNotificationEmailAction(
@Config("expirationWarningEmailBodyText") String expirationWarningEmailBodyText,
@Config("expirationWarningEmailSubjectText") String expirationWarningEmailSubjectText,
@Config("gSuiteOutgoingEmailAddress") InternetAddress gSuiteOutgoingEmailAddress,
SendEmailService sendEmailService,
CertificateChecker certificateChecker,
Response response) {
this.certificateChecker = certificateChecker;
this.expirationWarningEmailSubjectText = expirationWarningEmailSubjectText;
this.sendEmailService = sendEmailService;
this.gSuiteOutgoingEmailAddress = gSuiteOutgoingEmailAddress;
this.expirationWarningEmailBodyText = expirationWarningEmailBodyText;
this.response = response;
}
@Override
public void run() {
response.setContentType(MediaType.PLAIN_TEXT_UTF_8);
try {
sendNotificationEmails();
response.setStatus(SC_OK);
} catch (Exception e) {
logger.atWarning().withCause(e).log(
"Exception thrown when sending expiring certificate notification emails.");
response.setStatus(SC_INTERNAL_SERVER_ERROR);
response.setPayload(String.format("Exception thrown with cause: %s", e));
}
}
/**
* Returns a list of registrars that should receive expiring certificate notification emails. Two
* certificates are considered for each registrar (the primary certificate and the failover
* certificate); a registrar should receive a notification if either certificate check returns true.
*/
@VisibleForTesting
ImmutableList<RegistrarInfo> getRegistrarsWithExpiringCertificates() {
logger.atInfo().log(
"Getting a list of registrars that should receive expiring notification emails.");
return Streams.stream(Registrar.loadAllCached())
.map(
registrar ->
RegistrarInfo.create(
registrar,
registrar.getClientCertificate().isPresent()
&& certificateChecker.shouldReceiveExpiringNotification(
registrar.getLastExpiringCertNotificationSentDate(),
registrar.getClientCertificate().get()),
registrar.getFailoverClientCertificate().isPresent()
&& certificateChecker.shouldReceiveExpiringNotification(
registrar.getLastExpiringFailoverCertNotificationSentDate(),
registrar.getFailoverClientCertificate().get())))
.filter(
registrarInfo ->
registrarInfo.isCertExpiring() || registrarInfo.isFailOverCertExpiring())
.collect(toImmutableList());
}
/**
* Sends a notification email to the registrar regarding the expiring certificate and returns true
* if it's sent successfully.
*/
@VisibleForTesting
boolean sendNotificationEmail(
Registrar registrar,
DateTime lastExpiringCertNotificationSentDate,
CertificateType certificateType,
Optional<String> certificate) {
if (!certificate.isPresent()
|| !certificateChecker.shouldReceiveExpiringNotification(
lastExpiringCertNotificationSentDate, certificate.get())) {
return false;
}
try {
ImmutableSet<InternetAddress> recipients = getEmailAddresses(registrar, Type.TECH);
if (recipients.isEmpty()) {
logger.atWarning().log(
"Registrar %s contains no email addresses to receive notification email.",
registrar.getRegistrarName());
return false;
}
sendEmailService.sendEmail(
EmailMessage.newBuilder()
.setFrom(gSuiteOutgoingEmailAddress)
.setSubject(expirationWarningEmailSubjectText)
.setBody(
getEmailBody(
registrar.getRegistrarName(),
certificateType,
certificateChecker.getCertificate(certificate.get()).getNotAfter(),
registrar.getClientId()))
.setRecipients(recipients)
.setCcs(getEmailAddresses(registrar, Type.ADMIN))
.build());
logger.atInfo().log(
"Sent an email to inform registrar %s that its %s SSL certificate will expire on %s.",
registrar.getRegistrarName(),
certificateType.getDisplayName(),
DATE_FORMATTER.print(lastExpiringCertNotificationSentDate));
/*
* A time offset is subtracted here so that the difference between two successive sent dates
* is always at least one full day. The resulting date is stored as the last notification sent
* date for the applicable certificate.
*/
updateLastNotificationSentDate(
registrar,
DateTime.now(UTC).minusMinutes((int) UPDATE_TIME_OFFSET.getStandardMinutes()),
certificateType);
return true;
} catch (Exception e) {
throw new RuntimeException(
String.format(
"Failed to send expiring certificate notification email to registrar %s.",
registrar.getRegistrarName()),
e);
}
}
/** Updates the last notification sent date in the database. */
@VisibleForTesting
void updateLastNotificationSentDate(
Registrar registrar, DateTime now, CertificateType certificateType) {
try {
tm().transact(
() -> {
Registrar.Builder newRegistrar = tm().loadByEntity(registrar).asBuilder();
switch (certificateType) {
case PRIMARY:
newRegistrar.setLastExpiringCertNotificationSentDate(now);
tm().put(newRegistrar.build());
logger.atInfo().log(
"Updated last notification email sent date to %s for %s certificate of "
+ "registrar %s.",
DATE_FORMATTER.print(now),
certificateType.getDisplayName(),
registrar.getRegistrarName());
break;
case FAILOVER:
newRegistrar.setLastExpiringFailoverCertNotificationSentDate(now);
tm().put(newRegistrar.build());
logger.atInfo().log(
"Updated last notification email sent date to %s for %s certificate of "
+ "registrar %s.",
DATE_FORMATTER.print(now),
certificateType.getDisplayName(),
registrar.getRegistrarName());
break;
default:
throw new IllegalArgumentException(
String.format(
"Unsupported certificate type: %s being passed in when updating "
+ "the last notification sent date to registrar %s.",
certificateType.toString(), registrar.getRegistrarName()));
}
});
} catch (Exception e) {
throw new RuntimeException(
String.format(
"Failed to update the last notification sent date to Registrar %s for the %s "
+ "certificate.",
registrar.getRegistrarName(), certificateType.getDisplayName()));
}
}
/** Sends notification emails to registrars with expiring certificates. */
@VisibleForTesting
int sendNotificationEmails() {
int emailsSent = 0;
for (RegistrarInfo registrarInfo : getRegistrarsWithExpiringCertificates()) {
Registrar registrar = registrarInfo.registrar();
if (registrarInfo.isCertExpiring()) {
sendNotificationEmail(
registrar,
registrar.getLastExpiringCertNotificationSentDate(),
CertificateType.PRIMARY,
registrar.getClientCertificate());
emailsSent++;
}
if (registrarInfo.isFailOverCertExpiring()) {
sendNotificationEmail(
registrar,
registrar.getLastExpiringFailoverCertNotificationSentDate(),
CertificateType.FAILOVER,
registrar.getFailoverClientCertificate());
emailsSent++;
}
}
logger.atInfo().log(
"Attempted to send %d expiring certificate notification emails.", emailsSent);
return emailsSent;
}
/** Returns the email addresses of the registrar contacts of the given type that should receive a notification email. */
@VisibleForTesting
ImmutableSet<InternetAddress> getEmailAddresses(Registrar registrar, Type contactType) {
ImmutableSortedSet<RegistrarContact> contacts = registrar.getContactsOfType(contactType);
ImmutableSet.Builder<InternetAddress> recipientEmails = new ImmutableSet.Builder<>();
for (RegistrarContact contact : contacts) {
try {
recipientEmails.add(new InternetAddress(contact.getEmailAddress()));
} catch (AddressException e) {
logger.atWarning().withCause(e).log(
"Registrar Contact email address %s of Registrar %s is invalid; skipping.",
contact.getEmailAddress(), registrar.getRegistrarName());
}
}
return recipientEmails.build();
}
/**
* Generates the email body from the registrar name, certificate type, expiration date, and
* registrar ID.
*/
@VisibleForTesting
@SuppressWarnings("lgtm[java/dereferenced-value-may-be-null]")
String getEmailBody(
String registrarName, CertificateType type, Date expirationDate, String registrarId) {
checkArgumentNotNull(expirationDate, "Expiration date cannot be null");
checkArgumentNotNull(type, "Certificate type cannot be null");
checkArgumentNotNull(registrarId, "Registrar Id cannot be null");
return String.format(
expirationWarningEmailBodyText,
registrarName,
type.getDisplayName(),
DATE_FORMATTER.print(new DateTime(expirationDate)),
registrarId);
}
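To make the placeholder order concrete, a sketch with a made-up template; the real text comes from the expirationWarningEmailBodyText config value, so the template below is purely illustrative.
  String template = "Dear %s, your %s certificate expires on %s. Your registrar ID is %s.";
  String body = String.format(template, "Acme Registrar", "primary", "2021-09-30", "acmereg");
  // -> "Dear Acme Registrar, your primary certificate expires on 2021-09-30. Your registrar ID is acmereg."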
/**
* Certificate types for X509Certificate.
*
* <p><b>Note:</b> These types are only used to indicate the type of expiring certificate in
* notification emails.
*/
protected enum CertificateType {
PRIMARY("primary"),
FAILOVER("fail-over");
private final String displayName;
CertificateType(String displayName) {
this.displayName = displayName;
}
public String getDisplayName() {
return displayName;
}
}
@AutoValue
public abstract static class RegistrarInfo {
static RegistrarInfo create(
Registrar registrar, boolean isCertExpiring, boolean isFailOverCertExpiring) {
return new AutoValue_SendExpiringCertificateNotificationEmailAction_RegistrarInfo(
registrar, isCertExpiring, isFailOverCertExpiring);
}
public abstract Registrar registrar();
public abstract boolean isCertExpiring();
public abstract boolean isFailOverCertExpiring();
}
}

@@ -19,19 +19,21 @@ import static javax.servlet.http.HttpServletResponse.SC_FORBIDDEN;
import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_OK;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryConfig.Config;
import google.registry.config.RegistryEnvironment;
import google.registry.persistence.PersistenceModule.SchemaManagerConnection;
import google.registry.request.Action;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.Retrier;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.function.Supplier;
import javax.inject.Inject;
import org.flywaydb.core.api.FlywayException;
/**
* Wipes out all Cloud SQL data in a Nomulus GCP environment.
@@ -46,22 +48,18 @@ import org.flywaydb.core.api.FlywayException;
public class WipeOutCloudSqlAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
// As a short-lived class, hardcode allowed projects here instead of using config files.
private static final ImmutableSet<String> ALLOWED_PROJECTS =
ImmutableSet.of("domain-registry-qa");
private static final ImmutableSet<RegistryEnvironment> FORBIDDEN_ENVIRONMENTS =
ImmutableSet.of(RegistryEnvironment.PRODUCTION, RegistryEnvironment.SANDBOX);
private final String projectId;
private final Supplier<Connection> connectionSupplier;
private final Response response;
private final Retrier retrier;
@Inject
WipeOutCloudSqlAction(
@Config("projectId") String projectId,
@SchemaManagerConnection Supplier<Connection> connectionSupplier,
Response response,
Retrier retrier) {
this.projectId = projectId;
this.connectionSupplier = connectionSupplier;
this.response = response;
this.retrier = retrier;
@@ -71,28 +69,93 @@ public class WipeOutCloudSqlAction implements Runnable {
public void run() {
response.setContentType(PLAIN_TEXT_UTF_8);
if (!ALLOWED_PROJECTS.contains(projectId)) {
if (FORBIDDEN_ENVIRONMENTS.contains(RegistryEnvironment.get())) {
response.setStatus(SC_FORBIDDEN);
response.setPayload("Wipeout is not allowed in " + projectId);
response.setPayload("Wipeout is not allowed in " + RegistryEnvironment.get());
return;
}
try {
retrier.callWithRetry(
() -> {
try (Connection conn = connectionSupplier.get();
Statement statement = conn.createStatement()) {
statement.execute("drop owned by schema_deployer;");
try (Connection conn = connectionSupplier.get()) {
dropAllTables(conn, listTables(conn));
dropAllSequences(conn, listSequences(conn));
}
return null;
},
e -> !(e instanceof FlywayException));
e -> !(e instanceof SQLException));
response.setStatus(SC_OK);
response.setPayload("Wiped out Cloud SQL in " + projectId);
response.setPayload("Wiped out Cloud SQL in " + RegistryEnvironment.get());
} catch (RuntimeException e) {
logger.atSevere().withCause(e).log("Failed to wipe out Cloud SQL data.");
response.setStatus(SC_INTERNAL_SERVER_ERROR);
response.setPayload("Failed to wipe out Cloud SQL in " + projectId);
response.setPayload("Failed to wipe out Cloud SQL in " + RegistryEnvironment.get());
}
}
/** Returns a list of all tables in the public schema of a PostgreSQL database. */
static ImmutableList<String> listTables(Connection connection) throws SQLException {
try (ResultSet resultSet =
connection.getMetaData().getTables(null, null, null, new String[] {"TABLE"})) {
ImmutableList.Builder<String> tables = new ImmutableList.Builder<>();
while (resultSet.next()) {
String schema = resultSet.getString("TABLE_SCHEM");
if (schema == null || !schema.equalsIgnoreCase("public")) {
continue;
}
String tableName = resultSet.getString("TABLE_NAME");
tables.add("public.\"" + tableName + "\"");
}
return tables.build();
}
}
static void dropAllTables(Connection conn, ImmutableList<String> tables) throws SQLException {
if (tables.isEmpty()) {
return;
}
try (Statement statement = conn.createStatement()) {
for (String table : tables) {
statement.addBatch(String.format("DROP TABLE IF EXISTS %s CASCADE;", table));
}
for (int code : statement.executeBatch()) {
if (code == Statement.EXECUTE_FAILED) {
throw new RuntimeException("Failed to drop some tables. Please check.");
}
}
}
}
/** Returns a list of all sequences in a PostgreSQL database. */
static ImmutableList<String> listSequences(Connection conn) throws SQLException {
try (Statement statement = conn.createStatement();
ResultSet resultSet =
statement.executeQuery("SELECT c.relname FROM pg_class c WHERE c.relkind = 'S';")) {
ImmutableList.Builder<String> sequences = new ImmutableList.Builder<>();
while (resultSet.next()) {
sequences.add('\"' + resultSet.getString(1) + '\"');
}
return sequences.build();
}
}
static void dropAllSequences(Connection conn, ImmutableList<String> sequences)
throws SQLException {
if (sequences.isEmpty()) {
return;
}
try (Statement statement = conn.createStatement()) {
for (String sequence : sequences) {
statement.addBatch(String.format("DROP SEQUENCE IF EXISTS %s CASCADE;", sequence));
}
for (int code : statement.executeBatch()) {
if (code == Statement.EXECUTE_FAILED) {
throw new RuntimeException("Failed to drop some sequences. Please check.");
}
}
}
}
}
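A minimal sketch of the intended call order for the helpers above, written as if it lived in this class (package-private access assumed); in practice the connection comes from the SchemaManagerConnection supplier, as run() shows, and the table name in the comment is just an example.
  static void wipeForTest(Connection conn) throws SQLException {
    // listTables() returns schema-qualified, quoted names such as public."Domain", so the batched
    // statement reads: DROP TABLE IF EXISTS public."Domain" CASCADE;
    dropAllTables(conn, listTables(conn));
    dropAllSequences(conn, listSequences(conn));
  }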

@@ -0,0 +1,115 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.batch;
import static com.google.common.net.MediaType.PLAIN_TEXT_UTF_8;
import static google.registry.beam.BeamUtils.createJobName;
import static javax.servlet.http.HttpServletResponse.SC_FORBIDDEN;
import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_OK;
import com.google.api.services.dataflow.Dataflow;
import com.google.api.services.dataflow.model.LaunchFlexTemplateParameter;
import com.google.api.services.dataflow.model.LaunchFlexTemplateRequest;
import com.google.api.services.dataflow.model.LaunchFlexTemplateResponse;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryConfig.Config;
import google.registry.config.RegistryEnvironment;
import google.registry.request.Action;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import javax.inject.Inject;
/**
* Wipes out all Cloud Datastore data in a Nomulus GCP environment.
*
* <p>This class is created for the QA environment, where migration testing with production data
* will happen. A regularly scheduled wipeout is a prerequisite to using production data there.
*/
@Action(
service = Action.Service.BACKEND,
path = "/_dr/task/wipeOutDatastore",
auth = Auth.AUTH_INTERNAL_OR_ADMIN)
public class WipeoutDatastoreAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static final String PIPELINE_NAME = "bulk_delete_datastore_pipeline";
private static final ImmutableSet<RegistryEnvironment> FORBIDDEN_ENVIRONMENTS =
ImmutableSet.of(RegistryEnvironment.PRODUCTION, RegistryEnvironment.SANDBOX);
private final String projectId;
private final String jobRegion;
private final Response response;
private final Dataflow dataflow;
private final String stagingBucketUrl;
private final Clock clock;
@Inject
WipeoutDatastoreAction(
@Config("projectId") String projectId,
@Config("defaultJobRegion") String jobRegion,
@Config("beamStagingBucketUrl") String stagingBucketUrl,
Clock clock,
Response response,
Dataflow dataflow) {
this.projectId = projectId;
this.jobRegion = jobRegion;
this.stagingBucketUrl = stagingBucketUrl;
this.clock = clock;
this.response = response;
this.dataflow = dataflow;
}
@Override
public void run() {
response.setContentType(PLAIN_TEXT_UTF_8);
if (FORBIDDEN_ENVIRONMENTS.contains(RegistryEnvironment.get())) {
response.setStatus(SC_FORBIDDEN);
response.setPayload("Wipeout is not allowed in " + RegistryEnvironment.get());
return;
}
try {
LaunchFlexTemplateParameter parameters =
new LaunchFlexTemplateParameter()
.setJobName(createJobName("bulk-delete-datastore-", clock))
.setContainerSpecGcsPath(
String.format("%s/%s_metadata.json", stagingBucketUrl, PIPELINE_NAME))
.setParameters(ImmutableMap.of("kindsToDelete", "*"));
LaunchFlexTemplateResponse launchResponse =
dataflow
.projects()
.locations()
.flexTemplates()
.launch(
projectId,
jobRegion,
new LaunchFlexTemplateRequest().setLaunchParameter(parameters))
.execute();
response.setStatus(SC_OK);
response.setPayload("Launched " + launchResponse.getJob().getName());
} catch (Exception e) {
String msg = String.format("Failed to launch %s.", PIPELINE_NAME);
logger.atSevere().withCause(e).log(msg);
response.setStatus(SC_INTERNAL_SERVER_ERROR);
response.setPayload(msg);
}
}
}
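For reference, a sketch of how the container spec path above resolves; the staging bucket URL is a placeholder, not a real configuration value.
  String stagingBucketUrl = "gs://my-project-beam/staging"; // hypothetical
  String containerSpecGcsPath =
      String.format("%s/%s_metadata.json", stagingBucketUrl, "bulk_delete_datastore_pipeline");
  // -> gs://my-project-beam/staging/bulk_delete_datastore_pipeline_metadata.json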

@@ -14,10 +14,14 @@
package google.registry.beam;
import static com.google.common.base.Preconditions.checkArgument;
import com.google.common.base.Joiner;
import com.google.common.collect.ImmutableList;
import com.google.common.io.Resources;
import google.registry.util.Clock;
import google.registry.util.ResourceUtils;
import java.util.regex.Pattern;
import org.apache.avro.generic.GenericRecord;
import org.apache.beam.sdk.io.gcp.bigquery.SchemaAndRecord;
@@ -41,8 +45,7 @@ public class BeamUtils {
ImmutableList<String> fieldNames, SchemaAndRecord schemaAndRecord) {
GenericRecord record = schemaAndRecord.getRecord();
ImmutableList<String> nullFields =
fieldNames
.stream()
fieldNames.stream()
.filter(fieldName -> record.get(fieldName) == null)
.collect(ImmutableList.toImmutableList());
String missingFieldList = Joiner.on(", ").join(nullFields);
@@ -61,4 +64,19 @@ public class BeamUtils {
public static String getQueryFromFile(Class<?> clazz, String filename) {
return ResourceUtils.readResourceUtf8(Resources.getResource(clazz, "sql/" + filename));
}
/** Creates a beam job name and validates that it conforms to the requirements. */
public static String createJobName(String prefix, Clock clock) {
// A Flex template job name must be unique and consist only of the characters [-a-z0-9], starting
// with a letter and ending with a letter or number, so we replace the "T" and "Z" in ISO 8601
// with lowercase letters.
String jobName =
String.format("%s-%s", prefix, clock.nowUtc().toString("yyyy-MM-dd't'HH-mm-ss'z'"));
checkArgument(
Pattern.compile("[a-z]([-a-z0-9]*[a-z0-9])?").matcher(jobName).matches(),
"The job name %s is illegal; it must consist only of the characters [-a-z0-9], "
+ "starting with a letter and ending with a letter or number.",
jobName);
return jobName;
}
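A worked example of the resulting name, using a hypothetical prefix and clock reading rather than any value from this change.
  // createJobName("resave", clock) with the clock at 2021-08-30T11:51:05Z produces:
  String jobName = String.format("%s-%s", "resave", "2021-08-30t11-51-05z");
  // -> "resave-2021-08-30t11-51-05z", which satisfies the [-a-z0-9] rule above.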
}

@@ -18,17 +18,23 @@ import static com.google.common.base.Verify.verify;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import google.registry.backup.AppEngineEnvironment;
import google.registry.model.contact.ContactResource;
import google.registry.persistence.transaction.CriteriaQueryBuilder;
import google.registry.persistence.transaction.JpaTransactionManager;
import java.io.Serializable;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
/** Toy pipeline that demonstrates how to use {@link JpaTransactionManager} in BEAM pipelines. */
/**
* Toy pipeline that demonstrates how to use {@link JpaTransactionManager} in BEAM pipelines.
*
* <p>This pipeline may also be used as an integration test for {@link RegistryJpaIO.Read} in a
* project with realistic data.
*/
public class JpaDemoPipeline implements Serializable {
public static void main(String[] args) {
@@ -38,23 +44,16 @@ public class JpaDemoPipeline implements Serializable {
Pipeline pipeline = Pipeline.create(options);
pipeline
.apply("Start", Create.of((Void) null))
.apply(
"Generate Elements",
ParDo.of(
new DoFn<Void, Void>() {
@ProcessElement
public void processElement(OutputReceiver<Void> output) {
for (int i = 0; i < 500; i++) {
output.output(null);
}
}
}))
"Read contacts",
RegistryJpaIO.read(
() -> CriteriaQueryBuilder.create(ContactResource.class).build(),
ContactResource::getRepoId))
.apply(
"Make Query",
"Count Contacts",
ParDo.of(
new DoFn<Void, Void>() {
private Counter counter = Metrics.counter("Demo", "Read");
new DoFn<String, Void>() {
private Counter counter = Metrics.counter("Contacts", "Read");
@ProcessElement
public void processElement() {

@@ -21,21 +21,32 @@ import com.google.auto.value.AutoValue;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.Streams;
import google.registry.backup.AppEngineEnvironment;
import google.registry.beam.common.RegistryQuery.CriteriaQuerySupplier;
import google.registry.model.ofy.ObjectifyService;
import google.registry.model.replay.SqlEntity;
import google.registry.persistence.transaction.JpaTransactionManager;
import google.registry.persistence.transaction.TransactionManagerFactory;
import java.io.Serializable;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ThreadLocalRandom;
import javax.annotation.Nullable;
import javax.persistence.criteria.CriteriaQuery;
import org.apache.beam.sdk.coders.Coder;
import org.apache.beam.sdk.coders.SerializableCoder;
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.GroupIntoBatches;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.Reshuffle;
import org.apache.beam.sdk.transforms.SerializableFunction;
import org.apache.beam.sdk.transforms.WithKeys;
import org.apache.beam.sdk.util.ShardedKey;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PBegin;
import org.apache.beam.sdk.values.PCollection;
/**
@@ -51,10 +62,165 @@ public final class RegistryJpaIO {
private RegistryJpaIO() {}
public static <R> Read<R, R> read(CriteriaQuerySupplier<R> query) {
return read(query, x -> x);
}
public static <R, T> Read<R, T> read(
CriteriaQuerySupplier<R> query, SerializableFunction<R, T> resultMapper) {
return Read.<R, T>builder().criteriaQuery(query).resultMapper(resultMapper).build();
}
public static <R, T> Read<R, T> read(
String sql, boolean nativeQuery, SerializableFunction<R, T> resultMapper) {
return read(sql, null, nativeQuery, resultMapper);
}
/**
* Returns a {@link Read} connector based on the given query string, which is treated as native
* SQL or JPQL depending on {@code nativeQuery}.
*
* <p>Users should take care to prevent SQL injection attacks.
*/
public static <R, T> Read<R, T> read(
String sql,
@Nullable Map<String, Object> parameter,
boolean nativeQuery,
SerializableFunction<R, T> resultMapper) {
Read.Builder<R, T> builder = Read.builder();
if (nativeQuery) {
builder.nativeQuery(sql, parameter);
} else {
builder.jpqlQuery(sql, parameter);
}
return builder.resultMapper(resultMapper).build();
}
public static <R, T> Read<R, T> read(
String jpql, Class<R> clazz, SerializableFunction<R, T> resultMapper) {
return read(jpql, null, clazz, resultMapper);
}
/**
* Returns a {@link Read} connector based on the given {@code jpql} typed query string.
*
* <p>Users should take care to prevent SQL injection attacks.
*/
public static <R, T> Read<R, T> read(
String jpql,
@Nullable Map<String, Object> parameter,
Class<R> clazz,
SerializableFunction<R, T> resultMapper) {
return Read.<R, T>builder()
.jpqlQuery(jpql, clazz, parameter)
.resultMapper(resultMapper)
.build();
}
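A sketch of the typed-JPQL overload above in a pipeline; the JPQL string, entity name, and parameter are hypothetical, and the usual Beam and registry imports (PCollection, ImmutableMap, DateTime, DomainBase) are assumed.
  PCollection<String> repoIds =
      pipeline.apply(
          "Read domain repo IDs",
          RegistryJpaIO.read(
              "SELECT d FROM Domain d WHERE d.deletionTime > :now",
              ImmutableMap.<String, Object>of("now", DateTime.parse("2021-08-30T00:00:00Z")),
              DomainBase.class,
              DomainBase::getRepoId));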
public static <T> Write<T> write() {
return Write.<T>builder().build();
}
/**
* A {@link PTransform transform} that transactionally executes a JPA {@link CriteriaQuery} and
* adds the results to the BEAM pipeline. Users have the option to transform the results before
* sending them to the next stages.
*/
@AutoValue
public abstract static class Read<R, T> extends PTransform<PBegin, PCollection<T>> {
public static final String DEFAULT_NAME = "RegistryJpaIO.Read";
abstract String name();
abstract RegistryQuery<R> query();
abstract SerializableFunction<R, T> resultMapper();
abstract Coder<T> coder();
abstract Builder<R, T> toBuilder();
@Override
@SuppressWarnings("deprecation") // Reshuffle still recommended by GCP.
public PCollection<T> expand(PBegin input) {
return input
.apply("Starting " + name(), Create.of((Void) null))
.apply("Run query for " + name(), ParDo.of(new QueryRunner<>(query(), resultMapper())))
.setCoder(coder())
.apply("Reshuffle", Reshuffle.viaRandomKey());
}
public Read<R, T> withName(String name) {
return toBuilder().name(name).build();
}
public Read<R, T> withResultMapper(SerializableFunction<R, T> mapper) {
return toBuilder().resultMapper(mapper).build();
}
public Read<R, T> withCoder(Coder<T> coder) {
return toBuilder().coder(coder).build();
}
static <R, T> Builder<R, T> builder() {
return new AutoValue_RegistryJpaIO_Read.Builder<R, T>()
.name(DEFAULT_NAME)
.coder(SerializableCoder.of(Serializable.class));
}
@AutoValue.Builder
public abstract static class Builder<R, T> {
abstract Builder<R, T> name(String name);
abstract Builder<R, T> query(RegistryQuery<R> query);
abstract Builder<R, T> resultMapper(SerializableFunction<R, T> mapper);
abstract Builder<R, T> coder(Coder coder);
abstract Read<R, T> build();
Builder<R, T> criteriaQuery(CriteriaQuerySupplier<R> criteriaQuery) {
return query(RegistryQuery.createQuery(criteriaQuery));
}
Builder<R, T> nativeQuery(String sql, Map<String, Object> parameters) {
return query(RegistryQuery.createQuery(sql, parameters, true));
}
Builder<R, T> jpqlQuery(String jpql, Map<String, Object> parameters) {
return query(RegistryQuery.createQuery(jpql, parameters, false));
}
Builder<R, T> jpqlQuery(String jpql, Class<R> clazz, Map<String, Object> parameters) {
return query(RegistryQuery.createQuery(jpql, parameters, clazz));
}
}
static class QueryRunner<R, T> extends DoFn<Void, T> {
private final RegistryQuery<R> query;
private final SerializableFunction<R, T> resultMapper;
QueryRunner(RegistryQuery<R> query, SerializableFunction<R, T> resultMapper) {
this.query = query;
this.resultMapper = resultMapper;
}
@ProcessElement
public void processElement(OutputReceiver<T> outputReceiver) {
// AppEngineEnvironment is needed for handling VKeys, which involve Ofy keys. Unlike
// SqlBatchWriter, it is unnecessary to initialize ObjectifyService in this class.
try (AppEngineEnvironment env = new AppEngineEnvironment()) {
// TODO(b/187210388): JpaTransactionManager should support non-transactional query.
jpaTm()
.transactNoRetry(
() -> query.stream().map(resultMapper::apply).forEach(outputReceiver::output));
}
}
}
}
/**
* A {@link PTransform transform} that writes a PCollection of entities to the SQL database using
* the {@link JpaTransactionManager}.
@@ -182,8 +348,9 @@ public final class RegistryJpaIO {
@Setup
public void setup() {
// Below is needed as long as Objectify keys are still involved in the handling of SQL
// entities (e.g., in VKeys).
// AppEngineEnvironment is needed as long as Objectify keys are still involved in the handling
// of SQL entities (e.g., in VKeys). ObjectifyService needs to be initialized when conversion
// between Ofy entity and Datastore entity is needed.
try (AppEngineEnvironment env = new AppEngineEnvironment()) {
ObjectifyService.initOfy();
}
@@ -192,17 +359,17 @@ public final class RegistryJpaIO {
@ProcessElement
public void processElement(@Element KV<ShardedKey<Integer>, Iterable<T>> kv) {
try (AppEngineEnvironment env = new AppEngineEnvironment()) {
ImmutableList<Object> ofyEntities =
ImmutableList<Object> entities =
Streams.stream(kv.getValue())
.map(this.jpaConverter::apply)
// TODO(b/177340730): post migration delete the line below.
.filter(Objects::nonNull)
.collect(ImmutableList.toImmutableList());
try {
jpaTm().transact(() -> jpaTm().putAll(ofyEntities));
counter.inc(ofyEntities.size());
jpaTm().transact(() -> jpaTm().putAll(entities));
counter.inc(entities.size());
} catch (RuntimeException e) {
processSingly(ofyEntities);
processSingly(entities);
}
}
}
@@ -211,19 +378,22 @@ public final class RegistryJpaIO {
* Writes entities in a failed batch one by one to identify the first bad entity and throws a
* {@link RuntimeException} on it.
*/
private void processSingly(ImmutableList<Object> ofyEntities) {
for (Object ofyEntity : ofyEntities) {
private void processSingly(ImmutableList<Object> entities) {
for (Object entity : entities) {
try {
jpaTm().transact(() -> jpaTm().put(ofyEntity));
jpaTm().transact(() -> jpaTm().put(entity));
counter.inc();
} catch (RuntimeException e) {
throw new RuntimeException(toOfyKey(ofyEntity).toString(), e);
throw new RuntimeException(toEntityKeyString(entity), e);
}
}
}
private com.googlecode.objectify.Key<?> toOfyKey(Object ofyEntity) {
return com.googlecode.objectify.Key.create(ofyEntity);
private String toEntityKeyString(Object entity) {
if (entity instanceof SqlEntity) {
return ((SqlEntity) entity).getPrimaryKeyString();
}
return "Non-SqlEntity: " + String.valueOf(entity);
}
}
}

@@ -0,0 +1,116 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.common;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import java.io.Serializable;
import java.util.Map;
import java.util.function.Supplier;
import java.util.stream.Stream;
import javax.annotation.Nullable;
import javax.persistence.EntityManager;
import javax.persistence.Query;
import javax.persistence.TypedQuery;
import javax.persistence.criteria.CriteriaQuery;
/** Interface for query instances used by {@link RegistryJpaIO.Read}. */
public interface RegistryQuery<T> extends Serializable {
Stream<T> stream();
interface CriteriaQuerySupplier<T> extends Supplier<CriteriaQuery<T>>, Serializable {}
/**
* Returns a {@link RegistryQuery} that creates a string query from constant text.
*
* @param nativeQuery whether the given string is to be interpreted as a native query or JPQL.
* @param parameters parameters to be substituted in the query.
* @param <T> Type of each row in the result set, {@link Object} in single-select queries, and
* {@code Object[]} in multi-select queries.
*/
static <T> RegistryQuery<T> createQuery(
String sql, @Nullable Map<String, Object> parameters, boolean nativeQuery) {
return () -> {
EntityManager entityManager = jpaTm().getEntityManager();
Query query =
nativeQuery ? entityManager.createNativeQuery(sql) : entityManager.createQuery(sql);
if (parameters != null) {
parameters.forEach(query::setParameter);
}
@SuppressWarnings("unchecked")
Stream<T> resultStream = query.getResultStream();
return nativeQuery ? resultStream : resultStream.map(e -> detach(entityManager, e));
};
}
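To illustrate the row types mentioned above, a sketch with hypothetical native queries; the table and column names are placeholders.
  // Multi-select rows come back as Object[]; single-select rows come back as a single Object.
  RegistryQuery<Object[]> multiColumn =
      RegistryQuery.createQuery(
          "SELECT repo_id, creation_time FROM \"Domain\"", null, /* nativeQuery= */ true);
  RegistryQuery<Object> singleColumn =
      RegistryQuery.createQuery("SELECT repo_id FROM \"Domain\"", null, /* nativeQuery= */ true);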
/**
* Returns a {@link RegistryQuery} that creates a typed JPQL query from constant text.
*
* @param parameters parameters to be substituted in the query.
* @param <T> Type of each row in the result set.
*/
static <T> RegistryQuery<T> createQuery(
String jpql, @Nullable Map<String, Object> parameters, Class<T> clazz) {
return () -> {
TypedQuery<T> query = jpaTm().query(jpql, clazz);
if (parameters != null) {
parameters.forEach(query::setParameter);
}
return query.getResultStream();
};
}
/**
* Returns a {@link RegistryQuery} from a {@link CriteriaQuery} supplier.
*
* <p>A serializable supplier is needed because {@link CriteriaQuery} itself must be created
* within a transaction, and we are not in a transaction yet when this function is called to set
* up the pipeline.
*
* @param <T> Type of each row in the result set.
*/
static <T> RegistryQuery<T> createQuery(CriteriaQuerySupplier<T> criteriaQuery) {
return () -> jpaTm().query(criteriaQuery.get()).getResultStream();
}
/**
* Removes an object from the JPA session cache if applicable.
*
* @param object An object that represents a row in the result set. It may be a JPA entity, a
* non-entity object, or an array that holds JPA entities and/or non-entities.
*/
static <T> T detach(EntityManager entityManager, T object) {
if (object.getClass().isArray()) {
for (Object arrayElement : (Object[]) object) {
detachObject(entityManager, arrayElement);
}
} else {
detachObject(entityManager, object);
}
return object;
}
static void detachObject(EntityManager entityManager, Object object) {
Class<?> objectClass = object.getClass();
if (objectClass.isPrimitive() || objectClass == String.class) {
return;
}
try {
entityManager.detach(object);
} catch (IllegalArgumentException e) {
// Not an entity. Do nothing.
}
}
}

@@ -87,20 +87,18 @@ public class BulkDeleteDatastorePipeline {
private final BulkDeletePipelineOptions options;
private final Pipeline pipeline;
BulkDeleteDatastorePipeline(BulkDeletePipelineOptions options) {
this.options = options;
pipeline = Pipeline.create(options);
}
public void run() {
setupPipeline();
Pipeline pipeline = Pipeline.create(options);
setupPipeline(pipeline);
pipeline.run();
}
@SuppressWarnings("deprecation") // org.apache.beam.sdk.transforms.Reshuffle
private void setupPipeline() {
private void setupPipeline(Pipeline pipeline) {
checkState(
!FORBIDDEN_PROJECTS.contains(options.getProject()),
"Bulk delete is forbidden in %s",

@@ -505,7 +505,7 @@ public class DatastoreV1 {
}
@StartBundle
public void startBundle(StartBundleContext c) throws Exception {
public void startBundle(StartBundleContext c) {
datastore =
datastoreFactory.getDatastore(
c.getPipelineOptions(), v1Options.getProjectId(), v1Options.getLocalhost());
@@ -548,7 +548,7 @@ public class DatastoreV1 {
}
@StartBundle
public void startBundle(StartBundleContext c) throws Exception {
public void startBundle(StartBundleContext c) {
datastore =
datastoreFactory.getDatastore(
c.getPipelineOptions(), options.getProjectId(), options.getLocalhost());
@@ -556,7 +556,7 @@ public class DatastoreV1 {
}
@ProcessElement
public void processElement(ProcessContext c) throws Exception {
public void processElement(ProcessContext c) {
Query query = c.element();
// If query has a user set limit, then do not split.
@@ -626,7 +626,7 @@ public class DatastoreV1 {
}
@StartBundle
public void startBundle(StartBundleContext c) throws Exception {
public void startBundle(StartBundleContext c) {
datastore =
datastoreFactory.getDatastore(
c.getPipelineOptions(), options.getProjectId(), options.getLocalhost());

@@ -93,7 +93,7 @@ public final class BackupPaths {
checkArgument(!isNullOrEmpty(exportDir), "Null or empty exportDir.");
checkArgument(!isNullOrEmpty(kind), "Null or empty kind.");
checkArgument(shard >= 0, "Negative shard %s not allowed.", shard);
return String.format(EXPORT_PATTERN_TEMPLATE, exportDir, kind, Integer.toString(shard));
return String.format(EXPORT_PATTERN_TEMPLATE, exportDir, kind, shard);
}
/** Returns an {@link ImmutableList} of regex patterns that match all CommitLog files. */

@@ -1,201 +0,0 @@
// Copyright 2020 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.initsql;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Preconditions.checkState;
import static com.google.common.base.Strings.isNullOrEmpty;
import com.google.common.base.Splitter;
import dagger.Component;
import dagger.Lazy;
import dagger.Module;
import dagger.Provides;
import google.registry.config.CredentialModule;
import google.registry.config.RegistryConfig.Config;
import google.registry.config.RegistryConfig.ConfigModule;
import google.registry.keyring.kms.KmsModule;
import google.registry.persistence.PersistenceModule;
import google.registry.persistence.PersistenceModule.JdbcJpaTm;
import google.registry.persistence.PersistenceModule.SocketFactoryJpaTm;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.persistence.transaction.JpaTransactionManager;
import google.registry.privileges.secretmanager.SecretManagerModule;
import google.registry.util.UtilsModule;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.channels.Channels;
import java.nio.charset.StandardCharsets;
import java.util.List;
import javax.annotation.Nullable;
import javax.inject.Singleton;
import org.apache.beam.sdk.io.FileSystems;
import org.apache.beam.sdk.io.fs.ResourceId;
/**
* Provides bindings for {@link JpaTransactionManager} to Cloud SQL.
*
* <p>This module is intended for use in BEAM pipelines, and uses a BEAM utility to access GCS like
* a regular file system.
*/
@Module
public class BeamJpaModule {
private static final String GCS_SCHEME = "gs://";
@Nullable private final String sqlAccessInfoFile;
@Nullable private final String cloudKmsProjectId;
@Nullable private final TransactionIsolationLevel isolationOverride;
/**
* Constructs a new instance of {@link BeamJpaModule}.
*
* <p>Note: it is an unfortunately necessary antipattern to check for the validity of
* sqlAccessInfoFile in {@link #provideCloudSqlAccessInfo} rather than in the constructor.
* This is a restriction imposed upon us by Dagger. Specifically, because we use
* this in at least one {@link google.registry.tools.RegistryTool} command, it must be
* instantiated in {@code google.registry.tools.RegistryToolComponent} for all possible commands;
* Dagger doesn't permit it to ever be null. For the vast majority of commands, it will never be
* used (so a null credential file path is fine in those cases).
*
* @param sqlAccessInfoFile the path to a Cloud SQL credential file. This must refer to either a
* real encrypted file on GCS as returned by {@link
* BackupPaths#getCloudSQLCredentialFilePatterns} or an unencrypted file on local filesystem
* with credentials to a test database.
* @param cloudKmsProjectId the GCP project where the credential decryption key can be found
* @param isolationOverride the desired Transaction Isolation level for all JDBC connections
*/
public BeamJpaModule(
@Nullable String sqlAccessInfoFile,
@Nullable String cloudKmsProjectId,
@Nullable TransactionIsolationLevel isolationOverride) {
this.sqlAccessInfoFile = sqlAccessInfoFile;
this.cloudKmsProjectId = cloudKmsProjectId;
this.isolationOverride = isolationOverride;
}
public BeamJpaModule(@Nullable String sqlAccessInfoFile, @Nullable String cloudKmsProjectId) {
this(sqlAccessInfoFile, cloudKmsProjectId, null);
}
/** Returns true if the credential file is on GCS (and therefore expected to be encrypted). */
private boolean isCloudSqlCredential() {
return sqlAccessInfoFile.startsWith(GCS_SCHEME);
}
@Provides
@Singleton
SqlAccessInfo provideCloudSqlAccessInfo(Lazy<CloudSqlCredentialDecryptor> lazyDecryptor) {
checkArgument(!isNullOrEmpty(sqlAccessInfoFile), "Null or empty credentialFilePath");
String line = readOnlyLineFromCredentialFile();
if (isCloudSqlCredential()) {
line = lazyDecryptor.get().decrypt(line);
}
// See ./BackupPaths.java for explanation of the line format.
List<String> parts = Splitter.on(' ').splitToList(line.trim());
checkState(parts.size() == 3, "Expecting three phrases in %s", line);
if (isCloudSqlCredential()) {
return SqlAccessInfo.createCloudSqlAccessInfo(parts.get(0), parts.get(1), parts.get(2));
} else {
return SqlAccessInfo.createLocalSqlAccessInfo(parts.get(0), parts.get(1), parts.get(2));
}
}
String readOnlyLineFromCredentialFile() {
try {
ResourceId resourceId = FileSystems.matchSingleFileSpec(sqlAccessInfoFile).resourceId();
try (BufferedReader reader =
new BufferedReader(
new InputStreamReader(
Channels.newInputStream(FileSystems.open(resourceId)), StandardCharsets.UTF_8))) {
return reader.readLine();
}
} catch (IOException e) {
throw new RuntimeException(e);
}
}
@Provides
@Config("beamCloudSqlJdbcUrl")
String provideJdbcUrl(SqlAccessInfo sqlAccessInfo) {
return sqlAccessInfo.jdbcUrl();
}
@Provides
@Config("beamCloudSqlInstanceConnectionName")
String provideSqlInstanceName(SqlAccessInfo sqlAccessInfo) {
return sqlAccessInfo
.cloudSqlInstanceName()
.orElseThrow(() -> new IllegalStateException("Cloud SQL not provisioned."));
}
@Provides
@Config("beamCloudSqlUsername")
String provideSqlUsername(SqlAccessInfo sqlAccessInfo) {
return sqlAccessInfo.user();
}
@Provides
@Config("beamCloudSqlPassword")
String provideSqlPassword(SqlAccessInfo sqlAccessInfo) {
return sqlAccessInfo.password();
}
@Provides
@Config("beamCloudKmsProjectId")
String kmsProjectId() {
return cloudKmsProjectId;
}
@Provides
@Config("beamCloudKmsKeyRing")
static String keyRingName() {
return "nomulus-tool-keyring";
}
@Provides
@Config("beamIsolationOverride")
@Nullable
TransactionIsolationLevel providesIsolationOverride() {
return isolationOverride;
}
@Provides
@Config("beamHibernateHikariMaximumPoolSize")
static int getBeamHibernateHikariMaximumPoolSize() {
// TODO(weiminyu): make this configurable. Should be equal to number of cores.
return 4;
}
@Singleton
@Component(
modules = {
ConfigModule.class,
CredentialModule.class,
BeamJpaModule.class,
KmsModule.class,
PersistenceModule.class,
SecretManagerModule.class,
UtilsModule.class
})
public interface JpaTransactionManagerComponent {
@SocketFactoryJpaTm
JpaTransactionManager cloudSqlJpaTransactionManager();
@JdbcJpaTm
JpaTransactionManager localDbJpaTransactionManager();
}
}
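A minimal sketch of the credential line that provideCloudSqlAccessInfo parses above, assuming a hypothetical unencrypted local credential file; only the three space-separated phrases (JDBC URL or instance name, user, password) come from the code, the concrete values are invented.
import com.google.common.base.Splitter;
import java.util.List;
final class CredentialLineSketch {
  public static void main(String[] args) {
    // Hypothetical local credential line: JDBC URL, user name, and password, space-separated.
    String line = "jdbc:postgresql://localhost:5432/postgres nomulus_user changeme";
    List<String> parts = Splitter.on(' ').splitToList(line.trim());
    // provideCloudSqlAccessInfo requires exactly three phrases before building SqlAccessInfo.
    System.out.println(parts.size()); // prints 3
    System.out.println(parts.get(0)); // prints the JDBC URL
  }
}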

View File

@@ -1,50 +0,0 @@
// Copyright 2020 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.initsql;
import static com.google.common.base.Preconditions.checkArgument;
import com.google.api.services.cloudkms.v1.model.DecryptRequest;
import com.google.common.base.Strings;
import google.registry.config.RegistryConfig.Config;
import google.registry.keyring.kms.KmsConnection;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.inject.Inject;
/**
* Decrypts data using Cloud KMS, with the same crypto key with which Cloud SQL credential files on
* GCS was encrypted. See {@link BackupPaths#getCloudSQLCredentialFilePatterns} for more
* information.
*/
public class CloudSqlCredentialDecryptor {
private static final String CRYPTO_KEY_NAME = "nomulus-tool-key";
private final KmsConnection kmsConnection;
@Inject
CloudSqlCredentialDecryptor(@Config("beamKmsConnection") KmsConnection kmsConnection) {
this.kmsConnection = kmsConnection;
}
public String decrypt(String data) {
checkArgument(!Strings.isNullOrEmpty(data), "Null or empty data.");
byte[] ciphertext = Base64.getDecoder().decode(data);
// Re-encode for Cloud KMS JSON REST API, invoked through kmsConnection.
String urlSafeCipherText = new DecryptRequest().encodeCiphertext(ciphertext).getCiphertext();
return new String(
kmsConnection.decrypt(CRYPTO_KEY_NAME, urlSafeCipherText), StandardCharsets.UTF_8);
}
}

View File

@@ -32,8 +32,8 @@ import google.registry.model.host.HostResource;
import google.registry.model.poll.PollMessage;
import google.registry.model.registrar.Registrar;
import google.registry.model.registrar.RegistrarContact;
import google.registry.model.registry.Registry;
import google.registry.model.reporting.HistoryEntry;
import google.registry.model.tld.Registry;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import java.io.Serializable;
import java.util.Collection;
@@ -120,26 +120,22 @@ public class InitSqlPipeline implements Serializable {
private final InitSqlPipelineOptions options;
private final Pipeline pipeline;
InitSqlPipeline(InitSqlPipelineOptions options) {
this.options = options;
pipeline = Pipeline.create(options);
}
PipelineResult run() {
return run(Pipeline.create(options));
}
@VisibleForTesting
InitSqlPipeline(InitSqlPipelineOptions options, Pipeline pipeline) {
this.options = options;
this.pipeline = pipeline;
}
public PipelineResult run() {
setupPipeline();
PipelineResult run(Pipeline pipeline) {
setupPipeline(pipeline);
return pipeline.run();
}
@VisibleForTesting
void setupPipeline() {
void setupPipeline(Pipeline pipeline) {
options.setIsolationOverride(TransactionIsolationLevel.TRANSACTION_READ_UNCOMMITTED);
PCollectionTuple datastoreSnapshot =
pipeline.apply(

View File

@@ -1,60 +0,0 @@
// Copyright 2020 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.initsql;
import google.registry.beam.initsql.BeamJpaModule.JpaTransactionManagerComponent;
import google.registry.beam.initsql.Transforms.SerializableSupplier;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.persistence.transaction.JpaTransactionManager;
import javax.annotation.Nullable;
import org.apache.beam.sdk.transforms.SerializableFunction;
public class JpaSupplierFactory implements SerializableSupplier<JpaTransactionManager> {
private static final long serialVersionUID = 1L;
private final String credentialFileUrl;
@Nullable private final String cloudKmsProjectId;
private final SerializableFunction<JpaTransactionManagerComponent, JpaTransactionManager>
jpaGetter;
@Nullable private final TransactionIsolationLevel isolationLevelOverride;
public JpaSupplierFactory(
String credentialFileUrl,
@Nullable String cloudKmsProjectId,
SerializableFunction<JpaTransactionManagerComponent, JpaTransactionManager> jpaGetter) {
this(credentialFileUrl, cloudKmsProjectId, jpaGetter, null);
}
public JpaSupplierFactory(
String credentialFileUrl,
@Nullable String cloudKmsProjectId,
SerializableFunction<JpaTransactionManagerComponent, JpaTransactionManager> jpaGetter,
@Nullable TransactionIsolationLevel isolationLevelOverride) {
this.credentialFileUrl = credentialFileUrl;
this.cloudKmsProjectId = cloudKmsProjectId;
this.jpaGetter = jpaGetter;
this.isolationLevelOverride = isolationLevelOverride;
}
@Override
public JpaTransactionManager get() {
return jpaGetter.apply(
DaggerBeamJpaModule_JpaTransactionManagerComponent.builder()
.beamJpaModule(
new BeamJpaModule(credentialFileUrl, cloudKmsProjectId, isolationLevelOverride))
.build());
}
}
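As a usage sketch (not taken from this diff), a JpaSupplierFactory can be handed straight to Transforms.writeToSql; the credential path, KMS project, and tuning numbers below are placeholders.
import google.registry.backup.VersionedEntity;
import google.registry.beam.initsql.BeamJpaModule.JpaTransactionManagerComponent;
import google.registry.beam.initsql.JpaSupplierFactory;
import google.registry.beam.initsql.Transforms;
import org.apache.beam.sdk.values.PCollection;
final class WriteToSqlSketch {
  private WriteToSqlSketch() {}
  /** Writes an upstream collection of VersionedEntity to SQL; all literals are placeholders. */
  static void writeEntities(PCollection<VersionedEntity> entities) {
    JpaSupplierFactory jpaSupplier =
        new JpaSupplierFactory(
            "gs://my-bucket/cloudsql-credential.enc", // hypothetical encrypted credential on GCS
            "my-kms-project",                         // hypothetical KMS project
            JpaTransactionManagerComponent::cloudSqlJpaTransactionManager);
    entities.apply(
        "Write to SQL",
        Transforms.writeToSql(
            "VersionedEntity", // transformId, used in step names and metrics
            4,                 // maxWriters: concurrent SQL writers / connection pools
            64,                // batchSize: entities per transaction
            jpaSupplier));
  }
}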

View File

@@ -1,45 +0,0 @@
// Copyright 2020 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.initsql;
import com.google.auto.value.AutoValue;
import java.util.Optional;
/**
* Information needed to connect to a database, including JDBC URL, user name, password, and in the
* case of Cloud SQL, the database instance's name.
*/
@AutoValue
abstract class SqlAccessInfo {
abstract String jdbcUrl();
abstract String user();
abstract String password();
abstract Optional<String> cloudSqlInstanceName();
public static SqlAccessInfo createCloudSqlAccessInfo(
String sqlInstanceName, String username, String password) {
return new AutoValue_SqlAccessInfo(
"jdbc:postgresql://google/postgres", username, password, Optional.of(sqlInstanceName));
}
public static SqlAccessInfo createLocalSqlAccessInfo(
String jdbcUrl, String username, String password) {
return new AutoValue_SqlAccessInfo(jdbcUrl, username, password, Optional.empty());
}
}

View File

@@ -19,13 +19,10 @@ import static com.google.common.base.Preconditions.checkNotNull;
import static com.google.common.base.Preconditions.checkState;
import static google.registry.beam.initsql.BackupPaths.getCommitLogTimestamp;
import static google.registry.beam.initsql.BackupPaths.getExportFilePatterns;
import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.setJpaTm;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.util.DateTimeUtils.START_OF_TIME;
import static google.registry.util.DateTimeUtils.isBeforeOrAt;
import static java.util.Comparator.comparing;
import static org.apache.beam.sdk.values.TypeDescriptors.integers;
import static org.apache.beam.sdk.values.TypeDescriptors.kvs;
import static org.apache.beam.sdk.values.TypeDescriptors.strings;
@@ -35,17 +32,16 @@ import com.google.appengine.api.datastore.EntityTranslator;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Streams;
import com.googlecode.objectify.Key;
import google.registry.backup.AppEngineEnvironment;
import google.registry.backup.CommitLogImports;
import google.registry.backup.VersionedEntity;
import google.registry.model.billing.BillingEvent.Flag;
import google.registry.model.billing.BillingEvent.Reason;
import google.registry.model.domain.DomainBase;
import google.registry.model.ofy.ObjectifyService;
import google.registry.model.replay.DatastoreAndSqlEntity;
import google.registry.model.replay.SqlEntity;
import google.registry.model.reporting.HistoryEntry;
import google.registry.persistence.transaction.JpaTransactionManager;
import google.registry.schema.replay.DatastoreAndSqlEntity;
import google.registry.schema.replay.SqlEntity;
import google.registry.tools.LevelDbLogReader;
import java.io.Serializable;
import java.util.Collection;
@@ -53,7 +49,6 @@ import java.util.Iterator;
import java.util.Objects;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Supplier;
import javax.annotation.Nullable;
import org.apache.beam.sdk.coders.StringUtf8Coder;
@@ -62,18 +57,14 @@ import org.apache.beam.sdk.io.FileIO;
import org.apache.beam.sdk.io.FileIO.ReadableFile;
import org.apache.beam.sdk.io.fs.EmptyMatchTreatment;
import org.apache.beam.sdk.io.fs.MatchResult.Metadata;
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.Flatten;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.GroupIntoBatches;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.ProcessFunction;
import org.apache.beam.sdk.transforms.SerializableFunction;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PBegin;
import org.apache.beam.sdk.values.PCollection;
@@ -268,81 +259,58 @@ public final class Transforms {
.iterator()));
}
/**
* Returns a {@link PTransform} that writes a {@link PCollection} of {@link VersionedEntity}s to a
* SQL database and outputs an empty {@code PCollection<Void>}. This allows other operations to
* {@link org.apache.beam.sdk.transforms.Wait wait} for the completion of this transform.
*
* <p>Errors are handled according to the pipeline runner's default policy. As part of a one-time
* job, we will not add features unless proven necessary.
*
* @param transformId a unique ID for an instance of the returned transform
* @param maxWriters the max number of concurrent writes to SQL, which also determines the max
* number of connection pools created
* @param batchSize the number of entities to write in each operation
* @param jpaSupplier supplier of a {@link JpaTransactionManager}
*/
public static PTransform<PCollection<VersionedEntity>, PCollection<Void>> writeToSql(
String transformId,
int maxWriters,
int batchSize,
SerializableSupplier<JpaTransactionManager> jpaSupplier) {
return writeToSql(
transformId,
maxWriters,
batchSize,
jpaSupplier,
Transforms::convertVersionedEntityToSqlEntity,
TypeDescriptor.of(VersionedEntity.class));
}
// Production data repair configs go below. See b/185954992.
/**
* Returns a {@link PTransform} that writes a {@link PCollection} of entities to a SQL database
* and outputs an empty {@code PCollection<Void>}. This allows other operations to {@link
* org.apache.beam.sdk.transforms.Wait wait} for the completion of this transform.
*
* <p>The converter and type descriptor are generics so that we can convert any type of entity to
* an object to be placed in SQL.
*
* <p>Errors are handled according to the pipeline runner's default policy. As part of a one-time
* job, we will not add features unless proven necessary.
*
* @param transformId a unique ID for an instance of the returned transform
* @param maxWriters the max number of concurrent writes to SQL, which also determines the max
* number of connection pools created
* @param batchSize the number of entities to write in each operation
* @param jpaSupplier supplier of a {@link JpaTransactionManager}
* @param jpaConverter the function that converts the input object to a JPA entity
* @param objectDescriptor the type descriptor of the input object
*/
public static <T> PTransform<PCollection<T>, PCollection<Void>> writeToSql(
String transformId,
int maxWriters,
int batchSize,
SerializableSupplier<JpaTransactionManager> jpaSupplier,
SerializableFunction<T, Object> jpaConverter,
TypeDescriptor<T> objectDescriptor) {
return new PTransform<PCollection<T>, PCollection<Void>>() {
@Override
public PCollection<Void> expand(PCollection<T> input) {
return input
.apply(
"Shard data for " + transformId,
MapElements.into(kvs(integers(), objectDescriptor))
.via(ve -> KV.of(ThreadLocalRandom.current().nextInt(maxWriters), ve)))
.apply("Batch output by shard " + transformId, GroupIntoBatches.ofSize(batchSize))
.apply(
"Write in batch for " + transformId,
ParDo.of(new SqlBatchWriter<T>(transformId, jpaSupplier, jpaConverter)));
}
};
}
// Prober domains in bad state, without associated contacts, hosts, billings, and history.
// They can be safely ignored.
private static final ImmutableSet<String> IGNORED_DOMAINS =
ImmutableSet.of("6AF6D2-IQCANT", "2-IQANYT");
private static Key toOfyKey(Object ofyEntity) {
return Key.create(ofyEntity);
}
// Prober hosts referencing phantom registrars. They and their associated history entries can be
// safely ignored.
private static final ImmutableSet<String> IGNORED_HOSTS =
ImmutableSet.of(
"4E21_WJ0TEST-GOOGLE",
"4E21_WJ1TEST-GOOGLE",
"4E21_WJ2TEST-GOOGLE",
"4E21_WJ3TEST-GOOGLE");
// Prober contacts referencing phantom registrars. They and their associated history entries can
// be safely ignored.
private static final ImmutableSet<String> IGNORED_CONTACTS =
ImmutableSet.of(
"1_WJ0TEST-GOOGLE", "1_WJ1TEST-GOOGLE", "1_WJ2TEST-GOOGLE", "1_WJ3TEST-GOOGLE");
private static boolean isMigratable(Entity entity) {
// Checks specific to production data. See b/185954992 for details.
// The names of these bad entities in production do not conflict with other environments. For
// simplicity's sake we apply them regardless of the source of the data.
if (entity.getKind().equals("DomainBase")
&& IGNORED_DOMAINS.contains(entity.getKey().getName())) {
return false;
}
if (entity.getKind().equals("ContactResource")) {
String roid = entity.getKey().getName();
return !IGNORED_CONTACTS.contains(roid);
}
if (entity.getKind().equals("HostResource")) {
String roid = entity.getKey().getName();
return !IGNORED_HOSTS.contains(roid);
}
if (entity.getKind().equals("HistoryEntry")) {
// Remove production bad data: History of the contacts to be ignored:
com.google.appengine.api.datastore.Key parentKey = entity.getKey().getParent();
if (parentKey.getKind().equals("ContactResource")) {
String contactRoid = parentKey.getName();
return !IGNORED_CONTACTS.contains(contactRoid);
}
if (parentKey.getKind().equals("HostResource")) {
String hostRoid = parentKey.getName();
return !IGNORED_HOSTS.contains(hostRoid);
}
}
// End of production-specific checks.
if (entity.getKind().equals("HistoryEntry")) {
// DOMAIN_APPLICATION_CREATE is a deprecated type and should not be migrated.
// The Enum name DOMAIN_APPLICATION_CREATE no longer exists in Java and cannot
@@ -352,6 +320,18 @@ public final class Transforms {
return true;
}
private static Entity repairBadData(Entity entity) {
if (entity.getKind().equals("Cancellation")
&& Objects.equals(entity.getProperty("reason"), "AUTO_RENEW")) {
// AUTO_RENEW has been moved from 'reason' to flags. Change reason to RENEW and add the
// AUTO_RENEW flag. Note: all affected entities have empty flags so we can simply assign
// instead of append. See b/185954992.
entity.setUnindexedProperty("reason", Reason.RENEW.name());
entity.setUnindexedProperty("flags", ImmutableList.of(Flag.AUTO_RENEW.name()));
}
return entity;
}
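Since repairBadData is private, here is a hedged stand-alone sketch of what the AUTO_RENEW repair amounts to on a raw Datastore Entity; it mirrors the logic above rather than calling it.
import com.google.appengine.api.datastore.Entity;
import com.google.common.collect.ImmutableList;
final class RepairCancellationSketch {
  private RepairCancellationSketch() {}
  static Entity repairCancellation(Entity entity) {
    // Mirrors Transforms.repairBadData: an AUTO_RENEW 'reason' becomes RENEW plus an AUTO_RENEW
    // flag; the affected production entities are known to have empty flags, so assignment is safe.
    if (entity.getKind().equals("Cancellation")
        && "AUTO_RENEW".equals(entity.getProperty("reason"))) {
      entity.setUnindexedProperty("reason", "RENEW");
      entity.setUnindexedProperty("flags", ImmutableList.of("AUTO_RENEW"));
    }
    return entity;
  }
}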
private static SqlEntity toSqlEntity(Object ofyEntity) {
if (ofyEntity instanceof HistoryEntry) {
HistoryEntry ofyHistory = (HistoryEntry) ofyEntity;
@@ -372,7 +352,8 @@ public final class Transforms {
return dsEntity
.getEntity()
.filter(Transforms::isMigratable)
.map(e -> ofy().toPojo(e))
.map(Transforms::repairBadData)
.map(e -> auditedOfy().toPojo(e))
.map(Transforms::toSqlEntity)
.orElse(null);
}
@@ -458,93 +439,6 @@ public final class Transforms {
}
}
/**
* Writes a batch of entities to a SQL database.
*
* <p>Note that an arbitrary number of instances of this class may be created and freed in
* arbitrary order in a single JVM. Due to the tech debt that forced us to use a static variable
* to hold the {@code JpaTransactionManager} instance, we must ensure that JpaTransactionManager
* is not changed or torn down while being used by some instance.
*/
private static class SqlBatchWriter<T> extends DoFn<KV<Integer, Iterable<T>>, Void> {
private static int instanceCount = 0;
private static JpaTransactionManager originalJpa;
private Counter counter;
private final SerializableSupplier<JpaTransactionManager> jpaSupplier;
private final SerializableFunction<T, Object> jpaConverter;
SqlBatchWriter(
String type,
SerializableSupplier<JpaTransactionManager> jpaSupplier,
SerializableFunction<T, Object> jpaConverter) {
counter = Metrics.counter("SQL_WRITE", type);
this.jpaSupplier = jpaSupplier;
this.jpaConverter = jpaConverter;
}
@Setup
public void setup() {
try (AppEngineEnvironment env = new AppEngineEnvironment()) {
ObjectifyService.initOfy();
}
synchronized (SqlBatchWriter.class) {
if (instanceCount == 0) {
originalJpa = jpaTm();
setJpaTm(jpaSupplier);
}
instanceCount++;
}
}
@Teardown
public void teardown() {
synchronized (SqlBatchWriter.class) {
instanceCount--;
if (instanceCount == 0) {
jpaTm().teardown();
setJpaTm(() -> originalJpa);
}
}
}
@ProcessElement
public void processElement(@Element KV<Integer, Iterable<T>> kv) {
try (AppEngineEnvironment env = new AppEngineEnvironment()) {
ImmutableList<Object> ofyEntities =
Streams.stream(kv.getValue())
.map(this.jpaConverter::apply)
// TODO(b/177340730): post migration delete the line below.
.filter(Objects::nonNull)
.collect(ImmutableList.toImmutableList());
try {
jpaTm().transact(() -> jpaTm().putAll(ofyEntities));
counter.inc(ofyEntities.size());
} catch (RuntimeException e) {
processSingly(ofyEntities);
}
}
}
/**
* Writes entities in a failed batch one by one to identify the first bad entity and throws a
* {@link RuntimeException} on it.
*/
private void processSingly(ImmutableList<Object> ofyEntities) {
for (Object ofyEntity : ofyEntities) {
try {
jpaTm().transact(() -> jpaTm().put(ofyEntity));
counter.inc();
} catch (RuntimeException e) {
throw new RuntimeException(toOfyKey(ofyEntity).toString(), e);
}
}
}
}
/**
* Removes BillingEvents, {@link google.registry.model.poll.PollMessage PollMessages} and {@link
* google.registry.model.host.HostResource} from a {@link DomainBase}. These are circular foreign

View File

@@ -251,7 +251,14 @@ public abstract class BillingEvent implements Serializable {
InvoiceGroupingKey getInvoiceGroupingKey() {
return new AutoValue_BillingEvent_InvoiceGroupingKey(
billingTime().toLocalDate().withDayOfMonth(1).toString(),
billingTime().toLocalDate().withDayOfMonth(1).plusYears(years()).minusDays(1).toString(),
years() == 0
? ""
: billingTime()
.toLocalDate()
.withDayOfMonth(1)
.plusYears(years())
.minusDays(1)
.toString(),
billingId(),
String.format("%s - %s", registrarId(), tld()),
String.format("%s | TLD: %s | TERM: %d-year", action(), tld(), years()),
@@ -260,6 +267,11 @@ public abstract class BillingEvent implements Serializable {
poNumber());
}
/** Returns the grouping key for this {@code BillingEvent}, to generate the detailed report. */
String getDetailedReportGroupingKey() {
return String.format("%s_%s", registrarId(), tld());
}
/** Key for each {@code BillingEvent}, when aggregating for the overall invoice. */
@AutoValue
abstract static class InvoiceGroupingKey implements Serializable {

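A worked sketch of the start and end date arithmetic in getInvoiceGroupingKey above, including the new years() == 0 branch; the billing date and term are hypothetical.
import java.time.LocalDate;
final class InvoiceDateRangeSketch {
  public static void main(String[] args) {
    LocalDate billingDate = LocalDate.parse("2021-07-15"); // hypothetical billingTime().toLocalDate()
    int years = 2;                                         // hypothetical term
    String start = billingDate.withDayOfMonth(1).toString(); // "2021-07-01"
    String end =
        years == 0
            ? "" // zero-year events now produce an empty end date instead of a bogus range
            : billingDate.withDayOfMonth(1).plusYears(years).minusDays(1).toString(); // "2023-06-30"
    System.out.println(start + " -> " + end);
  }
}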
View File

@@ -14,28 +14,38 @@
package google.registry.beam.invoicing;
import com.google.auth.oauth2.GoogleCredentials;
import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static google.registry.beam.BeamUtils.getQueryFromFile;
import static org.apache.beam.sdk.values.TypeDescriptors.strings;
import google.registry.beam.common.RegistryJpaIO;
import google.registry.beam.common.RegistryJpaIO.Read;
import google.registry.beam.invoicing.BillingEvent.InvoiceGroupingKey;
import google.registry.beam.invoicing.BillingEvent.InvoiceGroupingKey.InvoiceGroupingKeyCoder;
import google.registry.config.CredentialModule.LocalCredential;
import google.registry.config.RegistryConfig.Config;
import google.registry.model.billing.BillingEvent.Flag;
import google.registry.model.registrar.Registrar;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.reporting.billing.BillingModule;
import google.registry.reporting.billing.GenerateInvoicesAction;
import google.registry.util.GoogleCredentialsBundle;
import google.registry.util.DateTimeUtils;
import google.registry.util.DomainNameUtils;
import google.registry.util.SqlTemplate;
import java.io.Serializable;
import javax.inject.Inject;
import org.apache.beam.runners.dataflow.DataflowRunner;
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.time.YearMonth;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import java.util.Optional;
import java.util.regex.Pattern;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.coders.SerializableCoder;
import org.apache.beam.sdk.io.DefaultFilenamePolicy.Params;
import org.apache.beam.sdk.io.FileBasedSink;
import org.apache.beam.sdk.coders.StringUtf8Coder;
import org.apache.beam.sdk.io.FileIO;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.options.ValueProvider;
import org.apache.beam.sdk.options.ValueProvider.NestedValueProvider;
import org.apache.beam.sdk.transforms.Contextful;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Filter;
import org.apache.beam.sdk.transforms.MapElements;
@@ -43,107 +53,90 @@ import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptor;
import org.apache.beam.sdk.values.TypeDescriptors;
/**
* Definition of a Dataflow pipeline template, which generates a given month's invoices.
* Definition of a Dataflow Flex pipeline template, which generates a given month's invoices.
*
* <p>To stage this template on GCS, run the {@link
* google.registry.tools.DeployInvoicingPipelineCommand} Nomulus command.
* <p>To stage this template locally, run the {@code stage_beam_pipeline.sh} shell script.
*
* <p>Then, you can run the staged template via the API client library, gCloud or a raw REST call.
* For an example using the API client library, see {@link GenerateInvoicesAction}.
*
* @see <a href="https://cloud.google.com/dataflow/docs/templates/overview">Dataflow Templates</a>
* @see <a href="https://cloud.google.com/dataflow/docs/guides/templates/using-flex-templates">Using
* Flex Templates</a>
*/
public class InvoicingPipeline implements Serializable {
private final String projectId;
private final String beamJobRegion;
private final String beamBucketUrl;
private final String invoiceTemplateUrl;
private final String beamStagingUrl;
private final String billingBucketUrl;
private final String invoiceFilePrefix;
private final GoogleCredentials googleCredentials;
private static final DateTimeFormatter TIMESTAMP_FORMATTER =
DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSSSSS");
@Inject
public InvoicingPipeline(
@Config("projectId") String projectId,
@Config("defaultJobRegion") String beamJobRegion,
@Config("apacheBeamBucketUrl") String beamBucketUrl,
@Config("invoiceTemplateUrl") String invoiceTemplateUrl,
@Config("beamStagingUrl") String beamStagingUrl,
@Config("billingBucketUrl") String billingBucketUrl,
@Config("invoiceFilePrefix") String invoiceFilePrefix,
@LocalCredential GoogleCredentialsBundle googleCredentialsBundle) {
this.projectId = projectId;
this.beamJobRegion = beamJobRegion;
this.beamBucketUrl = beamBucketUrl;
this.invoiceTemplateUrl = invoiceTemplateUrl;
this.beamStagingUrl = beamStagingUrl;
this.billingBucketUrl = billingBucketUrl;
this.invoiceFilePrefix = invoiceFilePrefix;
this.googleCredentials = googleCredentialsBundle.getGoogleCredentials();
private static final Pattern SQL_COMMENT_REGEX =
Pattern.compile("^\\s*--.*\\n", Pattern.MULTILINE);
private final InvoicingPipelineOptions options;
InvoicingPipeline(InvoicingPipelineOptions options) {
this.options = options;
}
/** Custom options for running the invoicing pipeline. */
public interface InvoicingPipelineOptions extends DataflowPipelineOptions {
/** Returns the yearMonth we're generating invoices for, in yyyy-MM format. */
@Description("The yearMonth we generate invoices for, in yyyy-MM format.")
ValueProvider<String> getYearMonth();
/**
* Sets the yearMonth we generate invoices for.
*
* <p>This is implicitly set when executing the Dataflow template, by specifying the 'yearMonth'
* parameter.
*/
void setYearMonth(ValueProvider<String> value);
PipelineResult run() {
Pipeline pipeline = Pipeline.create(options);
setupPipeline(pipeline);
return pipeline.run();
}
/** Deploys the invoicing pipeline as a template on GCS, for a given projectID and GCS bucket. */
public void deploy() {
// We can't store options as a member variable due to serialization concerns.
InvoicingPipelineOptions options = PipelineOptionsFactory.as(InvoicingPipelineOptions.class);
options.setProject(projectId);
options.setRegion(beamJobRegion);
options.setRunner(DataflowRunner.class);
// This causes p.run() to stage the pipeline as a template on GCS, as opposed to running it.
options.setTemplateLocation(invoiceTemplateUrl);
options.setStagingLocation(beamStagingUrl);
// This credential is used when Dataflow deploys the template to GCS in target GCP project.
// So, make sure the credential has write permission to GCS in that project.
options.setGcpCredential(googleCredentials);
Pipeline p = Pipeline.create(options);
void setupPipeline(Pipeline pipeline) {
options.setIsolationOverride(TransactionIsolationLevel.TRANSACTION_READ_COMMITTED);
PCollection<BillingEvent> billingEvents =
p.apply(
"Read BillingEvents from Bigquery",
BigQueryIO.read(BillingEvent::parseFromRecord)
.fromQuery(InvoicingUtils.makeQueryProvider(options.getYearMonth(), projectId))
.withCoder(SerializableCoder.of(BillingEvent.class))
.usingStandardSql()
.withoutValidation()
.withTemplateCompatibility());
applyTerminalTransforms(billingEvents, options.getYearMonth());
p.run();
options.getDatabase().equals("DATASTORE")
? readFromBigQuery(options, pipeline)
: readFromCloudSql(options, pipeline);
saveInvoiceCsv(billingEvents, options);
saveDetailedCsv(billingEvents, options);
}
/**
* Applies output transforms to the {@code BillingEvent} source collection.
*
* <p>This is factored out purely to facilitate testing.
*/
void applyTerminalTransforms(
PCollection<BillingEvent> billingEvents, ValueProvider<String> yearMonthProvider) {
billingEvents
.apply("Generate overall invoice rows", new GenerateInvoiceRows())
.apply("Write overall invoice to CSV", writeInvoice(yearMonthProvider));
static PCollection<BillingEvent> readFromBigQuery(
InvoicingPipelineOptions options, Pipeline pipeline) {
return pipeline.apply(
"Read BillingEvents from Bigquery",
BigQueryIO.read(BillingEvent::parseFromRecord)
.fromQuery(makeQuery(options.getYearMonth(), options.getProject()))
.withCoder(SerializableCoder.of(BillingEvent.class))
.usingStandardSql()
.withoutValidation()
.withTemplateCompatibility());
}
billingEvents.apply(
"Write detail reports to separate CSVs keyed by registrarId_tld pair",
writeDetailReports(yearMonthProvider));
static PCollection<BillingEvent> readFromCloudSql(
InvoicingPipelineOptions options, Pipeline pipeline) {
Read<Object[], BillingEvent> read =
RegistryJpaIO.read(
makeCloudSqlQuery(options.getYearMonth()), false, InvoicingPipeline::parseRow);
return pipeline.apply("Read BillingEvents from Cloud SQL", read);
}
private static BillingEvent parseRow(Object[] row) {
google.registry.model.billing.BillingEvent.OneTime oneTime =
(google.registry.model.billing.BillingEvent.OneTime) row[0];
Registrar registrar = (Registrar) row[1];
return BillingEvent.create(
oneTime.getId(),
DateTimeUtils.toZonedDateTime(oneTime.getBillingTime(), ZoneId.of("UTC")),
DateTimeUtils.toZonedDateTime(oneTime.getEventTime(), ZoneId.of("UTC")),
registrar.getClientId(),
registrar.getBillingIdentifier().toString(),
registrar.getPoNumber().orElse(""),
DomainNameUtils.getTldFromDomainName(oneTime.getTargetId()),
oneTime.getReason().toString(),
oneTime.getTargetId(),
oneTime.getDomainRepoId(),
Optional.ofNullable(oneTime.getPeriodYears()).orElse(0),
oneTime.getCost().getCurrencyUnit().toString(),
oneTime.getCost().getAmount().doubleValue(),
String.join(
" ", oneTime.getFlags().stream().map(Flag::toString).collect(toImmutableSet())));
}
/** Transform that converts a {@code BillingEvent} into an invoice CSV row. */
@@ -156,49 +149,100 @@ public class InvoicingPipeline implements Serializable {
"Map to invoicing key",
MapElements.into(TypeDescriptor.of(InvoiceGroupingKey.class))
.via(BillingEvent::getInvoiceGroupingKey))
.apply(Filter.by((InvoiceGroupingKey key) -> key.unitPrice() != 0))
.apply(
"Filter out free events", Filter.by((InvoiceGroupingKey key) -> key.unitPrice() != 0))
.setCoder(new InvoiceGroupingKeyCoder())
.apply("Count occurrences", Count.perElement())
.apply(
"Format as CSVs",
MapElements.into(TypeDescriptors.strings())
MapElements.into(strings())
.via((KV<InvoiceGroupingKey, Long> kv) -> kv.getKey().toCsv(kv.getValue())));
}
}
/** Returns an IO transform that writes the overall invoice to a single CSV file. */
private TextIO.Write writeInvoice(ValueProvider<String> yearMonthProvider) {
return TextIO.write()
.to(
NestedValueProvider.of(
yearMonthProvider,
yearMonth ->
/** Saves the billing events to a single overall invoice CSV file. */
static void saveInvoiceCsv(
PCollection<BillingEvent> billingEvents, InvoicingPipelineOptions options) {
billingEvents
.apply("Generate overall invoice rows", new GenerateInvoiceRows())
.apply(
"Write overall invoice to CSV",
TextIO.write()
.to(
String.format(
"%s/%s/%s/%s-%s",
billingBucketUrl,
options.getBillingBucketUrl(),
BillingModule.INVOICES_DIRECTORY,
yearMonth,
invoiceFilePrefix,
yearMonth)))
.withHeader(InvoiceGroupingKey.invoiceHeader())
.withoutSharding()
.withSuffix(".csv");
options.getYearMonth(),
options.getInvoiceFilePrefix(),
options.getYearMonth()))
.withHeader(InvoiceGroupingKey.invoiceHeader())
.withoutSharding()
.withSuffix(".csv"));
}
/** Returns an IO transform that writes detail reports to registrar-tld keyed CSV files. */
private TextIO.TypedWrite<BillingEvent, Params> writeDetailReports(
ValueProvider<String> yearMonthProvider) {
return TextIO.<BillingEvent>writeCustomType()
.to(
InvoicingUtils.makeDestinationFunction(
String.format("%s/%s", billingBucketUrl, BillingModule.INVOICES_DIRECTORY),
yearMonthProvider),
InvoicingUtils.makeEmptyDestinationParams(billingBucketUrl + "/errors"))
.withFormatFunction(BillingEvent::toCsv)
.withoutSharding()
.withTempDirectory(
FileBasedSink.convertToFileResourceIfPossible(beamBucketUrl + "/temporary"))
.withHeader(BillingEvent.getHeader())
.withSuffix(".csv");
/** Saves the billing events to detailed report CSV files keyed by registrar-tld pairs. */
static void saveDetailedCsv(
PCollection<BillingEvent> billingEvents, InvoicingPipelineOptions options) {
String yearMonth = options.getYearMonth();
billingEvents.apply(
"Write detailed report for each registrar-tld pair",
FileIO.<String, BillingEvent>writeDynamic()
.to(
String.format(
"%s/%s/%s",
options.getBillingBucketUrl(), BillingModule.INVOICES_DIRECTORY, yearMonth))
.by(BillingEvent::getDetailedReportGroupingKey)
.withNumShards(1)
.withDestinationCoder(StringUtf8Coder.of())
.withNaming(
key ->
(window, pane, numShards, shardIndex, compression) ->
String.format(
"%s_%s_%s.csv", BillingModule.DETAIL_REPORT_PREFIX, yearMonth, key))
.via(
Contextful.fn(BillingEvent::toCsv),
TextIO.sink().withHeader(BillingEvent.getHeader())));
}
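For orientation, a sketch of the object path produced by the dynamic write above; the bucket, registrar, and TLD are hypothetical, and the two BillingModule constants are referenced symbolically rather than guessed.
import google.registry.reporting.billing.BillingModule;
final class DetailReportPathSketch {
  public static void main(String[] args) {
    String billingBucketUrl = "gs://my-billing-bucket"; // hypothetical
    String yearMonth = "2021-07";
    String groupingKey = "TheRegistrar_app"; // registrarId_tld from getDetailedReportGroupingKey
    String directory =
        String.format("%s/%s/%s", billingBucketUrl, BillingModule.INVOICES_DIRECTORY, yearMonth);
    String filename =
        String.format("%s_%s_%s.csv", BillingModule.DETAIL_REPORT_PREFIX, yearMonth, groupingKey);
    System.out.println(directory + "/" + filename);
  }
}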
/** Create the Bigquery query for a given project and yearMonth at runtime. */
static String makeQuery(String yearMonth, String projectId) {
// Get the timestamp endpoints capturing the entire month with microsecond precision
YearMonth reportingMonth = YearMonth.parse(yearMonth);
LocalDateTime firstMoment = reportingMonth.atDay(1).atTime(LocalTime.MIDNIGHT);
LocalDateTime lastMoment = reportingMonth.atEndOfMonth().atTime(LocalTime.MAX);
// Construct the month's query by filling in the billing_events.sql template
return SqlTemplate.create(getQueryFromFile(InvoicingPipeline.class, "billing_events.sql"))
.put("FIRST_TIMESTAMP_OF_MONTH", firstMoment.format(TIMESTAMP_FORMATTER))
.put("LAST_TIMESTAMP_OF_MONTH", lastMoment.format(TIMESTAMP_FORMATTER))
.put("PROJECT_ID", projectId)
.put("DATASTORE_EXPORT_DATA_SET", "latest_datastore_export")
.put("ONETIME_TABLE", "OneTime")
.put("REGISTRY_TABLE", "Registry")
.put("REGISTRAR_TABLE", "Registrar")
.put("CANCELLATION_TABLE", "Cancellation")
.build();
}
/** Create the Cloud SQL query for a given yearMonth at runtime. */
static String makeCloudSqlQuery(String yearMonth) {
YearMonth endMonth = YearMonth.parse(yearMonth).plusMonths(1);
String queryWithComments =
SqlTemplate.create(
getQueryFromFile(InvoicingPipeline.class, "cloud_sql_billing_events.sql"))
.put("FIRST_TIMESTAMP_OF_MONTH", yearMonth.concat("-01"))
.put(
"LAST_TIMESTAMP_OF_MONTH",
String.format("%d-%d-01", endMonth.getYear(), endMonth.getMonthValue()))
.build();
// Remove the comments from the query string
return SQL_COMMENT_REGEX.matcher(queryWithComments).replaceAll("");
}
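A small sketch of the timestamp substitution performed by makeCloudSqlQuery above, for a hypothetical reporting month of 2021-07.
import java.time.YearMonth;
final class CloudSqlQueryBoundsSketch {
  public static void main(String[] args) {
    String yearMonth = "2021-07"; // hypothetical reporting month
    YearMonth endMonth = YearMonth.parse(yearMonth).plusMonths(1);
    // These strings replace FIRST_TIMESTAMP_OF_MONTH and LAST_TIMESTAMP_OF_MONTH in
    // cloud_sql_billing_events.sql; note that %d does not zero-pad the month.
    String first = yearMonth.concat("-01");                                                // "2021-07-01"
    String last = String.format("%d-%d-01", endMonth.getYear(), endMonth.getMonthValue()); // "2021-8-01"
    System.out.println(first + " .. " + last);
  }
}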
public static void main(String[] args) {
PipelineOptionsFactory.register(InvoicingPipelineOptions.class);
InvoicingPipelineOptions options =
PipelineOptionsFactory.fromArgs(args).withValidation().as(InvoicingPipelineOptions.class);
new InvoicingPipeline(options).run();
}
}

View File

@@ -0,0 +1,42 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.invoicing;
import google.registry.beam.common.RegistryPipelineOptions;
import org.apache.beam.sdk.options.Description;
/** Custom options for running the invoicing pipeline. */
public interface InvoicingPipelineOptions extends RegistryPipelineOptions {
@Description("The year and month we generate invoices for, in yyyy-MM format.")
String getYearMonth();
void setYearMonth(String value);
@Description("Filename prefix for the invoice CSV file.")
String getInvoiceFilePrefix();
void setInvoiceFilePrefix(String value);
@Description("The database to read data from.")
String getDatabase();
void setDatabase(String value);
@Description("The GCS bucket URL for invoices and detailed reports to be uploaded.")
String getBillingBucketUrl();
void setBillingBucketUrl(String value);
}
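A sketch of how these options might be populated from flags, mirroring the pipeline's main method; the values are placeholders, and the flag names simply follow the standard Beam convention of matching the getter names.
import google.registry.beam.invoicing.InvoicingPipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
final class InvoicingOptionsSketch {
  public static void main(String[] args) {
    String[] sampleArgs = {
      "--yearMonth=2021-07",                      // hypothetical reporting month
      "--invoiceFilePrefix=CRR-INV",              // hypothetical invoice file prefix
      "--database=CLOUD_SQL",                     // anything other than "DATASTORE" reads Cloud SQL
      "--billingBucketUrl=gs://my-billing-bucket" // hypothetical GCS bucket
    };
    PipelineOptionsFactory.register(InvoicingPipelineOptions.class);
    InvoicingPipelineOptions options =
        PipelineOptionsFactory.fromArgs(sampleArgs).withValidation().as(InvoicingPipelineOptions.class);
    System.out.println(options.getYearMonth());
  }
}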

View File

@@ -1,106 +0,0 @@
// Copyright 2018 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.invoicing;
import static google.registry.beam.BeamUtils.getQueryFromFile;
import google.registry.util.SqlTemplate;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.time.YearMonth;
import java.time.format.DateTimeFormatter;
import org.apache.beam.sdk.io.DefaultFilenamePolicy.Params;
import org.apache.beam.sdk.io.FileBasedSink;
import org.apache.beam.sdk.options.ValueProvider;
import org.apache.beam.sdk.options.ValueProvider.NestedValueProvider;
import org.apache.beam.sdk.transforms.SerializableFunction;
/** Pipeline helper functions used to generate invoices from instances of {@link BillingEvent}. */
public class InvoicingUtils {
private InvoicingUtils() {}
private static final DateTimeFormatter TIMESTAMP_FORMATTER =
DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSSSSS");
/**
* Returns a function mapping from {@code BillingEvent} to filename {@code Params}.
*
* <p>Beam uses this to determine which file a given {@code BillingEvent} should get placed into.
*
* @param outputBucket the GCS bucket we're outputting reports to
* @param yearMonthProvider a runtime provider for the yyyy-MM we're generating the invoice for
*/
static SerializableFunction<BillingEvent, Params> makeDestinationFunction(
String outputBucket, ValueProvider<String> yearMonthProvider) {
return billingEvent ->
new Params()
.withShardTemplate("")
.withSuffix(".csv")
.withBaseFilename(
NestedValueProvider.of(
yearMonthProvider,
yearMonth ->
FileBasedSink.convertToFileResourceIfPossible(
String.format(
"%s/%s/%s",
outputBucket, yearMonth, billingEvent.toFilename(yearMonth)))));
}
/**
* Returns the default filename parameters for an unmappable {@code BillingEvent}.
*
* <p>The "failed" file should only be populated when an error occurs, which warrants further
* investigation.
*/
static Params makeEmptyDestinationParams(String outputBucket) {
return new Params()
.withBaseFilename(
FileBasedSink.convertToFileResourceIfPossible(
String.format("%s/%s", outputBucket, "FAILURES")));
}
/**
* Returns a provider that creates a Bigquery query for a given project and yearMonth at runtime.
*
* <p>We only know yearMonth at runtime, so this provider fills in the {@code
* sql/billing_events.sql} template at runtime.
*
* @param yearMonthProvider a runtime provider that returns which month we're invoicing for.
* @param projectId the projectId we're generating invoices for.
*/
static ValueProvider<String> makeQueryProvider(
ValueProvider<String> yearMonthProvider, String projectId) {
return NestedValueProvider.of(
yearMonthProvider,
(yearMonth) -> {
// Get the timestamp endpoints capturing the entire month with microsecond precision
YearMonth reportingMonth = YearMonth.parse(yearMonth);
LocalDateTime firstMoment = reportingMonth.atDay(1).atTime(LocalTime.MIDNIGHT);
LocalDateTime lastMoment = reportingMonth.atEndOfMonth().atTime(LocalTime.MAX);
// Construct the month's query by filling in the billing_events.sql template
return SqlTemplate.create(getQueryFromFile(InvoicingPipeline.class, "billing_events.sql"))
.put("FIRST_TIMESTAMP_OF_MONTH", firstMoment.format(TIMESTAMP_FORMATTER))
.put("LAST_TIMESTAMP_OF_MONTH", lastMoment.format(TIMESTAMP_FORMATTER))
.put("PROJECT_ID", projectId)
.put("DATASTORE_EXPORT_DATA_SET", "latest_datastore_export")
.put("ONETIME_TABLE", "OneTime")
.put("REGISTRY_TABLE", "Registry")
.put("REGISTRAR_TABLE", "Registrar")
.put("CANCELLATION_TABLE", "Cancellation")
.build();
});
}
}

View File

@@ -0,0 +1,273 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.rde;
import static com.google.common.base.Preconditions.checkState;
import static com.google.common.base.Verify.verify;
import static google.registry.model.common.Cursor.getCursorTimeOrStartOfTime;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.persistence.transaction.TransactionManagerUtil.transactIfJpaTm;
import static java.nio.charset.StandardCharsets.UTF_8;
import com.google.auto.value.AutoValue;
import com.google.cloud.storage.BlobId;
import com.google.common.flogger.FluentLogger;
import google.registry.gcs.GcsUtils;
import google.registry.keyring.api.PgpHelper;
import google.registry.model.common.Cursor;
import google.registry.model.rde.RdeMode;
import google.registry.model.rde.RdeNamingUtils;
import google.registry.model.rde.RdeRevision;
import google.registry.model.tld.Registry;
import google.registry.rde.DepositFragment;
import google.registry.rde.Ghostryde;
import google.registry.rde.PendingDeposit;
import google.registry.rde.RdeCounter;
import google.registry.rde.RdeMarshaller;
import google.registry.rde.RdeResourceType;
import google.registry.rde.RdeUtil;
import google.registry.tldconfig.idn.IdnTableEnum;
import google.registry.xjc.rdeheader.XjcRdeHeader;
import google.registry.xjc.rdeheader.XjcRdeHeaderElement;
import google.registry.xml.ValidationMode;
import google.registry.xml.XmlException;
import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.security.Security;
import java.util.Optional;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PDone;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.openpgp.PGPPublicKey;
import org.joda.time.DateTime;
public class RdeIO {
@AutoValue
abstract static class Write
extends PTransform<PCollection<KV<PendingDeposit, Iterable<DepositFragment>>>, PDone> {
abstract GcsUtils gcsUtils();
abstract String rdeBucket();
// It's OK to return a primitive array because we are only using it to construct the
// PGPPublicKey, which is not serializable.
@SuppressWarnings("mutable")
abstract byte[] stagingKeyBytes();
abstract ValidationMode validationMode();
static Builder builder() {
return new AutoValue_RdeIO_Write.Builder();
}
@AutoValue.Builder
abstract static class Builder {
abstract Builder setGcsUtils(GcsUtils gcsUtils);
abstract Builder setRdeBucket(String value);
abstract Builder setStagingKeyBytes(byte[] value);
abstract Builder setValidationMode(ValidationMode value);
abstract Write build();
}
@Override
public PDone expand(PCollection<KV<PendingDeposit, Iterable<DepositFragment>>> input) {
input
.apply(
"Write to GCS",
ParDo.of(new RdeWriter(gcsUtils(), rdeBucket(), stagingKeyBytes(), validationMode())))
.apply("Update cursors", ParDo.of(new CursorUpdater()));
return PDone.in(input.getPipeline());
}
}
private static class RdeWriter
extends DoFn<KV<PendingDeposit, Iterable<DepositFragment>>, KV<PendingDeposit, Integer>> {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private final GcsUtils gcsUtils;
private final String rdeBucket;
private final byte[] stagingKeyBytes;
private final RdeMarshaller marshaller;
protected RdeWriter(
GcsUtils gcsUtils,
String rdeBucket,
byte[] stagingKeyBytes,
ValidationMode validationMode) {
this.gcsUtils = gcsUtils;
this.rdeBucket = rdeBucket;
this.stagingKeyBytes = stagingKeyBytes;
this.marshaller = new RdeMarshaller(validationMode);
}
@Setup
public void setup() {
Security.addProvider(new BouncyCastleProvider());
}
@ProcessElement
public void processElement(
@Element KV<PendingDeposit, Iterable<DepositFragment>> kv,
PipelineOptions options,
OutputReceiver<KV<PendingDeposit, Integer>> outputReceiver) {
PGPPublicKey stagingKey = PgpHelper.loadPublicKeyBytes(stagingKeyBytes);
PendingDeposit key = kv.getKey();
Iterable<DepositFragment> fragments = kv.getValue();
RdeCounter counter = new RdeCounter();
// Determine some basic things about the deposit.
final RdeMode mode = key.mode();
final String tld = key.tld();
final DateTime watermark = key.watermark();
final int revision =
Optional.ofNullable(key.revision())
.orElseGet(() -> RdeRevision.getNextRevision(tld, watermark, mode));
String id = RdeUtil.timestampToId(watermark);
String prefix = options.getJobName();
String basename = RdeNamingUtils.makeRydeFilename(tld, watermark, mode, 1, revision);
if (key.manual()) {
checkState(key.directoryWithTrailingSlash() != null, "Manual subdirectory not specified");
prefix = prefix + "/manual/" + key.directoryWithTrailingSlash() + basename;
} else {
prefix = prefix + "/" + basename;
}
BlobId xmlFilename = BlobId.of(rdeBucket, prefix + ".xml.ghostryde");
// This file will contain the byte length (ASCII) of the raw unencrypted XML.
//
// This is necessary because RdeUploadAction creates a tar file which requires that the length
// be outputted. We don't want to have to decrypt the entire ghostryde file to determine the
// length, so we just save it separately.
BlobId xmlLengthFilename = BlobId.of(rdeBucket, prefix + ".xml.length");
BlobId reportFilename = BlobId.of(rdeBucket, prefix + "-report.xml.ghostryde");
// These variables will be populated as we write the deposit XML and used for other files.
boolean failed = false;
XjcRdeHeader header;
// Write a gigantic XML file to GCS. We'll start by opening encrypted out/err file handles.
logger.atInfo().log("Writing %s and %s", xmlFilename, xmlLengthFilename);
try (OutputStream gcsOutput = gcsUtils.openOutputStream(xmlFilename);
OutputStream lengthOutput = gcsUtils.openOutputStream(xmlLengthFilename);
OutputStream ghostrydeEncoder = Ghostryde.encoder(gcsOutput, stagingKey, lengthOutput);
Writer output = new OutputStreamWriter(ghostrydeEncoder, UTF_8)) {
// Output the top portion of the XML document.
output.write(marshaller.makeHeader(id, watermark, RdeResourceType.getUris(mode), revision));
// Output XML fragments while counting them.
for (DepositFragment fragment : fragments) {
if (!fragment.xml().isEmpty()) {
output.write(fragment.xml());
counter.increment(fragment.type());
}
if (!fragment.error().isEmpty()) {
failed = true;
logger.atSevere().log("Fragment error: %s", fragment.error());
}
}
// Don't write the IDN elements for BRDA.
if (mode == RdeMode.FULL) {
for (IdnTableEnum idn : IdnTableEnum.values()) {
output.write(marshaller.marshalIdn(idn.getTable()));
counter.increment(RdeResourceType.IDN);
}
}
// Output XML that says how many resources were emitted.
header = counter.makeHeader(tld, mode);
output.write(marshaller.marshalOrDie(new XjcRdeHeaderElement(header)));
// Output the bottom of the XML document.
output.write(marshaller.makeFooter());
} catch (IOException e) {
throw new RuntimeException(e);
}
// If an entity was broken, abort after writing as much log and deposit data as possible.
verify(!failed, "RDE staging failed for TLD %s", tld);
// Write a tiny XML file to GCS containing some information about the deposit.
//
// This will be sent to ICANN once we're done uploading the big XML to the escrow provider.
if (mode == RdeMode.FULL) {
logger.atInfo().log("Writing %s", reportFilename);
try (OutputStream gcsOutput = gcsUtils.openOutputStream(reportFilename);
OutputStream ghostrydeEncoder = Ghostryde.encoder(gcsOutput, stagingKey)) {
counter.makeReport(id, watermark, header, revision).marshal(ghostrydeEncoder, UTF_8);
} catch (IOException | XmlException e) {
throw new RuntimeException(e);
}
}
// Now that we're done, output the key so that the next step can roll the cursor forward.
if (key.manual()) {
logger.atInfo().log("Manual operation; not advancing cursor or enqueuing upload task");
} else {
outputReceiver.output(KV.of(key, revision));
}
}
}
private static class CursorUpdater extends DoFn<KV<PendingDeposit, Integer>, Void> {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
@ProcessElement
public void processElement(@Element KV<PendingDeposit, Integer> input) {
tm().transact(
() -> {
PendingDeposit key = input.getKey();
int revision = input.getValue();
Registry registry = Registry.get(key.tld());
Optional<Cursor> cursor =
transactIfJpaTm(
() ->
tm().loadByKeyIfPresent(
Cursor.createVKey(key.cursor(), registry.getTldStr())));
DateTime position = getCursorTimeOrStartOfTime(cursor);
checkState(key.interval() != null, "Interval must be present");
DateTime newPosition = key.watermark().plus(key.interval());
if (!position.isBefore(newPosition)) {
logger.atWarning().log("Cursor has already been rolled forward.");
return;
}
verify(
position.equals(key.watermark()),
"Partial ordering of RDE deposits broken: %s %s",
position,
key);
tm().put(Cursor.create(key.cursor(), newPosition, registry));
logger.atInfo().log(
"Rolled forward %s on %s cursor to %s", key.cursor(), key.tld(), newPosition);
RdeRevision.saveRevision(key.tld(), key.watermark(), key.mode(), revision);
});
}
}
}
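A hypothetical wiring of the Write transform above; the bucket name is a placeholder, the GcsUtils instance, staging key bytes, and validation mode are assumed to come from elsewhere (in the real pipeline they are injected or decoded from options), and since Write and its builder are package-private this sketch assumes it lives in the google.registry.beam.rde package.
package google.registry.beam.rde;
import google.registry.gcs.GcsUtils;
import google.registry.rde.DepositFragment;
import google.registry.rde.PendingDeposit;
import google.registry.xml.ValidationMode;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
final class RdeIoWriteSketch {
  private RdeIoWriteSketch() {}
  static void writeDeposits(
      PCollection<KV<PendingDeposit, Iterable<DepositFragment>>> fragments,
      GcsUtils gcsUtils,
      byte[] stagingKeyBytes,
      ValidationMode validationMode) {
    fragments.apply(
        "Write RDE deposits",
        RdeIO.Write.builder()
            .setGcsUtils(gcsUtils)
            .setRdeBucket("my-rde-bucket")       // hypothetical GCS bucket
            .setStagingKeyBytes(stagingKeyBytes) // PGP staging key bytes, decoded upstream
            .setValidationMode(validationMode)
            .build());
  }
}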

View File

@@ -0,0 +1,303 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.rde;
import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static google.registry.model.EppResourceUtils.loadAtPointInTimeAsync;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.ImmutableSetMultimap;
import com.google.common.collect.Maps;
import com.google.common.collect.Sets;
import com.google.common.io.BaseEncoding;
import dagger.BindsInstance;
import dagger.Component;
import google.registry.beam.common.RegistryJpaIO;
import google.registry.config.CredentialModule;
import google.registry.config.RegistryConfig.ConfigModule;
import google.registry.gcs.GcsUtils;
import google.registry.model.EppResource;
import google.registry.model.contact.ContactResource;
import google.registry.model.domain.DomainBase;
import google.registry.model.host.HostResource;
import google.registry.model.rde.RdeMode;
import google.registry.model.registrar.Registrar;
import google.registry.model.registrar.Registrar.Type;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.persistence.VKey;
import google.registry.rde.DepositFragment;
import google.registry.rde.PendingDeposit;
import google.registry.rde.PendingDeposit.PendingDepositCoder;
import google.registry.rde.RdeFragmenter;
import google.registry.rde.RdeMarshaller;
import google.registry.xml.ValidationMode;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Supplier;
import javax.inject.Inject;
import javax.inject.Singleton;
import javax.persistence.Entity;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.coders.KvCoder;
import org.apache.beam.sdk.coders.SerializableCoder;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.Flatten;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.Reshuffle;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionList;
import org.apache.beam.sdk.values.TypeDescriptor;
import org.apache.beam.sdk.values.TypeDescriptors;
import org.joda.time.DateTime;
/**
* Definition of a Dataflow Flex template, which generates RDE/BRDA deposits.
*
* <p>To stage this template locally, run the {@code stage_beam_pipeline.sh} shell script.
*
* <p>Then, you can run the staged template via the API client library, gcloud, or a raw REST call.
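*
* <p>For instance, once staged, the template could be launched with a command along these lines
* (bucket, path, region, and parameter values below are placeholders, not the real deployment
* values):
*
* <pre>{@code
* gcloud dataflow flex-template run rde-pipeline \
*     --template-file-gcs-location=gs://my-deploy-bucket/templates/rde_pipeline_metadata.json \
*     --region=us-central1 \
*     --parameters=pendings=<base64>,validationMode=LENIENT,gcsBucket=my-rde-bucket,stagingKey=<base64>
* }</pre>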
*
* @see <a href="https://cloud.google.com/dataflow/docs/guides/templates/using-flex-templates">Using
* Flex Templates</a>
*/
@Singleton
public class RdePipeline implements Serializable {
private final transient RdePipelineOptions options;
private final ValidationMode mode;
private final ImmutableSetMultimap<String, PendingDeposit> pendings;
private final String rdeBucket;
private final byte[] stagingKeyBytes;
private final GcsUtils gcsUtils;
// Registrars to be excluded from data escrow. The sandbox-only OTE type is not included, so that
// if one sneaks into production we would get an extra signal.
private static final ImmutableSet<Type> IGNORED_REGISTRAR_TYPES =
Sets.immutableEnumSet(Registrar.Type.MONITORING, Registrar.Type.TEST);
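// HQL template that selects the ids of EPP resources of a given type that were not created,
// last updated, or currently sponsored by a prober registrar; %entity% is substituted by
// createEppResourceQuery below.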
private static final String EPP_RESOURCE_QUERY =
"SELECT id FROM %entity% "
+ "WHERE COALESCE(creationClientId, '') NOT LIKE 'prober-%' "
+ "AND COALESCE(currentSponsorClientId, '') NOT LIKE 'prober-%' "
+ "AND COALESCE(lastEppUpdateClientId, '') NOT LIKE 'prober-%'";
public static String createEppResourceQuery(Class<? extends EppResource> clazz) {
return EPP_RESOURCE_QUERY.replace("%entity%", clazz.getAnnotation(Entity.class).name())
+ (clazz.equals(DomainBase.class) ? " AND tld in (:tlds)" : "");
}
@Inject
RdePipeline(RdePipelineOptions options, GcsUtils gcsUtils) {
this.options = options;
this.mode = ValidationMode.valueOf(options.getValidationMode());
this.pendings = decodePendings(options.getPendings());
this.rdeBucket = options.getGcsBucket();
this.stagingKeyBytes = BaseEncoding.base64Url().decode(options.getStagingKey());
this.gcsUtils = gcsUtils;
}
PipelineResult run() {
Pipeline pipeline = Pipeline.create(options);
PCollection<KV<PendingDeposit, Iterable<DepositFragment>>> fragments =
createFragments(pipeline);
persistData(fragments);
return pipeline.run();
}
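/**
 * Creates the deposit fragments for all relevant entities: {@link Registrar}s plus {@link
 * DomainBase}, {@link ContactResource} and {@link HostResource} resources, flattened into a single
 * collection and grouped by {@link PendingDeposit}.
 */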
PCollection<KV<PendingDeposit, Iterable<DepositFragment>>> createFragments(Pipeline pipeline) {
return PCollectionList.of(processRegistrars(pipeline))
.and(processNonRegistrarEntities(pipeline, DomainBase.class))
.and(processNonRegistrarEntities(pipeline, ContactResource.class))
.and(processNonRegistrarEntities(pipeline, HostResource.class))
.apply(Flatten.pCollections())
.setCoder(KvCoder.of(PendingDepositCoder.of(), SerializableCoder.of(DepositFragment.class)))
.apply("Group by PendingDeposit", GroupByKey.create());
}
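/**
 * Writes the grouped fragments to GCS as encrypted deposits and rolls the cursors forward, via the
 * {@link RdeIO.Write} transform.
 */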
void persistData(PCollection<KV<PendingDeposit, Iterable<DepositFragment>>> input) {
input.apply(
"Write to GCS and update cursors",
RdeIO.Write.builder()
.setRdeBucket(rdeBucket)
.setGcsUtils(gcsUtils)
.setValidationMode(mode)
.setStagingKeyBytes(stagingKeyBytes)
.build());
}
PCollection<KV<PendingDeposit, DepositFragment>> processRegistrars(Pipeline pipeline) {
return pipeline
.apply(
"Read all production Registrar entities",
RegistryJpaIO.read(
"SELECT clientIdentifier FROM Registrar WHERE type NOT IN (:types)",
ImmutableMap.of("types", IGNORED_REGISTRAR_TYPES),
String.class,
// TODO: consider adding coders for entities and passing them directly instead of using
// VKeys.
id -> VKey.createSql(Registrar.class, id)))
.apply(
"Marshal Registrar into DepositFragment",
FlatMapElements.into(
TypeDescriptors.kvs(
TypeDescriptor.of(PendingDeposit.class),
TypeDescriptor.of(DepositFragment.class)))
.via(
(VKey<Registrar> key) -> {
Registrar registrar = jpaTm().transact(() -> jpaTm().loadByKey(key));
DepositFragment fragment =
new RdeMarshaller(mode).marshalRegistrar(registrar);
return pendings.values().stream()
.map(pending -> KV.of(pending, fragment))
.collect(toImmutableSet());
}));
}
@SuppressWarnings("deprecation") // Reshuffle is still recommended by Dataflow.
<T extends EppResource>
PCollection<KV<PendingDeposit, DepositFragment>> processNonRegistrarEntities(
Pipeline pipeline, Class<T> clazz) {
return createInputs(pipeline, clazz)
.apply("Marshal " + clazz.getSimpleName() + " into DepositFragment", mapToFragments(clazz))
.setCoder(KvCoder.of(PendingDepositCoder.of(), SerializableCoder.of(DepositFragment.class)))
.apply(
"Reshuffle KV<PendingDeposit, DepositFragment> of "
+ clazz.getSimpleName()
+ " to prevent fusion",
Reshuffle.of());
}
<T extends EppResource> PCollection<VKey<T>> createInputs(Pipeline pipeline, Class<T> clazz) {
return pipeline.apply(
"Read all production " + clazz.getSimpleName() + " entities",
RegistryJpaIO.read(
createEppResourceQuery(clazz),
clazz.equals(DomainBase.class)
? ImmutableMap.of("tlds", pendings.keySet())
: ImmutableMap.of(),
String.class,
// TODO: consider adding coders for entities and passing them directly instead of using
// VKeys.
x -> VKey.create(clazz, x)));
}
<T extends EppResource>
FlatMapElements<VKey<T>, KV<PendingDeposit, DepositFragment>> mapToFragments(Class<T> clazz) {
return FlatMapElements.into(
TypeDescriptors.kvs(
TypeDescriptor.of(PendingDeposit.class), TypeDescriptor.of(DepositFragment.class)))
.via(
(VKey<T> key) -> {
T resource = jpaTm().transact(() -> jpaTm().loadByKey(key));
// The set of all TLDs to which this resource should be emitted.
ImmutableSet<String> tlds =
clazz.equals(DomainBase.class)
? ImmutableSet.of(((DomainBase) resource).getTld())
: pendings.keySet();
// Get the set of all point-in-time watermarks we need, to minimize rewinding.
ImmutableSet<DateTime> dates =
tlds.stream()
.map(pendings::get)
.flatMap(ImmutableSet::stream)
.map(PendingDeposit::watermark)
.collect(toImmutableSet());
// Launch asynchronous fetches of point-in-time representations of resource.
ImmutableMap<DateTime, Supplier<EppResource>> resourceAtTimes =
ImmutableMap.copyOf(
Maps.asMap(dates, input -> loadAtPointInTimeAsync(resource, input)));
// Convert resource to an XML fragment for each watermark/mode pair lazily and cache
// the result.
RdeFragmenter fragmenter =
new RdeFragmenter(resourceAtTimes, new RdeMarshaller(mode));
List<KV<PendingDeposit, DepositFragment>> results = new ArrayList<>();
for (String tld : tlds) {
for (PendingDeposit pending : pendings.get(tld)) {
// Hosts and contacts don't get included in BRDA deposits.
if (pending.mode() == RdeMode.THIN && !clazz.equals(DomainBase.class)) {
continue;
}
Optional<DepositFragment> fragment =
fragmenter.marshal(pending.watermark(), pending.mode());
fragment.ifPresent(
depositFragment -> results.add(KV.of(pending, depositFragment)));
}
}
return results;
});
}
/**
* Decodes the pipeline option extracted from the URL parameter sent by the pipeline launcher
* back into the original TLD to pending deposit map.
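*
* <p>A minimal sketch of the round trip with {@link #encodePendings} (the map contents here are
* placeholders):
*
* <pre>{@code
* ImmutableSetMultimap<String, PendingDeposit> pendings = ImmutableSetMultimap.of(); // placeholder
* String encoded = encodePendings(pendings);
* ImmutableSetMultimap<String, PendingDeposit> decoded = decodePendings(encoded);
* // decoded.equals(pendings) holds.
* }</pre>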
*/
@SuppressWarnings("unchecked")
static ImmutableSetMultimap<String, PendingDeposit> decodePendings(String encodedPending) {
try (ObjectInputStream ois =
new ObjectInputStream(
new ByteArrayInputStream(
BaseEncoding.base64Url().omitPadding().decode(encodedPending)))) {
return (ImmutableSetMultimap<String, PendingDeposit>) ois.readObject();
} catch (IOException | ClassNotFoundException e) {
throw new IllegalArgumentException("Unable to parse encoded pending deposit map.", e);
}
}
/**
* Encodes the TLD to pending deposit map as a URL-safe string that is sent to the pipeline
* worker by the pipeline launcher as a pipeline option.
*/
static String encodePendings(ImmutableSetMultimap<String, PendingDeposit> pendings)
throws IOException {
try (ByteArrayOutputStream baos = new ByteArrayOutputStream()) {
ObjectOutputStream oos = new ObjectOutputStream(baos);
oos.writeObject(pendings);
oos.flush();
return BaseEncoding.base64Url().omitPadding().encode(baos.toByteArray());
}
}
public static void main(String[] args) throws IOException, ClassNotFoundException {
PipelineOptionsFactory.register(RdePipelineOptions.class);
RdePipelineOptions options = PipelineOptionsFactory.fromArgs(args).as(RdePipelineOptions.class);
options.setIsolationOverride(TransactionIsolationLevel.TRANSACTION_READ_COMMITTED);
DaggerRdePipeline_RdePipelineComponent.builder().options(options).build().rdePipeline().run();
}
@Singleton
@Component(modules = {CredentialModule.class, ConfigModule.class})
interface RdePipelineComponent {
RdePipeline rdePipeline();
@Component.Builder
interface Builder {
@BindsInstance
Builder options(RdePipelineOptions options);
RdePipelineComponent build();
}
}
}


@@ -0,0 +1,42 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.rde;
import google.registry.beam.common.RegistryPipelineOptions;
import org.apache.beam.sdk.options.Description;
/** Custom options for running the RDE pipeline. */
public interface RdePipelineOptions extends RegistryPipelineOptions {
@Description("The Base64-encoded serialized map of TLDs to PendingDeposit.")
String getPendings();
void setPendings(String value);
@Description("The validation mode (LENIENT|STRICT) that the RDE marshaller uses.")
String getValidationMode();
void setValidationMode(String value);
@Description("The GCS bucket where the encrypted RDE deposits will be uploaded to.")
String getGcsBucket();
void setGcsBucket(String value);
@Description("The Base64-encoded PGP public key to encrypt the deposits.")
String getStagingKey();
void setStagingKey(String value);
}
