mirror of https://github.com/google/nomulus synced 2026-02-02 19:12:27 +00:00

Compare commits


102 Commits

Author SHA1 Message Date
gbrodman
40b7a23d88 Filter missing dsData digests during replay (#1439)
This is a result of bad data (we should never allow a null digest) and
we'll need to fix that separately, but this allows us to not fail on
this during replay
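
Illustrative sketch of the filter described above, using a hypothetical DsData stand-in rather than the real Nomulus entity:

```java
import java.util.Set;
import java.util.stream.Collectors;

class DsDataFilter {
  // Hypothetical stand-in for the real delegation signer data entity.
  record DsData(int keyTag, int algorithm, int digestType, byte[] digest) {}

  // Drop records whose digest was never set, so bad legacy data does not
  // abort the whole replay.
  static Set<DsData> filterMissingDigests(Set<DsData> dsData) {
    return dsData.stream()
        .filter(ds -> ds.digest() != null)
        .collect(Collectors.toSet());
  }
}
```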
2021-11-30 15:37:42 -05:00
gbrodman
05e36f378b Add NotLoggedInException tests to flows and flow docs (#1437)
* Add NotLoggedInException tests to flows and flow docs

This wasn't included in flows.md before because the test existed in
ResourceFlowTestCase. So even though the exception could be thrown and
even though this was tested, it wasn't picked up in the documentation
because the documentation is picked up from the corresponding concrete
test class.
2021-11-30 15:00:05 -05:00
Weimin Yu
a82e6a05af Validate SQL with Datastore being Primary (#1436)
* Validate SQL with Datastore being primary

Validates the data asynchronously replicated from Datastore to SQL.
This is a short term tool optimized for the current production database.

Tested in production.
2021-11-30 12:57:49 -05:00
gbrodman
b8583bb325 Provide useful error messages on flows run during read-only mode (#1425)
We want to keep the read-only-mode-exception as an unchecked exception,
so we introduce a temporary check in the EppController that provides a
specific error message for this situation (rather than letting it fall
through to the generic "command failed" messaging).
2021-11-24 14:57:44 -05:00
Rachel Guan
c31c1d4013 Replace VKey.fromWebsafeKey() with VKey.create(string) (#1414)
* Replace with stringify() and VKey.create(string)

* Convert implicit cases of VKey.fromWebsafeKey(string)

* Convert from Key to VKey to use stringify()

* Modify existing code to show correct string representation of a key

* Use VKey.create(websafeKey) to get ofy key in ResaveEntitiesCommand

* Add TODO note in CommitLogMutation and determine if key string should be modified

* Revert from stringify() to getOfyKey().getString()

* Add bug ids to TODOs
2021-11-24 12:14:13 -05:00
gbrodman
4adb7d859d Ignore read-only mode in SQL->DS replication process (#1432)
* Ignore read-only mode in SQL->DS replication process

We need to be able to save indices and save data about the replication
even when we're in read-only mode.
2021-11-24 11:51:25 -05:00
sarahcaseybot
d4aa7b3c78 Add schema change for missing PollMessage.OneTime column (#1434) 2021-11-24 11:23:26 -05:00
gbrodman
2d9e969f87 Remove converter for CreateAutoTimestamp (#1429)
We can handle it the same way that we handle UpdateAutoTimestamp, where
we simply populate it in SQL if it doesn't exist. This has the following
benefits:

1. The converter is unnecessary code
2. We get non-null column definitions for free (overridden in
EppResource to allow null creation times so that legacy *History objects
can contain null in that field)
3. More importantly, this allows us for proper SQL->DS replay. If the
field is filled out using a converter (as before this PR) then the field
is only actually filled out on transaction commit (rather than when the
write occurs within the transaction). This means that when we serialize
the Transaction object during the transaction (the data that gets
replayed to Datastore), we are crucially missing the creation time.

If the creation time is written on commit, we have to start a new
transaction to write the Transaction object, and it's an absolute
necessity that the record of the transaction be included in the
transaction itself so as to avoid situations where the transaction
succeeds but the record fails.

If the field is filled out in a @PrePersist method, crucially that
occurs on the object write itself (before transaction commit).
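
For illustration, a generic JPA sketch of the @PrePersist timing described above (not the actual CreateAutoTimestamp code):

```java
import java.time.Instant;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.PrePersist;

@Entity
class AuditedEntity {
  @Id Long id;
  Instant creationTime;

  // A @PrePersist callback runs when the entity write happens inside the
  // transaction, not at commit time, so the creation time is already set
  // when the Transaction object is serialized for replay to Datastore.
  @PrePersist
  void fillCreationTimeOnWrite() {
    if (creationTime == null) {
      creationTime = Instant.now();
    }
  }
}
```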
2021-11-23 14:56:47 -05:00
Lai Jiang
65c8769c68 Refactor RDE pipeline (#1427)
The original RDE pipeline was a direct translation of the App Engine
MapReduce logic. It turned out to be too slow (taking more than a day to
run) due to the way it finds the most recent history entry.

This PR overhauled the pipeline by using embedded EPP resource entities
inside history entries (only available in SQL) and finding the most
recent entries using the SQL engine. It cuts the time down to ~2h.

Note that there are quota limits on the CPU cores and external IP
addresses for a given GCP region inside a project, which will need to
accommodate the resource requirements for the pipeline. More details are
provided in comments.

Also merged the update cursor stage and enqueue next action stage in
RdeIO so that they can be done within a transaction, same as how
MapReduce handles them.


2021-11-23 11:29:00 -05:00
Michael Muller
bf4b6978a7 Add "postgres" robot id to nomulus (#1433) 2021-11-22 12:35:51 -05:00
Rachel Guan
548ae25fac Change Optional::isEmpty to Optional::isPresent (#1428) 2021-11-18 17:08:15 -05:00
gbrodman
8393c75929 Ignore read-only mode when running commit logs / backups (#1424)
We need to be able to continue running the backup and async replay code
while the database is in read-only mode
2021-11-18 15:42:23 -05:00
sarahcaseybot
1764ae0b3f Remove TmchCrl singleton from Datastore (#1419) 2021-11-17 14:53:29 -05:00
Rachel Guan
d76abfc23a Change TaskQueueUtils to CloudTasksUtils in CommitLogFanoutAction (#1408)
* Change TaskOptions to Task in CommitLogFanoutAction

* Add a createTask method that takes clock and jitterSeconds

* Change CreateTask parameter type and improve test cases

* Improve comments and test cases

* Improve test cases that handle jitterSeconds
2021-11-17 10:54:42 -05:00
Ben McIlwain
6af9299a3c Grandfather in old data for one-time billing event requirement (#1423)
* Grandfather in old data for one-time billing event requirement

We have data from 2018 and earlier where we didn't consistently set periodYears
for OneTime BillingEvents with certain reasons. This grandfathers in that old
data so that we can successfully move it over to Cloud SQL for now, then we can
later run a query that will backfill it, after which we can then tighten up the
requirement again. Note that the requirement is still being enforced for all
billing events from 2019 onwards.

This also improves the handling of validation, by adding a private field to the
Reason enum rather than creating a throwaway inline ImmutableSet in the
Builder.
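
A sketch of the enum-field validation pattern described above; the values, field, and cutoff logic are illustrative, not the real Reason enum:

```java
enum Reason {
  CREATE(true),
  RENEW(true),
  SERVER_STATUS(false);

  private final boolean requiresPeriodYears;

  Reason(boolean requiresPeriodYears) {
    this.requiresPeriodYears = requiresPeriodYears;
  }

  boolean requiresPeriodYears() {
    return requiresPeriodYears;
  }
}

class BillingEventValidator {
  // Enforce periodYears only for events from 2019 onwards, grandfathering
  // in the inconsistent pre-2019 data described above.
  static void checkPeriodYears(Reason reason, Integer periodYears, int eventYear) {
    if (reason.requiresPeriodYears() && eventYear >= 2019 && periodYears == null) {
      throw new IllegalStateException("periodYears is required for reason " + reason);
    }
  }
}
```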
2021-11-16 16:12:08 -05:00
gbrodman
a53c127573 Release the replay lock in SQL, not Datastore (#1422)
* Release the replay lock in SQL, not Datastore

It's always acquired in SQL, so it should always be released in SQL.
2021-11-16 11:37:20 -05:00
Ben McIlwain
8dbf4fced9 Send registrars poll messages when we add/remove server-side statuses (#1417)
* Send registrars poll messages when we add/remove server-side status values
2021-11-16 11:35:05 -05:00
gbrodman
5dc6354ebc Add backend routing for ReplicateToDatastoreAction (#1415)
Otherwise it's not visible so we can't call it
2021-11-15 16:25:10 -05:00
Lai Jiang
c84767bd07 Make Nomulus compile on macOS (#1421)
BSD sed requires a parameter to -i to indicate the backup suffix. By
adding a blank suffix the sed command works on both Linux and macOS.

2021-11-15 11:35:48 -05:00
Lai Jiang
a59f09e011 Update to Gradle 6.9.1 (#1420)
2021-11-15 10:23:26 -05:00
Michael Muller
b4b318f923 Make TaskMatcher default to POST methods (#1418)
* Make TaskMatcher default to POST methods

TaskOptions.Builder.withUrl() defaults to POST methods.  Therefore, it seems
reasonable to verify that task queue methods are using the POST method,
especially given that the method must now be identified explicitly when using
CloudTaskUtils.  This check would have guarded against the bug fixed by #1413.

* Elaborate on comment

* Further improved the comment
2021-11-12 14:03:23 -05:00
Rachel Guan
52550a9251 Correct HTTP method in CommitLogCheckPointAction (#1413)
* Correct HTTP method in CommitLogCheckPointAction
2021-11-11 15:59:48 -05:00
Michael Muller
930c4f8cfa Add all necessary proxy configuration for QA (#1416)
* Add all necessary proxy configuration for QA

Add configuration files, deployment files and the necessary enum values for
the QA environment.
2021-11-11 15:36:47 -05:00
Weimin Yu
b4468d83a9 Remove the ineffective SQL injection check (#1412)
* Remove the ineffective SQL injection check

Remove the ineffective SQL-injection attack check in go/r3pr/954. It is
quite restrictive, causing a long exempt list. It also doesn't protect
queries made through helpers such as QueryComposer etc.

We will start from scratch for a new solution.
2021-11-10 16:28:32 -05:00
Rachel Guan
4dc4daffe6 Change from TaskQueueUtils to CloudTasksUtils in PublishInvoicesAction (#1410)
* Change from TaskQueueUtils to CloudTasksUtils in PublishInvoicesAction
2021-11-10 10:13:19 -05:00
Rachel Guan
76458bb3b9 Change TaskQueueUtils to CloudTasksUtils in CommitLogCheckPointAction (#1409)
* Change TaskQueueUtils to CloudTasksUtils in CommitLogCheckPointAction
2021-11-10 10:13:14 -05:00
sarahcaseybot
2d1a67b01b Add a parameter to prevent spec11 from sending emails (#1407) 2021-11-05 13:02:59 -04:00
Rachel Guan
01d3932122 Test vkey behaviors when in a task queue (#1406)
* Test vkey behavior in task queue
2021-11-04 21:04:18 -04:00
sarahcaseybot
2eb8bb3996 Add Cloud SQL queries for transaction reports (#1397)
* Add the Cloud SQL queries for transaction reports

* Add the remaining queries

* Some query fixes

* Fix comments

* Fix indentation in total_nameservers

* Fix indentation on other Case condition
2021-11-03 11:25:31 -04:00
Rachel Guan
2218663d55 Add VKey to String and String to VKey methods (#1396)
* Add stringify and parse methods to SerializeUtils

* Improve comments and test cases

* Fix comments and test strings

* Fix dependency warning
2021-11-02 13:25:35 -04:00
gbrodman
e0dc2e43bb Pass the ICANN reporting BQ dataset to the DNS query coordinator (#1405) 2021-11-02 13:24:04 -04:00
Weimin Yu
7fedd40739 Fix InitSqlPipeline regarding synthesized history (#1404)
* Fix InitSqlPipeline regarding synthesized history

There are a few bad domains in Datastore that we hardcoded to ignore
during SQL population. They didn't have history so we didn't try to
filter when writing history.

Recently we created synthesized history for domains, including the bad
domains. Now we need to filter History entries.
2021-11-02 11:12:57 -04:00
Weimin Yu
f793ca5b68 Support shared database snapshot (#1403)
* Support shared database snapshot

Allow multiple workers to share a CONSISTENT database snapshot. The
motivating use case is SQL database snapshot loading, where it is too
slow to depend on one worker to load everything.

This is currently PostgreSQL-specific, but will be improved to be
vendor-independent.

Also made sure AppEngineEnvironment.java clears the cached environment
in all cases when tearing down.
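
The PostgreSQL primitive underneath a shared CONSISTENT snapshot is snapshot export/import; a bare-JDBC sketch (the Nomulus wrapper is not shown here):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

class SnapshotSharing {
  // Worker 0: export a snapshot ID while holding its transaction open.
  static String exportSnapshot(Connection conn) throws Exception {
    conn.setAutoCommit(false);  // snapshot is only valid while the tx is open
    try (Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT pg_export_snapshot()")) {
      rs.next();
      return rs.getString(1);
    }
  }

  // Workers 1..n: adopt the exported snapshot so all reads see the same data.
  static void adoptSnapshot(Connection conn, String snapshotId) throws Exception {
    conn.setAutoCommit(false);
    try (Statement stmt = conn.createStatement()) {
      stmt.execute("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ");
      stmt.execute("SET TRANSACTION SNAPSHOT '" + snapshotId + "'");
    }
  }
}
```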
2021-11-01 13:01:37 -04:00
gbrodman
395ed19601 Canonicalize domain/host names in async DS->SQL replay (#1350) 2021-11-01 12:08:20 -04:00
Michael Muller
cecc1a6cc7 Update terraform files and instructions (#1402)
* Update terraform files and instructions

Update proxy terraform files based on current best practices and allow
exclusion of forwarding rules for HTTP endpoints.  Specifically:
-   Add a "public_web_whois" input to allow disabling the public HTTP
    whois forwarding.
-   Add "description" fields to all variables.
-   Move outputs of the top-level module into "outputs.tf".
-   Auto-reformat using hclfmt.
2021-10-29 09:10:23 -04:00
Rachel Guan
77bc072aac Add domain pa notification response to first delete domain poll message (#1400)
* Add domain pa notification response to first delete domain poll message

* Add test case for poll message

* Change time in response data to now
2021-10-28 15:45:50 -04:00
Weimin Yu
93a479837f Make entities serializable for DB validation (#1401)
* Make entities serializable for DB validation

Make entities that are asynchronously replicated between Datastore and
Cloud SQL serializable so that they may be used in BEAM pipeline based
comparison tool.

Introduced an UnsafeSerializable interface (extending Serializable) and
added to relevant classes. Implementing classes are allowed some
shortcuts as explained in the interface's Javadoc. Post migration we
will decide whether to revert this change or properly implement
serialization.

Verified with production data.
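
At its core this is a marker interface; a minimal sketch of the idea:

```java
import java.io.Serializable;

// Classes implementing this are serializable only for the pipeline-based
// comparison described above; they may take shortcuts that make them unsafe
// for general-purpose serialization.
public interface UnsafeSerializable extends Serializable {}
```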
2021-10-28 12:19:09 -04:00
gbrodman
1e7aae26a3 Create a mechanism for storing / using locks explicitly only in SQL (#1392)
This is used for the replay locks so that Beam pipelines (which will be
used for database comparison) can acquire / release locks as necessary
to avoid database contention. If we're comparing contents of Datastore
and SQL databases, we shouldn't have replay actively running during the
comparison, so the pipeline will grab the locks.

Beam doesn't always play nicely with loading from / saving to Datastore,
so we need to make sure that we store the replay locks in SQL at all
times, even when Datastore is the primary DB.
2021-10-27 16:20:35 -04:00
Michael Muller
201b6e8e0b Re-enable replay tests for most environments (#1399)
* Re-enable replay tests for most environments

This enables the replay tests except in environments where
the NOMULUS_DISABLE_REPLAY_TESTS environment variable is set to "true".

* Add a check for null
2021-10-25 12:11:02 -04:00
Rachel Guan
43074ea32f Send expiring notification emails to admins if no tech emails are on file (#1387)
* Send emails to admin if tech emails are not present

* Improve test cases and comments
2021-10-21 12:59:31 -04:00
Weimin Yu
1a4a31569e Alt entity model for fast JPA bulk query (#1398)
* Alt entity model for fast JPA bulk query

Defined an alternative JPA entity model that allows fast bulk loading of
multi-level entities, DomainBase and DomainHistory. The idea is to bulk-load
the base table as well as the child tables separately, and assemble them
into the target entity in memory in a pipeline (sketched after this list).

For DomainBase:

- Defined a DomainBaseLite class that models the "Domain" table only.

- Defined a DomainHost class that models the "DomainHost" table
  (nsHosts field).

- Exposed ID fields in GracePeriod so that they can be mapped to domains
  after being loaded into memory.

For DomainHistory:

- Defined a DomainHistoryLite class that models the "DomainHistory"
  table only.

- Defined a DomainHistoryHost class that models its namesake table.

- Exposed ID fields in GracePeriodHistory and DomainDsDataHistory
  classes so that they can be mapped to DomainHistory after being
  loaded into memory.

In PersistenceModule, provisioned a JpaTransactionManager that uses
the alternative entity model.

Also added a pipeline option that specifies which JpaTransactionManager
to use in a pipeline.
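
A rough sketch of the load-then-assemble idea under simplified stand-ins for the "lite" classes named above:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class DomainAssembly {
  // Simplified stand-ins; the real classes map many more columns.
  record DomainBaseLite(String repoId) {}
  record DomainHost(String domainRepoId, String hostKey) {}

  // Each table is bulk-loaded separately, then joined in memory by the
  // exposed ID fields rather than through JPA's multi-level entity graph.
  static Map<String, List<String>> nsHostsByDomain(List<DomainHost> domainHosts) {
    return domainHosts.stream()
        .collect(Collectors.groupingBy(
            DomainHost::domainRepoId,
            Collectors.mapping(DomainHost::hostKey, Collectors.toList())));
  }
}
```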
2021-10-20 16:48:56 -04:00
gbrodman
c7f50dae92 Use READ_COMMITTED isolation level in CreateSyntheticHEA (#1395)
I observed an instance in which a couple queries from this action were,
for whatever reason, hanging around as idle for >30 minutes. Assuming
the behavior that we saw before where "an open idle serializable
transaction means all pg read-locks stick around forever" still holds,
that's the reason why the number of read-locks in use spirals out of
control.

I'm not sure why those queries aren't timing out, but that's a separate
issue.
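
For context, a JDBC-level sketch of what dropping from SERIALIZABLE to READ_COMMITTED buys here (the actual change goes through the Nomulus transaction manager):

```java
import java.sql.Connection;

class IsolationDemo {
  static void runAtReadCommitted(Connection conn, Runnable work) throws Exception {
    int previous = conn.getTransactionIsolation();
    try {
      // Under SERIALIZABLE, PostgreSQL holds predicate read-locks until the
      // transaction ends, so an idle open transaction pins them indefinitely.
      // READ COMMITTED takes no such locks.
      conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
      work.run();
    } finally {
      conn.setTransactionIsolation(previous);
    }
  }
}
```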
2021-10-19 11:36:15 -04:00
Michael Muller
7344c424d1 Fix problems with the format tasks (#1390)
* Fix problems with the format tasks

The format check is using python2, and if "python" doesn't exist on the path
(or isn't python 2, or there is any other error in the python code or in the
shell script...) the format check just succeeds.

This change:
- Refactors out the gradle code that finds a python3 executable and uses it
  to get the python executable to be used for the format check.
- Upgrades google-java-format-diff.py to python3 and removes #! line.
- Fixes shell script to ensure that failures are propagated.
- Suppresses error output when checking for python commands.

Tested:
- verified that python errors cause the build to fail
- verified that introducing a bad format diff causes check to fail
- verified that javaIncrementalFormatDryRun shows the diffs that would be
  introduced.
- verified that javaIncrementalFormatApply reformats a file.
- verified that well formatted code passes the format check.
- verified that an invalid or missing PYTHON env var causes
  google-java-format-git-diff.sh to fail with the appropriate error.

* Fix presubmit issues

Omit the format presubmit when not in a git repo and remove unused "string"
import.
2021-10-18 08:10:09 -04:00
gbrodman
969fa2b68c Fix weird flake (#1394) 2021-10-15 18:00:46 -04:00
gbrodman
9a569198fb Ignore class visibility in EntityTest (#1389) 2021-10-15 17:08:51 -04:00
gbrodman
8a53edd57b Use multiple transactions in IcannReportingUploadAction (#1386)
Relevant error log message: https://pantheon.corp.google.com/logs/viewer?project=domain-registry&minLogLevel=0&expandAll=false&timestamp=2021-10-11T15:28:01.047783000Z&customFacets=&limitCustomFacetWidth=true&dateRangeEnd=2021-10-11T20:51:40.591Z&interval=PT1H&resource=gae_app&logName=projects%2Fdomain-registry%2Flogs%2Fappengine.googleapis.com%252Frequest_log&scrollTimestamp=2021-10-11T15:10:23.174336000Z&filters=text:icannReportingUpload&dateRangeUnbound=backwardInTime&advancedFilter=resource.type%3D%22gae_app%22%0AlogName%3D%22projects%2Fdomain-registry%2Flogs%2Fappengine.googleapis.com%252Frequest_log%22%0A%22icannReportingUpload%22%0Aoperation.id%3D%22616453df00ff02a873d26cedb40001737e646f6d61696e2d726567697374727900016261636b656e643a6e6f6d756c75732d76303233000100%22

note the "invalid handle" bit

From https://cloud.google.com/datastore/docs/concepts/transactions:
"Transactions expire after 270 seconds or if idle for 60 seconds."

From b/202309933: "There is a 60 second timeout on Datastore operations
after which they will automatically rollback and the handles become
invalid."

From the logs we can see that the action is lasting significantly longer
than 270 seconds -- roughly 480 seconds in the linked log (more or
less). My running theory is that ICANN is, for some reason, now
significantly slower to respond than they used to be. Some uploads in
the log linked above are taking upwards of 10 seconds, especially when
they have to retry. Because we have >=45 TLDs, it's not surprising that
the action is taking >400 seconds to run.

The fix here is to perform each per-TLD operation in its own
transaction. The only reason why we need the transactions is for the
cursors anyway, and we can just grab and store those at the beginning of
the transaction.
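
A sketch of the per-TLD restructuring, using the Nomulus tm() transaction manager but hypothetical helper methods:

```java
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;

import org.joda.time.DateTime;

class PerTldUpload {
  // One short transaction per TLD instead of a single transaction spanning
  // the whole action, so no transaction outlives Datastore's 60s idle /
  // 270s total limits. The three helper methods here are placeholders.
  void uploadAll(Iterable<String> tlds) {
    for (String tld : tlds) {
      tm().transact(() -> {
        DateTime cursorTime = loadCursor(tld);  // grab the cursor at tx start
        uploadReport(tld, cursorTime);          // the slow per-TLD ICANN upload
        saveCursor(tld, cursorTime);            // store progress for this TLD
      });
    }
  }

  private DateTime loadCursor(String tld) { return DateTime.now(); }  // placeholder
  private void uploadReport(String tld, DateTime when) {}             // placeholder
  private void saveCursor(String tld, DateTime when) {}               // placeholder
}
```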
2021-10-15 15:38:37 -04:00
Lai Jiang
d25d4073f5 Add a beam pipeline to create synthetic history entries in SQL (#1383)
* Add a beam pipeline to create synthetic history entries in SQL

The logic is mostly lifted from CreateSyntheticHistoryEntriesAction. We
do not need to test for the existence of an embedded EPP resource in the
history entry before creating a synthetic one because after
InitSqlPipeline runs it is guaranteed that no embedded resource exists.
2021-10-15 14:51:01 -04:00
Ben McIlwain
6ffe84e93d Add a scrap command to hard-delete a host resource (#1391) 2021-10-15 12:28:18 -04:00
Ben McIlwain
a451524010 Add tests for obscure hostname canonicalization rule (#1388)
Also correctly configures Gradle for the util subproject (it wasn't possible to
run tests in IntelliJ without these changes).
2021-10-14 14:53:28 -04:00
Rachel Guan
bb8988ee4e Set payload in success response after sending notification emails (#1377)
* Set payload in success response after sending expiring certificate notification emails

* Modify log message and test cases for run() in sendExpiringCertificateNotificationEmailAction
2021-10-13 15:58:25 -04:00
Rachel Guan
2aff72b3b6 Add reason and requestedByRegistrar to domain renew flow (#1378)
* Resolve merge conflict

* Include reason and requestedByRegistrar in URS test file

* Modify test cases for new parameters in renew flow

* Add reason and registrar_request to renew domain command

* Update comments for new params in renew flow

* Make changes based on feedback
2021-10-13 11:41:02 -04:00
Weimin Yu
35fd61f771 Update parameter to Datastore wipe pipeline (#1385)
* Update parameter to Datastore wipe pipeline

Add the newly required RegistryEnvironment parameter to
BulkDeleteDatastorePipeline.

Remove the nullable annotation for this parameter in options
class.

Update metadata files regarding this parameter.
2021-10-11 17:31:50 -04:00
Michael Muller
13cb17e9a4 Implement several fixes affecting test flakiness (#1379)
* Implement several fixes affecting test flakiness

- Continued to do transaction manager cleanups on replay failure (lack of this
may be causing cascading failures).
- Fix UpdateDomainCommandTest's output check (the test was checking for error
  output in standard error, but the command writes its output to the logs.
  Apparently, these may or may not be reflected in standard error depending on
  current global state)
- Remove unnecessary locking and incorrect comment in CommandTestCase.  The
  JUnit tests are not run in parallel in the same JVM and, in general, there
  are much bigger obstacles to this than standard output stream locking.

* Fix bad log message check
2021-10-11 12:54:03 -04:00
Ben McIlwain
4f1c317bbc Revert update auto timestamp non-transactional fallback (#1380)
This was added recently in PR #1341 as an attempted fix for our test flakiness,
but it turns out that it didn't address the root issue (whereas PR #1361
did). So this removes the fallback, as there's no reason this should ever be
called outside of a transactional context.
2021-10-08 16:44:45 -04:00
gbrodman
c8aa32ef05 Include more info in host/domain name failures (#1346)
We're seeing some of these in CreateSyntheticHistoryEntriesAction and I
can't tell why from the logs (it doesn't appear to print the repo ID or
domain/host name)
2021-10-08 15:17:22 -04:00
gbrodman
95a1bbf66a Temporarily disable SQL->DS replay in all tests (#1363) 2021-10-08 14:15:57 -04:00
Rachel Guan
23aa16469e Add WipeOutContactHistoryPiiAction to prod (#1356) 2021-10-08 11:46:26 -04:00
Ben McIlwain
0277c5c25a Add TmOverrideExtension for more safe TM overrides in tests (#1382)
* Add TmOverrideExtension for more safe TM overrides in tests

This is safer to use than calling setTmForTest() directly because this extension
also handles the corresponding call to removeTmOverrideForTest() automatically,
the forgetting of which has been a source of test flakiness/instability in the
past.

There are now broadly two ways to get tests to run in JPA: either use
DualDatabaseTest, an AppEngineExtension, and the corresponding JPA-specific
@Test annotations, OR use this override alongside a
JpaTransactionManagerExtension.
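
A rough sketch of such an extension, assuming the setTmForTest()/removeTmOverrideForTest() methods named above (the real TmOverrideExtension's details differ):

```java
import google.registry.persistence.transaction.TransactionManager;
import google.registry.persistence.transaction.TransactionManagerFactory;
import org.junit.jupiter.api.extension.AfterEachCallback;
import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

class TmOverrideSketch implements BeforeEachCallback, AfterEachCallback {
  private final TransactionManager override;

  TmOverrideSketch(TransactionManager override) {
    this.override = override;
  }

  @Override
  public void beforeEach(ExtensionContext context) {
    TransactionManagerFactory.setTmForTest(override);
  }

  // afterEach runs even when the test fails, which is exactly the cleanup
  // that was easy to forget when calling setTmForTest() directly.
  @Override
  public void afterEach(ExtensionContext context) {
    TransactionManagerFactory.removeTmOverrideForTest();
  }
}
```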
2021-10-07 19:26:25 -04:00
Ben McIlwain
b1b0589281 Elaborate on database read-only error message (#1355)
* Elaborate on database read-only error message
2021-10-07 13:25:24 -04:00
Ben McIlwain
28628564cc Set response payload when wiping out contact history PII (#1376)
Also uses smaller batches in tests so that they don't take so long.
2021-10-07 12:43:41 -04:00
Michael Muller
835f93f555 Add a reference to RDAP conformance checker (#1358)
* Add a reference to RDAP conformance checker

Make a note of the RDAP conformance checker for the next time that we deal
with the RDAP code - would be nice to have this in the test suite.

* Reformat comment
2021-10-07 12:34:41 -04:00
Ben McIlwain
276c188e9d Canonicalize domain/host names in initial import script (#1347)
* Canonicalize domain/host names in initial import script

* Add tests and reduce some method visibility
2021-10-07 11:59:46 -04:00
Rachel Guan
34ecc6fbe7 Add new parameter renew_one_year to URS (#1364)
* Add autorenews to URS (#1343)

* Add autorenews to URS

* Add autorenews to existing xml files for test cases

* Harmonize domain.get() in existing code

* Fix typo in test case name

* Modify existing test helper method to allow testing with different domain bases
2021-10-06 20:40:43 -04:00
gbrodman
0f4156c563 Use a more efficient query to find resources in histories (#1354) 2021-10-06 15:20:31 -04:00
Michael Muller
e1827ab939 Defer python discovery until presubmit task (#1352)
* Customize LGTM build command

Our presubmit requires a version of python that is more recent than what
lgtm.com's build environments have installed.  Instead of trying to upgrade
them or downgrade our python version, just do the steps of the build that LGTM
needs (i.e. just build the main classes and test classes).
2021-10-06 10:09:13 -04:00
Ben McIlwain
51b2887709 Fix BigQuery data set name handling in activity reporting (#1361)
* Fix BigQuery data set name handling in activity reporting

This is not a constant (as it depends on runtime state), so it can't be named
using UPPER_SNAKE_CASE. Additionally, it's not good practice to use field
initialization when there's logic depending on runtime state involved. So this
PR changes the class to use constructor injection and moves the logic into the
constructor.
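
A simplified sketch of the constructor-injection change described above (names and the fallback value are illustrative):

```java
import javax.inject.Inject;

class ActivityReporter {
  // Not UPPER_SNAKE_CASE: the value depends on runtime configuration, so it
  // is not a constant.
  private final String bigQueryDataSetName;

  // Constructor injection: logic that depends on runtime state runs here,
  // not in a field initializer.
  @Inject
  ActivityReporter(String configuredDataSetName) {
    this.bigQueryDataSetName = configuredDataSetName.isEmpty()
        ? "default_dataset"  // hypothetical fallback
        : configuredDataSetName;
  }

  String dataSet() {
    return bigQueryDataSetName;
  }
}
```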

* Add fix for ICANN reporting provider

* Extract out ICANN reporting data set

* Inject TransactionManager

* Make TransactionInfo static (per Mike)

* Use ofyTm() in BackupTestStore

* Revert extraneous formatting

* Use auditedOfy in CommitLogMutationTest
2021-10-05 15:11:03 -04:00
Lai Jiang
62eb8801c5 Finish RDE pipeline implementation in SQL mode (#1330)
This PR adds the final step in RDE pipeline (enqueueing the next action
  to Cloud Tasks) and makes some necessary changes, namely by making all
  CloudTasksUtils related classes serializable, so that they can be used
  on Beam.

2021-10-04 21:02:44 -04:00
Lai Jiang
f6920454f6 Fix the beam staging script, take 3 (#1370)
The number of arguments changed in https://github.com/google/nomulus/pull/1369, so the check needs to change as well.

2021-10-04 16:44:32 -04:00
Lai Jiang
9103216a46 Fix beam deployment script again. (#1369)
The uberjar task and uberjar name are now different (beamPipelineCommon and
beam_pipeline_common, respectively). This is more idiomatic with regard
to naming conventions but we need to take two different variables now.

2021-10-04 14:23:28 -04:00
Lai Jiang
c6705d1956 Fix sandbox cron (#1366)
* Fix sandbox cron

"synchronized" can only be used to specify a 24h time range that is
evenly divided by the interval value, e.g. "every 2 hours
synchronized".

* Change to a different time
2021-10-04 11:09:55 -04:00
Lai Jiang
737f65bd33 Change Beam uber jar name in Nomulus release GCB config (#1367)
The uber jar name was changed in #1351.

2021-10-04 10:47:27 -04:00
Lai Jiang
c8caa8f80b Remove the use of AppEngineEnvironment in Spec11Pipeline (#1365)
After #1348 it is no longer necessary to use AppEngineEnvironment in
Beam pipelines. In tests it is taken care of by the
DatastoreEntityExtension whereas on Dataflow the
RegistryPipelineWorkerInitializer does the same initialization for Ofy.
2021-10-02 19:23:09 -04:00
Rachel Guan
65ef18052b Add autorenews to URS (#1343)
* Add autorenews to URS

* Add autorenews to existing xml files for test cases
2021-10-01 19:11:46 -04:00
Lai Jiang
f7938e80f7 Streamline how to fake an App Engine environment (#1348)
Both `DatastoreEntityExtension.PlaceholderEnvironment` and `AppEngineEnvironment` do the same thing, so there is no point in having both of them exist. Using `AppEngineEnvironment` as an AutoCloseable requires the user to be mindful of where a fake App Engine environment is required. It is better to set this either in the `DatastoreEntityExtension` for tests, or in the worker initializer in Beam. It also makes it easier to remove the fake environment when we are completely Datastore-free.

Also made a change to how `IdService` allocates IDs in Beam.
2021-10-01 16:46:46 -04:00
Lai Jiang
d8b3a30a20 Rename UpdateKmsKeyringCommand (#1353)
This brings it in line with GetKeyringSecretCommand. We still need to
remove the rest of the remaining Cloud KMS-related code in the future.

2021-10-01 16:45:45 -04:00
sarahcaseybot
93715c6f9e Add VKey workaround to spec11 pipeline (#1339)
* Add VKey workaround to spec11 pipeline

* Parallelize entity loading
2021-10-01 15:21:16 -04:00
Rachel Guan
90cf4519c5 Add a cron job to periodically empty out fields on deleted entities t… (#1303)
* Add a cron job to periodically empty out fields on deleted entities that are at least 18 months old

* Process ContactHistory entities via batching

* Improve test cases by not making assertions in a loop
2021-09-30 15:17:37 -04:00
Michael Muller
3a177f36b1 Make :core:cleanTest depend on FilterTests (#1342)
* Make :core:cleanTest depend on FilterTests

The "cleanTest" target doesn't work for our specialized tests derived from
FilterTest.  Make them all explicit dependencies of cleanTest so we can reset
the tests from a single target.
2021-09-30 10:46:36 -04:00
Lai Jiang
fbbe014e96 Make it possible to stage a single Beam pipeline (#1351) 2021-09-29 18:27:23 -04:00
Ben McIlwain
b05b77cfd1 Add/use more DatabaseHelper convenience methods (#1327)
* Add/use more DatabaseHelper convenience methods

This also fixes up some existing uses of "put" in test code that should be
inserts or updates (depending on which is intended). Doing an insert/update
makes stronger guarantees about an entity either not existing or existing,
depending on what you're doing.

* Convert more Object -> ImmutableObject

* Merge branch 'master' into tx-manager-sigs

* Revert breaking PremiumListDao change

* Refactor more insertInDb()

* Fight more testing errors

* Merge branch 'master' into tx-manager-sigs

* Merge branch 'master' into tx-manager-sigs

* Merge branch 'master' into tx-manager-sigs

* Merge branch 'master' into tx-manager-sigs

* Add removeTmOverrideForTest() calls

* Merge branch 'master' into tx-manager-sigs
2021-09-28 17:16:54 -04:00
Michael Muller
420a0b8b9a Use debian10 image for builder, not ubuntu1804 (#1345)
The debian10 image is generally a bit more recent and, in particular, includes
python 3.7.3, which we're currently using as a baseline for our builds.
2021-09-28 14:49:13 -04:00
gbrodman
cc062e3528 Reduce # shards in CreateSyntheticHistoryEntriesAction (#1344)
We need these to get created (we are blocked from moving to SQL until 30
days after their creation), so reduce this to 3 in the hopes of avoiding
the SQL overloads while we debug why those are occurring in the first
place.
2021-09-28 10:46:50 -04:00
Michael Muller
56a0e35314 Find a suitable version of python. (#1338)
* Find a suitable version of python.

When running presubmit, we were using /usr/bin/python3, which works fine on
systems that have a reasonably recent python version there.  However, our CI
system has a very old version of python there and prefers the use of "pyenv"
to modify the PATH to provide the desired version of python as simply
"python".  So add a check to use the first of "python" or "/usr/bin/python3"
that is at least version 3.7.3.
2021-09-27 16:43:45 -04:00
sarahcaseybot
de434f861f Migrate ICANN activity reports to Cloud SQL on BQ (#1332)
* Migrate ICANN activity reports to Cloud SQL on BQ

* Fix data set name
2021-09-27 15:27:20 -04:00
Ben McIlwain
3caee5fba7 Improve some log messages for readability/consistency (#1333)
* Improve some log messages for readability/consistency

* Address code review comments
2021-09-27 11:35:14 -04:00
Ben McIlwain
ff3c848def Add handling for UpdateAutoTimestamp when not in a transaction (#1341)
* Add handling for UpdateAutoTimestamp when not in a transaction

It's not clear why this is sometimes causing test flakes, but getting better
logging involved should help clear it up.

This also changes AppEngineExtension to insert without reloading the initial
test data, rather than putting it (potentially involving a merge) and reloading
it in a separate transaction. This should hopefully reduce the chance of weird
conflicts.
2021-09-27 11:32:15 -04:00
Rachel Guan
f0b3be5bb6 Improves test file for SendExpiringCertificateNotificationEmailAction (#1335)
* Improves test cases for SendExpiringCertificateNotificationEmailAction
2021-09-27 09:56:16 -04:00
gbrodman
18b808bd34 Fix injection with BackfillRegistryLocksCommand (#1337)
It would have been nice if this had failed at compile time rather than
with an NPE, but we need to make sure to specify that we need to inject this
command to get e.g. the random string generator.

In addition, print out only the names of the failed domains (rather than
the entire domain object) for readability.
2021-09-24 14:08:30 -04:00
Lai Jiang
d7689539d7 Remove mention of bazel run (#1340)
Also provides a workaround in the error message.

2021-09-24 11:44:27 -04:00
Lai Jiang
c14ce6866b Remove remnants of JUnit 4 rules (#1336)
2021-09-24 06:35:35 -04:00
Michael Muller
3b84542e46 Add a presubmit to verify no new JS dependencies (#1334)
* Add a presubmit to verify no new JS dependencies

Verify that we have a known set of javascript dependencies.  This guards
against the inadvertent introduction of a new dependency with a disallowed
license.

TESTED: Added a new package to packages.json, observed presubmit failure.

* Replaced f-strings, printed python version

For some reason, it looks like we're using a python version older than 3.6 on
our CI machines.

* Remove python version trace.
2021-09-23 14:42:47 -04:00
Lai Jiang
fc7db91d70 Consolidate the use of URL parameters to specify database override (#1331)
There are actions for which we want to provide an override for the database
to use, like when launching Spec11 and Invoicing pipelines. It makes sense to
consolidate around the same parameter provided from the same module for
consistency in all cases, instead of defining an override for each action.

2021-09-22 20:01:19 -04:00
Weimin Yu
3d8aa85d63 Fix ReadOnlyCheckingQuery's streaming method (#1329)
* Fix ReadOnlyCheckingQuery's streaming method

Following up on PR 1314: fix one more query defaulting to List when
stream() is invoked.
2021-09-21 15:40:50 -04:00
gbrodman
e14cd8bfa2 Add locking and a response in ReplicateToDatastoreAction (#1328)
* Add locking and a response in ReplicateToDatastoreAction

The response is necessary to get nicer logs in GAE and nicer cron job
behavior.

In addition:
- fix issues where locks would be backed up and replayed to Datastore
(they shouldn't be replayed)
- do ignore-read-only writes when replaying the transactions
2021-09-21 10:12:27 -04:00
Michael Muller
82e8641816 Fix javadoc problems with SoyInfo and subprojects (#1326)
* Fix javadoc problems with SoyInfo and subprojects

The *SoyInfo.java files generated by the soy compiler contain deprecation
warnings with links to files that are not imported.  This causes a javadoc
warning.  Temporarily fix this by replacing the link tags with "LINK".  This
also allows us to remove the exclusion of these files, which is a bit nicer.

Also disable javadoc tasks from subprojects.  These just break because they
don't have access to the legacy javadoc classes in the root.
2021-09-20 10:35:07 -04:00
gbrodman
7461ada0fb Include Cursor in the initial SQL population (#1323)
The test for this also required a bit of a fix in the Cursor scope
initialization. If you persist a Key<?> in some object in Datastore, it
persists just the standard data you'd expect, basically the parent, the
kind, and the object's ID/name. Then, when you load it back in from
Datastore, it uses the app ID of whatever environment you're loading in
(the Key contains this info even though it isn't included in the
toString() of Key)

If you persist the websafe string format of a Key (which is what we do
for Cursors), it includes the app ID so when you load it, it contains
the old app ID not the new one (if the app ID has changed).

In the pipelines, we use the standard default environment which has a
different app ID from the test environment that's set up by the
DatastoreEntityExtension.

As a result, we should check in Cursor to see if the key is *any*
cross-tld-key, rather than the exact one that exists within this app
environment.
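
A sketch of the app-ID capture described above, using the real App Engine KeyFactory API but an illustrative key:

```java
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;

class WebsafeKeyDemo {
  // The websafe string encodes the app ID, so a string persisted under an
  // old app ID round-trips with that old ID even if the current
  // environment's app ID has changed.
  static void demonstrate() {
    Key original = KeyFactory.createKey("EntityGroupRoot", "cross-tld");
    String websafe = KeyFactory.keyToString(original);  // app ID baked in
    Key reparsed = KeyFactory.stringToKey(websafe);
    // reparsed.getAppId() is whatever was encoded at write time, which is
    // why Cursor now accepts *any* cross-tld key rather than requiring an
    // exact match against the current environment's key.
    System.out.println(reparsed.getAppId());
  }
}
```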
2021-09-19 09:23:02 -04:00
Ben McIlwain
d91ca0eb8a Clean up tx manager insert() signature and add convenience helper method (#1325)
* Clean up tx manager insert() signature and add convenience helper method

This is the first of a series of PRs to clean up the type signatures on the
TransactionManager methods (which are way too generic), along with creating some
helper methods for use in tests only that don't require creating transactions
all over the place, thus reducing visual noise at callsites. This first method
is DatabaseHelper.insertInDb(), but there will be plenty of others. Note that
this is only for the Cloud SQL transaction manager -- I'm not bothering to
migrate any Datastore-only code, as that will be going away soon enough.
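
A sketch of what such a test-only helper looks like; the real DatabaseHelper.insertInDb() signature may differ:

```java
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;

import com.google.common.collect.ImmutableList;
import google.registry.model.ImmutableObject;

class DatabaseHelperSketch {
  // Wraps the insert in a transaction so test callers don't have to,
  // cutting the jpaTm().transact(...) noise at every callsite.
  static void insertInDb(ImmutableObject... entities) {
    jpaTm().transact(() -> jpaTm().insertAll(ImmutableList.copyOf(entities)));
  }
}
```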
2021-09-17 14:45:07 -04:00
gbrodman
12dac76dc8 Skip synthetic history entries for resources that don't need them (#1320)
* Skip synthetic history entries for resources that don't need them

The reason for creating synthetic history entries is so that we can
guarantee that each EppResource's most recent *History object contains
that resource at that point in time. If the most recent *History object
in SQL contains that resource already, there is no need to create a
synthetic *History object for that resource.
2021-09-17 12:10:15 -04:00
Michael Muller
ad6471b3fd Clean up a few lint warnings (#1324)
The build is generating the following lint warnings:

core/src/main/java/google/registry/flows/certs/CertificateChecker.java:246:
warning: [ReferenceEquality] Comparison using reference equality instead of value equality
        && (lastExpiringNotificationSentDate == START_OF_TIME
                                             ^
    (see https://errorprone.info/bugpattern/ReferenceEquality)

core/src/test/java/google/registry/backup/ReplayCommitLogsToSqlActionTest.java:350:
warning: [UnnecessaryParentheses] These grouping parentheses are unnecessary; it is unlikely the code will
be misinterpreted without them
        .that(jpaTm().transact((() -> jpaTm().loadByEntity(contactResource))))
2021-09-17 09:15:48 -04:00
Rachel Guan
942584b880 Update the initial value for lastExpiringCertNotificationSentDate to START_OF_TIME (#1321)
* Update the initial value for lastExpiringCertNotificationSentDate to START_OF_TIME
2021-09-16 13:06:47 -04:00
Ben McIlwain
8d421e995e Rename client ID to registrar ID in most places (#1317)
* Rename client ID to registrar ID in most places

This is a code-only change that shouldn't require any sort of data
migration. Correspondingly, there are some existing uses of clientId that are
not migrated (e.g. Datastore fields, task queue payloads, URL parameters for
actions that might be hit from task queues, etc.). And it of course doesn't
modify any fields in EPP XML. Note that the Cloud SQL schema fields are
already named using the registrar_id pattern.

This also doesn't yet touch on the -c parameters in nomulus tools; that will be
coming later (since that is an external manual touch-point, it will require a
lot more in the way of changes to various meta scripts and documentation).

* Change more client IDs

* Merge branch 'master' into clientid-to-registrarid
2021-09-16 12:57:43 -04:00
gbrodman
adc10131a0 Don't change UpdateAutoTimestamp on DS->SQL replay (#1322)
* Don't change UpdateAutoTimestamp on DS->SQL replay
2021-09-16 10:44:53 -04:00
677 changed files with 16748 additions and 5872 deletions

View File

@@ -196,9 +196,41 @@ allprojects {
   }
 }
 
+rootProject.ext {
+  pyver = { exe ->
+    try {
+      ext.execInBash(
+          exe + " -c 'import sys; print(sys.hexversion)' 2>/dev/null",
+          "/") as Integer
+    } catch (org.gradle.process.internal.ExecException e) {
+      return -1;
+    }
+  }
+
+  // Return the path to a usable python3 executable.
+  getPythonExecutable = {
+    // Find a python version greater than 3.7.3 (this is somewhat arbitrary, we
+    // know we'd like at least 3.6, but 3.7.3 is the latest that ships with
+    // Debian so it seems like that should be available anywhere).
+    def MIN_PY_VER = 0x3070300
+    if (pyver('python') >= MIN_PY_VER) {
+      return 'python'
+    } else if (pyver('/usr/bin/python3') >= MIN_PY_VER) {
+      return '/usr/bin/python3'
+    } else {
+      throw new GradleException("No usable Python version found (build " +
+          "requires at least python 3.7.3)");
+    }
+  }
+}
+
 task runPresubmits(type: Exec) {
   executable '/usr/bin/python3'
   args('config/presubmits.py')
+  doFirst {
+    executable getPythonExecutable()
+  }
 }
 
 def javadocSource = []
@@ -412,9 +444,10 @@ rootProject.ext {
         ? "${rootDir}/.."
         : rootDir
     def formatDiffScript = "${scriptDir}/google-java-format-git-diff.sh"
+    def pythonExe = getPythonExecutable()
     return ext.execInBash(
-        "${formatDiffScript} ${action}", "${workingDir}")
+        "PYTHON=${pythonExe} ${formatDiffScript} ${action}", "${workingDir}")
   }
 }
@@ -422,18 +455,23 @@ rootProject.ext {
 // Note that this task checks modified Java files in the entire repository.
 task javaIncrementalFormatCheck {
   doLast {
-    def checkResult = invokeJavaDiffFormatScript("check")
-    if (checkResult == 'true') {
-      throw new IllegalStateException(
-          "Some Java files need to be reformatted. You may use the "
-              + "'javaIncrementalFormatDryRun' task to review\n "
-              + "the changes, or the 'javaIncrementalFormatApply' task "
-              + "to reformat.")
-    } else if (checkResult != 'false') {
-      throw new RuntimeException(
-          "Failed to invoke format check script:\n" + checkResult)
+    // We can only do this in a git tree.
+    if (new File("${rootDir}/.git").exists()) {
+      def checkResult = invokeJavaDiffFormatScript("check")
+      if (checkResult == 'true') {
+        throw new IllegalStateException(
+            "Some Java files need to be reformatted. You may use the "
+                + "'javaIncrementalFormatDryRun' task to review\n "
+                + "the changes, or the 'javaIncrementalFormatApply' task "
+                + "to reformat.")
+      } else if (checkResult != 'false') {
+        throw new RuntimeException(
+            "Failed to invoke format check script:\n" + checkResult)
+      }
+      println("Incremental Java format check ok.")
+    } else {
+      println("Omitting format check: not in a git directory.")
     }
-    println("Incremental Java format check ok.")
   }
 }
@@ -457,15 +495,14 @@ task javaIncrementalFormatApply {
 task javadoc(type: Javadoc) {
   source javadocSource
   classpath = files(javadocClasspath)
-  // Exclude the misbehaving generated-by-Soy Java files
-  exclude "**/*SoyInfo.java"
   destinationDir = file("${buildDir}/docs/javadoc")
   options.encoding = "UTF-8"
   // In a lot of places we don't write @return so suppress warnings about that.
   options.addBooleanOption('Xdoclint:all,-missing', true)
   options.addBooleanOption("-allow-script-in-comments",true)
   options.tags = ["type:a:Generic Type",
-                  "error:a:Expected Error"]
+                  "error:a:Expected Error",
+                  "invariant:a:Guaranteed Property"]
 }
 
 tasks.build.dependsOn(tasks.javadoc)
@@ -483,3 +520,15 @@ task coreDev {
 }
 
 javadocDependentTasks.each { tasks.javadoc.dependsOn(it) }
+
+// disable javadoc in subprojects, these will break because they don't have
+// the correct classpath (see above).
+gradle.taskGraph.whenReady { graph ->
+  graph.getAllTasks().each { task ->
+    def subprojectJavadoc = (task.path =~ /:.+:javadoc/)
+    if (subprojectJavadoc) {
+      println "Skipping ${task.path} for javadoc (only root javadoc works)"
+      task.enabled = false
+    }
+  }
+}

View File

@@ -39,7 +39,7 @@ public final class SystemInfo {
       pid.getOutputStream().close();
       pid.waitFor();
     } catch (IOException e) {
-      logger.atWarning().withCause(e).log("%s command not available", cmd);
+      logger.atWarning().withCause(e).log("%s command not available.", cmd);
       return false;
     }
     return true;

View File

@@ -140,6 +140,8 @@ PROPERTIES = [
         'a BEAM pipeline to image. Setting this property to empty string '
         'will disable image generation.',
         '/usr/bin/dot'),
+    Property('pipeline',
+             'The name of the Beam pipeline being staged.')
 ]
 
 GRADLE_FLAGS = [

View File

@@ -17,9 +17,11 @@ These aren't built in to the static code analysis tools we use (e.g. Checkstyle,
 Error Prone) so we must write them manually.
 """
+import json
 import os
 from typing import List, Tuple
 import sys
+import textwrap
 import re
 
 # We should never analyze any generated files
@@ -28,6 +30,13 @@ UNIVERSALLY_SKIPPED_PATTERNS = {"/build/", "cloudbuild-caches", "/out/", ".git/"
 FORBIDDEN = 1
 REQUIRED = 2
 
+# The list of expected json packages and their licenses.
+# These should be one of the allowed licenses in:
+# config/dependency-license/allowed_licenses.json
+EXPECTED_JS_PACKAGES = [
+    'google-closure-library',  # Owned by Google, Apache 2.0
+]
+
 
 class PresubmitCheck:
@@ -110,9 +119,10 @@ PRESUBMITS = {
         "AppEngineExtension.register(...) instead.",
 
     # PostgreSQLContainer instantiation must specify docker tag
+    # TODO(b/204572437): Fix the pattern to pass DatabaseSnapshotTest.java
     PresubmitCheck(
         r"[\s\S]*new\s+PostgreSQLContainer(<[\s\S]*>)?\(\s*\)[\s\S]*",
-        "java", {}):
+        "java", {"DatabaseSnapshotTest.java"}):
         "PostgreSQLContainer instantiation must specify docker tag.",
 
     # Various Soy linting checks
@@ -177,53 +187,6 @@ PRESUBMITS = {
         {"/node_modules/", "google/registry/ui/js/util.js", "registrar_bin."},
     ):
         "JavaScript files should not include console logging.",
-
-    # SQL injection protection rule for java source file:
-    # The sql template passed to createQuery/createNativeQuery methods must be
-    # a variable name in UPPER_CASE_UNDERSCORE format, i.e., a static final
-    # String variable. This forces the use of parameter-binding on all queries
-    # that take parameters.
-    # The rule would forbid invocation of createQuery(Criteria). However, this
-    # can be handled by adding a helper method in an exempted class to make
-    # the calls.
-    # TODO(b/179158393): enable the 'ConstantName' Java style check to ensure
-    # that non-final variables do not use the UPPER_CASE_UNDERSCORE format.
-    PresubmitCheck(
-        # Line 1: the method names we check and the opening parenthesis, which
-        # marks the beginning of the first parameter
-        # Line 2: The first parameter is a match if is NOT any of the following:
-        #  - final variable name: \s*([A-Z_]+
-        #  - string literal: "([^"]|\\")*"
-        #  - concatenation of literals: (\s*\+\s*"([^"]|\\")*")*
-        # Line 3: , or the closing parenthesis, marking the end of the first
-        # parameter
-        r'.*\.(query|createQuery|createNativeQuery)\('
-        r'(?!(\s*([A-Z_]+|"([^"]|\\")*"(\s*\+\s*"([^"]|\\")*")*)'
-        r'(,|\s*\))))',
-        "java",
-        # ActivityReportingQueryBuilder deals with Dremel queries
-        {"src/test", "ActivityReportingQueryBuilder.java",
-         # This class contains helper method to make queries in Beam.
-         "RegistryJpaIO.java",
-         # TODO(b/179158393): Remove everything below, which should be done
-         # using Criteria
-         "ForeignKeyIndex.java",
-         "HistoryEntryDao.java",
-         "JpaTransactionManager.java",
-         "JpaTransactionManagerImpl.java",
-         # CriteriaQueryBuilder is a false positive
-         "CriteriaQueryBuilder.java",
-         "RdapDomainSearchAction.java",
-         "RdapNameserverSearchAction.java",
-         "RdapSearchActionBase.java",
-         "ReadOnlyCheckingEntityManager.java",
-         "RegistryQuery",
-         },
-    ):
-        "The first String parameter to EntityManager.create(Native)Query "
-        "methods must be one of the following:\n"
-        "  - A String literal\n"
-        "  - Concatenation of String literals only\n"
-        "  - The name of a static final String variable"
 }
 
 # Note that this regex only works for one kind of Flyway file. If we want to
@@ -311,6 +274,26 @@ def verify_flyway_index():
     return not success
 
 
+def verify_javascript_deps():
+    """Verifies that we haven't introduced any new javascript dependencies."""
+    with open('package.json') as f:
+        package = json.load(f)
+    deps = list(package['dependencies'].keys())
+    if deps != EXPECTED_JS_PACKAGES:
+        print('Unexpected javascript dependencies. Was expecting '
+              '%s, got %s.' % (EXPECTED_JS_PACKAGES, deps))
+        print(textwrap.dedent("""
+            * If the new dependencies are intentional, please verify that the
+            * license is one of the allowed licenses (see
+            * config/dependency-license/allowed_licenses.json) and add an entry
+            * for the package (with the license in a comment) to the
+            * EXPECTED_JS_PACKAGES variable in config/presubmits.py.
+            """))
+        return True
+    return False
+
+
 def get_files():
     for root, dirnames, filenames in os.walk("."):
         for filename in filenames:
@@ -318,6 +301,7 @@ def get_files():
 
 if __name__ == "__main__":
+    print('python version is %s' % sys.version)
     failed = False
     for file in get_files():
         error_messages = []
@@ -334,5 +318,8 @@ if __name__ == "__main__":
     # when we put it here it fails fast before all of the tests are run.
     failed |= verify_flyway_index()
 
+    # Make sure we haven't introduced any javascript dependencies.
+    failed |= verify_javascript_deps()
+
     if failed:
         sys.exit(1)

View File

@@ -457,6 +457,22 @@ task soyToJava {
         "--srcs", "${soyFiles.join(',')}",
         "--compileTimeGlobalsFile", "${resourcesSourceDir}/google/registry/ui/globals.txt"
     }
+
+    // Replace the "@link" tags after the "@deprecated" tags in the generated
+    // files. The soy compiler doesn't generate imports for these, causing
+    // us to get warnings when we generate javadocs.
+    // TODO(b/200296387): To be fair, the deprecations are accurate: we're
+    // using the old "SoyInfo" classes instead of the new "Templates" files.
+    // When we convert to the new classes, this hack can go away.
+    def outputs = fileTree(outputDirectory) {
+      include '**/*.java'
+    }
+    outputs.each { file ->
+      exec {
+        commandLine 'sed', '-i""', '-e', 's/@link/LINK/g', file.getCanonicalPath()
+      }
+    }
   }
 
   doLast {
@@ -687,33 +703,19 @@ createToolTask(
     'google.registry.tools.DevTool',
     sourceSets.nonprod)
 
 createToolTask(
-    'jpaDemoPipeline', 'google.registry.beam.common.JpaDemoPipeline')
+    'initSqlPipeline', 'google.registry.beam.initsql.InitSqlPipeline')
 
-project.tasks.create('initSqlPipeline', JavaExec) {
-  main = 'google.registry.beam.initsql.InitSqlPipeline'
-  doFirst {
-    getToolArgsList().ifPresent {
-      args it
-    }
+createToolTask(
+    'validateSqlPipeline', 'google.registry.beam.comparedb.ValidateSqlPipeline')
 
-    def isDirectRunner =
-        args.contains('DirectRunner') || args.contains('--runner=DirectRunner')
-    // The dependency containing DirectRunner is intentionally excluded from the
-    // production binary, so that it won't be chosen by mistake: we definitely do
-    // not want to use it for the real jobs, yet DirectRunner is the default if
-    // the user forgets to override it.
-    // DirectRunner is required for tests and is already on testRuntimeClasspath.
-    // For simplicity, we add testRuntimeClasspath to this task's classpath instead
-    // of defining a new configuration just for the DirectRunner dependency.
-    classpath =
-        isDirectRunner
-            ? sourceSets.main.runtimeClasspath.plus(sourceSets.test.runtimeClasspath)
-            : sourceSets.main.runtimeClasspath
-  }
-}
+createToolTask(
+    'jpaDemoPipeline', 'google.registry.beam.common.JpaDemoPipeline')
+
+createToolTask(
+    'createSyntheticHistoryEntries',
+    'google.registry.tools.javascrap.CreateSyntheticHistoryEntriesPipeline')
 
 // Caller must provide projectId, GCP region, runner, and the kinds to delete
 // (comma-separated kind names or '*' for all). E.g.:
@@ -756,50 +758,57 @@ createUberJar('nomulus', 'nomulus', 'google.registry.tools.RegistryTool')
 // This packages more code and dependency than necessary. However, without
 // restructuring the source tree it is difficult to generate leaner jars.
 createUberJar(
-    'beam_pipeline_common',
+    'beamPipelineCommon',
     'beam_pipeline_common',
     '')
 
-// Create beam staging task if environment is alpha or crash.
-// All other environments use formally released pipelines through CloudBuild.
+// Create beam staging task if the environment is alpha. Production, sandbox and
+// qa use formally released pipelines through CloudBuild, whereas crash and
+// alpha use the pipelines staged on alpha deployment project.
 //
 // User should install gcloud and login to GCP before invoking this tasks.
-if (environment in ['alpha', 'crash']) {
+if (environment == 'alpha') {
   def pipelines = [
-      [
-          mainClass: 'google.registry.beam.initsql.InitSqlPipeline',
-          metaData: 'google/registry/beam/init_sql_pipeline_metadata.json'
-      ],
-      [
-          mainClass: 'google.registry.beam.datastore.BulkDeleteDatastorePipeline',
-          metaData: 'google/registry/beam/bulk_delete_datastore_pipeline_metadata.json'
-      ],
-      [
-          mainClass: 'google.registry.beam.spec11.Spec11Pipeline',
-          metaData: 'google/registry/beam/spec11_pipeline_metadata.json'
-      ],
-      [
-          mainClass: 'google.registry.beam.invoicing.InvoicingPipeline',
-          metaData: 'google/registry/beam/invoicing_pipeline_metadata.json'
-      ],
-      [
-          mainClass: 'google.registry.beam.rde.RdePipeline',
-          metaData: 'google/registry/beam/rde_pipeline_metadata.json'
-      ],
+      initSql            :
+          [
+              mainClass: 'google.registry.beam.initsql.InitSqlPipeline',
+              metaData : 'google/registry/beam/init_sql_pipeline_metadata.json'
+          ],
+      bulkDeleteDatastore:
+          [
+              mainClass: 'google.registry.beam.datastore.BulkDeleteDatastorePipeline',
+              metaData : 'google/registry/beam/bulk_delete_datastore_pipeline_metadata.json'
+          ],
+      spec11             :
+          [
+              mainClass: 'google.registry.beam.spec11.Spec11Pipeline',
+              metaData : 'google/registry/beam/spec11_pipeline_metadata.json'
+          ],
+      invoicing          :
+          [
+              mainClass: 'google.registry.beam.invoicing.InvoicingPipeline',
+              metaData : 'google/registry/beam/invoicing_pipeline_metadata.json'
+          ],
+      rde                :
+          [
+              mainClass: 'google.registry.beam.rde.RdePipeline',
+              metaData : 'google/registry/beam/rde_pipeline_metadata.json'
+          ],
   ]
-  project.tasks.create("stage_beam_pipelines") {
+  project.tasks.create("stageBeamPipelines") {
    doLast {
      pipelines.each {
-        def mainClass = it['mainClass']
-        def metaData = it['metaData']
-        def pipelineName = CaseFormat.UPPER_CAMEL.to(
-            CaseFormat.LOWER_UNDERSCORE,
-            mainClass.substring(mainClass.lastIndexOf('.') + 1))
-        def imageName = "gcr.io/${gcpProject}/beam/${pipelineName}"
-        def metaDataBaseName = metaData.substring(metaData.lastIndexOf('/') + 1)
-        def uberJarName = tasks.beam_pipeline_common.outputs.files.asPath
-        def command = "\
+        if (rootProject.pipeline == '' || rootProject.pipeline == it.key) {
+          def mainClass = it.value['mainClass']
+          def metaData = it.value['metaData']
+          def pipelineName = CaseFormat.UPPER_CAMEL.to(
+              CaseFormat.LOWER_UNDERSCORE,
+              mainClass.substring(mainClass.lastIndexOf('.') + 1))
+          def imageName = "gcr.io/${gcpProject}/beam/${pipelineName}"
+          def metaDataBaseName = metaData.substring(metaData.lastIndexOf('/') + 1)
+          def uberJarName = tasks.beamPipelineCommon.outputs.files.asPath
+          def command = "\
 gcloud dataflow flex-template build \
 gs://${gcpProject}-deploy/live/beam/${metaDataBaseName} \
 --image-gcr-path ${imageName}:live \
@@ -809,10 +818,11 @@ if (environment in ['alpha', 'crash']) {
--jar ${uberJarName} \
--env FLEX_TEMPLATE_JAVA_MAIN_CLASS=${mainClass} \
--project ${gcpProject}".toString()
rootProject.ext.execInBash(command, '/tmp')
rootProject.ext.execInBash(command, '/tmp')
}
}
}
}.dependsOn(tasks.beam_pipeline_common)
}.dependsOn(tasks.beamPipelineCommon)
}
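Usage note: stageBeamPipelines stages every pipeline in the map unless the rootProject.pipeline property narrows it to a single key (see the it.key check above). Assuming that property is supplied as an ordinary Gradle -P project property, staging only the RDE pipeline on alpha would look like:

    ./gradlew stageBeamPipelines -Ppipeline=rde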
// A jar with classes and resources from main sourceSet, excluding internal
@@ -1087,6 +1097,10 @@ test {
// TODO(weiminyu): Remove dependency on sqlIntegrationTest
}.dependsOn(fragileTest, outcastTest, standardTest, registryToolIntegrationTest, sqlIntegrationTest)
// When we override tests, we also break the cleanTest command.
cleanTest.dependsOn(cleanFragileTest, cleanOutcastTest, cleanStandardTest,
cleanRegistryToolIntegrationTest, cleanSqlIntegrationTest)
project.build.dependsOn devtool
project.build.dependsOn buildToolImage
project.build.dependsOn ':stage'

View File

@@ -41,11 +41,12 @@ public class BackupUtils {
}
/**
* Converts the given {@link ImmutableObject} to a raw Datastore entity and writes it to an
* {@link OutputStream} in delimited protocol buffer format.
* Converts the given {@link ImmutableObject} to a raw Datastore entity and writes it to an {@link
* OutputStream} in delimited protocol buffer format.
*/
static void serializeEntity(ImmutableObject entity, OutputStream stream) throws IOException {
EntityTranslator.convertToPb(auditedOfy().save().toEntity(entity)).writeDelimitedTo(stream);
EntityTranslator.convertToPb(auditedOfy().saveIgnoringReadOnly().toEntity(entity))
.writeDelimitedTo(stream);
}
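A minimal usage sketch (hypothetical entities, within the google.registry.backup package): each call to serializeEntity appends one length-delimited EntityProto record, so successive calls on the same stream build up a concatenated backup file.

    ByteArrayOutputStream out = new ByteArrayOutputStream();
    BackupUtils.serializeEntity(someContact, out); // first length-delimited record
    BackupUtils.serializeEntity(someHost, out);    // appended after the first
    byte[] delimitedRecords = out.toByteArray();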
/**

View File

@@ -14,21 +14,21 @@
package google.registry.backup;
import static com.google.appengine.api.taskqueue.QueueFactory.getQueue;
import static com.google.appengine.api.taskqueue.TaskOptions.Builder.withUrl;
import static google.registry.backup.ExportCommitLogDiffAction.LOWER_CHECKPOINT_TIME_PARAM;
import static google.registry.backup.ExportCommitLogDiffAction.UPPER_CHECKPOINT_TIME_PARAM;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.util.DateTimeUtils.isBeforeOrAt;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.flogger.FluentLogger;
import google.registry.model.ofy.CommitLogCheckpoint;
import google.registry.model.ofy.CommitLogCheckpointRoot;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import google.registry.util.TaskQueueUtils;
import google.registry.util.CloudTasksUtils;
import javax.inject.Inject;
import org.joda.time.DateTime;
@@ -56,7 +56,8 @@ public final class CommitLogCheckpointAction implements Runnable {
@Inject Clock clock;
@Inject CommitLogCheckpointStrategy strategy;
@Inject TaskQueueUtils taskQueueUtils;
@Inject CloudTasksUtils cloudTasksUtils;
@Inject CommitLogCheckpointAction() {}
@Override
@@ -73,16 +74,20 @@ public final class CommitLogCheckpointAction implements Runnable {
return;
}
auditedOfy()
.saveWithoutBackup()
.saveIgnoringReadOnly()
.entities(
checkpoint, CommitLogCheckpointRoot.create(checkpoint.getCheckpointTime()));
// Enqueue a diff task between previous and current checkpoints.
taskQueueUtils.enqueue(
getQueue(QUEUE_NAME),
withUrl(ExportCommitLogDiffAction.PATH)
.param(LOWER_CHECKPOINT_TIME_PARAM, lastWrittenTime.toString())
.param(
UPPER_CHECKPOINT_TIME_PARAM, checkpoint.getCheckpointTime().toString()));
cloudTasksUtils.enqueue(
QUEUE_NAME,
CloudTasksUtils.createPostTask(
ExportCommitLogDiffAction.PATH,
Service.BACKEND.toString(),
ImmutableMultimap.of(
LOWER_CHECKPOINT_TIME_PARAM,
lastWrittenTime.toString(),
UPPER_CHECKPOINT_TIME_PARAM,
checkpoint.getCheckpointTime().toString())));
});
}
}

View File

@@ -55,10 +55,9 @@ public final class CommitLogImports {
* represents the changes in one transaction. The {@code CommitLogManifest} contains deleted
* entity keys, whereas each {@code CommitLogMutation} contains one whole entity.
*/
public static ImmutableList<ImmutableList<VersionedEntity>> loadEntitiesByTransaction(
static ImmutableList<ImmutableList<VersionedEntity>> loadEntitiesByTransaction(
InputStream inputStream) {
try (AppEngineEnvironment appEngineEnvironment = new AppEngineEnvironment();
InputStream input = new BufferedInputStream(inputStream)) {
try (InputStream input = new BufferedInputStream(inputStream)) {
Iterator<ImmutableObject> commitLogs = createDeserializingIterator(input, false);
checkState(commitLogs.hasNext());
checkState(commitLogs.next() instanceof CommitLogCheckpoint);
@@ -105,7 +104,7 @@ public final class CommitLogImports {
* represents the changes in one transaction. The {@code CommitLogManifest} contains deleted
* entity keys, whereas each {@code CommitLogMutation} contains one whole entity.
*/
public static ImmutableList<VersionedEntity> loadEntities(InputStream inputStream) {
static ImmutableList<VersionedEntity> loadEntities(InputStream inputStream) {
return loadEntitiesByTransaction(inputStream).stream()
.flatMap(ImmutableList::stream)
.collect(toImmutableList());
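For context, a caller in this package might replay the per-transaction groups like so (a sketch modeled on ReplayCommitLogsToSqlAction later in this change; replayEntity is a hypothetical helper):

    ImmutableList<ImmutableList<VersionedEntity>> transactions =
        CommitLogImports.loadEntitiesByTransaction(inputStream);
    transactions.forEach(
        txn -> jpaTm().transact(() -> txn.forEach(this::replayEntity)));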

View File

@@ -93,7 +93,7 @@ public final class DeleteOldCommitLogsAction implements Runnable {
public void run() {
DateTime deletionThreshold = clock.nowUtc().minus(maxAge);
logger.atInfo().log(
"Processing asynchronous deletion of unreferenced CommitLogManifests older than %s",
"Processing asynchronous deletion of unreferenced CommitLogManifests older than %s.",
deletionThreshold);
mrRunner
@@ -208,7 +208,7 @@ public final class DeleteOldCommitLogsAction implements Runnable {
getContext().incrementCounter("EPP resources missing pre-threshold revision (SEE LOGS)");
logger.atSevere().log(
"EPP resource missing old enough revision: "
+ "%s (created on %s) has %d revisions between %s and %s, while threshold is %s",
+ "%s (created on %s) has %d revisions between %s and %s, while threshold is %s.",
Key.create(eppResource),
eppResource.getCreationTime(),
eppResource.getRevisions().size(),

View File

@@ -100,7 +100,7 @@ public final class ExportCommitLogDiffAction implements Runnable {
// Load the keys of all the manifests to include in this diff.
List<Key<CommitLogManifest>> sortedKeys = loadAllDiffKeys(lowerCheckpoint, upperCheckpoint);
logger.atInfo().log("Found %d manifests to export", sortedKeys.size());
logger.atInfo().log("Found %d manifests to export.", sortedKeys.size());
// Open an output channel to GCS, wrapped in a stream for convenience.
try (OutputStream gcsStream =
gcsUtils.openOutputStream(
@@ -124,7 +124,7 @@ public final class ExportCommitLogDiffAction implements Runnable {
for (int i = 0; i < keyChunks.size(); i++) {
// Force the async load to finish.
Collection<CommitLogManifest> chunkValues = nextChunkToExport.values();
logger.atInfo().log("Loaded %d manifests", chunkValues.size());
logger.atInfo().log("Loaded %d manifests.", chunkValues.size());
// Since there is no hard bound on how much data this might be, take care not to let the
// Objectify session cache fill up and potentially run out of memory. This is the only safe
// point to do this since at this point there is no async load in progress.
@@ -134,12 +134,12 @@ public final class ExportCommitLogDiffAction implements Runnable {
nextChunkToExport = auditedOfy().load().keys(keyChunks.get(i + 1));
}
exportChunk(gcsStream, chunkValues);
logger.atInfo().log("Exported %d manifests", chunkValues.size());
logger.atInfo().log("Exported %d manifests.", chunkValues.size());
}
} catch (IOException e) {
throw new RuntimeException(e);
}
logger.atInfo().log("Exported %d manifests in total", sortedKeys.size());
logger.atInfo().log("Exported %d total manifests.", sortedKeys.size());
}
/**

View File

@@ -94,14 +94,14 @@ class GcsDiffFileLister {
logger.atInfo().log(
"Gap discovered in sequence terminating at %s, missing file: %s",
sequence.lastKey(), filename);
logger.atInfo().log("Found sequence from %s to %s", checkpointTime, lastTime);
logger.atInfo().log("Found sequence from %s to %s.", checkpointTime, lastTime);
return false;
}
}
sequence.put(checkpointTime, blobInfo);
checkpointTime = getLowerBoundTime(blobInfo);
}
logger.atInfo().log("Found sequence from %s to %s", checkpointTime, lastTime);
logger.atInfo().log("Found sequence from %s to %s.", checkpointTime, lastTime);
return true;
}
@@ -140,7 +140,7 @@ class GcsDiffFileLister {
}
}
if (upperBoundTimesToBlobInfo.isEmpty()) {
logger.atInfo().log("No files found");
logger.atInfo().log("No files found.");
return ImmutableList.of();
}
@@ -185,7 +185,7 @@ class GcsDiffFileLister {
logger.atInfo().log(
"Actual restore from time: %s", getLowerBoundTime(sequence.firstEntry().getValue()));
logger.atInfo().log("Found %d files to restore", sequence.size());
logger.atInfo().log("Found %d files to restore.", sequence.size());
return ImmutableList.copyOf(sequence.values());
}

View File

@@ -33,6 +33,7 @@ import com.google.common.collect.ImmutableList;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryConfig.Config;
import google.registry.gcs.GcsUtils;
import google.registry.model.UpdateAutoTimestamp;
import google.registry.model.common.DatabaseMigrationStateSchedule;
import google.registry.model.common.DatabaseMigrationStateSchedule.MigrationState;
import google.registry.model.common.DatabaseMigrationStateSchedule.ReplayDirection;
@@ -111,7 +112,7 @@ public class ReplayCommitLogsToSqlAction implements Runnable {
return;
}
Optional<Lock> lock =
Lock.acquire(
Lock.acquireSql(
this.getClass().getSimpleName(), null, LEASE_LENGTH, requestStatusChecker, false);
if (!lock.isPresent()) {
String message = "Can't acquire SQL commit log replay lock, aborting.";
@@ -139,7 +140,7 @@ public class ReplayCommitLogsToSqlAction implements Runnable {
response.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
response.setPayload(message);
} finally {
lock.ifPresent(Lock::release);
lock.ifPresent(Lock::releaseSql);
}
}
@@ -222,8 +223,11 @@ public class ReplayCommitLogsToSqlAction implements Runnable {
// Load and process the Datastore transactions one at a time
ImmutableList<ImmutableList<VersionedEntity>> allTransactions =
CommitLogImports.loadEntitiesByTransaction(input);
allTransactions.forEach(
transaction -> jpaTm().transact(() -> replayTransaction(transaction)));
try (UpdateAutoTimestamp.DisableAutoUpdateResource disabler =
UpdateAutoTimestamp.disableAutoUpdate()) {
allTransactions.forEach(
transaction -> jpaTm().transact(() -> replayTransaction(transaction)));
}
// if we succeeded, set the last-seen time
DateTime checkpoint = DateTime.parse(metadata.getName().substring(DIFF_FILE_PREFIX.length()));
jpaTm().transact(() -> SqlReplayCheckpoint.set(checkpoint));
@@ -269,7 +273,7 @@ public class ReplayCommitLogsToSqlAction implements Runnable {
ofyPojo.getClass());
}
} catch (Throwable t) {
logger.atSevere().log("Error when replaying object %s", ofyPojo);
logger.atSevere().log("Error when replaying object %s.", ofyPojo);
throw t;
}
}
@@ -296,7 +300,7 @@ public class ReplayCommitLogsToSqlAction implements Runnable {
jpaTm().deleteIgnoringReadOnly(entityVKey);
}
} catch (Throwable t) {
logger.atSevere().log("Error when deleting key %s", entityVKey);
logger.atSevere().log("Error when deleting key %s.", entityVKey);
throw t;
}
}
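The try-with-resources block above exists so that replayed entities keep the UpdateAutoTimestamp values they carried in the commit logs, rather than having them bumped to the replay time. The general pattern, as a sketch:

    try (UpdateAutoTimestamp.DisableAutoUpdateResource ignored =
        UpdateAutoTimestamp.disableAutoUpdate()) {
      // Writes in this scope persist the timestamps already present on the entities.
    }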

View File

@@ -103,13 +103,13 @@ public class RestoreCommitLogsAction implements Runnable {
!FORBIDDEN_ENVIRONMENTS.contains(RegistryEnvironment.get()),
"DO NOT RUN IN PRODUCTION OR SANDBOX.");
if (dryRun) {
logger.atInfo().log("Running in dryRun mode");
logger.atInfo().log("Running in dry-run mode.");
}
String gcsBucket = gcsBucketOverride.orElse(defaultGcsBucket);
logger.atInfo().log("Restoring from %s.", gcsBucket);
List<BlobInfo> diffFiles = diffLister.listDiffFiles(gcsBucket, fromTime, toTime);
if (diffFiles.isEmpty()) {
logger.atInfo().log("Nothing to restore");
logger.atInfo().log("Nothing to restore.");
return;
}
Map<Integer, DateTime> bucketTimestamps = new HashMap<>();
@@ -143,7 +143,7 @@ public class RestoreCommitLogsAction implements Runnable {
.build()),
Stream.of(CommitLogCheckpointRoot.create(lastCheckpoint.getCheckpointTime())))
.collect(toImmutableList()));
logger.atInfo().log("Restore complete");
logger.atInfo().log("Restore complete.");
}
/**

View File

@@ -105,29 +105,29 @@ public abstract class VersionedEntity implements Serializable {
* VersionedEntity VersionedEntities}. See {@link CommitLogImports#loadEntities} for more
* information.
*/
public static Stream<VersionedEntity> fromManifest(CommitLogManifest manifest) {
static Stream<VersionedEntity> fromManifest(CommitLogManifest manifest) {
long commitTimeMillis = manifest.getCommitTime().getMillis();
return manifest.getDeletions().stream()
.map(com.googlecode.objectify.Key::getRaw)
.map(key -> builder().commitTimeMills(commitTimeMillis).key(key).build());
.map(key -> newBuilder().commitTimeMills(commitTimeMillis).key(key).build());
}
/* Converts a {@link CommitLogMutation} to a {@link VersionedEntity}. */
public static VersionedEntity fromMutation(CommitLogMutation mutation) {
static VersionedEntity fromMutation(CommitLogMutation mutation) {
return from(
com.googlecode.objectify.Key.create(mutation).getParent().getId(),
mutation.getEntityProtoBytes());
}
public static VersionedEntity from(long commitTimeMillis, byte[] entityProtoBytes) {
return builder()
return newBuilder()
.entityProtoBytes(entityProtoBytes)
.key(EntityTranslator.createFromPbBytes(entityProtoBytes).getKey())
.commitTimeMills(commitTimeMillis)
.build();
}
static Builder builder() {
private static Builder newBuilder() {
return new AutoValue_VersionedEntity.Builder();
}
@@ -142,7 +142,7 @@ public abstract class VersionedEntity implements Serializable {
public abstract VersionedEntity build();
public Builder entityProtoBytes(byte[] bytes) {
Builder entityProtoBytes(byte[] bytes) {
return entityProtoBytes(new ImmutableBytes(bytes));
}
}

View File

@@ -24,10 +24,8 @@ import com.google.appengine.api.taskqueue.TransientFailureException;
import com.google.common.base.Joiner;
import com.google.common.collect.ImmutableSortedSet;
import com.google.common.flogger.FluentLogger;
import com.googlecode.objectify.Key;
import google.registry.config.RegistryConfig.Config;
import google.registry.model.EppResource;
import google.registry.model.ImmutableObject;
import google.registry.model.domain.RegistryLock;
import google.registry.model.eppcommon.Trid;
import google.registry.model.host.HostResource;
@@ -84,8 +82,7 @@ public final class AsyncTaskEnqueuer {
}
/** Enqueues a task to asynchronously re-save an entity at some point in the future. */
public void enqueueAsyncResave(
ImmutableObject entityToResave, DateTime now, DateTime whenToResave) {
public void enqueueAsyncResave(VKey<?> entityToResave, DateTime now, DateTime whenToResave) {
enqueueAsyncResave(entityToResave, now, ImmutableSortedSet.of(whenToResave));
}
@@ -96,10 +93,9 @@ public final class AsyncTaskEnqueuer {
* itself to run at the next time if there are remaining re-saves scheduled.
*/
public void enqueueAsyncResave(
ImmutableObject entityToResave, DateTime now, ImmutableSortedSet<DateTime> whenToResave) {
VKey<?> entityKey, DateTime now, ImmutableSortedSet<DateTime> whenToResave) {
DateTime firstResave = whenToResave.first();
checkArgument(isBeforeOrAt(now, firstResave), "Can't enqueue a resave to run in the past");
Key<ImmutableObject> entityKey = Key.create(entityToResave);
Duration etaDuration = new Duration(now, firstResave);
if (etaDuration.isLongerThan(MAX_ASYNC_ETA)) {
logger.atInfo().log(
@@ -114,7 +110,7 @@ public final class AsyncTaskEnqueuer {
.method(Method.POST)
.header("Host", backendHostname)
.countdownMillis(etaDuration.getMillis())
.param(PARAM_RESOURCE_KEY, entityKey.getString())
.param(PARAM_RESOURCE_KEY, entityKey.getOfyKey().getString())
.param(PARAM_REQUESTED_TIME, now.toString());
if (whenToResave.size() > 1) {
task.param(PARAM_RESAVE_TIMES, Joiner.on(',').join(whenToResave.tailSet(firstResave, false)));
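With the new signature, callers pass a VKey rather than the loaded entity. A hedged sketch (domain and clock are hypothetical locals):

    asyncTaskEnqueuer.enqueueAsyncResave(
        domain.createVKey(), clock.nowUtc(), clock.nowUtc().plusDays(35));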
@@ -126,18 +122,17 @@ public final class AsyncTaskEnqueuer {
public void enqueueAsyncDelete(
EppResource resourceToDelete,
DateTime now,
String requestingClientId,
String requestingRegistrarId,
Trid trid,
boolean isSuperuser) {
Key<EppResource> resourceKey = Key.create(resourceToDelete);
logger.atInfo().log(
"Enqueuing async deletion of %s on behalf of registrar %s.",
resourceKey, requestingClientId);
resourceToDelete.getRepoId(), requestingRegistrarId);
TaskOptions task =
TaskOptions.Builder.withMethod(Method.PULL)
.countdownMillis(asyncDeleteDelay.getMillis())
.param(PARAM_RESOURCE_KEY, resourceKey.getString())
.param(PARAM_REQUESTING_CLIENT_ID, requestingClientId)
.param(PARAM_RESOURCE_KEY, resourceToDelete.createVKey().getOfyKey().getString())
.param(PARAM_REQUESTING_CLIENT_ID, requestingRegistrarId)
.param(PARAM_SERVER_TRANSACTION_ID, trid.getServerTransactionId())
.param(PARAM_IS_SUPERUSER, Boolean.toString(isSuperuser))
.param(PARAM_REQUESTED_TIME, now.toString());

View File

@@ -24,6 +24,7 @@ import static google.registry.batch.AsyncTaskEnqueuer.QUEUE_ASYNC_HOST_RENAME;
import static google.registry.request.RequestParameters.extractIntParameter;
import static google.registry.request.RequestParameters.extractLongParameter;
import static google.registry.request.RequestParameters.extractOptionalBooleanParameter;
import static google.registry.request.RequestParameters.extractOptionalDatetimeParameter;
import static google.registry.request.RequestParameters.extractOptionalIntParameter;
import static google.registry.request.RequestParameters.extractOptionalParameter;
import static google.registry.request.RequestParameters.extractRequiredDatetimeParameter;
@@ -78,6 +79,7 @@ public class BatchModule {
@Provides
@Parameter(PARAM_RESOURCE_KEY)
// TODO(b/207363014): figure out if this needs to be modified for vkey string replacement
static Key<ImmutableObject> provideResourceKey(HttpServletRequest req) {
return Key.create(extractRequiredParameter(req, PARAM_RESOURCE_KEY));
}
@@ -106,6 +108,13 @@ public class BatchModule {
return extractIntParameter(req, RelockDomainAction.PREVIOUS_ATTEMPTS_PARAM);
}
@Provides
@Parameter(ExpandRecurringBillingEventsAction.PARAM_CURSOR_TIME)
static Optional<DateTime> provideCursorTime(HttpServletRequest req) {
return extractOptionalDatetimeParameter(
req, ExpandRecurringBillingEventsAction.PARAM_CURSOR_TIME);
}
@Provides
@Named(QUEUE_ASYNC_ACTIONS)
static Queue provideAsyncActionsPushQueue() {

View File

@@ -311,7 +311,7 @@ public class DeleteContactsAndHostsAction implements Runnable {
@Override
public void reduce(final DeletionRequest deletionRequest, ReducerInput<Boolean> values) {
final boolean hasNoActiveReferences = !Iterators.contains(values, true);
logger.atInfo().log("Processing async deletion request for %s", deletionRequest.key());
logger.atInfo().log("Processing async deletion request for %s.", deletionRequest.key());
DeletionResult result =
tm()
.transactNew(
@@ -343,15 +343,15 @@ public class DeleteContactsAndHostsAction implements Runnable {
}
// Contacts and external hosts have a direct client id. For subordinate hosts it needs to be
// read off of the superordinate domain.
String resourceClientId = resource.getPersistedCurrentSponsorClientId();
String resourceRegistrarId = resource.getPersistedCurrentSponsorRegistrarId();
if (resource instanceof HostResource && ((HostResource) resource).isSubordinate()) {
resourceClientId =
resourceRegistrarId =
tm().loadByKey(((HostResource) resource).getSuperordinateDomain())
.cloneProjectedAtTime(now)
.getCurrentSponsorClientId();
.getCurrentSponsorRegistrarId();
}
boolean requestedByCurrentOwner =
resourceClientId.equals(deletionRequest.requestingClientId());
resourceRegistrarId.equals(deletionRequest.requestingClientId());
boolean deleteAllowed =
hasNoActiveReferences && (requestedByCurrentOwner || deletionRequest.isSuperuser());
@@ -371,14 +371,14 @@ public class DeleteContactsAndHostsAction implements Runnable {
HistoryEntry historyEntry =
HistoryEntry.createBuilderForResource(resource)
.setClientId(deletionRequest.requestingClientId())
.setRegistrarId(deletionRequest.requestingClientId())
.setModificationTime(now)
.setType(getHistoryEntryType(resource, deleteAllowed))
.build();
PollMessage.OneTime pollMessage =
new PollMessage.OneTime.Builder()
.setClientId(deletionRequest.requestingClientId())
.setRegistrarId(deletionRequest.requestingClientId())
.setMsg(pollMessageText)
.setParent(historyEntry)
.setEventTime(now)
@@ -523,18 +523,19 @@ public class DeleteContactsAndHostsAction implements Runnable {
static DeletionRequest createFromTask(TaskHandle task, DateTime now)
throws Exception {
ImmutableMap<String, String> params = ImmutableMap.copyOf(task.extractParams());
Key<EppResource> resourceKey =
Key.create(
VKey<EppResource> resourceKey =
VKey.create(
checkNotNull(params.get(PARAM_RESOURCE_KEY), "Resource to delete not specified"));
EppResource resource =
checkNotNull(
auditedOfy().load().key(resourceKey).now(), "Resource to delete doesn't exist");
auditedOfy().load().key(resourceKey.getOfyKey()).now(),
"Resource to delete doesn't exist");
checkState(
resource instanceof ContactResource || resource instanceof HostResource,
"Cannot delete a %s via this action",
resource.getClass().getSimpleName());
return new AutoValue_DeleteContactsAndHostsAction_DeletionRequest.Builder()
.setKey(resourceKey)
.setKey(resourceKey.getOfyKey())
.setLastUpdateTime(resource.getUpdateTimestamp().getTimestamp())
.setRequestingClientId(
checkNotNull(
@@ -605,12 +606,12 @@ public class DeleteContactsAndHostsAction implements Runnable {
static boolean doesResourceStateAllowDeletion(EppResource resource, DateTime now) {
Key<EppResource> key = Key.create(resource);
if (isDeleted(resource, now)) {
logger.atWarning().log("Cannot asynchronously delete %s because it is already deleted", key);
logger.atWarning().log("Cannot asynchronously delete %s because it is already deleted.", key);
return false;
}
if (!resource.getStatusValues().contains(PENDING_DELETE)) {
logger.atWarning().log(
"Cannot asynchronously delete %s because it is not in PENDING_DELETE", key);
"Cannot asynchronously delete %s because it is not in PENDING_DELETE.", key);
return false;
}
return true;

View File

@@ -167,7 +167,7 @@ public class DeleteExpiredDomainsAction implements Runnable {
/** Runs the actual domain delete flow and returns whether the deletion was successful. */
private boolean runDomainDeleteFlow(DomainBase domain) {
logger.atInfo().log("Attempting to delete domain %s", domain.getDomainName());
logger.atInfo().log("Attempting to delete domain '%s'.", domain.getDomainName());
// Create a new transaction that the flow's execution will be enlisted in that loads the domain
// transactionally. This way we can ensure that nothing else has modified the domain in question
// in the intervening period since the query above found it.
@@ -203,7 +203,7 @@ public class DeleteExpiredDomainsAction implements Runnable {
if (eppOutput.isPresent()) {
if (eppOutput.get().isSuccess()) {
logger.atInfo().log("Successfully deleted domain %s", domain.getDomainName());
logger.atInfo().log("Successfully deleted domain '%s'.", domain.getDomainName());
} else {
logger.atSevere().log(
"Failed to delete domain %s; EPP response:\n\n%s",

View File

@@ -135,14 +135,14 @@ public class DeleteLoadTestDataAction implements Runnable {
}
private void deleteContact(ContactResource contact) {
if (!LOAD_TEST_REGISTRARS.contains(contact.getPersistedCurrentSponsorClientId())) {
if (!LOAD_TEST_REGISTRARS.contains(contact.getPersistedCurrentSponsorRegistrarId())) {
return;
}
// We cannot remove contacts from domains in the general case, so we cannot delete contacts
// that are linked to domains (since it would break the foreign keys)
if (EppResourceUtils.isLinked(contact.createVKey(), clock.nowUtc())) {
logger.atWarning().log(
"Cannot delete contact with repo ID %s since it is referenced from a domain",
"Cannot delete contact with repo ID %s since it is referenced from a domain.",
contact.getRepoId());
return;
}
@@ -150,7 +150,7 @@ public class DeleteLoadTestDataAction implements Runnable {
}
private void deleteHost(HostResource host) {
if (!LOAD_TEST_REGISTRARS.contains(host.getPersistedCurrentSponsorClientId())) {
if (!LOAD_TEST_REGISTRARS.contains(host.getPersistedCurrentSponsorRegistrarId())) {
return;
}
VKey<HostResource> hostVKey = host.createVKey();
@@ -177,7 +177,7 @@ public class DeleteLoadTestDataAction implements Runnable {
HistoryEntryDao.loadHistoryObjectsForResource(eppResource.createVKey());
if (isDryRun) {
logger.atInfo().log(
"Would delete repo ID %s along with %d history objects",
"Would delete repo ID %s along with %d history objects.",
eppResource.getRepoId(), historyObjects.size());
} else {
historyObjects.forEach(tm()::delete);
@@ -198,7 +198,7 @@ public class DeleteLoadTestDataAction implements Runnable {
@Override
public final void map(EppResource resource) {
if (LOAD_TEST_REGISTRARS.contains(resource.getPersistedCurrentSponsorClientId())) {
if (LOAD_TEST_REGISTRARS.contains(resource.getPersistedCurrentSponsorRegistrarId())) {
deleteResource(resource);
getContext()
.incrementCounter(

View File

@@ -228,7 +228,7 @@ public class DeleteProberDataAction implements Runnable {
if (EppResourceUtils.isActive(domain, tm().getTransactionTime())) {
if (isDryRun) {
logger.atInfo().log(
"Would soft-delete the active domain: %s (%s)",
"Would soft-delete the active domain: %s (%s).",
domain.getDomainName(), domain.getRepoId());
} else {
softDeleteDomain(domain, registryAdminRegistrarId, dnsQueue);
@@ -237,7 +237,7 @@ public class DeleteProberDataAction implements Runnable {
} else {
if (isDryRun) {
logger.atInfo().log(
"Would hard-delete the non-active domain: %s (%s) and its dependents",
"Would hard-delete the non-active domain: %s (%s) and its dependents.",
domain.getDomainName(), domain.getRepoId());
} else {
domainRepoIdsToHardDelete.add(domain.getRepoId());
@@ -291,7 +291,7 @@ public class DeleteProberDataAction implements Runnable {
.setModificationTime(tm().getTransactionTime())
.setBySuperuser(true)
.setReason("Deletion of prober data")
.setClientId(registryAdminRegistrarId)
.setRegistrarId(registryAdminRegistrarId)
.build();
// Note that we don't bother handling grace periods, billing events, pending transfers, poll
// messages, or auto-renews because those will all be hard-deleted the next time the job runs
@@ -331,7 +331,7 @@ public class DeleteProberDataAction implements Runnable {
getContext().incrementCounter("skipped, non-prober data");
}
} catch (Throwable t) {
logger.atSevere().withCause(t).log("Error while deleting prober data for key %s", key);
logger.atSevere().withCause(t).log("Error while deleting prober data for key %s.", key);
getContext().incrementCounter(String.format("error, kind %s", key.getKind()));
}
}
@@ -372,7 +372,7 @@ public class DeleteProberDataAction implements Runnable {
if (EppResourceUtils.isActive(domain, now)) {
if (isDryRun) {
logger.atInfo().log(
"Would soft-delete the active domain: %s (%s)", domainName, domainKey);
"Would soft-delete the active domain: %s (%s).", domainName, domainKey);
} else {
tm().transact(() -> softDeleteDomain(domain, registryAdminRegistrarId, dnsQueue));
}

View File

@@ -148,9 +148,9 @@ public class ExpandRecurringBillingEventsAction implements Runnable {
.reduce(0, Integer::sum);
if (!isDryRun) {
logger.atInfo().log("Saved OneTime billing events", numBillingEventsSaved);
logger.atInfo().log("Saved OneTime billing events.", numBillingEventsSaved);
} else {
logger.atInfo().log("Generated OneTime billing events (dry run)", numBillingEventsSaved);
logger.atInfo().log("Generated OneTime billing events (dry run).", numBillingEventsSaved);
}
logger.atInfo().log(
"Recurring event expansion %s complete for billing event range [%s, %s).",
@@ -324,7 +324,7 @@ public class ExpandRecurringBillingEventsAction implements Runnable {
DomainHistory historyEntry =
new DomainHistory.Builder()
.setBySuperuser(false)
.setClientId(recurring.getClientId())
.setRegistrarId(recurring.getRegistrarId())
.setModificationTime(tm().getTransactionTime())
.setDomain(tm().loadByKey(domainKey))
.setPeriod(Period.create(1, YEARS))
@@ -354,7 +354,7 @@ public class ExpandRecurringBillingEventsAction implements Runnable {
syntheticOneTimesBuilder.add(
new OneTime.Builder()
.setBillingTime(billingTime)
.setClientId(recurring.getClientId())
.setRegistrarId(recurring.getRegistrarId())
.setCost(renewCost)
.setEventTime(eventTime)
.setFlags(union(recurring.getFlags(), Flag.SYNTHETIC))

View File

@@ -173,7 +173,7 @@ public class RefreshDnsOnHostRenameAction implements Runnable {
retrier.callWithRetry(
() -> dnsQueue.addDomainRefreshTask(domainName),
TransientFailureException.class);
logger.atInfo().log("Enqueued DNS refresh for domain %s.", domainName);
logger.atInfo().log("Enqueued DNS refresh for domain '%s'.", domainName);
});
deleteTasksWithRetry(
refreshRequests,
@@ -353,8 +353,7 @@ public class RefreshDnsOnHostRenameAction implements Runnable {
static DnsRefreshRequest createFromTask(TaskHandle task, DateTime now) throws Exception {
ImmutableMap<String, String> params = ImmutableMap.copyOf(task.extractParams());
VKey<HostResource> hostKey =
VKey.fromWebsafeKey(
checkNotNull(params.get(PARAM_HOST_KEY), "Host to refresh not specified"));
VKey.create(checkNotNull(params.get(PARAM_HOST_KEY), "Host to refresh not specified"));
HostResource host =
tm().transact(() -> tm().loadByKeyIfPresent(hostKey))
.orElseThrow(() -> new NoSuchElementException("Host to refresh doesn't exist"));
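This hunk follows the commit-wide convention of round-tripping keys through their websafe string form. As a sketch of both sides of that round trip:

    String websafeKey = host.createVKey().getOfyKey().getString(); // when enqueuing
    VKey<HostResource> hostKey = VKey.create(websafeKey);          // when handling the task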

View File

@@ -203,11 +203,11 @@ public class RelockDomainAction implements Runnable {
"Domain %s has a pending transfer.",
domainName);
checkArgument(
domain.getCurrentSponsorClientId().equals(oldLock.getRegistrarId()),
domain.getCurrentSponsorRegistrarId().equals(oldLock.getRegistrarId()),
"Domain %s has been transferred from registrar %s to registrar %s since the unlock.",
domainName,
oldLock.getRegistrarId(),
domain.getCurrentSponsorClientId());
domain.getCurrentSponsorRegistrarId());
}
private void handleNonRetryableFailure(RegistryLock oldLock, Throwable t) {
@@ -293,7 +293,7 @@ public class RelockDomainAction implements Runnable {
private ImmutableSet<InternetAddress> getEmailRecipients(String registrarId) {
Registrar registrar =
Registrar.loadByClientIdCached(registrarId)
Registrar.loadByRegistrarIdCached(registrarId)
.orElseThrow(
() ->
new IllegalStateException(String.format("Unknown registrar %s", registrarId)));
@@ -313,7 +313,7 @@ public class RelockDomainAction implements Runnable {
builder.add(new InternetAddress(registryLockEmailAddress));
} catch (AddressException e) {
// This shouldn't stop any other emails going out, so swallow it
logger.atWarning().log("Invalid email address %s", registryLockEmailAddress);
logger.atWarning().log("Invalid email address '%s'.", registryLockEmailAddress);
}
}
return builder.build();

View File

@@ -76,13 +76,15 @@ public class ResaveEntityAction implements Runnable {
"Re-saving entity %s which was enqueued at %s.", resourceKey, requestedTime);
tm().transact(
() -> {
// TODO(b/207363014): figure out if this should be modified for vkey string replacement
ImmutableObject entity = tm().loadByKey(VKey.from(resourceKey));
tm().put(
(entity instanceof EppResource)
? ((EppResource) entity).cloneProjectedAtTime(tm().getTransactionTime())
: entity);
if (!resaveTimes.isEmpty()) {
asyncTaskEnqueuer.enqueueAsyncResave(entity, requestedTime, resaveTimes);
asyncTaskEnqueuer.enqueueAsyncResave(
VKey.from(resourceKey), requestedTime, resaveTimes);
}
});
response.setPayload("Entity re-saved.");

View File

@@ -55,6 +55,7 @@ import org.joda.time.format.DateTimeFormatter;
path = SendExpiringCertificateNotificationEmailAction.PATH,
auth = Auth.AUTH_INTERNAL_OR_ADMIN)
public class SendExpiringCertificateNotificationEmailAction implements Runnable {
public static final String PATH = "/_dr/task/sendExpiringCertificateNotificationEmail";
/**
* Used as an offset when storing the last notification email sent date.
@@ -96,8 +97,13 @@ public class SendExpiringCertificateNotificationEmailAction implements Runnable
public void run() {
response.setContentType(MediaType.PLAIN_TEXT_UTF_8);
try {
sendNotificationEmails();
int numEmailsSent = sendNotificationEmails();
String message =
String.format(
"Done. Sent %d expiring certificate notification emails in total.", numEmailsSent);
logger.atInfo().log(message);
response.setStatus(SC_OK);
response.setPayload(message);
} catch (Exception e) {
logger.atWarning().withCause(e).log(
"Exception thrown when sending expiring certificate notification emails.");
@@ -107,9 +113,11 @@ public class SendExpiringCertificateNotificationEmailAction implements Runnable
}
/**
* Returns a list of registrars that should receive expiring notification emails. There are two
* certificates that should be considered (the main certificate and failOver certificate). The
* registrars should receive notifications if one of the certificate checks returns true.
* Returns a list of registrars that should receive expiring notification emails.
*
* <p>There are two certificates that should be considered (the main certificate and failOver
* certificate). The registrars should receive notifications if one of the certificate checks
* returns true.
*/
@VisibleForTesting
ImmutableList<RegistrarInfo> getRegistrarsWithExpiringCertificates() {
@@ -151,15 +159,17 @@ public class SendExpiringCertificateNotificationEmailAction implements Runnable
}
try {
ImmutableSet<InternetAddress> recipients = getEmailAddresses(registrar, Type.TECH);
ImmutableSet<InternetAddress> ccs = getEmailAddresses(registrar, Type.ADMIN);
Date expirationDate = certificateChecker.getCertificate(certificate.get()).getNotAfter();
logger.atInfo().log(
"Registrar %s should receive an email that its %s SSL certificate will expire on %s.",
registrar.getRegistrarName(),
" %s SSL certificate of registrar '%s' will expire on %s.",
certificateType.getDisplayName(),
registrar.getRegistrarName(),
expirationDate.toString());
if (recipients.isEmpty()) {
if (recipients.isEmpty() && ccs.isEmpty()) {
logger.atWarning().log(
"Registrar %s contains no email addresses to receive notification email.",
"Registrar %s contains no TECH nor ADMIN email addresses to receive notification"
+ " email.",
registrar.getRegistrarName());
return false;
}
@@ -172,9 +182,9 @@ public class SendExpiringCertificateNotificationEmailAction implements Runnable
registrar.getRegistrarName(),
certificateType,
expirationDate,
registrar.getClientId()))
registrar.getRegistrarId()))
.setRecipients(recipients)
.setCcs(getEmailAddresses(registrar, Type.ADMIN))
.setCcs(ccs)
.build());
/*
* A duration time offset is used here to ensure that date comparison between two
@@ -243,32 +253,32 @@ public class SendExpiringCertificateNotificationEmailAction implements Runnable
/** Sends notification emails to registrars with expiring certificates. */
@VisibleForTesting
int sendNotificationEmails() {
int emailsSent = 0;
int numEmailsSent = 0;
for (RegistrarInfo registrarInfo : getRegistrarsWithExpiringCertificates()) {
Registrar registrar = registrarInfo.registrar();
if (registrarInfo.isCertExpiring()) {
sendNotificationEmail(
registrar,
registrar.getLastExpiringCertNotificationSentDate(),
CertificateType.PRIMARY,
registrar.getClientCertificate());
emailsSent++;
if (registrarInfo.isCertExpiring()
&& sendNotificationEmail(
registrar,
registrar.getLastExpiringCertNotificationSentDate(),
CertificateType.PRIMARY,
registrar.getClientCertificate())) {
numEmailsSent++;
}
if (registrarInfo.isFailOverCertExpiring()) {
sendNotificationEmail(
registrar,
registrar.getLastExpiringFailoverCertNotificationSentDate(),
CertificateType.FAILOVER,
registrar.getFailoverClientCertificate());
emailsSent++;
if (registrarInfo.isFailOverCertExpiring()
&& sendNotificationEmail(
registrar,
registrar.getLastExpiringFailoverCertNotificationSentDate(),
CertificateType.FAILOVER,
registrar.getFailoverClientCertificate())) {
numEmailsSent++;
}
}
logger.atInfo().log(
"Attempted to send %d expiring certificate notification emails.", emailsSent);
return emailsSent;
return numEmailsSent;
}
/** Returns a list of email addresses of the registrar that should receive a notification email */
/**
* Returns a list of email addresses of the registrar that should receive a notification email.
*/
@VisibleForTesting
ImmutableSet<InternetAddress> getEmailAddresses(Registrar registrar, Type contactType) {
ImmutableSortedSet<RegistrarContact> contacts = registrar.getContactsOfType(contactType);
@@ -327,6 +337,7 @@ public class SendExpiringCertificateNotificationEmailAction implements Runnable
@AutoValue
public abstract static class RegistrarInfo {
static RegistrarInfo create(
Registrar registrar, boolean isCertExpiring, boolean isFailOverCertExpiring) {
return new AutoValue_SendExpiringCertificateNotificationEmailAction_RegistrarInfo(

View File

@@ -0,0 +1,138 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.batch;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static org.apache.http.HttpStatus.SC_INTERNAL_SERVER_ERROR;
import static org.apache.http.HttpStatus.SC_OK;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.flogger.FluentLogger;
import com.google.common.net.MediaType;
import google.registry.config.RegistryConfig.Config;
import google.registry.model.contact.ContactHistory;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;
import javax.inject.Inject;
import org.joda.time.DateTime;
/**
* An action that wipes out Personally Identifiable Information (PII) fields of {@link
* ContactHistory} entities.
*
* <p>ContactHistory entities should be retained in the database for only a certain amount of time.
* This periodic wipe-out action applies only to SQL.
*/
@Action(
service = Service.BACKEND,
path = WipeOutContactHistoryPiiAction.PATH,
auth = Auth.AUTH_INTERNAL_OR_ADMIN)
public class WipeOutContactHistoryPiiAction implements Runnable {
public static final String PATH = "/_dr/task/wipeOutContactHistoryPii";
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private final Clock clock;
private final Response response;
private final int minMonthsBeforeWipeOut;
private final int wipeOutQueryBatchSize;
@Inject
public WipeOutContactHistoryPiiAction(
Clock clock,
@Config("minMonthsBeforeWipeOut") int minMonthsBeforeWipeOut,
@Config("wipeOutQueryBatchSize") int wipeOutQueryBatchSize,
Response response) {
this.clock = clock;
this.response = response;
this.minMonthsBeforeWipeOut = minMonthsBeforeWipeOut;
this.wipeOutQueryBatchSize = wipeOutQueryBatchSize;
}
@Override
public void run() {
response.setContentType(MediaType.PLAIN_TEXT_UTF_8);
try {
int totalNumOfWipedEntities = 0;
DateTime wipeOutTime = clock.nowUtc().minusMonths(minMonthsBeforeWipeOut);
logger.atInfo().log(
"About to wipe out all PII of contact history entities prior to %s.", wipeOutTime);
int numOfWipedEntities = 0;
do {
numOfWipedEntities =
jpaTm()
.transact(
() ->
wipeOutContactHistoryData(
getNextContactHistoryEntitiesWithPiiBatch(wipeOutTime)));
totalNumOfWipedEntities += numOfWipedEntities;
} while (numOfWipedEntities > 0);
String msg =
String.format(
"Done. Wiped out PII of %d ContactHistory entities in total.",
totalNumOfWipedEntities);
logger.atInfo().log(msg);
response.setPayload(msg);
response.setStatus(SC_OK);
} catch (Exception e) {
logger.atSevere().withCause(e).log(
"Exception thrown during the process of wiping out contact history PII.");
response.setStatus(SC_INTERNAL_SERVER_ERROR);
response.setPayload(
String.format(
"Exception thrown during the process of wiping out contact history PII with cause"
+ ": %s",
e));
}
}
/**
* Returns a stream of up to {@link #wipeOutQueryBatchSize} {@link ContactHistory} entities
* containing PII that were modified before {@code wipeOutTime}.
*/
@VisibleForTesting
Stream<ContactHistory> getNextContactHistoryEntitiesWithPiiBatch(DateTime wipeOutTime) {
// email is one of the required fields in EPP, meaning it's initially not null.
// Therefore, checking if it's null is one way to avoid processing contact history entities
// that have been processed previously. Refer to RFC 5733 for more information.
return jpaTm()
.query(
"FROM ContactHistory WHERE modificationTime < :wipeOutTime " + "AND email IS NOT NULL",
ContactHistory.class)
.setParameter("wipeOutTime", wipeOutTime)
.setMaxResults(wipeOutQueryBatchSize)
.getResultStream();
}
/** Wipes out the PII of each of the {@link ContactHistory} entities in the stream. */
@VisibleForTesting
int wipeOutContactHistoryData(Stream<ContactHistory> contactHistoryEntities) {
AtomicInteger numOfEntities = new AtomicInteger(0);
contactHistoryEntities.forEach(
contactHistoryEntity -> {
jpaTm().update(contactHistoryEntity.asBuilder().wipeOutPii().build());
numOfEntities.incrementAndGet();
});
logger.atInfo().log(
"Wiped out all PII fields of %d ContactHistory entities.", numOfEntities.get());
return numOfEntities.get();
}
}

View File

@@ -92,7 +92,12 @@ public class WipeoutDatastoreAction implements Runnable {
.setJobName(createJobName("bulk-delete-datastore-", clock))
.setContainerSpecGcsPath(
String.format("%s/%s_metadata.json", stagingBucketUrl, PIPELINE_NAME))
.setParameters(ImmutableMap.of("kindsToDelete", "*"));
.setParameters(
ImmutableMap.of(
"kindsToDelete",
"*",
"registryEnvironment",
RegistryEnvironment.get().name()));
LaunchFlexTemplateResponse launchResponse =
dataflow
.projects()

View File

@@ -0,0 +1,88 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.common;
import static com.google.common.base.Preconditions.checkState;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import com.google.common.flogger.FluentLogger;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.EntityTransaction;
/**
* A database snapshot shareable by concurrent queries from multiple database clients. A snapshot is
* uniquely identified by its {@link #getSnapshotId snapshotId}, and must stay open until all
* concurrent queries to this snapshot have attached to it by calling {@link
* google.registry.persistence.transaction.JpaTransactionManager#setDatabaseSnapshot}. However, it
* can be closed before those queries complete.
*
* <p>This feature is <em>Postgresql-only</em>.
*
* <p>To support large queries, transaction isolation level is fixed at the REPEATABLE_READ to avoid
* exhausting predicate locks at the SERIALIZABLE level.
*/
// TODO(b/193662898): vendor-independent support for richer transaction semantics.
public class DatabaseSnapshot implements AutoCloseable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private String snapshotId;
private EntityManager entityManager;
private EntityTransaction transaction;
private DatabaseSnapshot() {}
public String getSnapshotId() {
checkState(entityManager != null, "Snapshot not opened yet.");
checkState(entityManager.isOpen(), "Snapshot already closed.");
return snapshotId;
}
private DatabaseSnapshot open() {
entityManager = jpaTm().getStandaloneEntityManager();
transaction = entityManager.getTransaction();
transaction.setRollbackOnly();
transaction.begin();
entityManager
.createNativeQuery("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ")
.executeUpdate();
List<?> snapshotIds =
entityManager.createNativeQuery("SELECT pg_export_snapshot();").getResultList();
checkState(snapshotIds.size() == 1, "Unexpected number of snapshots: %s", snapshotIds.size());
snapshotId = (String) snapshotIds.get(0);
return this;
}
@Override
public void close() {
if (transaction != null && transaction.isActive()) {
try {
transaction.rollback();
} catch (Exception e) {
logger.atWarning().withCause(e).log("Failed to close a Database Snapshot");
}
}
if (entityManager != null && entityManager.isOpen()) {
entityManager.close();
}
}
public static DatabaseSnapshot createSnapshot() {
return new DatabaseSnapshot().open();
}
}
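A hedged usage sketch combining this class with the withSnapshot() hook added to RegistryJpaIO in this same change (domainsRead stands in for a Read transform built elsewhere):

    try (DatabaseSnapshot snapshot = DatabaseSnapshot.createSnapshot()) {
      pipeline.apply("Domains", domainsRead.withSnapshot(snapshot.getSnapshotId()));
      // Block until the run finishes so the snapshot stays open while workers attach.
      pipeline.run().waitUntilFinish();
    }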

View File

@@ -17,7 +17,6 @@ package google.registry.beam.common;
import static com.google.common.base.Verify.verify;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import google.registry.backup.AppEngineEnvironment;
import google.registry.model.contact.ContactResource;
import google.registry.persistence.transaction.CriteriaQueryBuilder;
import google.registry.persistence.transaction.JpaTransactionManager;
@@ -59,18 +58,16 @@ public class JpaDemoPipeline implements Serializable {
public void processElement() {
// AppEngineEnvironment is needed as long as JPA entity classes still depend
// on Objectify.
try (AppEngineEnvironment allowOfyEntity = new AppEngineEnvironment()) {
int result =
(Integer)
jpaTm()
.transact(
() ->
jpaTm()
.getEntityManager()
.createNativeQuery("select 1;")
.getSingleResult());
verify(result == 1, "Expecting 1, got %s.", result);
}
int result =
(Integer)
jpaTm()
.transact(
() ->
jpaTm()
.getEntityManager()
.createNativeQuery("select 1;")
.getSingleResult());
verify(result == 1, "Expecting 1, got %s.", result);
counter.inc();
}
}));

View File

@@ -20,11 +20,9 @@ import static org.apache.beam.sdk.values.TypeDescriptors.integers;
import com.google.auto.value.AutoValue;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.Streams;
import google.registry.backup.AppEngineEnvironment;
import google.registry.beam.common.RegistryQuery.CriteriaQuerySupplier;
import google.registry.model.UpdateAutoTimestamp;
import google.registry.model.UpdateAutoTimestamp.DisableAutoUpdateResource;
import google.registry.model.ofy.ObjectifyService;
import google.registry.model.replay.SqlEntity;
import google.registry.persistence.transaction.JpaTransactionManager;
import google.registry.persistence.transaction.TransactionManagerFactory;
@@ -140,6 +138,9 @@ public final class RegistryJpaIO {
abstract Coder<T> coder();
@Nullable
abstract String snapshotId();
abstract Builder<R, T> toBuilder();
@Override
@@ -147,7 +148,9 @@ public final class RegistryJpaIO {
public PCollection<T> expand(PBegin input) {
return input
.apply("Starting " + name(), Create.of((Void) null))
.apply("Run query for " + name(), ParDo.of(new QueryRunner<>(query(), resultMapper())))
.apply(
"Run query for " + name(),
ParDo.of(new QueryRunner<>(query(), resultMapper(), snapshotId())))
.setCoder(coder())
.apply("Reshuffle", Reshuffle.viaRandomKey());
}
@@ -164,6 +167,18 @@ public final class RegistryJpaIO {
return toBuilder().coder(coder).build();
}
/**
* Specifies the database snapshot to use for this query.
*
* <p>This feature is <em>Postgresql-only</em>. User is responsible for keeping the snapshot
* available until all JVM workers have started using it by calling {@link
* JpaTransactionManager#setDatabaseSnapshot}.
*/
// TODO(b/193662898): vendor-independent support for richer transaction semantics.
public Read<R, T> withSnapshot(@Nullable String snapshotId) {
return toBuilder().snapshotId(snapshotId).build();
}
static <R, T> Builder<R, T> builder() {
return new AutoValue_RegistryJpaIO_Read.Builder<R, T>()
.name(DEFAULT_NAME)
@@ -181,6 +196,8 @@ public final class RegistryJpaIO {
abstract Builder<R, T> coder(Coder coder);
abstract Builder<R, T> snapshotId(@Nullable String sharedSnapshotId);
abstract Read<R, T> build();
Builder<R, T> criteriaQuery(CriteriaQuerySupplier<R> criteriaQuery) {
@@ -203,22 +220,28 @@ public final class RegistryJpaIO {
static class QueryRunner<R, T> extends DoFn<Void, T> {
private final RegistryQuery<R> query;
private final SerializableFunction<R, T> resultMapper;
// java.util.Optional is not serializable. Use of Guava Optional is discouraged.
@Nullable private final String snapshotId;
QueryRunner(RegistryQuery<R> query, SerializableFunction<R, T> resultMapper) {
QueryRunner(
RegistryQuery<R> query,
SerializableFunction<R, T> resultMapper,
@Nullable String snapshotId) {
this.query = query;
this.resultMapper = resultMapper;
this.snapshotId = snapshotId;
}
@ProcessElement
public void processElement(OutputReceiver<T> outputReceiver) {
// AppEngineEnvironment is needed for handling VKeys, which involve Ofy keys. Unlike
// SqlBatchWriter, it is unnecessary to initialize ObjectifyService in this class.
try (AppEngineEnvironment env = new AppEngineEnvironment()) {
// TODO(b/187210388): JpaTransactionManager should support non-transactional query.
jpaTm()
.transactNoRetry(
() -> query.stream().map(resultMapper::apply).forEach(outputReceiver::output));
}
jpaTm()
.transactNoRetry(
() -> {
if (snapshotId != null) {
jpaTm().setDatabaseSnapshot(snapshotId);
}
query.stream().map(resultMapper::apply).forEach(outputReceiver::output);
});
}
}
}
@@ -364,16 +387,6 @@ public final class RegistryJpaIO {
this.withAutoTimestamp = withAutoTimestamp;
}
@Setup
public void setup() {
// AppEngineEnvironment is needed as long as Objectify keys are still involved in the handling
// of SQL entities (e.g., in VKeys). ObjectifyService needs to be initialized when conversion
// between Ofy entity and Datastore entity is needed.
try (AppEngineEnvironment env = new AppEngineEnvironment()) {
ObjectifyService.initOfy();
}
}
@ProcessElement
public void processElement(@Element KV<ShardedKey<Integer>, Iterable<T>> kv) {
if (withAutoTimestamp) {
@@ -386,19 +399,17 @@ public final class RegistryJpaIO {
}
private void actuallyProcessElement(@Element KV<ShardedKey<Integer>, Iterable<T>> kv) {
try (AppEngineEnvironment env = new AppEngineEnvironment()) {
ImmutableList<Object> entities =
Streams.stream(kv.getValue())
.map(this.jpaConverter::apply)
// TODO(b/177340730): post migration delete the line below.
.filter(Objects::nonNull)
.collect(ImmutableList.toImmutableList());
try {
jpaTm().transact(() -> jpaTm().putAll(entities));
counter.inc(entities.size());
} catch (RuntimeException e) {
processSingly(entities);
}
ImmutableList<Object> entities =
Streams.stream(kv.getValue())
.map(this.jpaConverter::apply)
// TODO(b/177340730): post migration delete the line below.
.filter(Objects::nonNull)
.collect(ImmutableList.toImmutableList());
try {
jpaTm().transact(() -> jpaTm().putAll(entities));
counter.inc(entities.size());
} catch (RuntimeException e) {
processSingly(entities);
}
}

View File

@@ -21,6 +21,7 @@ import google.registry.config.CredentialModule;
import google.registry.config.RegistryConfig.Config;
import google.registry.config.RegistryConfig.ConfigModule;
import google.registry.persistence.PersistenceModule;
import google.registry.persistence.PersistenceModule.BeamBulkQueryJpaTm;
import google.registry.persistence.PersistenceModule.BeamJpaTm;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.persistence.transaction.JpaTransactionManager;
@@ -45,9 +46,19 @@ public interface RegistryPipelineComponent {
@Config("projectId")
String getProjectId();
/** Returns the regular {@link JpaTransactionManager} for general use. */
@BeamJpaTm
Lazy<JpaTransactionManager> getJpaTransactionManager();
/**
* Returns a {@link JpaTransactionManager} optimized for bulk loading multi-level JPA entities
* ({@link google.registry.model.domain.DomainBase} and {@link
* google.registry.model.domain.DomainHistory}). Please refer to {@link
* google.registry.model.bulkquery.BulkQueryEntities} for more information.
*/
@BeamBulkQueryJpaTm
Lazy<JpaTransactionManager> getBulkQueryJpaTransactionManager();
@Component.Builder
interface Builder {

View File

@@ -16,6 +16,7 @@ package google.registry.beam.common;
import google.registry.beam.common.RegistryJpaIO.Write;
import google.registry.config.RegistryEnvironment;
import google.registry.persistence.PersistenceModule.JpaTransactionManagerType;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import java.util.Objects;
import javax.annotation.Nullable;
@@ -34,7 +35,6 @@ import org.apache.beam.sdk.options.Description;
public interface RegistryPipelineOptions extends GcpOptions {
@Description("The Registry environment.")
@Nullable
RegistryEnvironment getRegistryEnvironment();
void setRegistryEnvironment(RegistryEnvironment environment);
@@ -45,6 +45,12 @@ public interface RegistryPipelineOptions extends GcpOptions {
void setIsolationOverride(TransactionIsolationLevel isolationOverride);
@Description("The JPA Transaction Manager to use.")
@Default.Enum(value = "REGULAR")
JpaTransactionManagerType getJpaTransactionManagerType();
void setJpaTransactionManagerType(JpaTransactionManagerType jpaTransactionManagerType);
@Description("The number of entities to write to the SQL database in one operation.")
@Default.Integer(20)
int getSqlWriteBatchSize();
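Since Beam derives command-line flags from these getter names, a worker configuration exercising the new options might be constructed like this (a sketch; the flag names are inferred from the getters above):

    RegistryPipelineOptions options =
        PipelineOptionsFactory.fromArgs(
                "--registryEnvironment=SANDBOX",
                "--jpaTransactionManagerType=BULK_QUERY",
                "--sqlWriteBatchSize=20")
            .withValidation()
            .as(RegistryPipelineOptions.class);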

View File

@@ -20,6 +20,8 @@ import com.google.auto.service.AutoService;
import com.google.common.flogger.FluentLogger;
import dagger.Lazy;
import google.registry.config.RegistryEnvironment;
import google.registry.config.SystemPropertySetter;
import google.registry.model.AppEngineEnvironment;
import google.registry.persistence.transaction.JpaTransactionManager;
import google.registry.persistence.transaction.TransactionManagerFactory;
import org.apache.beam.sdk.harness.JvmInitializer;
@@ -35,18 +37,36 @@ import org.apache.beam.sdk.options.PipelineOptions;
@AutoService(JvmInitializer.class)
public class RegistryPipelineWorkerInitializer implements JvmInitializer {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
public static final String PROPERTY = "google.registry.beam";
@Override
public void beforeProcessing(PipelineOptions options) {
RegistryPipelineOptions registryOptions = options.as(RegistryPipelineOptions.class);
RegistryEnvironment environment = registryOptions.getRegistryEnvironment();
if (environment == null || environment.equals(RegistryEnvironment.UNITTEST)) {
return;
throw new RuntimeException(
"A registry environment must be specified in the pipeline options.");
}
logger.atInfo().log("Setting up RegistryEnvironment: %s", environment);
logger.atInfo().log("Setting up RegistryEnvironment %s.", environment);
environment.setup();
Lazy<JpaTransactionManager> transactionManagerLazy =
toRegistryPipelineComponent(registryOptions).getJpaTransactionManager();
RegistryPipelineComponent registryPipelineComponent =
toRegistryPipelineComponent(registryOptions);
Lazy<JpaTransactionManager> transactionManagerLazy;
switch (registryOptions.getJpaTransactionManagerType()) {
case BULK_QUERY:
transactionManagerLazy = registryPipelineComponent.getBulkQueryJpaTransactionManager();
break;
case REGULAR:
default:
transactionManagerLazy = registryPipelineComponent.getJpaTransactionManager();
}
TransactionManagerFactory.setJpaTmOnBeamWorker(transactionManagerLazy::get);
// Masquerade all threads as App Engine threads so we can create Ofy keys in the pipeline. Also
// loads all ofy entities.
new AppEngineEnvironment("s~" + registryPipelineComponent.getProjectId())
.setEnvironmentForAllThreads();
// Set the system property so that we can call IdService.allocateId() without access to
// datastore.
SystemPropertySetter.PRODUCTION_IMPL.setProperty(PROPERTY, "true");
}
}

View File

@@ -0,0 +1,169 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.comparedb;
import static google.registry.beam.comparedb.ValidateSqlUtils.createSqlEntityTupleTag;
import static google.registry.beam.initsql.Transforms.createTagForKind;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Verify;
import com.google.common.collect.ImmutableSet;
import com.googlecode.objectify.Key;
import google.registry.backup.VersionedEntity;
import google.registry.beam.initsql.Transforms;
import google.registry.model.billing.BillingEvent;
import google.registry.model.common.Cursor;
import google.registry.model.contact.ContactHistory;
import google.registry.model.contact.ContactResource;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.DomainHistory;
import google.registry.model.domain.token.AllocationToken;
import google.registry.model.host.HostHistory;
import google.registry.model.host.HostResource;
import google.registry.model.poll.PollMessage;
import google.registry.model.registrar.Registrar;
import google.registry.model.registrar.RegistrarContact;
import google.registry.model.replay.SqlEntity;
import google.registry.model.reporting.HistoryEntry;
import google.registry.model.tld.Registry;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TupleTag;
import org.apache.beam.sdk.values.TupleTagList;
import org.joda.time.DateTime;
/** Utilities for loading Datastore snapshots. */
public final class DatastoreSnapshots {
private DatastoreSnapshots() {}
/**
* Datastore kinds eligible for validation. This set must be consistent with {@link
* SqlSnapshots#ALL_SQL_ENTITIES}.
*/
@VisibleForTesting
static final ImmutableSet<Class<?>> ALL_DATASTORE_KINDS =
ImmutableSet.of(
Registry.class,
Cursor.class,
Registrar.class,
ContactResource.class,
RegistrarContact.class,
HostResource.class,
HistoryEntry.class,
AllocationToken.class,
BillingEvent.Recurring.class,
BillingEvent.OneTime.class,
BillingEvent.Cancellation.class,
PollMessage.class,
DomainBase.class);
/**
* Returns the Datastore snapshot right before {@code commitLogToTime} for the user-specified
* {@code kinds}. The resulting snapshot has all changes that happened before {@code
* commitLogToTime}, and none at or after {@code commitLogToTime}.
*
* <p>If {@code HistoryEntry} is included in {@code kinds}, the result will contain {@code
* PCollections} for the child entities, {@code DomainHistory}, {@code ContactHistory}, and {@code
* HostHistory}.
*/
static PCollectionTuple loadDatastoreSnapshotByKind(
Pipeline pipeline,
String exportDir,
String commitLogDir,
DateTime commitLogFromTime,
DateTime commitLogToTime,
Set<Class<?>> kinds) {
PCollectionTuple snapshot =
pipeline.apply(
"Load Datastore snapshot.",
Transforms.loadDatastoreSnapshot(
exportDir,
commitLogDir,
commitLogFromTime,
commitLogToTime,
kinds.stream().map(Key::getKind).collect(ImmutableSet.toImmutableSet())));
PCollectionTuple perTypeSnapshots = PCollectionTuple.empty(pipeline);
for (Class<?> kind : kinds) {
PCollection<VersionedEntity> perKindSnapshot =
snapshot.get(createTagForKind(Key.getKind(kind)));
if (SqlEntity.class.isAssignableFrom(kind)) {
perTypeSnapshots =
perTypeSnapshots.and(
createSqlEntityTupleTag((Class<? extends SqlEntity>) kind),
datastoreEntityToPojo(perKindSnapshot, kind.getSimpleName()));
continue;
}
Verify.verify(kind == HistoryEntry.class, "Unexpected Non-SqlEntity class: %s", kind);
PCollectionTuple historyEntriesByType = splitHistoryEntry(perKindSnapshot);
for (Map.Entry<TupleTag<?>, PCollection<?>> entry :
historyEntriesByType.getAll().entrySet()) {
perTypeSnapshots = perTypeSnapshots.and(entry.getKey().getId(), entry.getValue());
}
}
return perTypeSnapshots;
}
/**
* Splits a {@link PCollection} of {@link HistoryEntry HistoryEntries} into three collections of
* its child entities by type.
*/
static PCollectionTuple splitHistoryEntry(PCollection<VersionedEntity> historyEntries) {
return historyEntries.apply(
"Split HistoryEntry by Resource Type",
ParDo.of(
new DoFn<VersionedEntity, SqlEntity>() {
@ProcessElement
public void processElement(
@Element VersionedEntity historyEntry, MultiOutputReceiver out) {
Optional.ofNullable(Transforms.convertVersionedEntityToSqlEntity(historyEntry))
.ifPresent(
sqlEntity ->
out.get(createSqlEntityTupleTag(sqlEntity.getClass()))
.output(sqlEntity));
}
})
.withOutputTags(
createSqlEntityTupleTag(DomainHistory.class),
TupleTagList.of(createSqlEntityTupleTag(ContactHistory.class))
.and(createSqlEntityTupleTag(HostHistory.class))));
}
/**
* Transforms a {@link PCollection} of {@link VersionedEntity VersionedEntities} into their
* {@link SqlEntity} (Ofy Java object) counterparts.
*/
static PCollection<SqlEntity> datastoreEntityToPojo(
PCollection<VersionedEntity> entities, String desc) {
return entities.apply(
"Datastore Entity to Pojo " + desc,
ParDo.of(
new DoFn<VersionedEntity, SqlEntity>() {
@ProcessElement
public void processElement(
@Element VersionedEntity entity, OutputReceiver<SqlEntity> out) {
Optional.ofNullable(Transforms.convertVersionedEntityToSqlEntity(entity))
.ifPresent(out::output);
}
}));
}
}


@@ -0,0 +1,145 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.comparedb;
import com.google.auto.value.AutoValue;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import dagger.Component;
import google.registry.config.CloudTasksUtilsModule;
import google.registry.config.CredentialModule;
import google.registry.config.RegistryConfig;
import google.registry.config.RegistryConfig.Config;
import google.registry.config.RegistryConfig.ConfigModule;
import google.registry.gcs.GcsUtils;
import google.registry.util.Clock;
import google.registry.util.UtilsModule;
import java.io.IOException;
import java.util.Comparator;
import java.util.NoSuchElementException;
import java.util.Optional;
import javax.inject.Inject;
import javax.inject.Singleton;
import org.joda.time.DateTime;
import org.joda.time.Instant;
import org.joda.time.Interval;
/** Finds the necessary information for loading the most recent Datastore snapshot. */
public class LatestDatastoreSnapshotFinder {
private final String projectId;
private final GcsUtils gcsUtils;
private final Clock clock;
@Inject
LatestDatastoreSnapshotFinder(
@Config("projectId") String projectId, GcsUtils gcsUtils, Clock clock) {
this.projectId = projectId;
this.gcsUtils = gcsUtils;
this.clock = clock;
}
/**
* Finds information of the most recent Datastore snapshot, including the GCS folder of the
* exported data files and the start and stop times of the export. The folder of the CommitLogs is
* also included in the return value.
*/
public DatastoreSnapshotInfo getSnapshotInfo() {
String bucketName = RegistryConfig.getDatastoreBackupsBucket().substring("gs://".length());
/*
* Find the bucket-relative path to the overall metadata file of the last Datastore export.
* Since Datastore export is saved daily, we may need to look back to yesterday. If found, the
* return value is like
* "2021-11-19T06:00:00_76493/2021-11-19T06:00:00_76493.overall_export_metadata".
*/
Optional<String> metaFilePathOptional = findMostRecentExportMetadataFile(bucketName, 2);
if (!metaFilePathOptional.isPresent()) {
throw new NoSuchElementException("No exports found over the past 2 days.");
}
String metaFilePath = metaFilePathOptional.get();
String metaFileFolder = metaFilePath.substring(0, metaFilePath.indexOf('/'));
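// E.g., folder "2021-11-19T06:00:00_76493" becomes "2021-11-19T06:00:00.76493Z" below,
// which parses as the start time of the export.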
Instant exportStartTime = Instant.parse(metaFileFolder.replace('_', '.') + 'Z');
BlobInfo blobInfo = gcsUtils.getBlobInfo(BlobId.of(bucketName, metaFilePath));
Instant exportEndTime = new Instant(blobInfo.getCreateTime());
return DatastoreSnapshotInfo.create(
String.format("gs://%s/%s", bucketName, metaFileFolder),
getCommitLogDir(),
new Interval(exportStartTime, exportEndTime));
}
public String getCommitLogDir() {
return "gs://" + projectId + "-commits";
}
/**
* Finds the bucket-relative path of the overall export metadata file, in the given bucket,
* searching back up to {@code lookBackDays} days, including today.
*
* <p>The overall export metadata file is the last file created during a Datastore export. All
* data has been exported by the creation time of this file. The name of this file, like that of
* all files in the same export, begins with the timestamp when the export starts.
*
* <p>An example return value: {@code
* 2021-11-19T06:00:00_76493/2021-11-19T06:00:00_76493.overall_export_metadata}.
*/
private Optional<String> findMostRecentExportMetadataFile(String bucketName, int lookBackDays) {
DateTime today = clock.nowUtc();
for (int day = 0; day < lookBackDays; day++) {
String dateString = today.minusDays(day).toString("yyyy-MM-dd");
try {
Optional<String> metaFilePath =
gcsUtils.listFolderObjects(bucketName, dateString).stream()
.filter(s -> s.endsWith("overall_export_metadata"))
.map(s -> dateString + s)
.sorted(Comparator.<String>naturalOrder().reversed())
.findFirst();
if (metaFilePath.isPresent()) {
return metaFilePath;
}
} catch (IOException ioe) {
throw new RuntimeException(ioe);
}
}
return Optional.empty();
}
/** Holds information about a Datastore snapshot. */
@AutoValue
abstract static class DatastoreSnapshotInfo {
abstract String exportDir();
abstract String commitLogDir();
abstract Interval exportInterval();
static DatastoreSnapshotInfo create(
String exportDir, String commitLogDir, Interval exportOperationInterval) {
return new AutoValue_LatestDatastoreSnapshotFinder_DatastoreSnapshotInfo(
exportDir, commitLogDir, exportOperationInterval);
}
}
@Singleton
@Component(
modules = {
CredentialModule.class,
ConfigModule.class,
CloudTasksUtilsModule.class,
UtilsModule.class
})
interface LatestDatastoreSnapshotFinderFinderComponent {
LatestDatastoreSnapshotFinder datastoreSnapshotInfoFinder();
}
}
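
As a usage sketch, the Dagger component above is how ValidateSqlPipeline.main (further below) obtains the most recent snapshot:

DatastoreSnapshotInfo mostRecentExport =
    DaggerLatestDatastoreSnapshotFinder_LatestDatastoreSnapshotFinderFinderComponent.create()
        .datastoreSnapshotInfoFinder()
        .getSnapshotInfo();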


@@ -0,0 +1,384 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.comparedb;
import static google.registry.beam.comparedb.ValidateSqlUtils.createSqlEntityTupleTag;
import static google.registry.beam.comparedb.ValidateSqlUtils.getMedianIdForHistoryTable;
import com.google.common.base.Verify;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.ImmutableSetMultimap;
import com.google.common.collect.Streams;
import google.registry.beam.common.RegistryJpaIO;
import google.registry.model.billing.BillingEvent;
import google.registry.model.bulkquery.BulkQueryEntities;
import google.registry.model.bulkquery.DomainBaseLite;
import google.registry.model.bulkquery.DomainHistoryHost;
import google.registry.model.bulkquery.DomainHistoryLite;
import google.registry.model.bulkquery.DomainHost;
import google.registry.model.common.Cursor;
import google.registry.model.contact.ContactHistory;
import google.registry.model.contact.ContactResource;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.DomainHistory;
import google.registry.model.domain.DomainHistory.DomainHistoryId;
import google.registry.model.domain.GracePeriod;
import google.registry.model.domain.GracePeriod.GracePeriodHistory;
import google.registry.model.domain.secdns.DelegationSignerData;
import google.registry.model.domain.secdns.DomainDsDataHistory;
import google.registry.model.domain.token.AllocationToken;
import google.registry.model.host.HostHistory;
import google.registry.model.host.HostResource;
import google.registry.model.poll.PollMessage;
import google.registry.model.registrar.Registrar;
import google.registry.model.registrar.RegistrarContact;
import google.registry.model.replay.SqlEntity;
import google.registry.model.reporting.DomainTransactionRecord;
import google.registry.model.tld.Registry;
import google.registry.persistence.transaction.CriteriaQueryBuilder;
import java.io.Serializable;
import java.util.Optional;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.Flatten;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.SerializableFunction;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionList;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TypeDescriptor;
import org.apache.beam.sdk.values.TypeDescriptors;
/**
* Utilities for loading SQL snapshots.
*
* <p>For {@link DomainBase} and {@link DomainHistory}, this class assumes the presence of the
* {@link google.registry.persistence.PersistenceModule.JpaTransactionManagerType#BULK_QUERY
* bulk-query-capable JpaTransactionManager}, and takes advantage of it for higher throughput.
*
* <p>For now this class is meant for use during the database migration period only. Therefore, it
* contains optimizations specifically for the production database at the current size, e.g.,
* parallel queries for select tables.
*/
public final class SqlSnapshots {
private SqlSnapshots() {}
/**
* SQL entity types that are eligible for validation. This set must be consistent with {@link
* DatastoreSnapshots#ALL_DATASTORE_KINDS}.
*/
static final ImmutableSet<Class<? extends SqlEntity>> ALL_SQL_ENTITIES =
ImmutableSet.of(
Registry.class,
Cursor.class,
Registrar.class,
ContactResource.class,
RegistrarContact.class,
HostResource.class,
AllocationToken.class,
BillingEvent.Recurring.class,
BillingEvent.OneTime.class,
BillingEvent.Cancellation.class,
PollMessage.class,
DomainBase.class,
ContactHistory.class,
HostHistory.class,
DomainHistory.class);
/**
* Loads a SQL snapshot for the given {@code sqlEntityTypes}.
*
* <p>If {@code snapshotId} is present, all queries use the specified database snapshot,
* guaranteeing a consistent result.
*/
public static PCollectionTuple loadCloudSqlSnapshotByType(
Pipeline pipeline,
ImmutableSet<Class<? extends SqlEntity>> sqlEntityTypes,
Optional<String> snapshotId) {
PCollectionTuple perTypeSnapshots = PCollectionTuple.empty(pipeline);
for (Class<? extends SqlEntity> clazz : sqlEntityTypes) {
if (clazz == DomainBase.class) {
perTypeSnapshots =
perTypeSnapshots.and(
createSqlEntityTupleTag(DomainBase.class),
loadAndAssembleDomainBase(pipeline, snapshotId));
continue;
}
if (clazz == DomainHistory.class) {
perTypeSnapshots =
perTypeSnapshots.and(
createSqlEntityTupleTag(DomainHistory.class),
loadAndAssembleDomainHistory(pipeline, snapshotId));
continue;
}
if (clazz == ContactHistory.class) {
perTypeSnapshots =
perTypeSnapshots.and(
createSqlEntityTupleTag(ContactHistory.class),
loadContactHistory(pipeline, snapshotId));
continue;
}
perTypeSnapshots =
perTypeSnapshots.and(
createSqlEntityTupleTag(clazz),
pipeline.apply(
"SQL Load " + clazz.getSimpleName(),
RegistryJpaIO.read(
() -> CriteriaQueryBuilder.create(clazz).build(), SqlEntity.class::cast)
.withSnapshot(snapshotId.orElse(null))));
}
return perTypeSnapshots;
}
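// Usage sketch, with a snapshot id as produced by DatabaseSnapshot.createSnapshot()
// (see ValidateSqlPipeline):
//   PCollectionTuple sqlSnapshot =
//       SqlSnapshots.loadCloudSqlSnapshotByType(
//           pipeline, SqlSnapshots.ALL_SQL_ENTITIES, Optional.of(snapshotId));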
/**
* Bulk-loads parts of {@link DomainBase} and assembles them in the pipeline.
*
* @see BulkQueryEntities
*/
public static PCollection<SqlEntity> loadAndAssembleDomainBase(
Pipeline pipeline, Optional<String> snapshotId) {
PCollection<KV<String, Serializable>> baseObjects =
readAllAndAssignKey(pipeline, DomainBaseLite.class, DomainBaseLite::getRepoId, snapshotId);
PCollection<KV<String, Serializable>> gracePeriods =
readAllAndAssignKey(pipeline, GracePeriod.class, GracePeriod::getDomainRepoId, snapshotId);
PCollection<KV<String, Serializable>> delegationSigners =
readAllAndAssignKey(
pipeline,
DelegationSignerData.class,
DelegationSignerData::getDomainRepoId,
snapshotId);
PCollection<KV<String, Serializable>> domainHosts =
readAllAndAssignKey(pipeline, DomainHost.class, DomainHost::getDomainRepoId, snapshotId);
return PCollectionList.of(
ImmutableList.of(baseObjects, gracePeriods, delegationSigners, domainHosts))
.apply("SQL Merge DomainBase parts", Flatten.pCollections())
.apply("Group by Domain Parts by RepoId", GroupByKey.create())
.apply(
"Assemble DomainBase",
ParDo.of(
new DoFn<KV<String, Iterable<Serializable>>, SqlEntity>() {
@ProcessElement
public void processElement(
@Element KV<String, Iterable<Serializable>> kv,
OutputReceiver<SqlEntity> outputReceiver) {
TypedClassifier partsByType = new TypedClassifier(kv.getValue());
ImmutableSet<DomainBaseLite> baseObjects =
partsByType.getAllOf(DomainBaseLite.class);
Verify.verify(
baseObjects.size() == 1,
"Expecting one DomainBaseLite object per repoId: %s",
kv.getKey());
outputReceiver.output(
BulkQueryEntities.assembleDomainBase(
baseObjects.iterator().next(),
partsByType.getAllOf(GracePeriod.class),
partsByType.getAllOf(DelegationSignerData.class),
partsByType.getAllOf(DomainHost.class).stream()
.map(DomainHost::getHostVKey)
.collect(ImmutableSet.toImmutableSet())));
}
}));
}
/**
* Loads all {@link ContactHistory} entities from the database.
*
* <p>This method uses two queries to load data in parallel. This is a performance optimization
* specifically for the production database.
*/
static PCollection<SqlEntity> loadContactHistory(Pipeline pipeline, Optional<String> snapshotId) {
long medianId =
getMedianIdForHistoryTable("ContactHistory")
.orElseThrow(
() -> new IllegalStateException("Not a valid database: no ContactHistory."));
PCollection<SqlEntity> part1 =
pipeline.apply(
"SQL Load ContactHistory first half",
RegistryJpaIO.read(
String.format("select c from ContactHistory c where id <= %s", medianId),
false,
SqlEntity.class::cast)
.withSnapshot(snapshotId.orElse(null)));
PCollection<SqlEntity> part2 =
pipeline.apply(
"SQL Load ContactHistory second half",
RegistryJpaIO.read(
String.format("select c from ContactHistory c where id > %s", medianId),
false,
SqlEntity.class::cast)
.withSnapshot(snapshotId.orElse(null)));
return PCollectionList.of(part1)
.and(part2)
.apply("Combine ContactHistory parts", Flatten.pCollections());
}
/**
* Bulk-loads all parts of {@link DomainHistory} and assembles them in the pipeline.
*
* <p>This method uses two queries to load {@link DomainBaseLite} in parallel. This is a
* performance optimization specifically for the production database.
*
* @see BulkQueryEntities
*/
static PCollection<SqlEntity> loadAndAssembleDomainHistory(
Pipeline pipeline, Optional<String> snapshotId) {
long medianId =
getMedianIdForHistoryTable("DomainHistory")
.orElseThrow(
() -> new IllegalStateException("Not a valid database: no DomainHistory."));
PCollection<KV<String, Serializable>> baseObjectsPart1 =
queryAndAssignKey(
pipeline,
"first half",
String.format("select c from DomainHistory c where id <= %s", medianId),
DomainHistoryLite.class,
compose(DomainHistoryLite::getDomainHistoryId, DomainHistoryId::toString),
snapshotId);
PCollection<KV<String, Serializable>> baseObjectsPart2 =
queryAndAssignKey(
pipeline,
"second half",
String.format("select c from DomainHistory c where id > %s", medianId),
DomainHistoryLite.class,
compose(DomainHistoryLite::getDomainHistoryId, DomainHistoryId::toString),
snapshotId);
PCollection<KV<String, Serializable>> gracePeriods =
readAllAndAssignKey(
pipeline,
GracePeriodHistory.class,
compose(GracePeriodHistory::getDomainHistoryId, DomainHistoryId::toString),
snapshotId);
PCollection<KV<String, Serializable>> delegationSigners =
readAllAndAssignKey(
pipeline,
DomainDsDataHistory.class,
compose(DomainDsDataHistory::getDomainHistoryId, DomainHistoryId::toString),
snapshotId);
PCollection<KV<String, Serializable>> domainHosts =
readAllAndAssignKey(
pipeline,
DomainHistoryHost.class,
compose(DomainHistoryHost::getDomainHistoryId, DomainHistoryId::toString),
snapshotId);
PCollection<KV<String, Serializable>> transactionRecords =
readAllAndAssignKey(
pipeline,
DomainTransactionRecord.class,
compose(DomainTransactionRecord::getDomainHistoryId, DomainHistoryId::toString),
snapshotId);
return PCollectionList.of(
ImmutableList.of(
baseObjectsPart1,
baseObjectsPart2,
gracePeriods,
delegationSigners,
domainHosts,
transactionRecords))
.apply("Merge DomainHistory parts", Flatten.pCollections())
.apply("Group by DomainHistory Parts by DomainHistoryId string", GroupByKey.create())
.apply(
"Assemble DomainHistory",
ParDo.of(
new DoFn<KV<String, Iterable<Serializable>>, SqlEntity>() {
@ProcessElement
public void processElement(
@Element KV<String, Iterable<Serializable>> kv,
OutputReceiver<SqlEntity> outputReceiver) {
TypedClassifier partsByType = new TypedClassifier(kv.getValue());
ImmutableSet<DomainHistoryLite> baseObjects =
partsByType.getAllOf(DomainHistoryLite.class);
Verify.verify(
baseObjects.size() == 1,
"Expecting one DomainHistoryLite object per domainHistoryId: %s",
kv.getKey());
outputReceiver.output(
BulkQueryEntities.assembleDomainHistory(
baseObjects.iterator().next(),
partsByType.getAllOf(DomainDsDataHistory.class),
partsByType.getAllOf(DomainHistoryHost.class).stream()
.map(DomainHistoryHost::getHostVKey)
.collect(ImmutableSet.toImmutableSet()),
partsByType.getAllOf(GracePeriodHistory.class),
partsByType.getAllOf(DomainTransactionRecord.class)));
}
}));
}
static <R> PCollection<KV<String, Serializable>> readAllAndAssignKey(
Pipeline pipeline,
Class<R> type,
SerializableFunction<R, String> keyFunction,
Optional<String> snapshotId) {
return pipeline
.apply(
"SQL Load " + type.getSimpleName(),
RegistryJpaIO.read(() -> CriteriaQueryBuilder.create(type).build())
.withSnapshot(snapshotId.orElse(null)))
.apply(
"Assign Key to " + type.getSimpleName(),
MapElements.into(
TypeDescriptors.kvs(
TypeDescriptors.strings(), TypeDescriptor.of(Serializable.class)))
.via(obj -> KV.of(keyFunction.apply(obj), (Serializable) obj)));
}
static <R> PCollection<KV<String, Serializable>> queryAndAssignKey(
Pipeline pipeline,
String differentiator,
String jpqlQuery,
Class<R> type,
SerializableFunction<R, String> keyFunction,
Optional<String> snapshotId) {
return pipeline
.apply(
"SQL Load " + type.getSimpleName() + " " + diffrentiator,
RegistryJpaIO.read(jplQuery, false, type::cast).withSnapshot(snapshotId.orElse(null)))
.apply(
"Assign Key to " + type.getSimpleName() + " " + diffrentiator,
MapElements.into(
TypeDescriptors.kvs(
TypeDescriptors.strings(), TypeDescriptor.of(Serializable.class)))
.via(obj -> KV.of(keyFunction.apply(obj), (Serializable) obj)));
}
// TODO(b/205988530): don't use Beam's SerializableFunction; make one that extends Java's Function.
private static <R, I, T> SerializableFunction<R, T> compose(
SerializableFunction<R, I> f1, SerializableFunction<I, T> f2) {
return r -> f2.apply(f1.apply(r));
}
/** Container that receives mixed-typed data and groups them by {@link Class}. */
static class TypedClassifier {
private final ImmutableSetMultimap<Class<?>, Object> classifiedEntities;
TypedClassifier(Iterable<Serializable> inputs) {
this.classifiedEntities =
Streams.stream(inputs)
.collect(ImmutableSetMultimap.toImmutableSetMultimap(Object::getClass, x -> x));
}
<T> ImmutableSet<T> getAllOf(Class<T> clazz) {
return classifiedEntities.get(clazz).stream()
.map(clazz::cast)
.collect(ImmutableSet.toImmutableSet());
}
}
}


@@ -0,0 +1,159 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.comparedb;
import static com.google.common.base.Verify.verify;
import static org.apache.beam.sdk.values.TypeDescriptors.strings;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Preconditions;
import google.registry.beam.common.DatabaseSnapshot;
import google.registry.beam.common.RegistryPipelineWorkerInitializer;
import google.registry.beam.comparedb.LatestDatastoreSnapshotFinder.DatastoreSnapshotInfo;
import google.registry.beam.comparedb.ValidateSqlUtils.CompareSqlEntity;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.DomainHistory;
import google.registry.model.replay.SqlEntity;
import google.registry.model.replay.SqlReplayCheckpoint;
import google.registry.persistence.PersistenceModule.JpaTransactionManagerType;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.persistence.transaction.TransactionManagerFactory;
import java.io.Serializable;
import java.util.Optional;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult.State;
import org.apache.beam.sdk.coders.SerializableCoder;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Flatten;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.WithKeys;
import org.apache.beam.sdk.values.PCollectionList;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TupleTag;
import org.joda.time.DateTime;
/**
* Validates the asynchronous data replication process from Datastore (primary storage) to Cloud SQL
* (secondary storage).
*/
public class ValidateSqlPipeline {
/** Specifies how many extra minutes of CommitLogs to load before the start of a Datastore export. */
private static final int COMMIT_LOG_MARGIN_MINUTES = 10;
private final ValidateSqlPipelineOptions options;
private final DatastoreSnapshotInfo mostRecentExport;
public ValidateSqlPipeline(
ValidateSqlPipelineOptions options, DatastoreSnapshotInfo mostRecentExport) {
this.options = options;
this.mostRecentExport = mostRecentExport;
}
void run() {
run(Pipeline.create(options));
}
@VisibleForTesting
void run(Pipeline pipeline) {
// TODO(weiminyu): Acquire the commit log replay lock when the lock release bug is fixed.
DateTime latestCommitLogTime =
TransactionManagerFactory.jpaTm().transact(() -> SqlReplayCheckpoint.get());
Preconditions.checkState(
latestCommitLogTime.isAfter(mostRecentExport.exportInterval().getEnd()),
"Cannot recreate Datastore snapshot since target time is in the middle of an export.");
try (DatabaseSnapshot databaseSnapshot = DatabaseSnapshot.createSnapshot()) {
setupPipeline(pipeline, Optional.of(databaseSnapshot.getSnapshotId()), latestCommitLogTime);
State state = pipeline.run().waitUntilFinish();
if (!State.DONE.equals(state)) {
throw new IllegalStateException("Unexpected pipeline state: " + state);
}
}
}
void setupPipeline(
Pipeline pipeline, Optional<String> sqlSnapshotId, DateTime latestCommitLogTime) {
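// Fall back to Java serialization for SqlEntity implementations, which have no natural Beam coder.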
pipeline
.getCoderRegistry()
.registerCoderForClass(SqlEntity.class, SerializableCoder.of(Serializable.class));
PCollectionTuple datastoreSnapshot =
DatastoreSnapshots.loadDatastoreSnapshotByKind(
pipeline,
mostRecentExport.exportDir(),
mostRecentExport.commitLogDir(),
mostRecentExport.exportInterval().getStart().minusMinutes(COMMIT_LOG_MARGIN_MINUTES),
// Add 1ms because we want to include commit logs at latestCommitLogTime, but this
// parameter is exclusive.
latestCommitLogTime.plusMillis(1),
DatastoreSnapshots.ALL_DATASTORE_KINDS);
PCollectionTuple cloudSqlSnapshot =
SqlSnapshots.loadCloudSqlSnapshotByType(
pipeline, SqlSnapshots.ALL_SQL_ENTITIES, sqlSnapshotId);
verify(
datastoreSnapshot.getAll().keySet().equals(cloudSqlSnapshot.getAll().keySet()),
"Expecting the same set of types in both snapshots.");
for (Class<? extends SqlEntity> clazz : SqlSnapshots.ALL_SQL_ENTITIES) {
TupleTag<SqlEntity> tag = ValidateSqlUtils.createSqlEntityTupleTag(clazz);
verify(
datastoreSnapshot.has(tag), "Missing %s in Datastore snapshot.", clazz.getSimpleName());
verify(cloudSqlSnapshot.has(tag), "Missing %s in Cloud SQL snapshot.", clazz.getSimpleName());
PCollectionList.of(datastoreSnapshot.get(tag))
.and(cloudSqlSnapshot.get(tag))
.apply("Combine from both snapshots: " + clazz.getSimpleName(), Flatten.pCollections())
.apply(
"Assign primary key to merged " + clazz.getSimpleName(),
WithKeys.of(ValidateSqlPipeline::getPrimaryKeyString).withKeyType(strings()))
.apply("Group by primary key " + clazz.getSimpleName(), GroupByKey.create())
.apply("Compare " + clazz.getSimpleName(), ParDo.of(new CompareSqlEntity()));
}
}
private static String getPrimaryKeyString(SqlEntity sqlEntity) {
// SqlEntity.getPrimaryKeyString only works with entities registered with Hibernate.
// We are using the BulkQueryJpaTransactionManager, which does not recognize DomainBase and
// DomainHistory. See BulkQueryEntities.java for more information.
if (sqlEntity instanceof DomainBase) {
return "DomainBase_" + ((DomainBase) sqlEntity).getRepoId();
}
if (sqlEntity instanceof DomainHistory) {
return "DomainHistory_" + ((DomainHistory) sqlEntity).getDomainHistoryId().toString();
}
return sqlEntity.getPrimaryKeyString();
}
public static void main(String[] args) {
ValidateSqlPipelineOptions options =
PipelineOptionsFactory.fromArgs(args).withValidation().as(ValidateSqlPipelineOptions.class);
// Defensively set important options.
options.setIsolationOverride(TransactionIsolationLevel.TRANSACTION_REPEATABLE_READ);
options.setJpaTransactionManagerType(JpaTransactionManagerType.BULK_QUERY);
// Reuse Dataflow worker initialization code to set up JPA in the pipeline harness.
new RegistryPipelineWorkerInitializer().beforeProcessing(options);
DatastoreSnapshotInfo mostRecentExport =
DaggerLatestDatastoreSnapshotFinder_LatestDatastoreSnapshotFinderFinderComponent.create()
.datastoreSnapshotInfoFinder()
.getSnapshotInfo();
new ValidateSqlPipeline(options, mostRecentExport).run(Pipeline.create(options));
}
}


@@ -0,0 +1,20 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.comparedb;
import google.registry.beam.common.RegistryPipelineOptions;
/** BEAM pipeline options for {@link ValidateSqlPipeline}. */
public interface ValidateSqlPipelineOptions extends RegistryPipelineOptions {}


@@ -0,0 +1,321 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.comparedb;
import static com.google.common.base.Verify.verify;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import com.google.common.base.Preconditions;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSortedMap;
import com.google.common.collect.Maps;
import com.google.common.flogger.FluentLogger;
import google.registry.model.EppResource;
import google.registry.model.ImmutableObject;
import google.registry.model.billing.BillingEvent.OneTime;
import google.registry.model.contact.ContactBase;
import google.registry.model.contact.ContactHistory;
import google.registry.model.domain.DomainContent;
import google.registry.model.domain.DomainHistory;
import google.registry.model.eppcommon.AuthInfo;
import google.registry.model.host.HostHistory;
import google.registry.model.poll.PollMessage;
import google.registry.model.replay.SqlEntity;
import google.registry.model.reporting.HistoryEntry;
import google.registry.model.tld.Registry;
import java.lang.reflect.Field;
import java.math.BigInteger;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.Set;
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TupleTag;
import org.joda.money.Money;
/** Helpers for use by {@link ValidateSqlPipeline}. */
final class ValidateSqlUtils {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private ValidateSqlUtils() {}
/**
* Query template for finding the median value of the {@code history_revision_id} column in one of
* the History tables.
*
* <p>The {@link ValidateSqlPipeline} uses this query to parallelize the query to some of the
* history tables. Although the {@code repo_id} column is the leading column in the primary keys
* of these tables, in practice and with production data, division by {@code history_revision_id}
* works slightly faster for unknown reasons.
*/
private static final String MEDIAN_ID_QUERY_TEMPLATE =
"SELECT history_revision_id FROM ( "
+ " SELECT"
+ " ROW_NUMBER() OVER (ORDER BY history_revision_id ASC) AS rownumber,"
+ " history_revision_id"
+ " FROM \"%TABLE%\""
+ ") AS foo\n"
+ "WHERE rownumber in (select count(*) / 2 + 1 from \"%TABLE%\")";
static Optional<Long> getMedianIdForHistoryTable(String tableName) {
Preconditions.checkArgument(
tableName.endsWith("History"), "Table must be one of the History tables.");
String sqlText = MEDIAN_ID_QUERY_TEMPLATE.replace("%TABLE%", tableName);
List results =
jpaTm()
.transact(() -> jpaTm().getEntityManager().createNativeQuery(sqlText).getResultList());
verify(results.size() < 2, "Median id query should have at most one result.");
if (results.isEmpty()) {
return Optional.empty();
}
return Optional.of(((BigInteger) results.get(0)).longValue());
}
static TupleTag<SqlEntity> createSqlEntityTupleTag(Class<? extends SqlEntity> actualType) {
return new TupleTag<SqlEntity>(actualType.getSimpleName()) {};
}
static class CompareSqlEntity extends DoFn<KV<String, Iterable<SqlEntity>>, Void> {
private final HashMap<String, Counter> totalCounters = new HashMap<>();
private final HashMap<String, Counter> missingCounters = new HashMap<>();
private final HashMap<String, Counter> unequalCounters = new HashMap<>();
private final HashMap<String, Counter> badEntityCounters = new HashMap<>();
private volatile boolean logPrinted = false;
private String getCounterKey(Class<?> clazz) {
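// Collapse all PollMessage subclasses (e.g., OneTime, Autorenew) into a single counter key.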
return PollMessage.class.isAssignableFrom(clazz) ? "PollMessage" : clazz.getSimpleName();
}
private synchronized void ensureCounterExists(String counterKey) {
if (totalCounters.containsKey(counterKey)) {
return;
}
totalCounters.put(counterKey, Metrics.counter("CompareDB", "Total Compared: " + counterKey));
missingCounters.put(
counterKey, Metrics.counter("CompareDB", "Missing In One DB: " + counterKey));
unequalCounters.put(counterKey, Metrics.counter("CompareDB", "Not Equal: " + counterKey));
badEntityCounters.put(counterKey, Metrics.counter("CompareDB", "Bad Entities: " + counterKey));
}
/**
* A rudimentary debugging helper that prints the first pair of unequal entities in each worker.
* This will be removed when we start exporting such entities to GCS.
*/
void logDiff(String key, Object entry0, Object entry1) {
if (logPrinted) {
return;
}
logPrinted = true;
Map<String, Object> fields0 = ((ImmutableObject) entry0).toDiffableFieldMap();
Map<String, Object> fields1 = ((ImmutableObject) entry1).toDiffableFieldMap();
StringBuilder sb = new StringBuilder();
fields0.forEach(
(field, value) -> {
if (fields1.containsKey(field)) {
if (!Objects.equals(value, fields1.get(field))) {
sb.append(field + " does not match: " + value + " -> " + fields1.get(field) + "\n");
}
} else {
sb.append(field + " not found in entity 2\n");
}
});
fields1.forEach(
(field, value) -> {
if (!fields0.containsKey(field)) {
sb.append(field + "Not found in entity 1\n");
}
});
logger.atWarning().log(key + " " + sb.toString());
}
@ProcessElement
public void processElement(@Element KV<String, Iterable<SqlEntity>> kv) {
ImmutableList<SqlEntity> entities = ImmutableList.copyOf(kv.getValue());
verify(!entities.isEmpty(), "Can't happen: no value for key %s.", kv.getKey());
verify(entities.size() <= 2, "Unexpected duplicates for key %s", kv.getKey());
String counterKey = getCounterKey(entities.get(0).getClass());
ensureCounterExists(counterKey);
totalCounters.get(counterKey).inc();
if (entities.size() == 1) {
missingCounters.get(counterKey).inc();
// Temporary debugging help. See logDiff() above.
if (!logPrinted) {
logPrinted = true;
logger.atWarning().log("Unexpected single entity: %s", kv.getKey());
}
return;
}
SqlEntity entity0;
SqlEntity entity1;
try {
entity0 = normalizeEntity(entities.get(0));
entity1 = normalizeEntity(entities.get(1));
} catch (Exception e) {
badEntityCounters.get(counterKey).inc();
// Temporary debugging help. See logDiff() above.
if (!logPrinted) {
logPrinted = true;
logger.atWarning().withCause(e).log("Failed to normalize entities for key %s.", kv.getKey());
}
return;
}
if (!Objects.equals(entity0, entity1)) {
unequalCounters.get(counterKey).inc();
logDiff(kv.getKey(), entities.get(0), entities.get(1));
}
}
}
static SqlEntity normalizeEntity(SqlEntity sqlEntity) {
if (sqlEntity instanceof EppResource) {
return normalizeEppResource(sqlEntity);
}
if (sqlEntity instanceof HistoryEntry) {
return (SqlEntity) normalizeHistoryEntry((HistoryEntry) sqlEntity);
}
if (sqlEntity instanceof Registry) {
return normalizeRegistry((Registry) sqlEntity);
}
if (sqlEntity instanceof OneTime) {
return normalizeOnetime((OneTime) sqlEntity);
}
return sqlEntity;
}
/**
* Normalizes an {@link EppResource} instance for comparison.
*
* <p>This method may modify the input object using reflection instead of making a copy with
* {@code eppResource.asBuilder().build()}, because when {@code eppResource} is a {@link
* google.registry.model.domain.DomainBase}, the {@code build} method accesses the Database, which
* we want to avoid.
*/
static SqlEntity normalizeEppResource(SqlEntity eppResource) {
try {
Field authField =
eppResource instanceof DomainContent
? DomainContent.class.getDeclaredField("authInfo")
: eppResource instanceof ContactBase
? ContactBase.class.getDeclaredField("authInfo")
: null;
if (authField != null) {
authField.setAccessible(true);
AuthInfo authInfo = (AuthInfo) authField.get(eppResource);
// When AuthInfo is missing, the authInfo field is null if the object is loaded from
// Datastore, or a PasswordAuth with null properties if loaded from SQL. In the second case
// we set the authInfo field to null.
if (authInfo != null
&& authInfo.getPw() != null
&& authInfo.getPw().getRepoId() == null
&& authInfo.getPw().getValue() == null) {
authField.set(eppResource, null);
}
}
Field field = EppResource.class.getDeclaredField("revisions");
field.setAccessible(true);
field.set(eppResource, null);
return eppResource;
} catch (Exception e) {
throw new RuntimeException(e);
}
}
/**
* Normalizes a {@link HistoryEntry} for comparison.
*
* <p>This method modifies the input using reflection because the relevant builder methods perform
* unwanted checks and changes.
*/
static HistoryEntry normalizeHistoryEntry(HistoryEntry historyEntry) {
// History objects from Datastore do not have details of their EppResource objects
// (domainContent, contactBase, hostBase).
try {
if (historyEntry instanceof DomainHistory) {
Field domainContent = DomainHistory.class.getDeclaredField("domainContent");
domainContent.setAccessible(true);
domainContent.set(historyEntry, null);
Field domainTransactionRecords =
HistoryEntry.class.getDeclaredField("domainTransactionRecords");
domainTransactionRecords.setAccessible(true);
Set<?> domainTransactionRecordsValue = (Set<?>) domainTransactionRecords.get(historyEntry);
if (domainTransactionRecordsValue != null && domainTransactionRecordsValue.isEmpty()) {
domainTransactionRecords.set(historyEntry, null);
}
} else if (historyEntry instanceof ContactHistory) {
Field contactBase = ContactHistory.class.getDeclaredField("contactBase");
contactBase.setAccessible(true);
contactBase.set(historyEntry, null);
} else if (historyEntry instanceof HostHistory) {
Field hostBase = HostHistory.class.getDeclaredField("hostBase");
hostBase.setAccessible(true);
hostBase.set(historyEntry, null);
}
return historyEntry;
} catch (Exception e) {
throw new RuntimeException(e);
}
}
static Registry normalizeRegistry(Registry registry) {
if (registry.getStandardCreateCost().getAmount().scale() == 0) {
return registry;
}
return registry
.asBuilder()
.setCreateBillingCost(normalizeMoney(registry.getStandardCreateCost()))
.setRestoreBillingCost(normalizeMoney(registry.getStandardRestoreCost()))
.setServerStatusChangeBillingCost(normalizeMoney(registry.getServerStatusChangeCost()))
.setRegistryLockOrUnlockBillingCost(
normalizeMoney(registry.getRegistryLockOrUnlockBillingCost()))
.setRenewBillingCostTransitions(
ImmutableSortedMap.copyOf(
Maps.transformValues(
registry.getRenewBillingCostTransitions(), ValidateSqlUtils::normalizeMoney)))
.setEapFeeSchedule(
ImmutableSortedMap.copyOf(
Maps.transformValues(
registry.getEapFeeScheduleAsMap(), ValidateSqlUtils::normalizeMoney)))
.build();
}
/** Normalizes an {@link OneTime} instance for comparison. */
static OneTime normalizeOnetime(OneTime oneTime) {
Money cost = oneTime.getCost();
if (cost.getAmount().scale() == 0) {
return oneTime;
}
try {
return oneTime.asBuilder().setCost(normalizeMoney(oneTime.getCost())).build();
} catch (Exception e) {
throw new RuntimeException(e);
}
}
static Money normalizeMoney(Money original) {
// Strips ".00" from the amount.
return Money.of(original.getCurrencyUnit(), original.getAmount().stripTrailingZeros());
}
}


@@ -25,6 +25,7 @@ import com.google.common.collect.ImmutableSet;
import com.google.common.collect.ImmutableSortedSet;
import com.google.common.flogger.FluentLogger;
import com.google.datastore.v1.Entity;
import google.registry.config.RegistryEnvironment;
import java.util.Iterator;
import java.util.Map;
import org.apache.beam.sdk.Pipeline;
@@ -308,6 +309,11 @@ public class BulkDeleteDatastorePipeline {
public interface BulkDeletePipelineOptions extends GcpOptions {
@Description("The Registry environment.")
RegistryEnvironment getRegistryEnvironment();
void setRegistryEnvironment(RegistryEnvironment environment);
@Description(
"The Datastore KINDs to be deleted. The format may be:\n"
+ "\t- The list of kinds to be deleted as a comma-separated string, or\n"


@@ -169,14 +169,15 @@ public class DatastoreV1 {
int numSplits;
try {
long estimatedSizeBytes = getEstimatedSizeBytes(datastore, query, namespace);
logger.atInfo().log("Estimated size bytes for the query is: %s", estimatedSizeBytes);
logger.atInfo().log("Estimated size for the query is %d bytes.", estimatedSizeBytes);
numSplits =
(int)
Math.min(
NUM_QUERY_SPLITS_MAX,
Math.round(((double) estimatedSizeBytes) / DEFAULT_BUNDLE_SIZE_BYTES));
} catch (Exception e) {
logger.atWarning().log("Failed the fetch estimatedSizeBytes for query: %s", query, e);
logger.atWarning().withCause(e).log(
"Failed the fetch estimatedSizeBytes for query: %s", query);
// Fallback in case estimated size is unavailable.
numSplits = NUM_QUERY_SPLITS_MIN;
}
@@ -215,7 +216,7 @@ public class DatastoreV1 {
private static Entity getLatestTableStats(
String ourKind, @Nullable String namespace, Datastore datastore) throws DatastoreException {
long latestTimestamp = queryLatestStatisticsTimestamp(datastore, namespace);
logger.atInfo().log("Latest stats timestamp for kind %s is %s", ourKind, latestTimestamp);
logger.atInfo().log("Latest stats timestamp for kind %s is %s.", ourKind, latestTimestamp);
Query.Builder queryBuilder = Query.newBuilder();
if (Strings.isNullOrEmpty(namespace)) {
@@ -234,7 +235,7 @@ public class DatastoreV1 {
long now = System.currentTimeMillis();
RunQueryResponse response = datastore.runQuery(request);
logger.atFine().log(
"Query for per-kind statistics took %sms", System.currentTimeMillis() - now);
"Query for per-kind statistics took %d ms.", System.currentTimeMillis() - now);
QueryResultBatch batch = response.getBatch();
if (batch.getEntityResultsCount() == 0) {
@@ -330,7 +331,7 @@ public class DatastoreV1 {
logger.atWarning().log(
"Failed to translate Gql query '%s': %s", gqlQueryWithZeroLimit, e.getMessage());
logger.atWarning().log(
"User query might have a limit already set, so trying without zero limit");
"User query might have a limit already set, so trying without zero limit.");
// Retry without the zero limit.
return translateGqlQuery(gql, datastore, namespace);
} else {
@@ -514,10 +515,10 @@ public class DatastoreV1 {
@ProcessElement
public void processElement(ProcessContext c) throws Exception {
String gqlQuery = c.element();
logger.atInfo().log("User query: '%s'", gqlQuery);
logger.atInfo().log("User query: '%s'.", gqlQuery);
Query query =
translateGqlQueryWithLimitCheck(gqlQuery, datastore, v1Options.getNamespace());
logger.atInfo().log("User gql query translated to Query(%s)", query);
logger.atInfo().log("User gql query translated to Query(%s).", query);
c.output(query);
}
}
@@ -573,7 +574,7 @@ public class DatastoreV1 {
estimatedNumSplits = numSplits;
}
logger.atInfo().log("Splitting the query into %s splits", estimatedNumSplits);
logger.atInfo().log("Splitting the query into %d splits.", estimatedNumSplits);
List<Query> querySplits;
try {
querySplits =
@@ -647,7 +648,7 @@ public class DatastoreV1 {
throw exception;
}
if (!BackOffUtils.next(sleeper, backoff)) {
logger.atSevere().log("Aborting after %s retries.", MAX_RETRIES);
logger.atSevere().log("Aborting after %d retries.", MAX_RETRIES);
throw exception;
}
}


@@ -20,11 +20,11 @@ import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.googlecode.objectify.Key;
import google.registry.backup.AppEngineEnvironment;
import google.registry.backup.VersionedEntity;
import google.registry.beam.common.RegistryJpaIO;
import google.registry.beam.initsql.Transforms.RemoveDomainBaseForeignKeys;
import google.registry.model.billing.BillingEvent;
import google.registry.model.common.Cursor;
import google.registry.model.contact.ContactResource;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.token.AllocationToken;
@@ -62,6 +62,7 @@ import org.joda.time.DateTime;
* <ol>
* <li>{@link Registry}: Assumes that {@code PremiumList} and {@code ReservedList} have been set
* up in the SQL database.
* <li>{@link Cursor}: Logically can depend on {@code Registry}, but without foreign key.
* <li>{@link Registrar}: Logically depends on {@code Registry}, Foreign key not modeled yet.
* <li>{@link ContactResource}: references {@code Registrar}
* <li>{@link RegistrarContact}: references {@code Registrar}.
@@ -101,7 +102,11 @@ public class InitSqlPipeline implements Serializable {
*/
private static final ImmutableList<Class<?>> PHASE_ONE_ORDERED =
ImmutableList.of(
- Registry.class, Registrar.class, ContactResource.class, RegistrarContact.class);
+ Registry.class,
+ Cursor.class,
+ Registrar.class,
+ ContactResource.class,
+ RegistrarContact.class);
/**
* Datastore kinds to be written to the SQL database after the cleansed version of {@link
@@ -224,9 +229,7 @@ public class InitSqlPipeline implements Serializable {
}
private static ImmutableList<String> toKindStrings(Collection<Class<?>> entityClasses) {
- try (AppEngineEnvironment env = new AppEngineEnvironment()) {
-   return entityClasses.stream().map(Key::getKind).collect(ImmutableList.toImmutableList());
- }
+ return entityClasses.stream().map(Key::getKind).collect(ImmutableList.toImmutableList());
}
public static void main(String[] args) {


@@ -22,6 +22,7 @@ import static google.registry.beam.initsql.BackupPaths.getExportFilePatterns;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.util.DateTimeUtils.START_OF_TIME;
import static google.registry.util.DateTimeUtils.isBeforeOrAt;
import static google.registry.util.DomainNameUtils.canonicalizeDomainName;
import static java.util.Comparator.comparing;
import static org.apache.beam.sdk.values.TypeDescriptors.kvs;
import static org.apache.beam.sdk.values.TypeDescriptors.strings;
@@ -100,8 +101,9 @@ public final class Transforms {
}
/**
- * Composite {@link PTransform transform} that loads the Datastore snapshot at {@code
- * commitLogToTime} for caller specified {@code kinds}.
+ * Composite {@link PTransform transform} that loads the Datastore snapshot right before {@code
+ * commitLogToTime} for caller specified {@code kinds}. The resulting snapshot has all changes
+ * that happened before {@code commitLogToTime}, and none at or after {@code commitLogToTime}.
*
* <p>Caller must provide the location of a Datastore export that started AFTER {@code
* commitLogFromTime} and completed BEFORE {@code commitLogToTime}, as well as the root directory
@@ -261,8 +263,8 @@ public final class Transforms {
// Production data repair configs go below. See b/185954992.
- // Prober domains in bad state, without associated contacts, hosts, billings, and history.
- // They can be safely ignored.
+ // Prober domains in bad state, without associated contacts, hosts, billings, and non-synthesized
+ // history. They can be safely ignored.
private static final ImmutableSet<String> IGNORED_DOMAINS =
ImmutableSet.of("6AF6D2-IQCANT", "2-IQANYT");
@@ -277,7 +279,7 @@ public final class Transforms {
// Prober contacts referencing phantom registrars. They and their associated history entries can
// be safely ignored.
- private static final ImmutableSet IGNORED_CONTACTS =
+ private static final ImmutableSet<String> IGNORED_CONTACTS =
ImmutableSet.of(
"1_WJ0TEST-GOOGLE", "1_WJ1TEST-GOOGLE", "1_WJ2TEST-GOOGLE", "1_WJ3TEST-GOOGLE");
@@ -298,7 +300,7 @@ public final class Transforms {
return !IGNORED_HOSTS.contains(roid);
}
if (entity.getKind().equals("HistoryEntry")) {
- // Remove production bad data: History of the contacts to be ignored:
+ // Remove production bad data: Histories of ignored EPP resources:
com.google.appengine.api.datastore.Key parentKey = entity.getKey().getParent();
if (parentKey.getKind().equals("ContactResource")) {
String contactRoid = parentKey.getName();
@@ -308,6 +310,10 @@ public final class Transforms {
String hostRoid = parentKey.getName();
return !IGNORED_HOSTS.contains(hostRoid);
}
if (parentKey.getKind().equals("DomainBase")) {
String domainRoid = parentKey.getName();
return !IGNORED_DOMAINS.contains(domainRoid);
}
}
// End of production-specific checks.
@@ -320,7 +326,8 @@ public final class Transforms {
return true;
}
- private static Entity repairBadData(Entity entity) {
+ @VisibleForTesting
+ static Entity repairBadData(Entity entity) {
if (entity.getKind().equals("Cancellation")
&& Objects.equals(entity.getProperty("reason"), "AUTO_RENEW")) {
// AUTO_RENEW has been moved from 'reason' to flags. Change reason to RENEW and add the
@@ -328,6 +335,15 @@ public final class Transforms {
// instead of append. See b/185954992.
entity.setUnindexedProperty("reason", Reason.RENEW.name());
entity.setUnindexedProperty("flags", ImmutableList.of(Flag.AUTO_RENEW.name()));
} else if (entity.getKind().equals("DomainBase")) {
// Canonicalize old domain/host names from 2016 and earlier before we were enforcing this.
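// E.g., a legacy mixed-case name like "Example.COM" becomes "example.com".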
entity.setIndexedProperty(
"fullyQualifiedDomainName",
canonicalizeDomainName((String) entity.getProperty("fullyQualifiedDomainName")));
} else if (entity.getKind().equals("HostResource")) {
entity.setIndexedProperty(
"fullyQualifiedHostName",
canonicalizeDomainName((String) entity.getProperty("fullyQualifiedHostName")));
}
return entity;
}
@@ -348,7 +364,7 @@ public final class Transforms {
* to make Optional work with BEAM)
*/
@Nullable
- public static Object convertVersionedEntityToSqlEntity(VersionedEntity dsEntity) {
+ public static SqlEntity convertVersionedEntityToSqlEntity(VersionedEntity dsEntity) {
return dsEntity
.getEntity()
.filter(Transforms::isMigratable)
@@ -365,7 +381,8 @@ public final class Transforms {
* Returns a {@link PTransform} that produces a {@link PCollection} containing all elements in the
* given {@link Iterable}.
*/
- static PTransform<PBegin, PCollection<String>> toStringPCollection(Iterable<String> strings) {
+ private static PTransform<PBegin, PCollection<String>> toStringPCollection(
+     Iterable<String> strings) {
return Create.of(strings).withCoder(StringUtf8Coder.of());
}
@@ -373,7 +390,7 @@ public final class Transforms {
* Returns a {@link PTransform} from file {@link Metadata} to {@link VersionedEntity} using
* caller-provided {@code transformer}.
*/
- static PTransform<PCollection<Metadata>, PCollection<VersionedEntity>> processFiles(
+ private static PTransform<PCollection<Metadata>, PCollection<VersionedEntity>> processFiles(
DoFn<ReadableFile, VersionedEntity> transformer) {
return new PTransform<PCollection<Metadata>, PCollection<VersionedEntity>>() {
@Override
@@ -389,7 +406,7 @@ public final class Transforms {
private final DateTime fromTime;
private final DateTime toTime;
- public FilterCommitLogFileByTime(DateTime fromTime, DateTime toTime) {
+ FilterCommitLogFileByTime(DateTime fromTime, DateTime toTime) {
checkNotNull(fromTime, "fromTime");
checkNotNull(toTime, "toTime");
checkArgument(


@@ -57,7 +57,8 @@ import org.apache.beam.sdk.values.TypeDescriptor;
/**
* Definition of a Dataflow Flex pipeline template, which generates a given month's invoices.
*
- * <p>To stage this template locally, run the {@code stage_beam_pipeline.sh} shell script.
+ * <p>To stage this template locally, run {@code ./nom_build :core:sBP --environment=alpha
+ * --pipeline=invoicing}.
*
* <p>Then, you can run the staged template via the API client library, gCloud or a raw REST call.
*
@@ -125,7 +126,7 @@ public class InvoicingPipeline implements Serializable {
oneTime.getId(),
DateTimeUtils.toZonedDateTime(oneTime.getBillingTime(), ZoneId.of("UTC")),
DateTimeUtils.toZonedDateTime(oneTime.getEventTime(), ZoneId.of("UTC")),
- registrar.getClientId(),
+ registrar.getRegistrarId(),
registrar.getBillingIdentifier().toString(),
registrar.getPoNumber().orElse(""),
DomainNameUtils.getTldFromDomainName(oneTime.getTargetId()),


@@ -19,10 +19,13 @@ import static com.google.common.base.Verify.verify;
import static google.registry.model.common.Cursor.getCursorTimeOrStartOfTime;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.persistence.transaction.TransactionManagerUtil.transactIfJpaTm;
import static google.registry.rde.RdeModule.BRDA_QUEUE;
import static google.registry.rde.RdeModule.RDE_UPLOAD_QUEUE;
import static java.nio.charset.StandardCharsets.UTF_8;
import com.google.auto.value.AutoValue;
import com.google.cloud.storage.BlobId;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.flogger.FluentLogger;
import google.registry.gcs.GcsUtils;
import google.registry.keyring.api.PgpHelper;
@@ -31,14 +34,20 @@ import google.registry.model.rde.RdeMode;
import google.registry.model.rde.RdeNamingUtils;
import google.registry.model.rde.RdeRevision;
import google.registry.model.tld.Registry;
import google.registry.rde.BrdaCopyAction;
import google.registry.rde.DepositFragment;
import google.registry.rde.Ghostryde;
import google.registry.rde.PendingDeposit;
import google.registry.rde.RdeCounter;
import google.registry.rde.RdeMarshaller;
import google.registry.rde.RdeModule;
import google.registry.rde.RdeResourceType;
import google.registry.rde.RdeUploadAction;
import google.registry.rde.RdeUtil;
import google.registry.request.Action.Service;
import google.registry.request.RequestParameters;
import google.registry.tldconfig.idn.IdnTableEnum;
import google.registry.util.CloudTasksUtils;
import google.registry.xjc.rdeheader.XjcRdeHeader;
import google.registry.xjc.rdeheader.XjcRdeHeaderElement;
import google.registry.xml.ValidationMode;
@@ -66,8 +75,12 @@ public class RdeIO {
abstract static class Write
extends PTransform<PCollection<KV<PendingDeposit, Iterable<DepositFragment>>>, PDone> {
private static final long serialVersionUID = 3334807737227087760L;
abstract GcsUtils gcsUtils();
abstract CloudTasksUtils cloudTasksUtils();
abstract String rdeBucket();
// It's OK to return a primitive array because we are only using it to construct the
@@ -83,7 +96,9 @@ public class RdeIO {
@AutoValue.Builder
abstract static class Builder {
abstract Builder setGcsUtils(GcsUtils gcsUtils);
abstract Builder setGcsUtils(GcsUtils value);
abstract Builder setCloudTasksUtils(CloudTasksUtils value);
abstract Builder setRdeBucket(String value);
@@ -100,7 +115,9 @@ public class RdeIO {
.apply(
"Write to GCS",
ParDo.of(new RdeWriter(gcsUtils(), rdeBucket(), stagingKeyBytes(), validationMode())))
.apply("Update cursors", ParDo.of(new CursorUpdater()));
.apply(
"Update cursor and enqueue next action",
ParDo.of(new CursorUpdater(cloudTasksUtils())));
return PDone.in(input.getPipeline());
}
}
@@ -109,6 +126,7 @@ public class RdeIO {
extends DoFn<KV<PendingDeposit, Iterable<DepositFragment>>, KV<PendingDeposit, Integer>> {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static final long serialVersionUID = 5496375923068400382L;
private final GcsUtils gcsUtils;
private final String rdeBucket;
@@ -155,7 +173,7 @@ public class RdeIO {
checkState(key.directoryWithTrailingSlash() != null, "Manual subdirectory not specified");
prefix = prefix + "/manual/" + key.directoryWithTrailingSlash() + basename;
} else {
prefix = prefix + "/" + basename;
prefix = prefix + '/' + basename;
}
BlobId xmlFilename = BlobId.of(rdeBucket, prefix + ".xml.ghostryde");
// This file will contain the byte length (ASCII) of the raw unencrypted XML.
@@ -172,7 +190,7 @@ public class RdeIO {
// Write a gigantic XML file to GCS. We'll start by opening encrypted out/err file handles.
logger.atInfo().log("Writing %s and %s", xmlFilename, xmlLengthFilename);
logger.atInfo().log("Writing files '%s' and '%s'.", xmlFilename, xmlLengthFilename);
try (OutputStream gcsOutput = gcsUtils.openOutputStream(xmlFilename);
OutputStream lengthOutput = gcsUtils.openOutputStream(xmlLengthFilename);
OutputStream ghostrydeEncoder = Ghostryde.encoder(gcsOutput, stagingKey, lengthOutput);
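
For context, the GCS object names above are plain string concatenations on a shared prefix. A tiny sketch with hypothetical bucket and basename values; only the .xml.ghostryde suffix is taken from the code above, and the length-file suffix here is illustrative.

import com.google.cloud.storage.BlobId;

public class RdeBlobNameDemo {
  public static void main(String[] args) {
    String rdeBucket = "my-rde-bucket"; // hypothetical bucket name
    String prefix = "beam-job-1/soy_2021-11-30_full_S1_R0"; // hypothetical basename
    BlobId xmlFilename = BlobId.of(rdeBucket, prefix + ".xml.ghostryde");
    BlobId xmlLengthFilename = BlobId.of(rdeBucket, prefix + ".xml.length"); // illustrative suffix
    System.out.println(xmlFilename + " / " + xmlLengthFilename);
  }
}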
@@ -219,7 +237,7 @@ public class RdeIO {
//
// This will be sent to ICANN once we're done uploading the big XML to the escrow provider.
if (mode == RdeMode.FULL) {
logger.atInfo().log("Writing %s", reportFilename);
logger.atInfo().log("Writing file '%s'.", reportFilename);
try (OutputStream gcsOutput = gcsUtils.openOutputStream(reportFilename);
OutputStream ghostrydeEncoder = Ghostryde.encoder(gcsOutput, stagingKey)) {
counter.makeReport(id, watermark, header, revision).marshal(ghostrydeEncoder, UTF_8);
@@ -229,7 +247,7 @@ public class RdeIO {
}
// Now that we're done, emit the output so the cursor can be rolled forward.
if (key.manual()) {
logger.atInfo().log("Manual operation; not advancing cursor or enqueuing upload task");
logger.atInfo().log("Manual operation; not advancing cursor or enqueuing upload task.");
} else {
outputReceiver.output(KV.of(key, revision));
}
@@ -237,10 +255,19 @@ public class RdeIO {
}
private static class CursorUpdater extends DoFn<KV<PendingDeposit, Integer>, Void> {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static final long serialVersionUID = 5822176227753327224L;
private final CloudTasksUtils cloudTasksUtils;
private CursorUpdater(CloudTasksUtils cloudTasksUtils) {
this.cloudTasksUtils = cloudTasksUtils;
}
@ProcessElement
public void processElement(@Element KV<PendingDeposit, Integer> input) {
public void processElement(
@Element KV<PendingDeposit, Integer> input, PipelineOptions options) {
tm().transact(
() -> {
PendingDeposit key = input.getKey();
@@ -265,8 +292,33 @@ public class RdeIO {
key);
tm().put(Cursor.create(key.cursor(), newPosition, registry));
logger.atInfo().log(
"Rolled forward %s on %s cursor to %s", key.cursor(), key.tld(), newPosition);
"Rolled forward %s on %s cursor to %s.", key.cursor(), key.tld(), newPosition);
RdeRevision.saveRevision(key.tld(), key.watermark(), key.mode(), revision);
if (key.mode() == RdeMode.FULL) {
cloudTasksUtils.enqueue(
RDE_UPLOAD_QUEUE,
CloudTasksUtils.createPostTask(
RdeUploadAction.PATH,
Service.BACKEND.getServiceId(),
ImmutableMultimap.of(
RequestParameters.PARAM_TLD,
key.tld(),
RdeModule.PARAM_PREFIX,
options.getJobName() + '/')));
} else {
cloudTasksUtils.enqueue(
BRDA_QUEUE,
CloudTasksUtils.createPostTask(
BrdaCopyAction.PATH,
Service.BACKEND.getServiceId(),
ImmutableMultimap.of(
RequestParameters.PARAM_TLD,
key.tld(),
RdeModule.PARAM_WATERMARK,
key.watermark().toString(),
RdeModule.PARAM_PREFIX,
options.getJobName() + '/')));
}
});
}
}


@@ -14,36 +14,53 @@
package google.registry.beam.rde;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static google.registry.model.EppResourceUtils.loadAtPointInTimeAsync;
import static google.registry.beam.rde.RdePipeline.TupleTags.DOMAIN_FRAGMENTS;
import static google.registry.beam.rde.RdePipeline.TupleTags.EXTERNAL_HOST_FRAGMENTS;
import static google.registry.beam.rde.RdePipeline.TupleTags.HOST_TO_PENDING_DEPOSIT;
import static google.registry.beam.rde.RdePipeline.TupleTags.PENDING_DEPOSIT;
import static google.registry.beam.rde.RdePipeline.TupleTags.REFERENCED_CONTACTS;
import static google.registry.beam.rde.RdePipeline.TupleTags.REFERENCED_HOSTS;
import static google.registry.beam.rde.RdePipeline.TupleTags.REVISION_ID;
import static google.registry.beam.rde.RdePipeline.TupleTags.SUPERORDINATE_DOMAINS;
import static google.registry.model.reporting.HistoryEntryDao.RESOURCE_TYPES_TO_HISTORY_TYPES;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static org.apache.beam.sdk.values.TypeDescriptors.kvs;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.ImmutableSetMultimap;
import com.google.common.collect.Maps;
import com.google.common.collect.Sets;
import com.google.common.collect.Streams;
import com.google.common.io.BaseEncoding;
import dagger.BindsInstance;
import dagger.Component;
import google.registry.beam.common.RegistryJpaIO;
import google.registry.beam.common.RegistryPipelineOptions;
import google.registry.config.CloudTasksUtilsModule;
import google.registry.config.CredentialModule;
import google.registry.config.RegistryConfig.ConfigModule;
import google.registry.gcs.GcsUtils;
import google.registry.model.EppResource;
import google.registry.model.contact.ContactHistory;
import google.registry.model.contact.ContactResource;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.DomainHistory;
import google.registry.model.host.HostHistory;
import google.registry.model.host.HostResource;
import google.registry.model.rde.RdeMode;
import google.registry.model.registrar.Registrar;
import google.registry.model.registrar.Registrar.Type;
import google.registry.model.reporting.HistoryEntry;
import google.registry.model.reporting.HistoryEntryDao;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.persistence.VKey;
import google.registry.rde.DepositFragment;
import google.registry.rde.PendingDeposit;
import google.registry.rde.PendingDeposit.PendingDepositCoder;
import google.registry.rde.RdeFragmenter;
import google.registry.rde.RdeMarshaller;
import google.registry.util.CloudTasksUtils;
import google.registry.util.UtilsModule;
import google.registry.xml.ValidationMode;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
@@ -51,73 +68,158 @@ import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Supplier;
import java.lang.reflect.InvocationTargetException;
import java.util.HashSet;
import javax.inject.Inject;
import javax.inject.Singleton;
import javax.persistence.Entity;
import javax.persistence.IdClass;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.coders.KvCoder;
import org.apache.beam.sdk.coders.SerializableCoder;
import org.apache.beam.sdk.coders.StringUtf8Coder;
import org.apache.beam.sdk.coders.VarLongCoder;
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.Filter;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.Flatten;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.Reshuffle;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.join.CoGbkResult;
import org.apache.beam.sdk.transforms.join.CoGroupByKey;
import org.apache.beam.sdk.transforms.join.KeyedPCollectionTuple;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionList;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TupleTag;
import org.apache.beam.sdk.values.TupleTagList;
import org.apache.beam.sdk.values.TypeDescriptor;
import org.apache.beam.sdk.values.TypeDescriptors;
import org.joda.time.DateTime;
/**
* Definition of a Dataflow Flex template, which generates RDE/BRDA deposits.
*
* <p>To stage this template locally, run the {@code stage_beam_pipeline.sh} shell script.
* <p>To stage this template locally, run {@code ./nom_build :core:sBP --environment=alpha
* --pipeline=rde}.
*
* <p>Then, you can run the staged template via the API client library, gCloud or a raw REST call.
*
* <p>This pipeline only works for pending deposits with the same watermark; the {@link
* google.registry.rde.RdeStagingAction} will batch such pending deposits together and launch
* multiple pipelines if multiple watermarks exist.
*
* <p>The pipeline is broadly divided into two parts -- creating the {@link DepositFragment}s, and
* processing them.
*
* <h1>Creating {@link DepositFragment}</h1>
*
* <h2>{@link Registrar}</h2>
*
* Non-test registrar entities are loaded from Cloud SQL and marshalled into deposit fragments. They
* are <b>NOT</b> rewound to the watermark.
*
* <h2>{@link EppResource}</h2>
*
* All EPP resources are loaded from the corresponding {@link HistoryEntry}, which has the resource
* embedded. In general we find the most recent history entry before the watermark and filter out
* the ones that are soft-deleted as of the watermark. The history is emitted as pairs of (resource
* repo ID: history revision ID) from the SQL query.
*
* <h3>{@link DomainBase}</h3>
*
* After the most recent (live) domain resources are loaded from the corresponding history objects,
* we marshall them to deposit fragments and emit the (pending deposit: deposit fragment) pairs for
* further processing. We also find all the contacts and hosts referenced by a given domain and emit
* pairs of (contact/host repo ID: pending deposit) for all RDE pending deposits for further
* processing.
*
* <h3>{@link ContactResource}</h3>
*
* We first join the most recent contact histories, represented by (contact repo ID: contact history
* revision ID) pairs, with referenced contacts, represented by (contact repo ID: pending deposit)
* pairs, on the contact repo ID, to remove unreferenced contact histories. Contact resources are
* then loaded from the remaining referenced contact histories, and marshalled into (pending
* deposit: deposit fragment) pairs.
*
* <h3>{@link HostResource}</h3>
*
* Similar to {@link ContactResource}, we join the most recent host histories with referenced hosts
* to find the most recent referenced hosts. External hosts get the same treatment as contacts,
* yielding the (pending deposit: deposit fragment) pairs. For subordinate hosts, we also need to
* find the superordinate domain in order to properly handle pending transfers in the deposit. So we
* first find the superordinate domain repo ID from the host and join the (superordinate domain repo
* ID: (subordinate host repo ID: (pending deposit: revision ID))) pair with the (domain repo ID:
* revision ID) pair obtained from the domain history query in order to map the host at watermark to
* the domain at watermark. We then proceed to create the (pending deposit: deposit fragment) pair
* for subordinate hosts using the added domain information.
*
* <h1>Processing {@link DepositFragment}</h1>
*
* The (pending deposit: deposit fragment) pairs from different resources are combined and grouped
* by pending deposit. For each pending deposit, all the relevant deposit fragments are written into
* an encrypted file stored on GCS. The filename is uniquely determined by the Beam job ID, so there
* is no need to lock the GCS write operation to prevent stomping. The cursor for staging the
* pending deposit is then rolled forward, and the next action is enqueued. The latter two
* operations are performed in one transaction so the cursor is rolled back if enqueueing fails.
*
* @see <a href="https://cloud.google.com/dataflow/docs/guides/templates/using-flex-templates">Using
* Flex Templates</a>
*/
@Singleton
public class RdePipeline implements Serializable {
private static final long serialVersionUID = -4866795928854754666L;
private final transient RdePipelineOptions options;
private final ValidationMode mode;
private final ImmutableSetMultimap<String, PendingDeposit> pendings;
private final ImmutableSet<PendingDeposit> pendingDeposits;
private final DateTime watermark;
private final String rdeBucket;
private final byte[] stagingKeyBytes;
private final GcsUtils gcsUtils;
private final CloudTasksUtils cloudTasksUtils;
private final RdeMarshaller marshaller;
// Registrars to be excluded from data escrow. Not including the sandbox-only OTE type so that
// if it sneaks into production we would get an extra signal.
private static final ImmutableSet<Type> IGNORED_REGISTRAR_TYPES =
Sets.immutableEnumSet(Registrar.Type.MONITORING, Registrar.Type.TEST);
private static final String EPP_RESOURCE_QUERY =
"SELECT id FROM %entity% "
+ "WHERE COALESCE(creationClientId, '') NOT LIKE 'prober-%' "
+ "AND COALESCE(currentSponsorClientId, '') NOT LIKE 'prober-%' "
+ "AND COALESCE(lastEppUpdateClientId, '') NOT LIKE 'prober-%'";
public static String createEppResourceQuery(Class<? extends EppResource> clazz) {
return EPP_RESOURCE_QUERY.replace("%entity%", clazz.getAnnotation(Entity.class).name())
+ (clazz.equals(DomainBase.class) ? " AND tld in (:tlds)" : "");
}
// The field name of the EPP resource embedded in its corresponding history entry.
private static final ImmutableMap<Class<? extends HistoryEntry>, String> EPP_RESOURCE_FIELD_NAME =
ImmutableMap.of(
DomainHistory.class,
"domainContent",
ContactHistory.class,
"contactBase",
HostHistory.class,
"hostBase");
@Inject
RdePipeline(RdePipelineOptions options, GcsUtils gcsUtils) {
RdePipeline(RdePipelineOptions options, GcsUtils gcsUtils, CloudTasksUtils cloudTasksUtils) {
this.options = options;
this.mode = ValidationMode.valueOf(options.getValidationMode());
this.pendings = decodePendings(options.getPendings());
this.rdeBucket = options.getGcsBucket();
this.pendingDeposits = decodePendingDeposits(options.getPendings());
ImmutableSet<DateTime> potentialWatermarks =
pendingDeposits.stream()
.map(PendingDeposit::watermark)
.distinct()
.collect(toImmutableSet());
checkArgument(
potentialWatermarks.size() == 1,
String.format(
"RDE pipeline should only work on pending deposits "
+ "with the same watermark, but %d were given: %s",
potentialWatermarks.size(), potentialWatermarks));
this.watermark = potentialWatermarks.asList().get(0);
this.rdeBucket = options.getRdeStagingBucket();
this.stagingKeyBytes = BaseEncoding.base64Url().decode(options.getStagingKey());
this.gcsUtils = gcsUtils;
this.cloudTasksUtils = cloudTasksUtils;
this.marshaller = new RdeMarshaller(mode);
}
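
The constructor above enforces a single-watermark invariant across all pending deposits. A standalone sketch of that check, with ISO timestamp strings standing in for PendingDeposit watermarks:

import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.collect.ImmutableSet.toImmutableSet;

import com.google.common.collect.ImmutableSet;
import java.util.stream.Stream;

public class WatermarkInvariantDemo {
  public static void main(String[] args) {
    // Two deposits that happen to share one watermark.
    ImmutableSet<String> potentialWatermarks =
        Stream.of("2021-11-30T00:00:00.000Z", "2021-11-30T00:00:00.000Z")
            .distinct()
            .collect(toImmutableSet());
    checkArgument(
        potentialWatermarks.size() == 1,
        "RDE pipeline should only work on pending deposits with the same watermark, got: %s",
        potentialWatermarks);
    System.out.println(potentialWatermarks.asList().get(0));
  }
}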
PipelineResult run() {
@@ -129,152 +231,452 @@ public class RdePipeline implements Serializable {
}
PCollection<KV<PendingDeposit, Iterable<DepositFragment>>> createFragments(Pipeline pipeline) {
return PCollectionList.of(processRegistrars(pipeline))
.and(processNonRegistrarEntities(pipeline, DomainBase.class))
.and(processNonRegistrarEntities(pipeline, ContactResource.class))
.and(processNonRegistrarEntities(pipeline, HostResource.class))
.apply(Flatten.pCollections())
PCollection<KV<PendingDeposit, DepositFragment>> registrarFragments =
processRegistrars(pipeline);
PCollection<KV<String, Long>> domainHistories =
getMostRecentHistoryEntries(pipeline, DomainHistory.class);
PCollection<KV<String, Long>> contactHistories =
getMostRecentHistoryEntries(pipeline, ContactHistory.class);
PCollection<KV<String, Long>> hostHistories =
getMostRecentHistoryEntries(pipeline, HostHistory.class);
PCollectionTuple processedDomainHistories = processDomainHistories(domainHistories);
PCollection<KV<PendingDeposit, DepositFragment>> domainFragments =
processedDomainHistories.get(DOMAIN_FRAGMENTS);
PCollection<KV<PendingDeposit, DepositFragment>> contactFragments =
processContactHistories(
processedDomainHistories.get(REFERENCED_CONTACTS), contactHistories);
PCollectionTuple processedHosts =
processHostHistories(processedDomainHistories.get(REFERENCED_HOSTS), hostHistories);
PCollection<KV<PendingDeposit, DepositFragment>> externalHostFragments =
processedHosts.get(EXTERNAL_HOST_FRAGMENTS);
PCollection<KV<PendingDeposit, DepositFragment>> subordinateHostFragments =
processSubordinateHosts(processedHosts.get(SUPERORDINATE_DOMAINS), domainHistories);
return PCollectionList.of(registrarFragments)
.and(domainFragments)
.and(contactFragments)
.and(externalHostFragments)
.and(subordinateHostFragments)
.apply(
"Combine PendingDeposit:DepositFragment pairs from all entities",
Flatten.pCollections())
.setCoder(KvCoder.of(PendingDepositCoder.of(), SerializableCoder.of(DepositFragment.class)))
.apply("Group by PendingDeposit", GroupByKey.create());
.apply("Group DepositFragment by PendingDeposit", GroupByKey.create());
}
void persistData(PCollection<KV<PendingDeposit, Iterable<DepositFragment>>> input) {
input.apply(
"Write to GCS and update cursors",
"Write to GCS, update cursors, and enqueue upload tasks",
RdeIO.Write.builder()
.setRdeBucket(rdeBucket)
.setGcsUtils(gcsUtils)
.setCloudTasksUtils(cloudTasksUtils)
.setValidationMode(mode)
.setStagingKeyBytes(stagingKeyBytes)
.build());
}
PCollection<KV<PendingDeposit, DepositFragment>> processRegistrars(Pipeline pipeline) {
private PCollection<KV<PendingDeposit, DepositFragment>> processRegistrars(Pipeline pipeline) {
// Note that the namespace in the metric is not used by Stackdriver; it just has to be
// non-empty.
// See:
// https://stackoverflow.com/questions/48530496/google-dataflow-custom-metrics-not-showing-on-stackdriver
Counter includedRegistrarCounter = Metrics.counter("RDE", "IncludedRegistrar");
Counter registrarFragmentCounter = Metrics.counter("RDE", "RegistrarFragment");
return pipeline
.apply(
"Read all production Registrar entities",
"Read all production Registrars",
RegistryJpaIO.read(
"SELECT clientIdentifier FROM Registrar WHERE type NOT IN (:types)",
ImmutableMap.of("types", IGNORED_REGISTRAR_TYPES),
String.class,
// TODO: consider adding coders for entities and passing them directly instead of using
// VKeys.
id -> VKey.createSql(Registrar.class, id)))
.apply(
"Marshal Registrar into DepositFragment",
"Marshall Registrar into DepositFragment",
FlatMapElements.into(
TypeDescriptors.kvs(
kvs(
TypeDescriptor.of(PendingDeposit.class),
TypeDescriptor.of(DepositFragment.class)))
.via(
(VKey<Registrar> key) -> {
includedRegistrarCounter.inc();
Registrar registrar = jpaTm().transact(() -> jpaTm().loadByKey(key));
DepositFragment fragment =
new RdeMarshaller(mode).marshalRegistrar(registrar);
return pendings.values().stream()
.map(pending -> KV.of(pending, fragment))
.collect(toImmutableSet());
DepositFragment fragment = marshaller.marshalRegistrar(registrar);
ImmutableSet<KV<PendingDeposit, DepositFragment>> fragments =
pendingDeposits.stream()
.map(pending -> KV.of(pending, fragment))
.collect(toImmutableSet());
registrarFragmentCounter.inc(fragments.size());
return fragments;
}));
}
@SuppressWarnings("deprecation") // Reshuffle is still recommended by Dataflow.
<T extends EppResource>
PCollection<KV<PendingDeposit, DepositFragment>> processNonRegistrarEntities(
Pipeline pipeline, Class<T> clazz) {
return createInputs(pipeline, clazz)
.apply("Marshal " + clazz.getSimpleName() + " into DepositFragment", mapToFragments(clazz))
.setCoder(KvCoder.of(PendingDepositCoder.of(), SerializableCoder.of(DepositFragment.class)))
/**
* Load the most recent history entry before the watermark for a given history entry type.
*
* <p>Note that deleted and non-production resources are not included.
*
* @return A KV pair of (repoId, revisionId), used to reconstruct the composite key for the
* history entry.
*/
private <T extends HistoryEntry> PCollection<KV<String, Long>> getMostRecentHistoryEntries(
Pipeline pipeline, Class<T> historyClass) {
String repoIdFieldName = HistoryEntryDao.REPO_ID_FIELD_NAMES.get(historyClass);
String resourceFieldName = EPP_RESOURCE_FIELD_NAME.get(historyClass);
return pipeline
.apply(
"Reshuffle KV<PendingDeposit, DepositFragment> of "
+ clazz.getSimpleName()
+ " to prevent fusion",
Reshuffle.of());
String.format("Load most recent %s", historyClass.getSimpleName()),
RegistryJpaIO.read(
("SELECT %repoIdField%, id FROM %entity% WHERE (%repoIdField%, modificationTime)"
+ " IN (SELECT %repoIdField%, MAX(modificationTime) FROM %entity% WHERE"
+ " modificationTime <= :watermark GROUP BY %repoIdField%) AND"
+ " %resourceField%.deletionTime > :watermark AND"
+ " COALESCE(%resourceField%.creationClientId, '') NOT LIKE 'prober-%' AND"
+ " COALESCE(%resourceField%.currentSponsorClientId, '') NOT LIKE 'prober-%'"
+ " AND COALESCE(%resourceField%.lastEppUpdateClientId, '') NOT LIKE"
+ " 'prober-%' "
+ (historyClass == DomainHistory.class
? "AND %resourceField%.tld IN "
+ "(SELECT id FROM Tld WHERE tldType = 'REAL')"
: ""))
.replace("%entity%", historyClass.getSimpleName())
.replace("%repoIdField%", repoIdFieldName)
.replace("%resourceField%", resourceFieldName),
ImmutableMap.of("watermark", watermark),
Object[].class,
row -> KV.of((String) row[0], (long) row[1])))
.setCoder(KvCoder.of(StringUtf8Coder.of(), VarLongCoder.of()));
}
<T extends EppResource> PCollection<VKey<T>> createInputs(Pipeline pipeline, Class<T> clazz) {
return pipeline.apply(
"Read all production " + clazz.getSimpleName() + " entities",
RegistryJpaIO.read(
createEppResourceQuery(clazz),
clazz.equals(DomainBase.class)
? ImmutableMap.of("tlds", pendings.keySet())
: ImmutableMap.of(),
String.class,
// TODO: consider adding coders for entities and pass them directly instead of using
// VKeys.
x -> VKey.create(clazz, x)));
private <T extends HistoryEntry> EppResource loadResourceByHistoryEntryId(
Class<T> historyEntryClazz, String repoId, long revisionId) {
try {
Class<?> idClazz = historyEntryClazz.getAnnotation(IdClass.class).value();
Serializable idObject =
(Serializable)
idClazz.getConstructor(String.class, long.class).newInstance(repoId, revisionId);
return jpaTm()
.transact(() -> jpaTm().loadByKey(VKey.createSql(historyEntryClazz, idObject)))
.getResourceAtPointInTime()
.map(resource -> resource.cloneProjectedAtTime(watermark))
.get();
} catch (NoSuchMethodException
| InvocationTargetException
| InstantiationException
| IllegalAccessException e) {
throw new RuntimeException(
String.format(
"Cannot load resource from %s with repoId %s and revisionId %s",
historyEntryClazz.getSimpleName(), repoId, revisionId),
e);
}
}
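
loadResourceByHistoryEntryId above works by reflectively invoking the (String, long) constructor of the history entry's @IdClass. A self-contained sketch of that reflective call, against a hypothetical id class:

import java.io.Serializable;

public class IdClassReflectionDemo {
  // Hypothetical stand-in for a history entry's @IdClass.
  public static class HistoryId implements Serializable {
    final String repoId;
    final long revisionId;

    public HistoryId(String repoId, long revisionId) {
      this.repoId = repoId;
      this.revisionId = revisionId;
    }

    @Override
    public String toString() {
      return repoId + "/" + revisionId;
    }
  }

  public static void main(String[] args) throws Exception {
    // Mirrors idClazz.getConstructor(String.class, long.class).newInstance(...).
    Serializable id =
        (Serializable)
            HistoryId.class.getConstructor(String.class, long.class).newInstance("1-TLD", 5L);
    System.out.println(id); // 1-TLD/5
  }
}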
<T extends EppResource>
FlatMapElements<VKey<T>, KV<PendingDeposit, DepositFragment>> mapToFragments(Class<T> clazz) {
return FlatMapElements.into(
TypeDescriptors.kvs(
TypeDescriptor.of(PendingDeposit.class), TypeDescriptor.of(DepositFragment.class)))
.via(
(VKey<T> key) -> {
T resource = jpaTm().transact(() -> jpaTm().loadByKey(key));
// The set of all TLDs to which this resource should be emitted.
ImmutableSet<String> tlds =
clazz.equals(DomainBase.class)
? ImmutableSet.of(((DomainBase) resource).getTld())
: pendings.keySet();
// Get the set of all point-in-time watermarks we need, to minimize rewinding.
ImmutableSet<DateTime> dates =
tlds.stream()
.map(pendings::get)
.flatMap(ImmutableSet::stream)
.map(PendingDeposit::watermark)
.collect(toImmutableSet());
// Launch asynchronous fetches of point-in-time representations of resource.
ImmutableMap<DateTime, Supplier<EppResource>> resourceAtTimes =
ImmutableMap.copyOf(
Maps.asMap(dates, input -> loadAtPointInTimeAsync(resource, input)));
// Convert resource to an XML fragment for each watermark/mode pair lazily and cache
// the result.
RdeFragmenter fragmenter =
new RdeFragmenter(resourceAtTimes, new RdeMarshaller(mode));
List<KV<PendingDeposit, DepositFragment>> results = new ArrayList<>();
for (String tld : tlds) {
for (PendingDeposit pending : pendings.get(tld)) {
// Hosts and contacts don't get included in BRDA deposits.
if (pending.mode() == RdeMode.THIN && !clazz.equals(DomainBase.class)) {
continue;
/**
* Removes unreferenced resources by joining the (repoId, pendingDeposit) pairs with the (repoId,
* revisionId) pairs on the repoId.
*
* <p>The (repoId, pendingDeposit) pairs denote resources (contact, host) that are referenced from
* a domain and are to be included in the corresponding pending deposit.
*
* <p>The (repoId, revisionId) pairs come from the most recent history entry query and can be
* used to load the embedded resources themselves.
*
* @return a pair of (repoId, ([pendingDeposit], [revisionId])) where neither the pendingDeposit
* nor the revisionId list is empty.
*/
private static PCollection<KV<String, CoGbkResult>> removeUnreferencedResource(
PCollection<KV<String, PendingDeposit>> referencedResources,
PCollection<KV<String, Long>> historyEntries,
Class<? extends EppResource> resourceClazz) {
String resourceName = resourceClazz.getSimpleName();
Class<? extends HistoryEntry> historyEntryClazz =
RESOURCE_TYPES_TO_HISTORY_TYPES.get(resourceClazz);
String historyEntryName = historyEntryClazz.getSimpleName();
Counter referencedResourceCounter = Metrics.counter("RDE", "Referenced" + resourceName);
return KeyedPCollectionTuple.of(PENDING_DEPOSIT, referencedResources)
.and(REVISION_ID, historyEntries)
.apply(
String.format(
"Join PendingDeposit with %s revision ID on %s", historyEntryName, resourceName),
CoGroupByKey.create())
.apply(
String.format("Remove unreferenced %s", resourceName),
Filter.by(
(KV<String, CoGbkResult> kv) -> {
boolean toInclude =
// If a resource does not have a corresponding pending deposit, it is not
// referenced and should not be included.
kv.getValue().getAll(PENDING_DEPOSIT).iterator().hasNext()
// If a resource does not have a revision ID (this should not happen, as
// every referenced resource must be valid at watermark time and therefore
// embedded in a history entry valid at watermark time; otherwise
// the domain could not reference it), there is no way for us to find the
// history entry and load the embedded resource. So we ignore the resource
// to keep the downstream process simple.
&& kv.getValue().getAll(REVISION_ID).iterator().hasNext();
if (toInclude) {
referencedResourceCounter.inc();
}
Optional<DepositFragment> fragment =
fragmenter.marshal(pending.watermark(), pending.mode());
fragment.ifPresent(
depositFragment -> results.add(KV.of(pending, depositFragment)));
}
}
return results;
});
return toInclude;
}));
}
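
removeUnreferencedResource is an instance of the classic Beam inner-join-by-filter pattern: CoGroupByKey the two keyed collections, then keep only the keys populated on both sides. A runnable miniature with hypothetical keys and values (strings and longs stand in for PendingDeposit and revision IDs):

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.coders.KvCoder;
import org.apache.beam.sdk.coders.StringUtf8Coder;
import org.apache.beam.sdk.coders.VarLongCoder;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.Filter;
import org.apache.beam.sdk.transforms.join.CoGbkResult;
import org.apache.beam.sdk.transforms.join.CoGroupByKey;
import org.apache.beam.sdk.transforms.join.KeyedPCollectionTuple;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TupleTag;

public class ReferencedJoinDemo {
  static final TupleTag<String> PENDING_DEPOSIT = new TupleTag<String>() {};
  static final TupleTag<Long> REVISION_ID = new TupleTag<Long>() {};

  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.create());
    PCollection<KV<String, String>> referenced =
        p.apply(
            "Referenced resources",
            Create.of(KV.of("repo-1", "deposit-a"))
                .withCoder(KvCoder.of(StringUtf8Coder.of(), StringUtf8Coder.of())));
    PCollection<KV<String, Long>> histories =
        p.apply(
            "Most recent histories",
            Create.of(KV.of("repo-1", 7L), KV.of("repo-2", 8L))
                .withCoder(KvCoder.of(StringUtf8Coder.of(), VarLongCoder.of())));
    // Join on repoId, then keep only repoIds present in both inputs; here only
    // "repo-1" survives, because "repo-2" has no referencing pending deposit.
    PCollection<KV<String, CoGbkResult>> joined =
        KeyedPCollectionTuple.of(PENDING_DEPOSIT, referenced)
            .and(REVISION_ID, histories)
            .apply(CoGroupByKey.create())
            .apply(
                Filter.by(
                    (KV<String, CoGbkResult> kv) ->
                        kv.getValue().getAll(PENDING_DEPOSIT).iterator().hasNext()
                            && kv.getValue().getAll(REVISION_ID).iterator().hasNext()));
    p.run().waitUntilFinish();
  }
}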
private PCollectionTuple processDomainHistories(PCollection<KV<String, Long>> domainHistories) {
Counter activeDomainCounter = Metrics.counter("RDE", "ActiveDomainBase");
Counter domainFragmentCounter = Metrics.counter("RDE", "DomainFragment");
Counter referencedContactCounter = Metrics.counter("RDE", "ReferencedContactResource");
Counter referencedHostCounter = Metrics.counter("RDE", "ReferencedHostResource");
return domainHistories.apply(
"Map DomainHistory to DepositFragment "
+ "and emit referenced ContactResource and HostResource",
ParDo.of(
new DoFn<KV<String, Long>, KV<PendingDeposit, DepositFragment>>() {
@ProcessElement
public void processElement(
@Element KV<String, Long> kv, MultiOutputReceiver receiver) {
activeDomainCounter.inc();
DomainBase domain =
(DomainBase)
loadResourceByHistoryEntryId(
DomainHistory.class, kv.getKey(), kv.getValue());
pendingDeposits.stream()
.filter(pendingDeposit -> pendingDeposit.tld().equals(domain.getTld()))
.forEach(
pendingDeposit -> {
// Domains are always deposited in both modes.
domainFragmentCounter.inc();
receiver
.get(DOMAIN_FRAGMENTS)
.output(
KV.of(
pendingDeposit,
marshaller.marshalDomain(domain, pendingDeposit.mode())));
// Contacts and hosts are only deposited in RDE, not BRDA.
if (pendingDeposit.mode() == RdeMode.FULL) {
HashSet<Serializable> contacts = new HashSet<>();
contacts.add(domain.getAdminContact().getSqlKey());
contacts.add(domain.getTechContact().getSqlKey());
contacts.add(domain.getRegistrant().getSqlKey());
// Billing contact is not mandatory.
if (domain.getBillingContact() != null) {
contacts.add(domain.getBillingContact().getSqlKey());
}
referencedContactCounter.inc(contacts.size());
contacts.forEach(
contactRepoId ->
receiver
.get(REFERENCED_CONTACTS)
.output(KV.of((String) contactRepoId, pendingDeposit)));
if (domain.getNsHosts() != null) {
referencedHostCounter.inc(domain.getNsHosts().size());
domain
.getNsHosts()
.forEach(
hostKey ->
receiver
.get(REFERENCED_HOSTS)
.output(
KV.of(
(String) hostKey.getSqlKey(),
pendingDeposit)));
}
}
});
}
})
.withOutputTags(
DOMAIN_FRAGMENTS, TupleTagList.of(REFERENCED_CONTACTS).and(REFERENCED_HOSTS)));
}
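
processDomainHistories fans each input element out to one main output and two side outputs via withOutputTags. A compact sketch of the same multi-output ParDo shape (tags and element types are hypothetical):

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TupleTag;
import org.apache.beam.sdk.values.TupleTagList;

public class MultiOutputDemo {
  static final TupleTag<String> MAIN = new TupleTag<String>() {};
  static final TupleTag<Integer> LENGTHS = new TupleTag<Integer>() {};

  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.create());
    PCollectionTuple outputs =
        p.apply(Create.of("domain.tld"))
            .apply(
                ParDo.of(
                        new DoFn<String, String>() {
                          @ProcessElement
                          public void processElement(
                              @Element String in, MultiOutputReceiver out) {
                            out.get(MAIN).output(in); // main output
                            out.get(LENGTHS).output(in.length()); // side output
                          }
                        })
                    .withOutputTags(MAIN, TupleTagList.of(LENGTHS)));
    outputs.get(MAIN); // PCollection<String>
    outputs.get(LENGTHS); // PCollection<Integer>
    p.run().waitUntilFinish();
  }
}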
private PCollection<KV<PendingDeposit, DepositFragment>> processContactHistories(
PCollection<KV<String, PendingDeposit>> referencedContacts,
PCollection<KV<String, Long>> contactHistories) {
Counter contactFragmentCounter = Metrics.counter("RDE", "ContactFragment");
return removeUnreferencedResource(referencedContacts, contactHistories, ContactResource.class)
.apply(
"Map ContactResource to DepositFragment",
FlatMapElements.into(
kvs(
TypeDescriptor.of(PendingDeposit.class),
TypeDescriptor.of(DepositFragment.class)))
.via(
(KV<String, CoGbkResult> kv) -> {
ContactResource contact =
(ContactResource)
loadResourceByHistoryEntryId(
ContactHistory.class,
kv.getKey(),
kv.getValue().getOnly(REVISION_ID));
DepositFragment fragment = marshaller.marshalContact(contact);
ImmutableSet<KV<PendingDeposit, DepositFragment>> fragments =
Streams.stream(kv.getValue().getAll(PENDING_DEPOSIT))
// The same contact could be used by multiple domains, therefore
// matched to the same pending deposit multiple times.
.distinct()
.map(pendingDeposit -> KV.of(pendingDeposit, fragment))
.collect(toImmutableSet());
contactFragmentCounter.inc(fragments.size());
return fragments;
}));
}
private PCollectionTuple processHostHistories(
PCollection<KV<String, PendingDeposit>> referencedHosts,
PCollection<KV<String, Long>> hostHistories) {
Counter subordinateHostCounter = Metrics.counter("RDE", "SubordinateHostResource");
Counter externalHostCounter = Metrics.counter("RDE", "ExternalHostResource");
Counter externalHostFragmentCounter = Metrics.counter("RDE", "ExternalHostFragment");
return removeUnreferencedResource(referencedHosts, hostHistories, HostResource.class)
.apply(
"Map external DomainResource to DepositFragment and process subordinate domains",
ParDo.of(
new DoFn<KV<String, CoGbkResult>, KV<PendingDeposit, DepositFragment>>() {
@ProcessElement
public void processElement(
@Element KV<String, CoGbkResult> kv, MultiOutputReceiver receiver) {
HostResource host =
(HostResource)
loadResourceByHistoryEntryId(
HostHistory.class,
kv.getKey(),
kv.getValue().getOnly(REVISION_ID));
// When a host is subordinate, we need to find its superordinate domain and
// include it in the deposit as well.
if (host.isSubordinate()) {
subordinateHostCounter.inc();
receiver
.get(SUPERORDINATE_DOMAINS)
.output(
// The outputs are pairs of
// (superordinateDomainRepoId,
// (subordinateHostRepoId, (pendingDeposit, revisionId))).
KV.of((String) host.getSuperordinateDomain().getSqlKey(), kv));
} else {
externalHostCounter.inc();
DepositFragment fragment = marshaller.marshalExternalHost(host);
Streams.stream(kv.getValue().getAll(PENDING_DEPOSIT))
// The same host could be used by multiple domains, therefore
// matched to the same pending deposit multiple times.
.distinct()
.forEach(
pendingDeposit -> {
externalHostFragmentCounter.inc();
receiver
.get(EXTERNAL_HOST_FRAGMENTS)
.output(KV.of(pendingDeposit, fragment));
});
}
}
})
.withOutputTags(EXTERNAL_HOST_FRAGMENTS, TupleTagList.of(SUPERORDINATE_DOMAINS)));
}
/**
* Process subordinate hosts by making a deposit fragment with pending transfer information
* obtained from its superordinate domain.
*
* @param superordinateDomains Pairs of (superordinateDomainRepoId, (subordinateHostRepoId,
* (pendingDeposit, revisionId))). This collection maps each subordinate host, and the pending
* deposit that includes it, to its superordinate domain.
* @param domainHistories Pairs of (domainRepoId, revisionId). This collection helps us find the
* historical superordinate domain from its history entry and is obtained from calling {@link
* #getMostRecentHistoryEntries} for domains.
*/
private PCollection<KV<PendingDeposit, DepositFragment>> processSubordinateHosts(
PCollection<KV<String, KV<String, CoGbkResult>>> superordinateDomains,
PCollection<KV<String, Long>> domainHistories) {
Counter subordinateHostFragmentCounter = Metrics.counter("RDE", "SubordinateHostFragment");
Counter referencedSubordinateHostCounter = Metrics.counter("RDE", "ReferencedSubordinateHost");
return KeyedPCollectionTuple.of(HOST_TO_PENDING_DEPOSIT, superordinateDomains)
.and(REVISION_ID, domainHistories)
.apply(
"Join HostResource:PendingDeposits with DomainHistory on DomainResource",
CoGroupByKey.create())
.apply(
" Remove unreferenced DomainResource",
Filter.by(
kv -> {
boolean toInclude =
kv.getValue().getAll(HOST_TO_PENDING_DEPOSIT).iterator().hasNext()
&& kv.getValue().getAll(REVISION_ID).iterator().hasNext();
if (toInclude) {
referencedSubordinateHostCounter.inc();
}
return toInclude;
}))
.apply(
"Map subordinate HostResource to DepositFragment",
FlatMapElements.into(
kvs(
TypeDescriptor.of(PendingDeposit.class),
TypeDescriptor.of(DepositFragment.class)))
.via(
(KV<String, CoGbkResult> kv) -> {
DomainBase superordinateDomain =
(DomainBase)
loadResourceByHistoryEntryId(
DomainHistory.class,
kv.getKey(),
kv.getValue().getOnly(REVISION_ID));
ImmutableSet.Builder<KV<PendingDeposit, DepositFragment>> results =
new ImmutableSet.Builder<>();
for (KV<String, CoGbkResult> hostToPendingDeposits :
kv.getValue().getAll(HOST_TO_PENDING_DEPOSIT)) {
HostResource host =
(HostResource)
loadResourceByHistoryEntryId(
HostHistory.class,
hostToPendingDeposits.getKey(),
hostToPendingDeposits.getValue().getOnly(REVISION_ID));
DepositFragment fragment =
marshaller.marshalSubordinateHost(host, superordinateDomain);
Streams.stream(hostToPendingDeposits.getValue().getAll(PENDING_DEPOSIT))
.distinct()
.forEach(
pendingDeposit -> {
subordinateHostFragmentCounter.inc();
results.add(KV.of(pendingDeposit, fragment));
});
}
return results.build();
}));
}
/**
* Decodes the pipeline option extracted from the URL parameter sent by the pipeline launcher to
* the original TLD to pending deposit map.
* the original pending deposit set.
*/
@SuppressWarnings("unchecked")
static ImmutableSetMultimap<String, PendingDeposit> decodePendings(String encodedPending) {
static ImmutableSet<PendingDeposit> decodePendingDeposits(String encodedPendingDeposits) {
try (ObjectInputStream ois =
new ObjectInputStream(
new ByteArrayInputStream(
BaseEncoding.base64Url().omitPadding().decode(encodedPending)))) {
return (ImmutableSetMultimap<String, PendingDeposit>) ois.readObject();
BaseEncoding.base64Url().omitPadding().decode(encodedPendingDeposits)))) {
return (ImmutableSet<PendingDeposit>) ois.readObject();
} catch (IOException | ClassNotFoundException e) {
throw new IllegalArgumentException("Unable to parse encoded pending deposit set.", e);
}
}
/**
* Encodes the TLD to pending deposit map in an URL safe string that is sent to the pipeline
* worker by the pipeline launcher as a pipeline option.
* Encodes the pending deposit set in a URL-safe string that is sent to the pipeline worker by
* the pipeline launcher as a pipeline option.
*/
static String encodePendings(ImmutableSetMultimap<String, PendingDeposit> pendings)
public static String encodePendingDeposits(ImmutableSet<PendingDeposit> pendingDeposits)
throws IOException {
try (ByteArrayOutputStream baos = new ByteArrayOutputStream()) {
ObjectOutputStream oos = new ObjectOutputStream(baos);
oos.writeObject(pendings);
oos.writeObject(pendingDeposits);
oos.flush();
return BaseEncoding.base64Url().omitPadding().encode(baos.toByteArray());
}
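
The encode/decode pair above is a plain Java-serialization round trip through URL-safe base64. A self-contained sketch of that round trip, using ImmutableSet<String> in place of PendingDeposit (whose constructor is outside this diff):

import com.google.common.collect.ImmutableSet;
import com.google.common.io.BaseEncoding;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class PendingDepositCodecDemo {
  static String encode(ImmutableSet<String> deposits) throws IOException {
    try (ByteArrayOutputStream baos = new ByteArrayOutputStream()) {
      ObjectOutputStream oos = new ObjectOutputStream(baos);
      oos.writeObject(deposits); // Guava immutable collections are Serializable
      oos.flush();
      return BaseEncoding.base64Url().omitPadding().encode(baos.toByteArray());
    }
  }

  @SuppressWarnings("unchecked")
  static ImmutableSet<String> decode(String encoded) throws IOException, ClassNotFoundException {
    try (ObjectInputStream ois =
        new ObjectInputStream(
            new ByteArrayInputStream(BaseEncoding.base64Url().omitPadding().decode(encoded)))) {
      return (ImmutableSet<String>) ois.readObject();
    }
  }

  public static void main(String[] args) throws Exception {
    String wire = encode(ImmutableSet.of("soy_2021-11-30", "xn--abc_2021-11-30"));
    System.out.println(decode(wire)); // prints the original set
  }
}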
@@ -282,13 +684,50 @@ public class RdePipeline implements Serializable {
public static void main(String[] args) throws IOException, ClassNotFoundException {
PipelineOptionsFactory.register(RdePipelineOptions.class);
RdePipelineOptions options = PipelineOptionsFactory.fromArgs(args).as(RdePipelineOptions.class);
RdePipelineOptions options =
PipelineOptionsFactory.fromArgs(args).withValidation().as(RdePipelineOptions.class);
RegistryPipelineOptions.validateRegistryPipelineOptions(options);
options.setIsolationOverride(TransactionIsolationLevel.TRANSACTION_READ_COMMITTED);
DaggerRdePipeline_RdePipelineComponent.builder().options(options).build().rdePipeline().run();
}
/**
* A utility class that contains {@link TupleTag}s when {@link PCollectionTuple}s and {@link
* CoGbkResult}s are used.
*/
protected abstract static class TupleTags {
protected static final TupleTag<KV<PendingDeposit, DepositFragment>> DOMAIN_FRAGMENTS =
new TupleTag<KV<PendingDeposit, DepositFragment>>() {};
protected static final TupleTag<KV<String, PendingDeposit>> REFERENCED_CONTACTS =
new TupleTag<KV<String, PendingDeposit>>() {};
protected static final TupleTag<KV<String, PendingDeposit>> REFERENCED_HOSTS =
new TupleTag<KV<String, PendingDeposit>>() {};
protected static final TupleTag<KV<String, KV<String, CoGbkResult>>> SUPERORDINATE_DOMAINS =
new TupleTag<KV<String, KV<String, CoGbkResult>>>() {};
protected static final TupleTag<KV<PendingDeposit, DepositFragment>> EXTERNAL_HOST_FRAGMENTS =
new TupleTag<KV<PendingDeposit, DepositFragment>>() {};
protected static final TupleTag<PendingDeposit> PENDING_DEPOSIT =
new TupleTag<PendingDeposit>() {};
protected static final TupleTag<KV<String, CoGbkResult>> HOST_TO_PENDING_DEPOSIT =
new TupleTag<KV<String, CoGbkResult>>() {};
protected static final TupleTag<Long> REVISION_ID = new TupleTag<Long>() {};
}
@Singleton
@Component(modules = {CredentialModule.class, ConfigModule.class})
@Component(
modules = {
CredentialModule.class,
ConfigModule.class,
CloudTasksUtilsModule.class,
UtilsModule.class
})
interface RdePipelineComponent {
RdePipeline rdePipeline();


@@ -31,9 +31,9 @@ public interface RdePipelineOptions extends RegistryPipelineOptions {
void setValidationMode(String value);
@Description("The GCS bucket where the encrypted RDE deposits will be uploaded to.")
String getGcsBucket();
String getRdeStagingBucket();
void setGcsBucket(String value);
void setRdeStagingBucket(String value);
@Description("The Base64-encoded PGP public key to encrypt the deposits.")
String getStagingKey();
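
The rename above changes the command-line flag Beam binds: PipelineOptionsFactory derives --rdeStagingBucket from the getter name. A minimal sketch of that binding (wrapper class and option value are hypothetical):

import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class RdeOptionsDemo {
  public interface DemoOptions extends PipelineOptions {
    @Description("The GCS bucket to which the encrypted RDE deposits will be uploaded.")
    String getRdeStagingBucket();

    void setRdeStagingBucket(String value);
  }

  public static void main(String[] args) {
    // The flag name comes from the getter: get<RdeStagingBucket> -> --rdeStagingBucket.
    DemoOptions options =
        PipelineOptionsFactory.fromArgs("--rdeStagingBucket=my-bucket").as(DemoOptions.class);
    System.out.println(options.getRdeStagingBucket()); // my-bucket
  }
}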


@@ -218,7 +218,7 @@ public class SafeBrowsingTransforms {
throws JSONException, IOException {
int statusCode = response.getStatusLine().getStatusCode();
if (statusCode != SC_OK) {
logger.atWarning().log("Got unexpected status code %s from response", statusCode);
logger.atWarning().log("Got unexpected status code %s from response.", statusCode);
} else {
// Unpack the response body
JSONObject responseBody =
@@ -227,7 +227,7 @@ public class SafeBrowsingTransforms {
new InputStreamReader(response.getEntity().getContent(), UTF_8)));
logger.atInfo().log("Got response: %s", responseBody.toString());
if (responseBody.length() == 0) {
logger.atInfo().log("Response was empty, no threats detected");
logger.atInfo().log("Response was empty, no threats detected.");
} else {
// Emit all DomainNameInfos with their API results.
JSONArray threatMatches = responseBody.getJSONArray("matches");


@@ -16,6 +16,7 @@ package google.registry.beam.spec11;
import static com.google.common.base.Preconditions.checkArgument;
import static google.registry.beam.BeamUtils.getQueryFromFile;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import com.google.auto.value.AutoValue;
import com.google.common.collect.ImmutableSet;
@@ -30,6 +31,7 @@ import google.registry.model.domain.DomainBase;
import google.registry.model.reporting.Spec11ThreatMatch;
import google.registry.model.reporting.Spec11ThreatMatch.ThreatType;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.persistence.VKey;
import google.registry.util.Retrier;
import google.registry.util.SqlTemplate;
import google.registry.util.UtilsModule;
@@ -41,6 +43,7 @@ import org.apache.beam.sdk.coders.SerializableCoder;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.ParDo;
@@ -58,7 +61,8 @@ import org.json.JSONObject;
/**
* Definition of a Dataflow Flex template, which generates a given month's spec11 report.
*
* <p>To stage this template locally, run the {@code stage_beam_pipeline.sh} shell script.
* <p>To stage this template locally, run {@code ./nom_build :core:sBP --environment=alpha
* --pipeline=spec11}.
*
* <p>Then, you can run the staged template via the API client library, gCloud or a raw REST call.
*
@@ -113,15 +117,43 @@ public class Spec11Pipeline implements Serializable {
}
static PCollection<DomainNameInfo> readFromCloudSql(Pipeline pipeline) {
Read<Object[], DomainNameInfo> read =
Read<Object[], KV<String, String>> read =
RegistryJpaIO.read(
"select d, r.emailAddress from Domain d join Registrar r on"
+ " d.currentSponsorClientId = r.clientIdentifier where r.type = 'REAL'"
+ " and d.deletionTime > now()",
"select d.repoId, r.emailAddress from Domain d join Registrar r on"
+ " d.currentSponsorClientId = r.clientIdentifier where r.type = 'REAL' and"
+ " d.deletionTime > now()",
false,
Spec11Pipeline::parseRow);
return pipeline.apply("Read active domains from Cloud SQL", read);
return pipeline
.apply("Read active domains from Cloud SQL", read)
.apply(
"Build DomainNameInfo",
ParDo.of(
new DoFn<KV<String, String>, DomainNameInfo>() {
@ProcessElement
public void processElement(
@Element KV<String, String> input, OutputReceiver<DomainNameInfo> output) {
DomainBase domainBase =
jpaTm()
.transact(
() ->
jpaTm()
.loadByKey(
VKey.createSql(DomainBase.class, input.getKey())));
String emailAddress = input.getValue();
if (emailAddress == null) {
emailAddress = "";
}
DomainNameInfo domainNameInfo =
DomainNameInfo.create(
domainBase.getDomainName(),
domainBase.getRepoId(),
domainBase.getCurrentSponsorRegistrarId(),
emailAddress);
output.output(domainNameInfo);
}
}));
}
static PCollection<DomainNameInfo> readFromBigQuery(
@@ -142,17 +174,8 @@ public class Spec11Pipeline implements Serializable {
.withTemplateCompatibility());
}
private static DomainNameInfo parseRow(Object[] row) {
DomainBase domainBase = (DomainBase) row[0];
String emailAddress = (String) row[1];
if (emailAddress == null) {
emailAddress = "";
}
return DomainNameInfo.create(
domainBase.getDomainName(),
domainBase.getRepoId(),
domainBase.getCurrentSponsorClientId(),
emailAddress);
private static KV<String, String> parseRow(Object[] row) {
return KV.of((String) row[0], (String) row[1]);
}
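
parseRow now repacks a two-column JPA projection row instead of materializing a whole entity. A trivial sketch with hypothetical values:

import org.apache.beam.sdk.values.KV;

public class ParseRowDemo {
  static KV<String, String> parseRow(Object[] row) {
    return KV.of((String) row[0], (String) row[1]);
  }

  public static void main(String[] args) {
    // A (repoId, emailAddress) row as returned by the projection query above.
    System.out.println(parseRow(new Object[] {"12345-TLD", "registrar@example.com"}));
  }
}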
static void saveToSql(
@@ -199,15 +222,15 @@ public class Spec11Pipeline implements Serializable {
MapElements.into(TypeDescriptors.strings())
.via(
(KV<String, Iterable<EmailAndThreatMatch>> kv) -> {
String clientId = kv.getKey();
String registrarId = kv.getKey();
checkArgument(
kv.getValue().iterator().hasNext(),
String.format(
"Registrar with ID %s had no corresponding threats", clientId));
"Registrar with ID %s had no corresponding threats", registrarId));
String email = kv.getValue().iterator().next().email();
JSONObject output = new JSONObject();
try {
output.put(REGISTRAR_CLIENT_ID_FIELD, clientId);
output.put(REGISTRAR_CLIENT_ID_FIELD, registrarId);
output.put(REGISTRAR_EMAIL_FIELD, email);
JSONArray threatMatchArray = new JSONArray();
for (EmailAndThreatMatch emailAndThreatMatch : kv.getValue()) {


@@ -632,7 +632,7 @@ public class BigqueryConnection implements AutoCloseable {
private static String summarizeCompletedJob(Job job) {
JobStatistics stats = job.getStatistics();
return String.format(
"Job took %,.3f seconds after a %,.3f second delay and processed %,d bytes (%s)",
"Job took %,.3f seconds after a %,.3f second delay and processed %,d bytes (%s).",
(stats.getEndTime() - stats.getStartTime()) / 1000.0,
(stats.getStartTime() - stats.getCreationTime()) / 1000.0,
stats.getTotalBytesProcessed(),
@@ -706,17 +706,17 @@ public class BigqueryConnection implements AutoCloseable {
}
/**
* Helper that creates a dataset with this name if it doesn't already exist, and returns true
* if creation took place.
* Helper that creates a dataset with this name if it doesn't already exist, and returns true if
* creation took place.
*/
public boolean createDatasetIfNeeded(String datasetName) throws IOException {
private boolean createDatasetIfNeeded(String datasetName) throws IOException {
if (!checkDatasetExists(datasetName)) {
bigquery.datasets()
.insert(getProjectId(), new Dataset().setDatasetReference(new DatasetReference()
.setProjectId(getProjectId())
.setDatasetId(datasetName)))
.execute();
logger.atInfo().log("Created dataset: %s:%s\n", getProjectId(), datasetName);
logger.atInfo().log("Created dataset: %s: %s.", getProjectId(), datasetName);
return true;
}
return false;
@@ -732,9 +732,8 @@ public class BigqueryConnection implements AutoCloseable {
.setDefaultDataset(getDataset())
.setDestinationTable(table))));
} catch (BigqueryJobFailureException e) {
if (e.getReason().equals("duplicate")) {
// Table already exists.
} else {
if (!e.getReason().equals("duplicate")) {
// Throw if it failed for any reason other than table already existing.
throw e;
}
}


@@ -116,7 +116,7 @@ public class CheckedBigquery {
.setTableReference(table))
.execute();
logger.atInfo().log(
"Created BigQuery table %s:%s.%s",
"Created BigQuery table %s:%s.%s.",
table.getProjectId(), table.getDatasetId(), table.getTableId());
} catch (IOException e) {
// Swallow errors about a table that exists, and throw any other ones.


@@ -22,10 +22,13 @@ import dagger.Provides;
import google.registry.config.CredentialModule.DefaultCredential;
import google.registry.config.RegistryConfig.Config;
import google.registry.util.CloudTasksUtils;
import google.registry.util.CloudTasksUtils.GcpCloudTasksClient;
import google.registry.util.CloudTasksUtils.SerializableCloudTasksClient;
import google.registry.util.GoogleCredentialsBundle;
import google.registry.util.Retrier;
import java.io.IOException;
import javax.inject.Provider;
import java.io.Serializable;
import java.util.function.Supplier;
import javax.inject.Singleton;
/**
@@ -42,24 +45,35 @@ public abstract class CloudTasksUtilsModule {
public static CloudTasksUtils provideCloudTasksUtils(
@Config("projectId") String projectId,
@Config("locationId") String locationId,
// Use a provider so that we can use try-with-resources with the client, which implements
// Autocloseable.
Provider<CloudTasksClient> clientProvider,
SerializableCloudTasksClient client,
Retrier retrier) {
return new CloudTasksUtils(retrier, projectId, locationId, clientProvider);
return new CloudTasksUtils(retrier, projectId, locationId, client);
}
// Provides a supplier instead of using a Dagger Provider because the latter is not serializable.
@Provides
public static Supplier<CloudTasksClient> provideCloudTasksClientSupplier(
@DefaultCredential GoogleCredentialsBundle credentials) {
return (Supplier<CloudTasksClient> & Serializable)
() -> {
CloudTasksClient client;
try {
client =
CloudTasksClient.create(
CloudTasksSettings.newBuilder()
.setCredentialsProvider(
FixedCredentialsProvider.create(credentials.getGoogleCredentials()))
.build());
} catch (IOException e) {
throw new RuntimeException(e);
}
return client;
};
}
@Provides
public static CloudTasksClient provideCloudTasksClient(
@DefaultCredential GoogleCredentialsBundle credentials) {
try {
return CloudTasksClient.create(
CloudTasksSettings.newBuilder()
.setCredentialsProvider(
FixedCredentialsProvider.create(credentials.getGoogleCredentials()))
.build());
} catch (IOException e) {
throw new RuntimeException(e);
}
public static SerializableCloudTasksClient provideSerializableCloudTasksClient(
final Supplier<CloudTasksClient> clientSupplier) {
return new GcpCloudTasksClient(clientSupplier);
}
}
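
The module above hands Beam a lambda cast to an intersection type, making it both a Supplier and Serializable so it can be shipped to workers. That cast in isolation:

import java.io.Serializable;
import java.util.function.Supplier;

public class SerializableSupplierDemo {
  public static void main(String[] args) {
    // The intersection cast attaches Serializable to the lambda's inferred type.
    Supplier<String> s = (Supplier<String> & Serializable) () -> "hello";
    System.out.println(s.get() + ", serializable: " + (s instanceof Serializable));
  }
}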


@@ -557,11 +557,23 @@ public final class RegistryConfig {
return config.registryPolicy.requireSslCertificates;
}
/**
* Returns the GCE machine type that a CPU-demanding pipeline should use.
*
* @see google.registry.beam.rde.RdePipeline
*/
@Provides
@Config("highPerformanceMachineType")
public static String provideHighPerformanceMachineType(RegistryConfigSettings config) {
return config.beam.highPerformanceMachineType;
}
/**
* Returns the default job region to run Apache Beam (Cloud Dataflow) jobs in.
*
* @see google.registry.beam.invoicing.InvoicingPipeline
* @see google.registry.beam.spec11.Spec11Pipeline
* @see google.registry.beam.invoicing.InvoicingPipeline
*/
@Provides
@Config("defaultJobRegion")
@@ -1138,8 +1150,8 @@ public final class RegistryConfig {
}
/**
* Returns the clientId of the registrar that admins are automatically logged in as if they
* aren't otherwise associated with one.
* Returns the ID of the registrar that admins are automatically logged in as if they aren't
* otherwise associated with one.
*/
@Provides
@Config("registryAdminClientId")
@@ -1306,6 +1318,18 @@ public final class RegistryConfig {
public static ImmutableSet<String> provideAllowedEcdsaCurves(RegistryConfigSettings config) {
return ImmutableSet.copyOf(config.sslCertificateValidation.allowedEcdsaCurves);
}
@Provides
@Config("minMonthsBeforeWipeOut")
public static int provideMinMonthsBeforeWipeOut(RegistryConfigSettings config) {
return config.contactHistory.minMonthsBeforeWipeOut;
}
@Provides
@Config("wipeOutQueryBatchSize")
public static int provideWipeOutQueryBatchSize(RegistryConfigSettings config) {
return config.contactHistory.wipeOutQueryBatchSize;
}
}
/** Returns the App Engine project ID, which is based off the environment name. */


@@ -41,6 +41,7 @@ public class RegistryConfigSettings {
public Keyring keyring;
public RegistryTool registryTool;
public SslCertificateValidation sslCertificateValidation;
public ContactHistory contactHistory;
/** Configuration options that apply to the entire App Engine project. */
public static class AppEngine {
@@ -132,6 +133,7 @@ public class RegistryConfigSettings {
/** Configuration for Apache Beam (Cloud Dataflow). */
public static class Beam {
public String defaultJobRegion;
public String highPerformanceMachineType;
public String stagingBucketUrl;
}
@@ -234,4 +236,10 @@ public class RegistryConfigSettings {
public String expirationWarningEmailBodyText;
public String expirationWarningEmailSubjectText;
}
/** Configuration for contact history. */
public static class ContactHistory {
public int minMonthsBeforeWipeOut;
public int wipeOutQueryBatchSize;
}
}


@@ -419,6 +419,14 @@ misc:
beam:
# The default region to run Apache Beam (Cloud Dataflow) jobs in.
defaultJobRegion: us-east1
# The GCE machine type to use when a job is CPU-intensive (e.g. RDE). Be sure
# to check the VM CPU quota for the job region. In a massively parallel
# pipeline this quota can easily be reached and needs to be raised; otherwise
# the job will run very slowly. Also note that there is a separate quota for
# external IPv4 addresses in a region, which means that machine types with
# higher core counts per machine may be preferable in order to conserve IP
# addresses.
# See: https://cloud.google.com/compute/quotas#cpu_quota
highPerformanceMachineType: n2-standard-4
stagingBucketUrl: gcs-bucket-with-staged-templates
keyring:
@@ -442,6 +450,13 @@ registryTool:
# OAuth client secret used by the tool.
clientSecret: YOUR_CLIENT_SECRET
# Configuration options for handling contact history.
contactHistory:
# The minimum number of months that a ContactHistory entity is stored in the
# database before it becomes eligible for wipe-out.
minMonthsBeforeWipeOut: 18
# The batch size used when querying the ContactHistory table in the database.
wipeOutQueryBatchSize: 500
# Configuration options for checking SSL certificates.
sslCertificateValidation:
# A map specifying the maximum amount of days the certificate can be valid.


@@ -14,18 +14,15 @@
package google.registry.cron;
import static com.google.appengine.api.taskqueue.QueueFactory.getQueue;
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.TaskOptions;
import com.google.common.collect.ImmutableMultimap;
import google.registry.model.ofy.CommitLogBucket;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.auth.Auth;
import google.registry.util.TaskQueueUtils;
import java.time.Duration;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import java.util.Optional;
import java.util.Random;
import javax.inject.Inject;
/** Action for fanning out cron tasks for each commit log bucket. */
@@ -38,25 +35,27 @@ public final class CommitLogFanoutAction implements Runnable {
public static final String BUCKET_PARAM = "bucket";
private static final Random random = new Random();
@Inject Clock clock;
@Inject CloudTasksUtils cloudTasksUtils;
@Inject TaskQueueUtils taskQueueUtils;
@Inject @Parameter("endpoint") String endpoint;
@Inject @Parameter("queue") String queue;
@Inject @Parameter("jitterSeconds") Optional<Integer> jitterSeconds;
@Inject CommitLogFanoutAction() {}
@Override
public void run() {
Queue taskQueue = getQueue(queue);
for (int bucketId : CommitLogBucket.getBucketIds()) {
long delay =
jitterSeconds.map(i -> random.nextInt((int) Duration.ofSeconds(i).toMillis())).orElse(0);
TaskOptions taskOptions =
TaskOptions.Builder.withUrl(endpoint)
.param(BUCKET_PARAM, Integer.toString(bucketId))
.countdownMillis(delay);
taskQueueUtils.enqueue(taskQueue, taskOptions);
cloudTasksUtils.enqueue(
queue,
CloudTasksUtils.createPostTask(
endpoint,
Service.BACKEND.toString(),
ImmutableMultimap.of(BUCKET_PARAM, Integer.toString(bucketId)),
clock,
jitterSeconds));
}
}
}
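
The jitter computation that used to live in this action now moves behind CloudTasksUtils.createPostTask, which takes the clock and jitterSeconds directly: pick a uniform random delay below jitterSeconds. A standalone sketch of that computation (names hypothetical; jitterSeconds must be positive):

import java.time.Duration;
import java.util.Optional;
import java.util.Random;

public class JitterDemo {
  private static final Random random = new Random();

  static long jitterMillis(Optional<Integer> jitterSeconds) {
    return jitterSeconds
        .map(i -> (long) random.nextInt((int) Duration.ofSeconds(i).toMillis()))
        .orElse(0L);
  }

  public static void main(String[] args) {
    System.out.println(jitterMillis(Optional.of(30))); // uniform in [0, 30000) ms
  }
}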


@@ -141,7 +141,7 @@ public final class TldFanoutAction implements Runnable {
StringBuilder outputPayload =
new StringBuilder(
String.format("OK: Launched the following %d tasks in queue %s\n", tlds.size(), queue));
logger.atInfo().log("Launching %d tasks in queue %s", tlds.size(), queue);
logger.atInfo().log("Launching %d tasks in queue %s.", tlds.size(), queue);
if (tlds.isEmpty()) {
logger.atWarning().log("No TLDs to fan-out!");
}
@@ -153,7 +153,7 @@ public final class TldFanoutAction implements Runnable {
"- Task: '%s', tld: '%s', endpoint: '%s'\n",
createdTask.getName(), tld, createdTask.getAppEngineHttpRequest().getRelativeUri()));
logger.atInfo().log(
"Task: '%s', tld: '%s', endpoint: '%s'",
"Task: '%s', tld: '%s', endpoint: '%s'.",
createdTask.getName(), tld, createdTask.getAppEngineHttpRequest().getRelativeUri());
}
response.setContentType(PLAIN_TEXT_UTF_8);


@@ -107,7 +107,7 @@ public class DnsQueue {
private TaskHandle addToQueue(
TargetType targetType, String targetName, String tld, Duration countdown) {
logger.atInfo().log(
"Adding task type=%s, target=%s, tld=%s to pull queue %s (%d tasks currently on queue)",
"Adding task type=%s, target=%s, tld=%s to pull queue %s (%d tasks currently on queue).",
targetType, targetName, tld, DNS_PULL_QUEUE_NAME, queue.fetchStatistics().getNumTasks());
return queue.add(
TaskOptions.Builder.withDefaults()
@@ -166,7 +166,7 @@ public class DnsQueue {
"There are %d tasks in the DNS queue '%s'.", numTasks, DNS_PULL_QUEUE_NAME);
return queue.leaseTasks(leaseDuration.getMillis(), MILLISECONDS, leaseTasksBatchSize);
} catch (TransientFailureException | DeadlineExceededException e) {
logger.atSevere().withCause(e).log("Failed leasing tasks too fast");
logger.atSevere().withCause(e).log("Failed leasing tasks too fast.");
return ImmutableList.of();
}
}
@@ -176,7 +176,7 @@ public class DnsQueue {
try {
queue.deleteTask(tasks);
} catch (TransientFailureException | DeadlineExceededException e) {
logger.atSevere().withCause(e).log("Failed deleting tasks too fast");
logger.atSevere().withCause(e).log("Failed deleting tasks too fast.");
}
}
}


@@ -104,7 +104,7 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
new Duration(enqueuedTime, now));
logger.atInfo().log(
"publishDnsWriter latency statistics: TLD: %s, dnsWriter: %s, actionStatus: %s, "
+ "numItems: %d, timeSinceCreation: %s, timeInQueue: %s",
+ "numItems: %d, timeSinceCreation: %s, timeInQueue: %s.",
tld,
dnsWriter,
status,
@@ -144,7 +144,7 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
/** Adds all the domains and hosts in the batch back to the queue to be processed later. */
private void requeueBatch() {
logger.atInfo().log("Requeueing batch for retry");
logger.atInfo().log("Requeueing batch for retry.");
for (String domain : nullToEmpty(domains)) {
dnsQueue.addDomainRefreshTask(domain);
}
@@ -158,14 +158,14 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
// LockIndex should always be within [1, numPublishLocks]
if (lockIndex > numPublishLocks || lockIndex <= 0) {
logger.atSevere().log(
"Lock index should be within [1,%d], got %d instead", numPublishLocks, lockIndex);
"Lock index should be within [1,%d], got %d instead.", numPublishLocks, lockIndex);
return false;
}
// Check if the Registry object's num locks has changed since this task was batched
int registryNumPublishLocks = Registry.get(tld).getNumDnsPublishLocks();
if (registryNumPublishLocks != numPublishLocks) {
logger.atWarning().log(
"Registry numDnsPublishLocks %d out of sync with parameter %d",
"Registry numDnsPublishLocks %d out of sync with parameter %d.",
registryNumPublishLocks, numPublishLocks);
return false;
}
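For illustration of the two checks above (values assumed): with numPublishLocks = 4, valid lockIndex values are 1..4.

    // A task batched while the Registry had 4 publish locks but processed after the
    // Registry was reconfigured to 2 locks fails the out-of-sync check and returns false,
    // so the batch is never published under a stale lock layout.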
@@ -179,7 +179,7 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
DnsWriter writer = dnsWriterProxy.getByClassNameForTld(dnsWriter, tld);
if (writer == null) {
logger.atWarning().log("Couldn't get writer %s for TLD %s", dnsWriter, tld);
logger.atWarning().log("Couldn't get writer %s for TLD %s.", dnsWriter, tld);
recordActionResult(ActionStatus.BAD_WRITER);
requeueBatch();
return;
@@ -190,11 +190,11 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
for (String domain : nullToEmpty(domains)) {
if (!DomainNameUtils.isUnder(
InternetDomainName.from(domain), InternetDomainName.from(tld))) {
logger.atSevere().log("%s: skipping domain %s not under tld", tld, domain);
logger.atSevere().log("%s: skipping domain %s not under TLD.", tld, domain);
domainsRejected += 1;
} else {
writer.publishDomain(domain);
logger.atInfo().log("%s: published domain %s", tld, domain);
logger.atInfo().log("%s: published domain %s.", tld, domain);
domainsPublished += 1;
}
}
@@ -206,11 +206,11 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
for (String host : nullToEmpty(hosts)) {
if (!DomainNameUtils.isUnder(
InternetDomainName.from(host), InternetDomainName.from(tld))) {
logger.atSevere().log("%s: skipping host %s not under tld", tld, host);
logger.atSevere().log("%s: skipping host %s not under TLD.", tld, host);
hostsRejected += 1;
} else {
writer.publishHost(host);
logger.atInfo().log("%s: published host %s", tld, host);
logger.atInfo().log("%s: published host %s.", tld, host);
hostsPublished += 1;
}
}
@@ -233,7 +233,7 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
tld, dnsWriter, commitStatus, duration, domainsPublished, hostsPublished);
logger.atInfo().log(
"writer.commit() statistics: TLD: %s, dnsWriter: %s, commitStatus: %s, duration: %s, "
+ "domainsPublished: %d, domainsRejected: %d, hostsPublished: %d, hostsRejected: %d",
+ "domainsPublished: %d, domainsRejected: %d, hostsPublished: %d, hostsRejected: %d.",
tld,
dnsWriter,
commitStatus,


@@ -180,11 +180,11 @@ public class CloudDnsWriter extends BaseDnsWriter {
desiredRecords.put(absoluteDomainName, domainRecords.build());
logger.atFine().log(
"Will write %d records for domain %s", domainRecords.build().size(), absoluteDomainName);
"Will write %d records for domain '%s'.", domainRecords.build().size(), absoluteDomainName);
}
private void publishSubordinateHost(String hostName) {
logger.atInfo().log("Publishing glue records for %s", hostName);
logger.atInfo().log("Publishing glue records for host '%s'.", hostName);
// Canonicalize name
String absoluteHostName = getAbsoluteHostName(hostName);
@@ -250,7 +250,7 @@ public class CloudDnsWriter extends BaseDnsWriter {
// Host not managed by our registry, no need to update DNS.
if (!tld.isPresent()) {
logger.atSevere().log("publishHost called for invalid host %s", hostName);
logger.atSevere().log("publishHost called for invalid host '%s'.", hostName);
return;
}
@@ -273,7 +273,7 @@ public class CloudDnsWriter extends BaseDnsWriter {
ImmutableMap<String, ImmutableSet<ResourceRecordSet>> desiredRecordsCopy =
ImmutableMap.copyOf(desiredRecords);
retrier.callWithRetry(() -> mutateZone(desiredRecordsCopy), ZoneStateException.class);
logger.atInfo().log("Wrote to Cloud DNS");
logger.atInfo().log("Wrote to Cloud DNS.");
}
/** Returns the glue records for in-bailiwick nameservers for the given domain+records. */
@@ -329,7 +329,7 @@ public class CloudDnsWriter extends BaseDnsWriter {
*/
private Map<String, List<ResourceRecordSet>> getResourceRecordsForDomains(
Set<String> domainNames) {
logger.atFine().log("Fetching records for %s", domainNames);
logger.atFine().log("Fetching records for domain '%s'.", domainNames);
// As per Concurrent.transform() - if numThreads or domainNames.size() < 2, it will not use
// threading.
return ImmutableMap.copyOf(
@@ -381,11 +381,11 @@ public class CloudDnsWriter extends BaseDnsWriter {
ImmutableSet<ResourceRecordSet> intersection =
Sets.intersection(additions, deletions).immutableCopy();
logger.atInfo().log(
"There are %d common items out of the %d items in 'additions' and %d items in 'deletions'",
"There are %d common items out of the %d items in 'additions' and %d items in 'deletions'.",
intersection.size(), additions.size(), deletions.size());
// Exit early if we have nothing to update - dnsConnection doesn't work on empty changes
if (additions.equals(deletions)) {
logger.atInfo().log("Returning early because additions is the same as deletions");
logger.atInfo().log("Returning early because additions are the same as deletions.");
return;
}
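A worked example of the set arithmetic above (record names hypothetical):

    // additions = {A, B}, deletions = {B, C} -> intersection = {B} (logged above).
    // If additions.equals(deletions), the whole change would be a no-op, hence the early
    // return before building a Change and calling the Cloud DNS API.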
Change change =


@@ -157,6 +157,12 @@
<url-pattern>/_dr/cron/readDnsQueue</url-pattern>
</servlet-mapping>
<!-- Replicates SQL transactions to Datastore during the Registry 3.0 migration. -->
<servlet-mapping>
<servlet-name>backend-servlet</servlet-name>
<url-pattern>/_dr/cron/replicateToDatastore</url-pattern>
</servlet-mapping>
<!-- Publishes DNS updates. -->
<servlet-mapping>
<servlet-name>backend-servlet</servlet-name>
@@ -391,6 +397,13 @@
<url-pattern>/_dr/task/relockDomain</url-pattern>
</servlet-mapping>
<!-- Background action to wipe out PII fields of ContactHistory entities that
have been in the database for a certain period of time. -->
<servlet-mapping>
<servlet-name>backend-servlet</servlet-name>
<url-pattern>/_dr/task/wipeOutContactHistoryPii</url-pattern>
</servlet-mapping>
<!-- Action to wipeout Cloud SQL data -->
<servlet-mapping>
<servlet-name>backend-servlet</servlet-name>


@@ -348,4 +348,14 @@
<schedule>every 3 minutes</schedule>
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/task/wipeOutContactHistoryPii]]></url>
<description>
This job runs weekly to wipe out PII fields of ContactHistory entities
that have been in the database for a certain period of time.
</description>
<schedule>every monday 15:00</schedule>
<target>backend</target>
</cron>
</cronentries>


@@ -246,4 +246,14 @@
<schedule>every 3 minutes</schedule>
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/task/wipeOutContactHistoryPii]]></url>
<description>
This job runs weekly to wipe out PII fields of ContactHistory entities
that have been in the database for a certain period of time.
</description>
<schedule>every monday 15:00</schedule>
<target>backend</target>
</cron>
</cronentries>


@@ -80,7 +80,7 @@ public class BackupDatastoreAction implements Runnable {
logger.atInfo().log(message);
response.setPayload(message);
} catch (Throwable e) {
throw new InternalServerErrorException("Exception occurred while backing up datastore.", e);
throw new InternalServerErrorException("Exception occurred while backing up Datastore", e);
}
}
}


@@ -118,10 +118,10 @@ public class BigqueryPollJobAction implements Runnable {
// Check if the job ended with an error.
if (job.getStatus().getErrorResult() != null) {
logger.atSevere().log("Bigquery job failed - %s - %s", jobRefString, job);
logger.atSevere().log("Bigquery job failed - %s - %s.", jobRefString, job);
return false;
}
logger.atInfo().log("Bigquery job succeeded - %s", jobRefString);
logger.atInfo().log("Bigquery job succeeded - %s.", jobRefString);
return true;
}


@@ -171,10 +171,10 @@ public class CheckBackupAction implements Runnable {
ImmutableSet.copyOf(intersection(backup.getKinds(), kindsToLoad));
String message = String.format("Datastore backup %s complete - ", backupName);
if (exportedKindsToLoad.isEmpty()) {
message += "no kinds to load into BigQuery";
message += "no kinds to load into BigQuery.";
} else {
enqueueUploadBackupTask(backupId, backup.getExportFolderUrl(), exportedKindsToLoad);
message += "BigQuery load task enqueued";
message += "BigQuery load task enqueued.";
}
logger.atInfo().log(message);
response.setPayload(message);


@@ -85,7 +85,7 @@ public class ExportDomainListsAction implements Runnable {
@Override
public void run() {
ImmutableSet<String> realTlds = getTldsOfType(TldType.REAL);
logger.atInfo().log("Exporting domain lists for tlds %s", realTlds);
logger.atInfo().log("Exporting domain lists for TLDs %s.", realTlds);
if (tm().isOfy()) {
mrRunner
.setJobName("Export domain lists")
@@ -145,7 +145,7 @@ public class ExportDomainListsAction implements Runnable {
Registry registry = Registry.get(tld);
if (registry.getDriveFolderId() == null) {
logger.atInfo().log(
"Skipping registered domains export for TLD %s because Drive folder isn't specified",
"Skipping registered domains export for TLD %s because Drive folder isn't specified.",
tld);
} else {
String resultMsg =


@@ -110,11 +110,11 @@ public class ExportPremiumTermsAction implements Runnable {
private Optional<String> checkConfig(Registry registry) {
if (isNullOrEmpty(registry.getDriveFolderId())) {
logger.atInfo().log(
"Skipping premium terms export for TLD %s because Drive folder isn't specified", tld);
"Skipping premium terms export for TLD %s because Drive folder isn't specified.", tld);
return Optional.of("Skipping export because no Drive folder is associated with this TLD");
}
if (!registry.getPremiumListName().isPresent()) {
logger.atInfo().log("No premium terms to export for TLD %s", tld);
logger.atInfo().log("No premium terms to export for TLD '%s'.", tld);
return Optional.of("No premium lists configured");
}
return Optional.empty();


@@ -65,11 +65,11 @@ public class ExportReservedTermsAction implements Runnable {
String resultMsg;
if (registry.getReservedListNames().isEmpty() && isNullOrEmpty(registry.getDriveFolderId())) {
resultMsg = "No reserved lists configured";
logger.atInfo().log("No reserved terms to export for TLD %s", tld);
logger.atInfo().log("No reserved terms to export for TLD '%s'.", tld);
} else if (registry.getDriveFolderId() == null) {
resultMsg = "Skipping export because no Drive folder is associated with this TLD";
logger.atInfo().log(
"Skipping reserved terms export for TLD %s because Drive folder isn't specified", tld);
"Skipping reserved terms export for TLD %s because Drive folder isn't specified.", tld);
} else {
resultMsg = driveConnection.createOrUpdateFile(
RESERVED_TERMS_FILENAME,


@@ -19,7 +19,7 @@ import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.request.Action.Method.POST;
import static google.registry.util.CollectionUtils.nullToEmpty;
import static google.registry.util.RegistrarUtils.normalizeClientId;
import static google.registry.util.RegistrarUtils.normalizeRegistrarId;
import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_OK;
@@ -96,16 +96,15 @@ public final class SyncGroupMembersAction implements Runnable {
}
/**
* Returns the Google Groups email address for the given registrar clientId and
* RegistrarContact.Type
* Returns the Google Groups email address for the given registrar ID and RegistrarContact.Type.
*/
public static String getGroupEmailAddressForContactType(
String clientId, RegistrarContact.Type type, String gSuiteDomainName) {
// Take the registrar's clientId, make it lowercase, and remove all characters that aren't
String registrarId, RegistrarContact.Type type, String gSuiteDomainName) {
// Take the registrar's ID, make it lowercase, and remove all characters that aren't
// alphanumeric, hyphens, or underscores.
return String.format(
"%s-%s-contacts@%s",
normalizeClientId(clientId), type.getDisplayName(), gSuiteDomainName);
normalizeRegistrarId(registrarId), type.getDisplayName(), gSuiteDomainName);
}
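A worked example with hypothetical inputs, assuming TECH's display name is "technical":

    // getGroupEmailAddressForContactType("ACME+Corp", RegistrarContact.Type.TECH, "example.com")
    // normalizes "ACME+Corp" -> "acmecorp" (lowercased; '+' is neither alphanumeric, hyphen,
    // nor underscore, so it is dropped) and returns "acmecorp-technical-contacts@example.com".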
/**
@@ -176,8 +175,8 @@ public final class SyncGroupMembersAction implements Runnable {
long totalAdded = 0;
long totalRemoved = 0;
for (final RegistrarContact.Type type : RegistrarContact.Type.values()) {
groupKey = getGroupEmailAddressForContactType(
registrar.getClientId(), type, gSuiteDomainName);
groupKey =
getGroupEmailAddressForContactType(registrar.getRegistrarId(), type, gSuiteDomainName);
Set<String> currentMembers = groupsConnection.getMembersOfGroup(groupKey);
Set<String> desiredMembers =
registrarContacts
@@ -195,12 +194,14 @@ public final class SyncGroupMembersAction implements Runnable {
}
}
logger.atInfo().log(
"Successfully synced contacts for registrar %s: added %d and removed %d",
registrar.getClientId(), totalAdded, totalRemoved);
"Successfully synced contacts for registrar %s: added %d and removed %d.",
registrar.getRegistrarId(), totalAdded, totalRemoved);
} catch (IOException e) {
// Package up exception and re-throw with attached additional relevant info.
String msg = String.format("Couldn't sync contacts for registrar %s to group %s",
registrar.getClientId(), groupKey);
String msg =
String.format(
"Couldn't sync contacts for registrar %s to group %s",
registrar.getRegistrarId(), groupKey);
throw new RuntimeException(msg, e);
}
}


@@ -150,8 +150,7 @@ public class UpdateSnapshotViewAction implements Runnable {
if (e.getDetails() != null && e.getDetails().getCode() == 404) {
bigquery.tables().insert(ref.getProjectId(), ref.getDatasetId(), table).execute();
} else {
logger.atWarning().withCause(e).log(
"UpdateSnapshotViewAction failed, caught exception %s", e.getDetails());
logger.atWarning().withCause(e).log("UpdateSnapshotViewAction errored out.");
}
}
}


@@ -109,7 +109,7 @@ public class UploadDatastoreBackupAction implements Runnable {
String message = uploadBackup(backupId, backupFolderUrl, Splitter.on(',').split(backupKinds));
logger.atInfo().log("Loaded backup successfully: %s", message);
} catch (Throwable e) {
logger.atSevere().withCause(e).log("Error loading backup");
logger.atSevere().withCause(e).log("Error loading backup.");
if (e instanceof IllegalArgumentException) {
throw new BadRequestException("Error calling load backup: " + e.getMessage(), e);
} else {
@@ -148,12 +148,12 @@ public class UploadDatastoreBackupAction implements Runnable {
getQueue(UpdateSnapshotViewAction.QUEUE));
builder.append(String.format(" - %s:%s\n", projectId, jobId));
logger.atInfo().log("Submitted load job %s:%s", projectId, jobId);
logger.atInfo().log("Submitted load job %s:%s.", projectId, jobId);
}
return builder.toString();
}
static String sanitizeForBigquery(String backupId) {
private static String sanitizeForBigquery(String backupId) {
return backupId.replaceAll("[^a-zA-Z0-9_]", "_");
}
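A quick illustration of the sanitizing regex: every character outside [a-zA-Z0-9_] becomes an underscore, since BigQuery identifiers allow only letters, digits, and underscores:

    // sanitizeForBigquery("2021-11-30T15:37:42") -> "2021_11_30T15_37_42"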


@@ -117,7 +117,7 @@ class SheetSynchronizer {
BatchUpdateValuesResponse response =
sheetsService.spreadsheets().values().batchUpdate(spreadsheetId, updateRequest).execute();
Integer cellsUpdated = response.getTotalUpdatedCells();
logger.atInfo().log("Updated %d originalVals", cellsUpdated != null ? cellsUpdated : 0);
logger.atInfo().log("Updated %d originalVals.", cellsUpdated != null ? cellsUpdated : 0);
}
// Append extra rows if necessary
@@ -140,7 +140,7 @@ class SheetSynchronizer {
.setInsertDataOption("INSERT_ROWS")
.execute();
logger.atInfo().log(
"Appended %d rows to range %s",
"Appended %d rows to range %s.",
data.size() - originalVals.size(), appendResponse.getTableRange());
// Clear the extra rows if necessary
} else if (data.size() < originalVals.size()) {
@@ -155,7 +155,7 @@ class SheetSynchronizer {
new ClearValuesRequest())
.execute();
logger.atInfo().log(
"Cleared %d rows from range %s",
"Cleared %d rows from range %s.",
originalVals.size() - data.size(), clearResponse.getClearedRange());
}
}


@@ -82,7 +82,7 @@ class SyncRegistrarsSheet {
new Ordering<Registrar>() {
@Override
public int compare(Registrar left, Registrar right) {
return left.getClientId().compareTo(right.getClientId());
return left.getRegistrarId().compareTo(right.getRegistrarId());
}
}.immutableSortedCopy(Registrar.loadAllCached()).stream()
.filter(
@@ -116,7 +116,7 @@ class SyncRegistrarsSheet {
// and you'll need to remove deleted columns probably like a week after
// deployment.
//
builder.put("clientIdentifier", convert(registrar.getClientId()));
builder.put("clientIdentifier", convert(registrar.getRegistrarId()));
builder.put("registrarName", convert(registrar.getRegistrarName()));
builder.put("state", convert(registrar.getState()));
builder.put("ianaIdentifier", convert(registrar.getIanaIdentifier()));


@@ -150,7 +150,7 @@ public class CheckApiAction implements Runnable {
return fail(e.getResult().getMsg());
} catch (Exception e) {
metricBuilder.status(UNKNOWN_ERROR);
logger.atWarning().withCause(e).log("Unknown error");
logger.atWarning().withCause(e).log("Unknown error.");
return fail("Invalid request");
}
}


@@ -61,7 +61,7 @@ public final class EppController {
boolean isDryRun,
boolean isSuperuser,
byte[] inputXmlBytes) {
eppMetricBuilder.setClientId(Optional.ofNullable(sessionMetadata.getClientId()));
eppMetricBuilder.setRegistrarId(Optional.ofNullable(sessionMetadata.getRegistrarId()));
try {
EppInput eppInput;
try {
@@ -76,7 +76,7 @@ public final class EppController {
JSONValue.toJSONString(
ImmutableMap.<String, Object>of(
"clientId",
nullToEmpty(sessionMetadata.getClientId()),
nullToEmpty(sessionMetadata.getRegistrarId()),
"resultCode",
e.getResult().getCode().code,
"resultMessage",
@@ -130,12 +130,12 @@ public final class EppController {
} catch (EppException | EppExceptionInProviderException e) {
// The command failed. Send the client an error message, but only log at INFO since many of
// these failures are innocuous or due to client error, so there's nothing we have to change.
logger.atInfo().withCause(e).log("Flow returned failure response");
logger.atInfo().withCause(e).log("Flow returned failure response.");
EppException eppEx = (EppException) (e instanceof EppException ? e : e.getCause());
return getErrorResponse(eppEx.getResult(), flowComponent.trid());
} catch (Throwable e) {
// Something bad and unexpected happened. Send the client a generic error, and log at SEVERE.
logger.atSevere().withCause(e).log("Unexpected failure in flow execution");
logger.atSevere().withCause(e).log("Unexpected failure in flow execution.");
return getErrorResponse(Result.create(Code.COMMAND_FAILED), flowComponent.trid());
}
}
@@ -143,9 +143,6 @@ public final class EppController {
/** Creates a response indicating an EPP failure. */
@VisibleForTesting
static EppOutput getErrorResponse(Result result, Trid trid) {
return EppOutput.create(new EppResponse.Builder()
.setResult(result)
.setTrid(trid)
.build());
return EppOutput.create(new EppResponse.Builder().setResult(result).setTrid(trid).build());
}
}


@@ -25,10 +25,12 @@ import google.registry.model.eppinput.EppInput.InnerCommand;
import google.registry.model.eppinput.EppInput.ResourceCommandWrapper;
import google.registry.model.eppoutput.Result;
import google.registry.model.eppoutput.Result.Code;
import google.registry.persistence.transaction.TransactionManagerFactory.ReadOnlyModeException;
import java.lang.annotation.Documented;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import javax.annotation.Nullable;
/** Exception used to propagate all failures containing one or more EPP responses. */
public abstract class EppException extends Exception {
@@ -37,7 +39,12 @@ public abstract class EppException extends Exception {
/** Create an EppException with a custom message. */
private EppException(String message) {
super(message);
this(message, null);
}
/** Create an EppException with a custom message and cause. */
private EppException(String message, @Nullable Throwable cause) {
super(message, cause);
Code code = getClass().getAnnotation(EppResultCode.class).value();
Preconditions.checkState(!code.isSuccess());
this.result = Result.create(code, message);
@@ -255,4 +262,12 @@ public abstract class EppException extends Exception {
super("Specified protocol version is not implemented");
}
}
/** Registry is currently undergoing maintenance and is in read-only mode. */
@EppResultCode(Code.COMMAND_FAILED)
public static class ReadOnlyModeEppException extends EppException {
ReadOnlyModeEppException(ReadOnlyModeException cause) {
super("Registry is currently undergoing maintenance and is in read-only mode", cause);
}
}
}
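As the FlowRunner change below shows, the unchecked ReadOnlyModeException thrown by the transaction manager is translated into this exception, so a client sees EPP result code 2400 (COMMAND_FAILED) with the maintenance message rather than a generic failure:

    // In FlowRunner (see the diff below):
    // } catch (ReadOnlyModeException e) {
    //   throw new ReadOnlyModeEppException(e);
    // }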


@@ -88,7 +88,7 @@ public class EppMetrics {
String eppStatusCode =
metric.getStatus().isPresent() ? String.valueOf(metric.getStatus().get().code) : "";
eppRequestsByRegistrar.increment(
metric.getCommandName().orElse(""), metric.getClientId().orElse(""), eppStatusCode);
metric.getCommandName().orElse(""), metric.getRegistrarId().orElse(""), eppStatusCode);
eppRequestsByTld.increment(
metric.getCommandName().orElse(""), metric.getTld().orElse(""), eppStatusCode);
}


@@ -84,7 +84,7 @@ public class EppRequestHandler {
response.setHeader(ProxyHttpHeaders.LOGGED_IN, "true");
}
} catch (Exception e) {
logger.atWarning().withCause(e).log("handleEppCommand general exception");
logger.atWarning().withCause(e).log("handleEppCommand general exception.");
response.setStatus(SC_BAD_REQUEST);
}
}


@@ -38,7 +38,10 @@ public class EppToolAction implements Runnable {
public static final String PATH = "/_dr/epptool";
@Inject @Parameter("clientId") String clientId;
@Inject
@Parameter("clientId")
String registrarId;
@Inject @Parameter("superuser") boolean isSuperuser;
@Inject @Parameter("dryRun") boolean isDryRun;
@Inject @Parameter("xml") String xml;
@@ -49,8 +52,7 @@ public class EppToolAction implements Runnable {
public void run() {
eppRequestHandler.executeEpp(
new StatelessRequestSessionMetadata(
clientId,
ProtocolDefinition.getVisibleServiceExtensionUris()),
registrarId, ProtocolDefinition.getVisibleServiceExtensionUris()),
new PasswordOnlyTransportCredentials(),
EppRequestSource.TOOL,
isDryRun,


@@ -26,7 +26,7 @@ import com.google.common.flogger.FluentLogger;
import google.registry.flows.EppException.CommandUseErrorException;
import google.registry.flows.EppException.SyntaxErrorException;
import google.registry.flows.EppException.UnimplementedExtensionException;
import google.registry.flows.FlowModule.ClientId;
import google.registry.flows.FlowModule.RegistrarId;
import google.registry.flows.FlowModule.Superuser;
import google.registry.flows.exceptions.OnlyToolCanPassMetadataException;
import google.registry.flows.exceptions.UnauthorizedForSuperuserExtensionException;
@@ -55,7 +55,7 @@ public final class ExtensionManager {
@Inject EppInput eppInput;
@Inject SessionMetadata sessionMetadata;
@Inject @ClientId String clientId;
@Inject @RegistrarId String registrarId;
@Inject @Superuser boolean isSuperuser;
@Inject Class<? extends Flow> flowClass;
@Inject EppRequestSource eppRequestSource;
@@ -100,8 +100,8 @@ public final class ExtensionManager {
throw new UndeclaredServiceExtensionException(undeclaredUrisThatError);
}
logger.atInfo().log(
"Client %s is attempting to run %s without declaring URIs %s on login",
clientId, flowClass.getSimpleName(), undeclaredUris);
"Client %s is attempting to run %s without declaring URIs %s on login.",
registrarId, flowClass.getSimpleName(), undeclaredUris);
}
private static final ImmutableSet<EppRequestSource> ALLOWED_METADATA_EPP_REQUEST_SOURCES =


@@ -164,11 +164,11 @@ public class FlowModule {
@Provides
@FlowScope
@ClientId
static String provideClientId(SessionMetadata sessionMetadata) {
// Treat a missing clientId as null so we can always inject a non-null value. All we do with the
// clientId is log it (as "") or detect its absence, both of which work fine with empty.
return Strings.nullToEmpty(sessionMetadata.getClientId());
@RegistrarId
static String provideRegistrarId(SessionMetadata sessionMetadata) {
// Treat a missing registrarId as null so we can always inject a non-null value. All we do with
// the registrarId is log it (as "") or detect its absence, both of which work fine with empty.
return Strings.nullToEmpty(sessionMetadata.getRegistrarId());
}
@Provides
@@ -220,14 +220,14 @@ public class FlowModule {
Trid trid,
byte[] inputXmlBytes,
boolean isSuperuser,
String clientId,
String registrarId,
EppInput eppInput) {
builder
.setModificationTime(tm().getTransactionTime())
.setTrid(trid)
.setXmlBytes(inputXmlBytes)
.setBySuperuser(isSuperuser)
.setClientId(clientId);
.setRegistrarId(registrarId);
Optional<MetadataExtension> metadataExtension =
eppInput.getSingleExtension(MetadataExtension.class);
metadataExtension.ifPresent(
@@ -249,10 +249,10 @@ public class FlowModule {
Trid trid,
@InputXml byte[] inputXmlBytes,
@Superuser boolean isSuperuser,
@ClientId String clientId,
@RegistrarId String registrarId,
EppInput eppInput) {
return makeHistoryEntryBuilder(
new ContactHistory.Builder(), trid, inputXmlBytes, isSuperuser, clientId, eppInput);
new ContactHistory.Builder(), trid, inputXmlBytes, isSuperuser, registrarId, eppInput);
}
/**
@@ -266,10 +266,10 @@ public class FlowModule {
Trid trid,
@InputXml byte[] inputXmlBytes,
@Superuser boolean isSuperuser,
@ClientId String clientId,
@RegistrarId String registrarId,
EppInput eppInput) {
return makeHistoryEntryBuilder(
new HostHistory.Builder(), trid, inputXmlBytes, isSuperuser, clientId, eppInput);
new HostHistory.Builder(), trid, inputXmlBytes, isSuperuser, registrarId, eppInput);
}
/**
@@ -283,10 +283,10 @@ public class FlowModule {
Trid trid,
@InputXml byte[] inputXmlBytes,
@Superuser boolean isSuperuser,
@ClientId String clientId,
@RegistrarId String registrarId,
EppInput eppInput) {
return makeHistoryEntryBuilder(
new DomainHistory.Builder(), trid, inputXmlBytes, isSuperuser, clientId, eppInput);
new DomainHistory.Builder(), trid, inputXmlBytes, isSuperuser, registrarId, eppInput);
}
/**
@@ -322,7 +322,7 @@ public class FlowModule {
/** Dagger qualifier for registrar client id. */
@Qualifier
@Documented
public @interface ClientId {}
public @interface RegistrarId {}
/** Dagger qualifier for the target id (foreign key) for single resource flows. */
@Qualifier


@@ -23,8 +23,8 @@ import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Streams;
import com.google.common.flogger.FluentLogger;
import google.registry.flows.FlowModule.ClientId;
import google.registry.flows.FlowModule.InputXml;
import google.registry.flows.FlowModule.RegistrarId;
import google.registry.flows.annotations.ReportingSpec;
import google.registry.model.eppcommon.Trid;
import google.registry.model.eppinput.EppInput;
@@ -45,7 +45,7 @@ public class FlowReporter {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
@Inject Trid trid;
@Inject @ClientId String clientId;
@Inject @RegistrarId String registrarId;
@Inject @InputXml byte[] inputXmlBytes;
@Inject EppInput eppInput;
@Inject Class<? extends Flow> flowClass;
@@ -64,7 +64,7 @@ public class FlowReporter {
JSONValue.toJSONString(
new ImmutableMap.Builder<String, Object>()
.put("serverTrid", trid.getServerTransactionId())
.put("clientId", clientId)
.put("clientId", registrarId)
.put("commandType", eppInput.getCommandType())
.put("resourceType", eppInput.getResourceType().orElse(""))
.put("flowClassName", flowClass.getSimpleName())


@@ -19,15 +19,17 @@ import static google.registry.xml.XmlTransformer.prettyPrint;
import com.google.common.base.Strings;
import com.google.common.flogger.FluentLogger;
import google.registry.flows.FlowModule.ClientId;
import google.registry.flows.EppException.ReadOnlyModeEppException;
import google.registry.flows.FlowModule.DryRun;
import google.registry.flows.FlowModule.InputXml;
import google.registry.flows.FlowModule.RegistrarId;
import google.registry.flows.FlowModule.Superuser;
import google.registry.flows.FlowModule.Transactional;
import google.registry.flows.session.LoginFlow;
import google.registry.model.eppcommon.Trid;
import google.registry.model.eppoutput.EppOutput;
import google.registry.monitoring.whitebox.EppMetric;
import google.registry.persistence.transaction.TransactionManagerFactory.ReadOnlyModeException;
import javax.inject.Inject;
import javax.inject.Provider;
@@ -38,7 +40,7 @@ public class FlowRunner {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
@Inject @ClientId String clientId;
@Inject @RegistrarId String registrarId;
@Inject TransportCredentials credentials;
@Inject EppRequestSource eppRequestSource;
@Inject Provider<Flow> flowProvider;
@@ -59,7 +61,7 @@ public class FlowRunner {
logger.atInfo().log(
COMMAND_LOG_FORMAT,
trid.getServerTransactionId(),
clientId,
registrarId,
sessionMetadata,
prettyXml.replace("\n", "\n\t"),
credentials,
@@ -74,8 +76,8 @@ public class FlowRunner {
if (!isTransactional) {
EppOutput eppOutput = EppOutput.create(flowProvider.get().run());
if (flowClass.equals(LoginFlow.class)) {
// In LoginFlow, clientId isn't known until after the flow executes, so save it then.
eppMetricBuilder.setClientId(sessionMetadata.getClientId());
// In LoginFlow, registrarId isn't known until after the flow executes, so save it then.
eppMetricBuilder.setRegistrarId(sessionMetadata.getRegistrarId());
}
return eppOutput;
}
@@ -97,6 +99,8 @@ public class FlowRunner {
return e.output;
} catch (EppRuntimeException e) {
throw e.getCause();
} catch (ReadOnlyModeException e) {
throw new ReadOnlyModeEppException(e);
}
}


@@ -47,8 +47,8 @@ public final class FlowUtils {
private FlowUtils() {}
/** Validate that there is a logged-in client. */
public static void validateClientIsLoggedIn(String clientId) throws EppException {
if (clientId.isEmpty()) {
public static void validateRegistrarIsLoggedIn(String registrarId) throws EppException {
if (registrarId.isEmpty()) {
throw new NotLoggedInException();
}
}


@@ -25,7 +25,7 @@ import javax.servlet.http.HttpSession;
/** A metadata class that is a wrapper around {@link HttpSession}. */
public class HttpSessionMetadata implements SessionMetadata {
private static final String CLIENT_ID = "CLIENT_ID";
private static final String REGISTRAR_ID = "REGISTRAR_ID";
private static final String SERVICE_EXTENSIONS = "SERVICE_EXTENSIONS";
private static final String FAILED_LOGIN_ATTEMPTS = "FAILED_LOGIN_ATTEMPTS";
@@ -41,8 +41,8 @@ public class HttpSessionMetadata implements SessionMetadata {
}
@Override
public String getClientId() {
return (String) session.getAttribute(CLIENT_ID);
public String getRegistrarId() {
return (String) session.getAttribute(REGISTRAR_ID);
}
@Override
@@ -57,8 +57,8 @@ public class HttpSessionMetadata implements SessionMetadata {
}
@Override
public void setClientId(String clientId) {
session.setAttribute(CLIENT_ID, clientId);
public void setRegistrarId(String registrarId) {
session.setAttribute(REGISTRAR_ID, registrarId);
}
@Override
@@ -79,7 +79,7 @@ public class HttpSessionMetadata implements SessionMetadata {
@Override
public String toString() {
return toStringHelper(getClass())
.add("clientId", getClientId())
.add("clientId", getRegistrarId())
.add("failedLoginAttempts", getFailedLoginAttempts())
.add("serviceExtensionUris", Joiner.on('.').join(nullToEmpty(getServiceExtensionUris())))
.toString();


@@ -69,10 +69,10 @@ public final class ResourceFlowUtils {
*/
private static final int FAILFAST_CHECK_COUNT = 5;
/** Check that the given clientId corresponds to the owner of given resource. */
public static void verifyResourceOwnership(String myClientId, EppResource resource)
/** Check that the given registrarId corresponds to the owner of the given resource. */
public static void verifyResourceOwnership(String myRegistrarId, EppResource resource)
throws EppException {
if (!myClientId.equals(resource.getPersistedCurrentSponsorClientId())) {
if (!myRegistrarId.equals(resource.getPersistedCurrentSponsorRegistrarId())) {
throw new ResourceNotOwnedException();
}
}
@@ -138,8 +138,8 @@ public final class ResourceFlowUtils {
}
public static <R extends EppResource & ResourceWithTransferData> void verifyTransferInitiator(
String clientId, R resource) throws NotTransferInitiatorException {
if (!resource.getTransferData().getGainingClientId().equals(clientId)) {
String registrarId, R resource) throws NotTransferInitiatorException {
if (!resource.getTransferData().getGainingRegistrarId().equals(registrarId)) {
throw new NotTransferInitiatorException();
}
}
@@ -155,12 +155,12 @@ public final class ResourceFlowUtils {
}
public static <R extends EppResource> void verifyResourceDoesNotExist(
Class<R> clazz, String targetId, DateTime now, String clientId) throws EppException {
Class<R> clazz, String targetId, DateTime now, String registrarId) throws EppException {
VKey<R> key = loadAndGetKey(clazz, targetId, now);
if (key != null) {
R resource = tm().loadByKey(key);
// These are similar exceptions, but we can track them internally as log-based metrics.
if (Objects.equals(clientId, resource.getPersistedCurrentSponsorClientId())) {
if (Objects.equals(registrarId, resource.getPersistedCurrentSponsorRegistrarId())) {
throw new ResourceAlreadyExistsForThisClientException(targetId);
} else {
throw new ResourceCreateContentionException(targetId);
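A brief illustration of the distinction drawn above (contact ID hypothetical):

    // Create for contact "sh8013" when the requesting registrar already sponsors it
    //   -> ResourceAlreadyExistsForThisClientException
    // Create for "sh8013" when a different registrar sponsors it
    //   -> ResourceCreateContentionException
    // Both fail the command, but they are tracked as separate log-based metrics.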


@@ -26,13 +26,13 @@ public interface SessionMetadata {
*/
void invalidate();
String getClientId();
String getRegistrarId();
Set<String> getServiceExtensionUris();
int getFailedLoginAttempts();
void setClientId(String clientId);
void setRegistrarId(String registrarId);
void setServiceExtensionUris(Set<String> serviceExtensionUris);


@@ -25,12 +25,12 @@ import java.util.Set;
/** A read-only {@link SessionMetadata} that doesn't support login/logout. */
public class StatelessRequestSessionMetadata implements SessionMetadata {
private final String clientId;
private final String registrarId;
private final ImmutableSet<String> serviceExtensionUris;
public StatelessRequestSessionMetadata(
String clientId, ImmutableSet<String> serviceExtensionUris) {
this.clientId = checkNotNull(clientId);
String registrarId, ImmutableSet<String> serviceExtensionUris) {
this.registrarId = checkNotNull(registrarId);
this.serviceExtensionUris = checkNotNull(serviceExtensionUris);
}
@@ -40,8 +40,8 @@ public class StatelessRequestSessionMetadata implements SessionMetadata {
}
@Override
public String getClientId() {
return clientId;
public String getRegistrarId() {
return registrarId;
}
@Override
@@ -55,7 +55,7 @@ public class StatelessRequestSessionMetadata implements SessionMetadata {
}
@Override
public void setClientId(String clientId) {
public void setRegistrarId(String registrarId) {
throw new UnsupportedOperationException();
}
@@ -77,7 +77,7 @@ public class StatelessRequestSessionMetadata implements SessionMetadata {
@Override
public String toString() {
return toStringHelper(getClass())
.add("clientId", getClientId())
.add("clientId", getRegistrarId())
.add("failedLoginAttempts", getFailedLoginAttempts())
.add("serviceExtensionUris", Joiner.on('.').join(nullToEmpty(getServiceExtensionUris())))
.toString();


@@ -97,7 +97,7 @@ public class TlsCredentials implements TransportCredentials {
if (ipAddressAllowList.isEmpty()) {
logger.atInfo().log(
"Skipping IP allow list check because %s doesn't have an IP allow list.",
registrar.getClientId());
registrar.getRegistrarId());
return;
}
// In the rare unexpected case that the client inet address wasn't passed along at all, then
@@ -113,7 +113,7 @@ public class TlsCredentials implements TransportCredentials {
logger.atInfo().log(
"Authentication error: IP address %s is not allow-listed for registrar %s; allow list is:"
+ " %s",
clientInetAddr, registrar.getClientId(), ipAddressAllowList);
clientInetAddr, registrar.getRegistrarId(), ipAddressAllowList);
throw new BadRegistrarIpAddressException();
}
@@ -132,7 +132,8 @@ public class TlsCredentials implements TransportCredentials {
// Check that the request included the certificate hash
if (!clientCertificateHash.isPresent()) {
logger.atInfo().log(
"Request from registrar %s did not include X-SSL-Certificate.", registrar.getClientId());
"Request from registrar %s did not include X-SSL-Certificate.",
registrar.getRegistrarId());
throw new MissingRegistrarCertificateException();
}
// Check if the certificate hash is equal to the one on file for the registrar.
@@ -141,7 +142,7 @@ public class TlsCredentials implements TransportCredentials {
logger.atWarning().log(
"Non-matching certificate hash (%s) for %s, wanted either %s or %s.",
clientCertificateHash,
registrar.getClientId(),
registrar.getRegistrarId(),
registrar.getClientCertificateHash(),
registrar.getFailoverClientCertificateHash());
throw new BadRegistrarCertificateException();
@@ -156,7 +157,7 @@ public class TlsCredentials implements TransportCredentials {
} catch (InsecureCertificateException e) {
logger.atWarning().log(
"Registrar certificate used for %s does not meet certificate requirements: %s",
registrar.getClientId(), e.getMessage());
registrar.getRegistrarId(), e.getMessage());
throw new CertificateContainsSecurityViolationsException(e);
}
}


@@ -33,7 +33,6 @@ import java.security.interfaces.ECPublicKey;
import java.security.interfaces.RSAPublicKey;
import java.util.Date;
import java.util.stream.Collectors;
import javax.annotation.Nullable;
import javax.inject.Inject;
import org.bouncycastle.jcajce.provider.asymmetric.util.EC5Util;
import org.bouncycastle.jce.ECNamedCurveTable;
@@ -228,7 +227,7 @@ public class CertificateChecker {
/** Returns whether the client should receive a notification email. */
public boolean shouldReceiveExpiringNotification(
@Nullable DateTime lastExpiringNotificationSentDate, String certificateStr) {
DateTime lastExpiringNotificationSentDate, String certificateStr) {
X509Certificate certificate = getCertificate(certificateStr);
DateTime now = clock.nowUtc();
// expiration date is one day after lastValidDate
@@ -238,13 +237,13 @@ public class CertificateChecker {
}
/*
* Client should receive a notification if:
* 1) client has never received notification and the certificate has entered
* the expiring period, OR
* 1) client has never received notification (lastExpiringNotificationSentDate is initially
* set to START_OF_TIME) and the certificate has entered the expiring period, OR
* 2) client has received notification but the interval between now and
* lastExpiringNotificationSentDate is greater than expirationWarningIntervalDays.
*/
return !lastValidDate.after(now.plusDays(expirationWarningDays).toDate())
&& (lastExpiringNotificationSentDate == null
&& (lastExpiringNotificationSentDate.equals(START_OF_TIME)
|| !lastExpiringNotificationSentDate
.plusDays(expirationWarningIntervalDays)
.toDate()
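A worked example of the condition above, with assumed configuration values expirationWarningDays = 30 and expirationWarningIntervalDays = 15, for a certificate whose lastValidDate is 20 days from now (inside the warning window):

    // lastExpiringNotificationSentDate == START_OF_TIME     -> notify (first warning)
    // lastExpiringNotificationSentDate == now.minusDays(20) -> notify again (20 > 15)
    // lastExpiringNotificationSentDate == now.minusDays(3)  -> suppress (interval not elapsed)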


@@ -14,7 +14,7 @@
package google.registry.flows.contact;
import static google.registry.flows.FlowUtils.validateClientIsLoggedIn;
import static google.registry.flows.FlowUtils.validateRegistrarIsLoggedIn;
import static google.registry.flows.ResourceFlowUtils.verifyTargetIdCount;
import static google.registry.model.EppResourceUtils.checkResourcesExist;
@@ -24,7 +24,7 @@ import google.registry.config.RegistryConfig.Config;
import google.registry.flows.EppException;
import google.registry.flows.ExtensionManager;
import google.registry.flows.Flow;
import google.registry.flows.FlowModule.ClientId;
import google.registry.flows.FlowModule.RegistrarId;
import google.registry.flows.annotations.ReportingSpec;
import google.registry.model.contact.ContactCommand.Check;
import google.registry.model.contact.ContactResource;
@@ -42,12 +42,13 @@ import javax.inject.Inject;
* <p>This flow can check the existence of multiple contacts simultaneously.
*
* @error {@link google.registry.flows.exceptions.TooManyResourceChecksException}
* @error {@link google.registry.flows.FlowUtils.NotLoggedInException}
*/
@ReportingSpec(ActivityReportField.CONTACT_CHECK)
public final class ContactCheckFlow implements Flow {
@Inject ResourceCommand resourceCommand;
@Inject @ClientId String clientId;
@Inject @RegistrarId String registrarId;
@Inject ExtensionManager extensionManager;
@Inject Clock clock;
@Inject @Config("maxChecks") int maxChecks;
@@ -57,7 +58,7 @@ public final class ContactCheckFlow implements Flow {
@Override
public final EppResponse run() throws EppException {
extensionManager.validate(); // There are no legal extensions for this flow.
validateClientIsLoggedIn(clientId);
validateRegistrarIsLoggedIn(registrarId);
ImmutableList<String> targetIds = ((Check) resourceCommand).getTargetIds();
verifyTargetIdCount(targetIds, maxChecks);
ImmutableSet<String> existingIds =


@@ -14,7 +14,7 @@
package google.registry.flows.contact;
import static google.registry.flows.FlowUtils.validateClientIsLoggedIn;
import static google.registry.flows.FlowUtils.validateRegistrarIsLoggedIn;
import static google.registry.flows.ResourceFlowUtils.verifyResourceDoesNotExist;
import static google.registry.flows.contact.ContactFlowUtils.validateAsciiPostalInfo;
import static google.registry.flows.contact.ContactFlowUtils.validateContactAgainstPolicy;
@@ -27,7 +27,7 @@ import com.googlecode.objectify.Key;
import google.registry.config.RegistryConfig.Config;
import google.registry.flows.EppException;
import google.registry.flows.ExtensionManager;
import google.registry.flows.FlowModule.ClientId;
import google.registry.flows.FlowModule.RegistrarId;
import google.registry.flows.FlowModule.TargetId;
import google.registry.flows.TransactionalFlow;
import google.registry.flows.annotations.ReportingSpec;
@@ -50,6 +50,8 @@ import org.joda.time.DateTime;
/**
* An EPP flow that creates a new contact.
*
* @error {@link google.registry.flows.EppException.ReadOnlyModeEppException}
* @error {@link google.registry.flows.FlowUtils.NotLoggedInException}
* @error {@link ResourceAlreadyExistsForThisClientException}
* @error {@link ResourceCreateContentionException}
* @error {@link ContactFlowUtils.BadInternationalizedPostalInfoException}
@@ -60,7 +62,7 @@ public final class ContactCreateFlow implements TransactionalFlow {
@Inject ResourceCommand resourceCommand;
@Inject ExtensionManager extensionManager;
@Inject @ClientId String clientId;
@Inject @RegistrarId String registrarId;
@Inject @TargetId String targetId;
@Inject ContactHistory.Builder historyBuilder;
@Inject EppResponse.Builder responseBuilder;
@@ -71,16 +73,16 @@ public final class ContactCreateFlow implements TransactionalFlow {
public final EppResponse run() throws EppException {
extensionManager.register(MetadataExtension.class);
extensionManager.validate();
validateClientIsLoggedIn(clientId);
validateRegistrarIsLoggedIn(registrarId);
Create command = (Create) resourceCommand;
DateTime now = tm().getTransactionTime();
verifyResourceDoesNotExist(ContactResource.class, targetId, now, clientId);
verifyResourceDoesNotExist(ContactResource.class, targetId, now, registrarId);
ContactResource newContact =
new ContactResource.Builder()
.setContactId(targetId)
.setAuthInfo(command.getAuthInfo())
.setCreationClientId(clientId)
.setPersistedCurrentSponsorClientId(clientId)
.setCreationRegistrarId(registrarId)
.setPersistedCurrentSponsorRegistrarId(registrarId)
.setRepoId(createRepoId(allocateId(), roidSuffix))
.setFaxNumber(command.getFax())
.setVoiceNumber(command.getVoice())


@@ -14,7 +14,7 @@
package google.registry.flows.contact;
import static google.registry.flows.FlowUtils.validateClientIsLoggedIn;
import static google.registry.flows.FlowUtils.validateRegistrarIsLoggedIn;
import static google.registry.flows.ResourceFlowUtils.checkLinkedDomains;
import static google.registry.flows.ResourceFlowUtils.loadAndVerifyExistence;
import static google.registry.flows.ResourceFlowUtils.verifyNoDisallowedStatuses;
@@ -31,7 +31,7 @@ import com.google.common.collect.ImmutableSet;
import google.registry.batch.AsyncTaskEnqueuer;
import google.registry.flows.EppException;
import google.registry.flows.ExtensionManager;
import google.registry.flows.FlowModule.ClientId;
import google.registry.flows.FlowModule.RegistrarId;
import google.registry.flows.FlowModule.Superuser;
import google.registry.flows.FlowModule.TargetId;
import google.registry.flows.TransactionalFlow;
@@ -60,6 +60,8 @@ import org.joda.time.DateTime;
* references to the host before the deletion is allowed to proceed. A poll message will be written
* with the success or failure message when the process is complete.
*
* @error {@link google.registry.flows.EppException.ReadOnlyModeEppException}
* @error {@link google.registry.flows.FlowUtils.NotLoggedInException}
* @error {@link google.registry.flows.ResourceFlowUtils.ResourceDoesNotExistException}
* @error {@link google.registry.flows.ResourceFlowUtils.ResourceNotOwnedException}
* @error {@link google.registry.flows.exceptions.ResourceStatusProhibitsOperationException}
@@ -75,7 +77,7 @@ public final class ContactDeleteFlow implements TransactionalFlow {
StatusValue.SERVER_DELETE_PROHIBITED);
@Inject ExtensionManager extensionManager;
@Inject @ClientId String clientId;
@Inject @RegistrarId String registrarId;
@Inject @TargetId String targetId;
@Inject Trid trid;
@Inject @Superuser boolean isSuperuser;
@@ -91,21 +93,21 @@ public final class ContactDeleteFlow implements TransactionalFlow {
public final EppResponse run() throws EppException {
extensionManager.register(MetadataExtension.class);
extensionManager.validate();
validateClientIsLoggedIn(clientId);
validateRegistrarIsLoggedIn(registrarId);
DateTime now = tm().getTransactionTime();
checkLinkedDomains(targetId, now, ContactResource.class, DomainBase::getReferencedContacts);
ContactResource existingContact = loadAndVerifyExistence(ContactResource.class, targetId, now);
verifyNoDisallowedStatuses(existingContact, DISALLOWED_STATUSES);
verifyOptionalAuthInfo(authInfo, existingContact);
if (!isSuperuser) {
verifyResourceOwnership(clientId, existingContact);
verifyResourceOwnership(registrarId, existingContact);
}
Type historyEntryType;
Code resultCode;
ContactResource newContact;
if (tm().isOfy()) {
asyncTaskEnqueuer.enqueueAsyncDelete(
existingContact, tm().getTransactionTime(), clientId, trid, isSuperuser);
existingContact, tm().getTransactionTime(), registrarId, trid, isSuperuser);
newContact = existingContact.asBuilder().addStatusValue(StatusValue.PENDING_DELETE).build();
historyEntryType = Type.CONTACT_PENDING_DELETE;
resultCode = SUCCESS_WITH_ACTION_PENDING;
@@ -113,7 +115,7 @@ public final class ContactDeleteFlow implements TransactionalFlow {
// Handle pending transfers on contact deletion.
newContact =
existingContact.getStatusValues().contains(StatusValue.PENDING_TRANSFER)
? denyPendingTransfer(existingContact, SERVER_CANCELLED, now, clientId)
? denyPendingTransfer(existingContact, SERVER_CANCELLED, now, registrarId)
: existingContact;
// Wipe out PII on contact deletion.
newContact =


@@ -73,7 +73,7 @@ public class ContactFlowUtils {
DateTime now,
Key<ContactHistory> contactHistoryKey) {
return new PollMessage.OneTime.Builder()
.setClientId(transferData.getGainingClientId())
.setRegistrarId(transferData.getGainingRegistrarId())
.setEventTime(transferData.getPendingTransferExpirationTime())
.setMsg(transferData.getTransferStatus().getMessage())
.setResponseData(
@@ -92,7 +92,7 @@ public class ContactFlowUtils {
static PollMessage createLosingTransferPollMessage(
String targetId, TransferData transferData, Key<ContactHistory> contactHistoryKey) {
return new PollMessage.OneTime.Builder()
.setClientId(transferData.getLosingClientId())
.setRegistrarId(transferData.getLosingRegistrarId())
.setEventTime(transferData.getPendingTransferExpirationTime())
.setMsg(transferData.getTransferStatus().getMessage())
.setResponseData(ImmutableList.of(createTransferResponse(targetId, transferData)))
@@ -105,8 +105,8 @@ public class ContactFlowUtils {
String targetId, TransferData transferData) {
return new ContactTransferResponse.Builder()
.setContactId(targetId)
.setGainingClientId(transferData.getGainingClientId())
.setLosingClientId(transferData.getLosingClientId())
.setGainingRegistrarId(transferData.getGainingRegistrarId())
.setLosingRegistrarId(transferData.getLosingRegistrarId())
.setPendingTransferExpirationTime(transferData.getPendingTransferExpirationTime())
.setTransferRequestTime(transferData.getTransferRequestTime())
.setTransferStatus(transferData.getTransferStatus())


@@ -14,7 +14,7 @@
package google.registry.flows.contact;
import static google.registry.flows.FlowUtils.validateClientIsLoggedIn;
import static google.registry.flows.FlowUtils.validateRegistrarIsLoggedIn;
import static google.registry.flows.ResourceFlowUtils.loadAndVerifyExistence;
import static google.registry.flows.ResourceFlowUtils.verifyResourceOwnership;
import static google.registry.model.EppResourceUtils.isLinked;
@@ -23,7 +23,7 @@ import com.google.common.collect.ImmutableSet;
import google.registry.flows.EppException;
import google.registry.flows.ExtensionManager;
import google.registry.flows.Flow;
import google.registry.flows.FlowModule.ClientId;
import google.registry.flows.FlowModule.RegistrarId;
import google.registry.flows.FlowModule.Superuser;
import google.registry.flows.FlowModule.TargetId;
import google.registry.flows.annotations.ReportingSpec;
@@ -46,6 +46,7 @@ import org.joda.time.DateTime;
* ever been transferred. Any registrar can see any contact's information, but the authInfo is only
* visible to the registrar that owns the contact or to a registrar that already supplied it.
*
* @error {@link google.registry.flows.FlowUtils.NotLoggedInException}
* @error {@link google.registry.flows.ResourceFlowUtils.ResourceDoesNotExistException}
* @error {@link google.registry.flows.ResourceFlowUtils.ResourceNotOwnedException}
*/
@@ -54,7 +55,7 @@ public final class ContactInfoFlow implements Flow {
@Inject ExtensionManager extensionManager;
@Inject Clock clock;
@Inject @ClientId String clientId;
@Inject @RegistrarId String registrarId;
@Inject @TargetId String targetId;
@Inject Optional<AuthInfo> authInfo;
@Inject @Superuser boolean isSuperuser;
@@ -67,13 +68,13 @@ public final class ContactInfoFlow implements Flow {
public final EppResponse run() throws EppException {
DateTime now = clock.nowUtc();
extensionManager.validate(); // There are no legal extensions for this flow.
validateClientIsLoggedIn(clientId);
validateRegistrarIsLoggedIn(registrarId);
ContactResource contact = loadAndVerifyExistence(ContactResource.class, targetId, now);
if (!isSuperuser) {
verifyResourceOwnership(clientId, contact);
verifyResourceOwnership(registrarId, contact);
}
boolean includeAuthInfo =
clientId.equals(contact.getCurrentSponsorClientId()) || authInfo.isPresent();
registrarId.equals(contact.getCurrentSponsorRegistrarId()) || authInfo.isPresent();
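Illustration of the visibility rule just computed (registrar IDs hypothetical):

    // For a contact sponsored by "TheRegistrar": an info response to "TheRegistrar" always
    // includes authInfo; a response to any other registrar includes it only if that request
    // itself supplied the contact's authInfo.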
ImmutableSet.Builder<StatusValue> statusValues = new ImmutableSet.Builder<>();
statusValues.addAll(contact.getStatusValues());
if (isLinked(contact.createVKey(), now)) {
@@ -89,10 +90,10 @@ public final class ContactInfoFlow implements Flow {
.setVoiceNumber(contact.getVoiceNumber())
.setFaxNumber(contact.getFaxNumber())
.setEmailAddress(contact.getEmailAddress())
.setCurrentSponsorClientId(contact.getCurrentSponsorClientId())
.setCreationClientId(contact.getCreationClientId())
.setCurrentSponsorClientId(contact.getCurrentSponsorRegistrarId())
.setCreationClientId(contact.getCreationRegistrarId())
.setCreationTime(contact.getCreationTime())
.setLastEppUpdateClientId(contact.getLastEppUpdateClientId())
.setLastEppUpdateClientId(contact.getLastEppUpdateRegistrarId())
.setLastEppUpdateTime(contact.getLastEppUpdateTime())
.setLastTransferTime(contact.getLastTransferTime())
.setAuthInfo(includeAuthInfo ? contact.getAuthInfo() : null)


@@ -14,7 +14,7 @@
package google.registry.flows.contact;
import static google.registry.flows.FlowUtils.validateClientIsLoggedIn;
import static google.registry.flows.FlowUtils.validateRegistrarIsLoggedIn;
import static google.registry.flows.ResourceFlowUtils.loadAndVerifyExistence;
import static google.registry.flows.ResourceFlowUtils.verifyHasPendingTransfer;
import static google.registry.flows.ResourceFlowUtils.verifyOptionalAuthInfo;
@@ -29,7 +29,7 @@ import com.google.common.collect.ImmutableSet;
import com.googlecode.objectify.Key;
import google.registry.flows.EppException;
import google.registry.flows.ExtensionManager;
import google.registry.flows.FlowModule.ClientId;
import google.registry.flows.FlowModule.RegistrarId;
import google.registry.flows.FlowModule.TargetId;
import google.registry.flows.TransactionalFlow;
import google.registry.flows.annotations.ReportingSpec;
@@ -54,6 +54,8 @@ import org.joda.time.DateTime;
* transfer is automatically approved. Within that window, this flow allows the losing client to
* explicitly approve the transfer request, which then becomes effective immediately.
*
* @error {@link google.registry.flows.EppException.ReadOnlyModeEppException}
* @error {@link google.registry.flows.FlowUtils.NotLoggedInException}
* @error {@link google.registry.flows.ResourceFlowUtils.BadAuthInfoForResourceException}
* @error {@link google.registry.flows.ResourceFlowUtils.ResourceNotOwnedException}
* @error {@link google.registry.flows.ResourceFlowUtils.ResourceDoesNotExistException}
@@ -64,7 +66,7 @@ public final class ContactTransferApproveFlow implements TransactionalFlow {
@Inject ResourceCommand resourceCommand;
@Inject ExtensionManager extensionManager;
@Inject @ClientId String clientId;
@Inject @RegistrarId String registrarId;
@Inject @TargetId String targetId;
@Inject Optional<AuthInfo> authInfo;
@Inject ContactHistory.Builder historyBuilder;
@@ -79,12 +81,12 @@ public final class ContactTransferApproveFlow implements TransactionalFlow {
public final EppResponse run() throws EppException {
extensionManager.register(MetadataExtension.class);
extensionManager.validate();
validateClientIsLoggedIn(clientId);
validateRegistrarIsLoggedIn(registrarId);
DateTime now = tm().getTransactionTime();
ContactResource existingContact = loadAndVerifyExistence(ContactResource.class, targetId, now);
verifyOptionalAuthInfo(authInfo, existingContact);
verifyHasPendingTransfer(existingContact);
-verifyResourceOwnership(clientId, existingContact);
+verifyResourceOwnership(registrarId, existingContact);
ContactResource newContact =
approvePendingTransfer(existingContact, TransferStatus.CLIENT_APPROVED, now);
ContactHistory contactHistory =
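The Javadoc in this hunk describes the transfer window: the losing registrar has a response period (five days by default) before the transfer is implicitly approved, and within that window it can explicitly approve. A rough sketch of the window arithmetic, using java.time in place of the Joda-Time types the flows actually use, with hypothetical names:

import java.time.Duration;
import java.time.Instant;

// Hypothetical names; java.time stands in for Joda-Time here.
public class TransferWindowSketch {

  // Default losing-registrar response period per the flow's Javadoc.
  static final Duration TRANSFER_PERIOD = Duration.ofDays(5);

  // True once the pending transfer would be implicitly approved.
  static boolean isImplicitlyApproved(Instant requestTime, Instant now) {
    return !now.isBefore(requestTime.plus(TRANSFER_PERIOD));
  }

  public static void main(String[] args) {
    Instant requested = Instant.parse("2021-11-01T00:00:00Z");
    System.out.println(isImplicitlyApproved(requested, Instant.parse("2021-11-03T00:00:00Z"))); // false
    System.out.println(isImplicitlyApproved(requested, Instant.parse("2021-11-06T00:00:00Z"))); // true
  }
}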

View File

@@ -14,7 +14,7 @@
package google.registry.flows.contact;
-import static google.registry.flows.FlowUtils.validateClientIsLoggedIn;
+import static google.registry.flows.FlowUtils.validateRegistrarIsLoggedIn;
import static google.registry.flows.ResourceFlowUtils.loadAndVerifyExistence;
import static google.registry.flows.ResourceFlowUtils.verifyHasPendingTransfer;
import static google.registry.flows.ResourceFlowUtils.verifyOptionalAuthInfo;
@@ -29,7 +29,7 @@ import com.google.common.collect.ImmutableSet;
import com.googlecode.objectify.Key;
import google.registry.flows.EppException;
import google.registry.flows.ExtensionManager;
-import google.registry.flows.FlowModule.ClientId;
+import google.registry.flows.FlowModule.RegistrarId;
import google.registry.flows.FlowModule.TargetId;
import google.registry.flows.TransactionalFlow;
import google.registry.flows.annotations.ReportingSpec;
@@ -54,6 +54,8 @@ import org.joda.time.DateTime;
* transfer is automatically approved. Within that window, this flow allows the gaining client to
* withdraw the transfer request.
*
+* @error {@link google.registry.flows.EppException.ReadOnlyModeEppException}
+* @error {@link google.registry.flows.FlowUtils.NotLoggedInException}
* @error {@link google.registry.flows.ResourceFlowUtils.BadAuthInfoForResourceException}
* @error {@link google.registry.flows.ResourceFlowUtils.ResourceDoesNotExistException}
* @error {@link google.registry.flows.exceptions.NotPendingTransferException}
@@ -65,7 +67,7 @@ public final class ContactTransferCancelFlow implements TransactionalFlow {
@Inject ResourceCommand resourceCommand;
@Inject ExtensionManager extensionManager;
@Inject Optional<AuthInfo> authInfo;
-@Inject @ClientId String clientId;
+@Inject @RegistrarId String registrarId;
@Inject @TargetId String targetId;
@Inject ContactHistory.Builder historyBuilder;
@Inject EppResponse.Builder responseBuilder;
@@ -75,14 +77,14 @@ public final class ContactTransferCancelFlow implements TransactionalFlow {
public final EppResponse run() throws EppException {
extensionManager.register(MetadataExtension.class);
extensionManager.validate();
-validateClientIsLoggedIn(clientId);
+validateRegistrarIsLoggedIn(registrarId);
DateTime now = tm().getTransactionTime();
ContactResource existingContact = loadAndVerifyExistence(ContactResource.class, targetId, now);
verifyOptionalAuthInfo(authInfo, existingContact);
verifyHasPendingTransfer(existingContact);
-verifyTransferInitiator(clientId, existingContact);
+verifyTransferInitiator(registrarId, existingContact);
ContactResource newContact =
-denyPendingTransfer(existingContact, TransferStatus.CLIENT_CANCELLED, now, clientId);
+denyPendingTransfer(existingContact, TransferStatus.CLIENT_CANCELLED, now, registrarId);
ContactHistory contactHistory =
historyBuilder.setType(CONTACT_TRANSFER_CANCEL).setContact(newContact).build();
// Create a poll message for the losing client.
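Note the authorization difference visible in this hunk: cancellation calls verifyTransferInitiator rather than verifyResourceOwnership, since only the gaining registrar that requested the transfer may withdraw it, whereas approve and reject are reserved for the losing (sponsoring) registrar. A hedged sketch of the two checks, with hypothetical names and exception types (the real flows throw EppExceptions):

// Hypothetical names and exception types; not Nomulus code.
public class TransferAuthSketch {

  // Cancel: only the gaining registrar (the transfer initiator) may withdraw.
  static void verifyTransferInitiator(String registrarId, String gainingRegistrarId) {
    if (!registrarId.equals(gainingRegistrarId)) {
      throw new IllegalArgumentException("not the initiator of this transfer");
    }
  }

  // Approve/reject: only the current sponsor (the losing registrar) may act.
  static void verifyResourceOwnership(String registrarId, String sponsorRegistrarId) {
    if (!registrarId.equals(sponsorRegistrarId)) {
      throw new IllegalArgumentException("not the sponsor of this resource");
    }
  }
}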

View File

@@ -14,7 +14,7 @@
package google.registry.flows.contact;
-import static google.registry.flows.FlowUtils.validateClientIsLoggedIn;
+import static google.registry.flows.FlowUtils.validateRegistrarIsLoggedIn;
import static google.registry.flows.ResourceFlowUtils.loadAndVerifyExistence;
import static google.registry.flows.ResourceFlowUtils.verifyOptionalAuthInfo;
import static google.registry.flows.contact.ContactFlowUtils.createTransferResponse;
@@ -22,7 +22,7 @@ import static google.registry.flows.contact.ContactFlowUtils.createTransferRespo
import google.registry.flows.EppException;
import google.registry.flows.ExtensionManager;
import google.registry.flows.Flow;
-import google.registry.flows.FlowModule.ClientId;
+import google.registry.flows.FlowModule.RegistrarId;
import google.registry.flows.FlowModule.TargetId;
import google.registry.flows.annotations.ReportingSpec;
import google.registry.flows.exceptions.NoTransferHistoryToQueryException;
@@ -40,11 +40,12 @@ import javax.inject.Inject;
*
* <p>The "gaining" registrar requests a transfer from the "losing" (aka current) registrar. The
* losing registrar has a "transfer" time period to respond (by default five days) after which the
-* transfer is automatically approved. This flow can be used by the gaining or losing registrars
-* (or anyone with the correct authId) to see the status of a transfer, which may still be pending
-* or may have been approved, rejected, cancelled or implicitly approved by virtue of the transfer
+* transfer is automatically approved. This flow can be used by the gaining or losing registrars (or
+* anyone with the correct authId) to see the status of a transfer, which may still be pending or
+* may have been approved, rejected, cancelled or implicitly approved by virtue of the transfer
* period expiring.
*
+* @error {@link google.registry.flows.FlowUtils.NotLoggedInException}
* @error {@link google.registry.flows.ResourceFlowUtils.BadAuthInfoForResourceException}
* @error {@link google.registry.flows.ResourceFlowUtils.ResourceDoesNotExistException}
* @error {@link google.registry.flows.exceptions.NoTransferHistoryToQueryException}
@@ -55,7 +56,7 @@ public final class ContactTransferQueryFlow implements Flow {
@Inject ExtensionManager extensionManager;
@Inject Optional<AuthInfo> authInfo;
-@Inject @ClientId String clientId;
+@Inject @RegistrarId String registrarId;
@Inject @TargetId String targetId;
@Inject Clock clock;
@Inject EppResponse.Builder responseBuilder;
@@ -64,7 +65,7 @@ public final class ContactTransferQueryFlow implements Flow {
@Override
public final EppResponse run() throws EppException {
extensionManager.validate(); // There are no legal extensions for this flow.
-validateClientIsLoggedIn(clientId);
+validateRegistrarIsLoggedIn(registrarId);
ContactResource contact =
loadAndVerifyExistence(ContactResource.class, targetId, clock.nowUtc());
verifyOptionalAuthInfo(authInfo, contact);
@@ -76,8 +77,8 @@ public final class ContactTransferQueryFlow implements Flow {
// Note that the authorization info on the command (if present) has already been verified. If
// it's present, then the other checks are unnecessary.
if (!authInfo.isPresent()
-&& !clientId.equals(contact.getTransferData().getGainingClientId())
-&& !clientId.equals(contact.getTransferData().getLosingClientId())) {
+&& !registrarId.equals(contact.getTransferData().getGainingRegistrarId())
+&& !registrarId.equals(contact.getTransferData().getLosingRegistrarId())) {
throw new NotAuthorizedToViewTransferException();
}
return responseBuilder
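The final hunk encodes who may query transfer status: anyone presenting verified authInfo, or either party to the transfer. A small sketch of that predicate, with hypothetical names mirroring the condition above:

import java.util.Optional;

// Hypothetical names; not Nomulus code.
public class TransferQueryAuthSketch {

  // If authInfo was supplied it has already been verified earlier in the flow,
  // so it alone grants access; otherwise the caller must be a party to the transfer.
  static boolean mayViewTransfer(
      String registrarId,
      String gainingRegistrarId,
      String losingRegistrarId,
      Optional<String> verifiedAuthInfo) {
    return verifiedAuthInfo.isPresent()
        || registrarId.equals(gainingRegistrarId)
        || registrarId.equals(losingRegistrarId);
  }
}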

View File

@@ -14,7 +14,7 @@
package google.registry.flows.contact;
-import static google.registry.flows.FlowUtils.validateClientIsLoggedIn;
+import static google.registry.flows.FlowUtils.validateRegistrarIsLoggedIn;
import static google.registry.flows.ResourceFlowUtils.loadAndVerifyExistence;
import static google.registry.flows.ResourceFlowUtils.verifyHasPendingTransfer;
import static google.registry.flows.ResourceFlowUtils.verifyOptionalAuthInfo;
@@ -29,7 +29,7 @@ import com.google.common.collect.ImmutableSet;
import com.googlecode.objectify.Key;
import google.registry.flows.EppException;
import google.registry.flows.ExtensionManager;
-import google.registry.flows.FlowModule.ClientId;
+import google.registry.flows.FlowModule.RegistrarId;
import google.registry.flows.FlowModule.TargetId;
import google.registry.flows.TransactionalFlow;
import google.registry.flows.annotations.ReportingSpec;
@@ -53,6 +53,8 @@ import org.joda.time.DateTime;
* transfer is automatically approved. Within that window, this flow allows the losing client to
* reject the transfer request.
*
+* @error {@link google.registry.flows.EppException.ReadOnlyModeEppException}
+* @error {@link google.registry.flows.FlowUtils.NotLoggedInException}
* @error {@link google.registry.flows.ResourceFlowUtils.BadAuthInfoForResourceException}
* @error {@link google.registry.flows.ResourceFlowUtils.ResourceDoesNotExistException}
* @error {@link google.registry.flows.ResourceFlowUtils.ResourceNotOwnedException}
@@ -63,7 +65,7 @@ public final class ContactTransferRejectFlow implements TransactionalFlow {
@Inject ExtensionManager extensionManager;
@Inject Optional<AuthInfo> authInfo;
-@Inject @ClientId String clientId;
+@Inject @RegistrarId String registrarId;
@Inject @TargetId String targetId;
@Inject ContactHistory.Builder historyBuilder;
@Inject EppResponse.Builder responseBuilder;
@@ -73,14 +75,14 @@ public final class ContactTransferRejectFlow implements TransactionalFlow {
public final EppResponse run() throws EppException {
extensionManager.register(MetadataExtension.class);
extensionManager.validate();
-validateClientIsLoggedIn(clientId);
+validateRegistrarIsLoggedIn(registrarId);
DateTime now = tm().getTransactionTime();
ContactResource existingContact = loadAndVerifyExistence(ContactResource.class, targetId, now);
verifyOptionalAuthInfo(authInfo, existingContact);
verifyHasPendingTransfer(existingContact);
-verifyResourceOwnership(clientId, existingContact);
+verifyResourceOwnership(registrarId, existingContact);
ContactResource newContact =
-denyPendingTransfer(existingContact, TransferStatus.CLIENT_REJECTED, now, clientId);
+denyPendingTransfer(existingContact, TransferStatus.CLIENT_REJECTED, now, registrarId);
ContactHistory contactHistory =
historyBuilder.setType(CONTACT_TRANSFER_REJECT).setContact(newContact).build();
PollMessage gainingPollMessage =
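Taken together, these flows resolve a pending transfer to one of the TransferStatus values seen in the hunks (CLIENT_APPROVED, CLIENT_CANCELLED, CLIENT_REJECTED); expiry of the response window presumably resolves it as a server approval. A hedged sketch of a denyPendingTransfer-style transition, with hypothetical types rather than the real Nomulus model classes:

// Hypothetical types; not the real Nomulus model classes.
public class TransferResolutionSketch {

  // Outcomes appearing in the hunks above; SERVER_APPROVED is assumed to be
  // the implicit outcome when the response window lapses.
  enum TransferStatus { PENDING, CLIENT_APPROVED, CLIENT_CANCELLED, CLIENT_REJECTED, SERVER_APPROVED }

  record Contact(TransferStatus transferStatus, String sponsorRegistrarId) {}

  // Resolve a pending transfer without changing sponsorship; cancel and reject
  // both leave the losing registrar as the sponsor.
  static Contact denyPendingTransfer(Contact existing, TransferStatus status) {
    if (existing.transferStatus() != TransferStatus.PENDING) {
      throw new IllegalStateException("no pending transfer to resolve");
    }
    return new Contact(status, existing.sponsorRegistrarId());
  }
}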

Some files were not shown because too many files have changed in this diff.