mirror of https://github.com/google/nomulus synced 2026-01-31 01:52:28 +00:00

Compare commits

55 Commits

Author SHA1 Message Date
Weimin Yu
757803e985 Revise host.inet_addresses query to use gin index (#1550)
* Revise host.inet_addresses query to use gin index
2022-03-09 23:32:17 -05:00
Weimin Yu
4b1f4f96e3 Reorganize new schema changes (#1551)
* Reorganize new schema changes

Reorganize the new schema changes so that each flyway script updates a
single table.

Each flyway script is executed in a single database transaction so that
the script can be rolled back in one shot. It acquires a shared lock on
all tables touched by the script. This is deadlock-prone because in a
busy database, there may be user queries that attempt to lock the same
set of tables, but in a different order. By limiting each script to one
table, we avoid the problem.

We should add a presubmit check to enforce this rule.

All changes have been deployed to Sandbox out-of-band. When doing so,
we changed all CREATE INDEX statements to CREATE INDEX IF NOT EXISTS
(as in the sketch below).

Future deployments should be able to proceed normally.
2022-03-09 20:47:24 -05:00
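
A minimal sketch of the pattern described above, assuming a hypothetical
index name, table, and local JDBC URL (real Nomulus migrations are plain
Flyway SQL scripts, not hand-run Java): the script touches a single table,
runs in one transaction so it rolls back in one shot, and uses CREATE INDEX
IF NOT EXISTS so an out-of-band deployment stays idempotent.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class SingleTableMigrationSketch {
  public static void main(String[] args) throws SQLException {
    // Hypothetical URL; Flyway normally manages the connection itself.
    try (Connection conn =
        DriverManager.getConnection("jdbc:postgresql://localhost/registry")) {
      conn.setAutoCommit(false); // one transaction per script
      try (Statement stmt = conn.createStatement()) {
        // Only one table is locked, so this cannot interleave with a session
        // that locks the same set of tables in a different order.
        stmt.execute(
            "CREATE INDEX IF NOT EXISTS idx_host_host_name ON \"Host\" (host_name)");
        conn.commit();
      } catch (SQLException e) {
        conn.rollback(); // the whole script rolls back in one shot
        throw e;
      }
    }
  }
}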
Ben McIlwain
bd49e8b238 Add anti-deadlock instructions to DB update README (#1552) 2022-03-09 18:50:15 -05:00
Weimin Yu
6249a8e118 Revise Host index on inet_addresses (#1549)
* Revise Host index on inet_addresses

The index on the 'inet_addresses' array column should be of gin or gist
type, which indexes individual array elements. We use gin for now since
host updates are infrequent, and gin has better accuracy.

Since flyway script V108__... has not been deployed, we edit the file
in place instead of adding a new script.

This will be followed up with a modified query that can take advantage
of the gin index. Until then, we don't expect to see a performance
improvement.

The suspected bottleneck query in the WHOIS path is:

select * from "Host" where 'non-ip-string' = any(inet_address) and
deletion_time < now();

It needs to be revised into:

select * from "Host" where array['non-ip-string'] <@ inet_address and
deletion_time < now();

The combined change reduces the query time from 90ms to 30ms in Sandbox,
and from 150ms to 40ms in production.

It is unclear if this solves all problems with WHOIS latency. (A sketch of
the combined index and query revision follows below.)
2022-03-08 14:22:01 -05:00
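
The change above can be sketched end to end; this is illustrative, not the
actual Flyway script or Nomulus query code, and the parameter type and value
are assumptions. Create the gin index, then query with the containment
operator <@, which the gin index can serve, instead of = any(...), which is
evaluated row by row.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class GinIndexQuerySketch {
  public static void main(String[] args) throws SQLException {
    try (Connection conn =
        DriverManager.getConnection("jdbc:postgresql://localhost/registry")) {
      try (Statement stmt = conn.createStatement()) {
        // A gin index covers individual array elements; a plain btree cannot.
        stmt.execute(
            "CREATE INDEX IF NOT EXISTS idx_host_inet_addresses "
                + "ON \"Host\" USING gin (inet_addresses)");
      }
      // The containment form can be answered from the gin index.
      String revised =
          "SELECT * FROM \"Host\" "
              + "WHERE array[?] <@ inet_addresses AND deletion_time < now()";
      try (PreparedStatement ps = conn.prepareStatement(revised)) {
        ps.setString(1, "192.0.2.1"); // illustrative value and type
        try (ResultSet rs = ps.executeQuery()) {
          while (rs.next()) {
            System.out.println(rs.getString("host_name"));
          }
        }
      }
    }
  }
}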
Ben McIlwain
9b7bb12cd1 Add deletionTime/inetAddresses indexes to Host table to support WHOIS (#1548)
* Add deletionTime/inetAddresses indexes to Host table to support WHOIS

Weimin identified these as missing and as the cause of the slowdowns in
NameserverLookupByIpCommand that we're seeing in sandbox.

This is the first of two PRs, adding just the Flyway/schema changes. The
second PR adding the Java object model changes is #1547.
2022-03-07 16:07:11 -05:00
Michael Muller
71d13bab71 Improve Transaction gap processing (#1546)
Skip multiple gaps in one pass and write the correct transaction id to
datastore.
2022-03-07 13:26:18 -05:00
Ben McIlwain
8db28b7e61 Add domainRepoId indexes to billing events (#1545)
* Add domainRepoId indexes to billing events #1544

The query analyzer identified this as a missing index on the BillingEvent table,
and I added it for recurrences and cancellations as well, as it's likely to be a
problem for them too. "Give me all the billing events associated with a given
domain by its repo ID" seems like a pretty common use case for the DB (and does
appear to be used by our invoicing pipeline). A rough sketch of the DDL follows
below.

This is the first of two PRs that makes just the DB changes. The second PR
(#1544) will add the Java code changes, and will be committed after this one is
deployed.
2022-03-07 12:21:40 -05:00
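
A rough sketch of the kind of DDL involved; the table and index names below
are guesses for illustration, not the real Nomulus schema.

public class BillingEventIndexSketch {
  // Hypothetical names; the real schema may differ.
  private static final String[] DDL = {
    "CREATE INDEX IF NOT EXISTS idx_billing_event_domain_repo_id"
        + " ON \"BillingEvent\" (domain_repo_id)",
    "CREATE INDEX IF NOT EXISTS idx_billing_recurrence_domain_repo_id"
        + " ON \"BillingRecurrence\" (domain_repo_id)",
    "CREATE INDEX IF NOT EXISTS idx_billing_cancellation_domain_repo_id"
        + " ON \"BillingCancellation\" (domain_repo_id)",
  };

  public static void main(String[] args) {
    for (String ddl : DDL) {
      System.out.println(ddl + ";");
    }
  }
}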
Lai Jiang
5a7dc307c5 Update to the latest LTS Node version (#1543)
2022-03-07 11:49:18 -05:00
Ben McIlwain
67278af3cb Add 9 more indexes to SQL schema (#1541)
* Add 9 more indexes to SQL schema

These indexes were identified as missing by PostgreSQL's query analyzer in our
sandbox environment (where we get enough realistic EPP traffic to identify these
deficiencies).

This is the first of two PRs -- the second PR (#1540) will be merged only after
this one is live in production. Note that this is the PR that actually modifies
the database though, so once this one is deployed we will already have the
benefit of the new indexes.
2022-03-05 17:57:36 -05:00
sarahcaseybot
1eafc983ab Check signature length in DS records (#1538)
* Check signature length in DS records

* Small fixes

* Add unit tests

* Formatting fix
2022-03-04 15:18:14 -05:00
gbrodman
f1bbdc5a0b Use built-in Java URL connections instead of UrlFetchService (#1535)
- Use the standard HttpsURLConnection to write/read data
- Rewrite RdeReporter, Nordn*Action, and Marksdb classes and related
  tests to conform to the new format
- Remove FakeURLFetchService and ForwardingUrlFetchService as they weren't used
- Refactor UrlFetchException to UrlConnectionException
- Refactor UrlFetchUtils to UrlConnectionUtils

I will need to test this on Alpha. Fortunately the connections that
don't require auth (e.g. TMDB downloading) should be testable.
2022-03-04 14:16:22 -05:00
Michael Muller
b146301495 Allow replicateToDatastore to skip gaps (#1539)
* Allow replicateToDatastore to skip gaps

As it turns out, gaps in the transaction id sequence numbers are expected
because rollbacks do not roll back sequence numbers (a minimal
demonstration follows below).

To deal with this, stop checking for them.

This change is not adequate in and of itself, as it is possible for a gap to
be introduced if two transactions are committed out of order of their sequence
number.  We are currently discussing several strategies to mitigate this.

* Remove println, add a record verification
2022-03-04 09:04:13 -05:00
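
The gap behavior is easy to reproduce directly against Postgres. In this
sketch (the sequence name and JDBC URL are hypothetical), a rolled-back
transaction still consumes a sequence value, leaving a hole in the
committed id stream:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SequenceGapSketch {
  public static void main(String[] args) throws SQLException {
    try (Connection conn =
        DriverManager.getConnection("jdbc:postgresql://localhost/registry")) {
      conn.setAutoCommit(false);
      try (Statement stmt = conn.createStatement();
          ResultSet rs = stmt.executeQuery("SELECT nextval('txn_id_seq')")) {
        rs.next();
        System.out.println("Consumed id " + rs.getLong(1));
      }
      // Rolling back does not return the value to the sequence, so the
      // committed transaction ids now skip the one consumed above.
      conn.rollback();
    }
  }
}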
Weimin Yu
437a747eae Pass stack trace to validate_datastore user (#1537)
* Pass stack trace to validate_datastore user
2022-03-03 16:10:31 -05:00
Weimin Yu
a620b37c80 Fix hanging test (#1536)
* Fix hanging test

Tests using the TestServerExtension may hang forever if an underlying
component (e.g., testcontainer for psql) fails. This may be the cause
of some Kokoro runs that timed out after three hours.
2022-03-03 14:43:32 -05:00
Rachel Guan
267cbeb95b Inject CloudTasksUtil to AsyncTaskEnqueuer (#1522)
* Inject CloudTasksUtils into AsyncTaskEnqueuer

* Rebase

* Remove QUEUE_ASYNC_DELETE from AsyncTaskEnqueuer

* Refactor create() 

* Remove AppEngineServiceUtils dependency from AsyncTaskEnqueuer
2022-03-02 11:31:45 -05:00
Weimin Yu
b9fcabbc36 Save db discrepancies to GCS (#1527)
* Save db discrepancies to GCS
2022-02-28 16:52:09 -05:00
Weimin Yu
4f33de10f3 Add a tools command to launch SQL validation job (#1526)
* Add a tools command to launch SQL validation job

Stop using Pipeline.run().waitUntilFinish() in
ValidateDatastorePipeline. Flex Templates do not support a blocking
wait in the main thread.

This PR adds a new ValidateSqlCommand that launches the pipeline and
maintains the SQL snapshot while the pipeline is running.

This PR also adds more parameters to both ValidateSqlCommand and
ValidateDatastoreCommand:
- The -c option to supply an optional incremental comparison start time
- The -r option to supply an optional release tag that is not 'live',
  e.g., nomulus-DDDDYYMM-RC00

If the manual launch option (-m) is enabled, the commands will print the
gcloud command that can launch the pipeline.

Tested with sandbox, qa and the dev project.
2022-02-28 13:14:57 -05:00
Michael Muller
d6bb83f6d3 Nullify contact fields in setContactFields (#1533)
When setting contact fields from a set of DesignatedContacts, nullify the
existing fields so they don't stick around if they're not in the new set.
2022-02-28 10:57:35 -05:00
Michael Muller
a8d3d22c5a Don't reset the update time for TLD updates (#1532)
* Don't reset the update time for TLD updates

It turns out that the reason the Registrar update timestamp isn't updated
for some of the tests is that the record is updated unchanged.  We can
avoid this problem by not trying to update the registrar to the same value.
So in this case, if the registrar already contains the TLD we're adding, don't
try to add it.
2022-02-25 13:09:36 -05:00
Rachel Guan
fac659b520 Inject CloudTasksUtils to DomainLockUtils (#1519)
* Move enqueueDomainRelock to DomainLockUtils

* Rebase and improve PR

* Inject CloudTasksUtils to DomainLockUtils
2022-02-25 11:38:38 -05:00
gbrodman
178702ded3 Fix DTR creation in one location and clean up replay comparison (#1529)
* Fix DTR creation in one location and clean up replay comparison
2022-02-23 11:07:10 -05:00
Lai Jiang
59bca1a9ed Disable sending cert expiration emails on sandbox (#1528) 2022-02-22 14:46:27 -05:00
Michael Muller
f8198fa590 Do full database comparison during replay tests (#1524)
* Fix entity delete replication, compare db @ replay

Replay tests currently only verify that the contents of a transaction
can be successfully replicated to the other database.  They do not verify that
the contents of both databases are equivalent.  As a result, we miss any
changes omitted from the transaction (as was the case with entity deletions).

This change adds a final database comparison to ReplayExtension so we can
safely say that the databases are in the same state.

This comparison is introduced in part as a unit test for the one-line fix for
replication of an "entity delete" operation (where we delete using an entity
object instead of the object's key) which so far has only affected PollMessage
deletion.  The fix is also included in this commit within
JpaTransactionManagerImpl.

* Exclude tests and entities with failing comparisons

* Get all tests to pass and fix more timestamps

Fix most of the unit tests that were broken by this change.

- Fix timestamp updates after grace period changes in DomainContent and for
  TLD changes in Registrar.
- Reenable full database comparison for most DomainCreateFlowTest's.
- Make some test entities NonReplicated so they don't break when used with
  jpaTm().delete()
- Disable checking of a few more entity types that are failing comparisons.
- Add some formatting fixes.

* Remove unnecessary "NoDatabaseCompare"

It turns out that after other fixes/elisions we no longer need these for
any tests in DomainCreateFlowTest.

* Changes for review

* Remove old "compare" flag.

* Reformatted.
2022-02-22 10:49:57 -05:00
Lai Jiang
bbac81996b Make a few quality-of-life improvements in CloudTasksUtils (#1521)
* Make a few quality-of-life improvements in CloudTasksUtils

1. Update the method names. There are too many overloaded methods and it
   is hard to figure out which one does which without checking the
   javadoc.

2. Add a method in the task matcher to specify the delay time as a
   DateTime, so the caller does not need to convert it to a Timestamp.

3. Remove the explicit dependency on a clock when enqueueing a task with
   a delay; the clock is now injected directly into the util instance
   itself (see the sketch below).
2022-02-18 20:21:56 -05:00
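
Items 2 and 3 can be sketched as follows; the class and method names are
illustrative, not the real CloudTasksUtils API. The util owns an injected
clock, so callers hand over a Joda Duration or DateTime and the conversion
to a protobuf Timestamp happens in one place.

import com.google.protobuf.Timestamp;
import com.google.protobuf.util.Timestamps;
import org.joda.time.DateTime;
import org.joda.time.Duration;

public class ScheduleTimeSketch {
  private final java.time.Clock clock; // injected once, not passed per call

  public ScheduleTimeSketch(java.time.Clock clock) {
    this.clock = clock;
  }

  /** Converts a relative delay into an absolute protobuf schedule time. */
  public Timestamp scheduleTimeAfter(Duration delay) {
    return Timestamps.fromMillis(clock.millis() + delay.getMillis());
  }

  /** Converts an absolute Joda DateTime, sparing callers the conversion. */
  public Timestamp scheduleTimeAt(DateTime when) {
    return Timestamps.fromMillis(when.getMillis());
  }
}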
Ben McIlwain
52c759d1db Disable prober data deletion cron job in prod & sandbox (#1525)
* Disable prober data deletion cron job in prod & sandbox

These cron jobs would unnecessarily complicate the database migration, and we
don't need them that badly. We'll re-enable them once we've written
the new version of this action that handles Cloud SQL correctly (the current
version only does Datastore anyway).
2022-02-17 08:46:40 -08:00
Weimin Yu
453af87615 Ignore prober data when comparing databases (#1523)
* Ignore prober data when comparing databases

Completely ignore prober data when comparing Datastore and SQL.

Prober data deletions are not propagated from Datastore to SQL. It is
difficult to distinguish soft-deletes from normal updates, and therefore
difficult to avoid false positives when looking for differences.
2022-02-15 12:01:20 -05:00
Ben McIlwain
d0d7515c0a Make NordnUploadAction resilient to duplicate task queue tasks (#1516)
This is necessary because the Cloud Tasks API is not transactionally enrolled,
so it's possible that multiple tasks might end up being enqueued. We need to be
able to handle them.
2022-02-14 14:59:46 -05:00
Michael Muller
2c70127573 Fix update timestamps for DomainContent types (#1517)
* Fix update timestamps for DomainContent types

We expect update timestamps to be updated whenever a containing entity is
modified and persisted, but unfortunately Hibernate doesn't seem to do this;
instead it appears to regard such an entity as unchanged.

To work around this, we explicitly reset the update timestamp whenever a
nested collection is modified in the Builder (see the sketch below).

Note that this change only solves the problem for DomainContent.  All other
entities containing UpdateAutoTimestamp will need to be audited and
instrumented with a similar change.

* Fix a handful of tests broken by this change

* Reformatted.
2022-02-14 11:31:03 -05:00
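
A stripped-down illustration of the workaround; ExampleEntity and its fields
are invented for this sketch (the real change lives in DomainContent's
Builder). Mutating a nested collection also clears the update timestamp so
the persistence layer repopulates it at commit time instead of treating the
entity as unchanged.

import java.util.HashSet;
import java.util.Set;
import org.joda.time.DateTime;

public class ExampleEntity {
  Set<String> nestedCollection = new HashSet<>();
  DateTime updateTimestamp; // stand-in for UpdateAutoTimestamp

  public class Builder {
    public Builder addToNestedCollection(String value) {
      nestedCollection.add(value);
      // Hibernate regards the owning entity as unchanged when only a nested
      // collection mutates, so explicitly reset the timestamp; a null value
      // here means "populate from the transaction time on save".
      updateTimestamp = null;
      return this;
    }

    public ExampleEntity build() {
      return ExampleEntity.this;
    }
  }
}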
Rachel Guan
d3fc6063c9 Use CloudTasksUtils to enqueue in RegistrarSettingsAction (#1467)
* Use CloudTasksUtils to enqueue

* Add CloudTasksUtilsModule to FrontendComponent

* Fix Uri query issue

* Remove header and check service in matcher

* Use a ThreadLocal boolean in TestServer to determine enqueueing

* Extract enqueuing and email sending from tm().transact()
2022-02-10 11:16:28 -05:00
Weimin Yu
82802ec85c Compare datastore to sql action (#1507)
* Add action to DB comparison pipeline

Add a backend Action to the Nomulus server that launches the pipeline for
comparing Datastore (secondary) with Cloud SQL (primary).

* Save progress

* Revert test changes

* Add pipeline launching
2022-02-10 10:43:36 -05:00
Rachel Guan
e53594a626 Fix protobuf-java-util dependency (#1518) 2022-02-09 14:11:09 -05:00
Rachel Guan
e6577e3f23 Use CloudTasksUtil to enqueue task in IcannReportingStagingAction (#1489)
* Use CloudTasksUtils to enqueue a task

* Use schedule time helper and add schedule time comparison
2022-02-09 12:33:56 -05:00
Michael Muller
c9da36be9f Fix create/update timestamp replay problems (#1515)
* Fix create/update timestamp replay problems

When CreateAutoTimestamp and UpdateAutoTimestamp are inserted into a
Transaction, their values are not populated in the same way as when they are
stored in the course of an SQL commit.  This results in different timestamp
values between SQL and Datastore during the SQL -> DS replay.

Fix this by providing these values from the JPA transaction time when we're
doing transaction serialization.

This change also removes the initialization of the Ofy clock in
ExpandRecurringBillingEventsActionTest.  It's not necessary as the
ReplayExtension already takes care of this and doing it after the
ReplayExtension as we were breaks a test now that the update timestamps are
correct.
2022-02-09 08:48:51 -05:00
Rachel Guan
2ccae00dae Remove ReportingUtils and use CloudTasksUtil to enqueue tasks in GenerateInvoicesAction and GenerateSpec11ReportAction (#1491)
* Remove ReportingUtils and use CloudTasksUtils to enqueue

* Use schedule time helper to enqueue and update schedule time comparison

* Fix comment, indentation in gradle file and improve time comparison
2022-02-08 17:48:47 -05:00
Rachel Guan
00c8b6a76d Change from TaskQueueUtils to CloudTasksUtils in LoadTestAction (#1468)
* Change from TaskQueueUtils to CloudTasksUtils in LoadTestAction

* Put X_CSRF_TOKEN in task headers

* Fix schedule time and gradle issue

* Remove TaskQueue constant dependency

* Double run seconds

* Add comment for X_CSRF_TOKEN
2022-02-08 17:44:24 -05:00
Lai Jiang
09dca28122 Make EscrowDepositEncryptor work with BRDA deposits (#1512)
Also make it possible to specify a revision number.

2022-02-07 12:40:00 -05:00
Weimin Yu
b412bdef9f Fix flaky RdeStagingActionDatastoreTest (#1514)
* Fix flaky RdeStagingActionDatastoreTest

Fixed the most common cause of flakiness in one method (a Clock and
timestamp problem). Added a TODO to rethink the test case.

Also added notes on tasks potentially enqueued multiple times.
2022-02-04 10:40:52 -05:00
Rachel Guan
62e5de8a3a Add support for delay of duration when scheduling a task (#1493)
* Add support for delay by duration when scheduling task

* Fix comments

* Add test for negative duration

* Change delay parameter type to duration
2022-02-03 22:25:39 -05:00
Lai Jiang
fa9b784c5c Correctly delete all stopped versions except for the most recent 3 (#1511)
The gcloud command does some weird stuff with sorting when a custom format
is used. Here we instead rely on the Linux sort and head commands to sort
the versions list.

2022-02-03 16:04:58 -05:00
Weimin Yu
e2bd72a74e Add an index on Host.host_name column (#1510)
* Add an index on Host.host_name column

This field is queried during host creation and needs an index to speed
up the query.

Since Hibernate does not explicitly refer to indexes, we can change the
code and schema in one PR.
2022-02-03 15:57:15 -05:00
gbrodman
28d41488b1 Use the built-in replicaJpaTm() in RDAP (#1506)
* Use the built-in replicaJpaTm() in RDAP

This includes a test for the replica-simulating transaction manager and
removal of any replica-specific code in RDAP tests, because it's
unnecessary due to the existing tests.
2022-02-03 11:14:26 -05:00
Weimin Yu
1107b9f2e3 Count duplicates when comparing Databases (#1509)
* Count duplicates when comparing Databases

Cursors may have duplicates in Datastore if imported across projects.
Count them instead of throwing.
2022-02-03 10:59:03 -05:00
Lai Jiang
9624b483d4 Copy the latest revision of BRDA during upload (#1508)
The revision was hardcoded to 0, which caused problems when we needed to
re-run BRDA.

2022-02-02 21:54:42 -05:00
Rachel Guan
365937f22d Change from TaskQueueUtils to CloudTasksUtils in RdeStaging (#1411)
* Change from TaskQueueUtils to CloudTasksUtils in RdeStaging
2022-02-01 20:41:56 -05:00
sarahcaseybot
d5db6c16bc Add DS validation to match Cloud DNS (#1487)
* Add DS validation to match Cloud DNS

* Add checks to flows

* Add some flow tests

* Add tests for DomainCreateFlow

* Add tests for UpdateDomainCommand

* Fix docs test

* Small fixes

* Remove builder from tests
2022-02-01 15:25:00 -05:00
Lai Jiang
c1ad06afd1 Allow the beam parameter in RDE standard mode (#1505)
Standard mode will determine the watermarks based on the cursors and
kick off subsequent uploading steps. In order to run both the Beam and
the MapReduce pipelines in parallel, we need to allow setting the beam
parameter when in standard mode. This change should have been part of
https://github.com/google/nomulus/pull/1500.

2022-01-31 14:20:23 -05:00
gbrodman
b24670f33a Use the replica jpaTm in FKI and EppResource cache methods (#1503)
The cached methods are only used in situations where we don't really
care about being 100% synchronously up to date (e.g. whois), and they're
not used frequently anyway, so it's safe to use the replica in these
locations.
2022-01-28 18:05:18 -05:00
Weimin Yu
1253fa479a Release ValidateSqlPipeline as container image (#1504)
* Release ValidateSqlPipeline as container image
2022-01-28 14:57:31 -05:00
Weimin Yu
5f0dd24906 Release ValidateDatastorePipeline (#1501)
* Release ValidateDatastorePipeline
2022-01-26 13:38:19 -05:00
Ben McIlwain
e25885e25f Remove obsolete scrap commands (#1502) 2022-01-25 15:23:00 -05:00
gbrodman
cbdf4704ba Add missing @Overrides (#1499)
Not sure how this snuck through
2022-01-24 16:58:38 -05:00
Weimin Yu
207c7e7ca8 Compare migration data with SQL as primary DB (#1497)
* Compare migration data with SQL as primary DB

Add a BEAM pipeline that compares the secondary Datastore against SQL.
This is a dumb pipeline to be launched by a driver (in a followup PR).
Manually tested the pipeline in sandbox.

Also updated the ValidateSqlPipeline and the snapshot finder class so
that an appropriate Datastore export is found (one that ends before the
replay checkpoint value).
2022-01-24 11:20:48 -05:00
Lai Jiang
b3a0eb6bd8 Add a cron job to run the RDE Beam pipeline in parallel with MapReduce (#1500) 2022-01-21 23:36:13 -05:00
gbrodman
c602aa6e67 Use the read-only replica for JPA invoicing (#1494)
* Use the read-only replica for JPA invoicing
2022-01-20 20:50:10 +00:00
gbrodman
c6008b65a0 Use a read-only replica SQL instance in RdapDomainSearchAction (#1495)
We can use it in more places later, but this can serve as a template. We
should inject the connection to the read-only replica (only created
once) to the constructor of the action, then use that instead of the
regular transaction manager.

We add a transaction manager that simulates the read-only-replica
behavior for testing purposes as well.

In addition, we set the transaction isolation level to READ COMMITTED
for this transaction manager (this is fine since we're never writing to
it). Postgres requires this for replica SQL access (it fails if we try
to use SERIALIZABLE transactions). We didn't see this with the pipelines
before, since those already had transaction isolation level overrides.
(A minimal JDBC illustration follows below.)
2022-01-20 15:39:07 -05:00
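
A minimal JDBC illustration of that constraint, with a hypothetical replica
URL: the connection is marked read-only and dropped to READ COMMITTED, since
a Postgres hot standby rejects SERIALIZABLE transactions.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ReplicaConnectionSketch {
  public static Connection openReplica(String replicaUrl) throws SQLException {
    Connection conn = DriverManager.getConnection(replicaUrl);
    conn.setReadOnly(true); // we never write to the replica
    // SERIALIZABLE would fail against a hot standby; READ COMMITTED is fine
    // for the read-only RDAP queries routed here.
    conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
    return conn;
  }
}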
208 changed files with 5192 additions and 3419 deletions

View File

@@ -55,7 +55,7 @@ plugins {
node {
download = true
version = "14.15.5"
version = "16.14.0"
npmVersion = "6.14.11"
}

View File

@@ -41,4 +41,20 @@ public interface Sleeper {
* @see com.google.common.util.concurrent.Uninterruptibles#sleepUninterruptibly
*/
void sleepUninterruptibly(ReadableDuration duration);
/**
* Puts the current thread to interruptible sleep.
*
* <p>This is a convenience method for {@link #sleep} that properly converts an {@link
* InterruptedException} to a {@link RuntimeException}.
*/
default void sleepInterruptibly(ReadableDuration duration) {
try {
sleep(duration);
} catch (InterruptedException e) {
// Restore current thread's interrupted state.
Thread.currentThread().interrupt();
throw new RuntimeException("Interrupted.", e);
}
}
}

View File

@@ -319,6 +319,7 @@ dependencies {
testCompile deps['com.google.appengine:appengine-testing']
testCompile deps['com.google.guava:guava-testlib']
testCompile deps['com.google.monitoring-client:contrib']
testCompile deps['com.google.protobuf:protobuf-java-util']
testCompile deps['com.google.truth:truth']
testCompile deps['com.google.truth.extensions:truth-java8-extension']
testCompile deps['org.checkerframework:checker-qual']
@@ -706,7 +707,7 @@ createToolTask(
'initSqlPipeline', 'google.registry.beam.initsql.InitSqlPipeline')
createToolTask(
'validateSqlPipeline', 'google.registry.beam.comparedb.ValidateSqlPipeline')
'validateDatabasePipeline', 'google.registry.beam.comparedb.ValidateDatabasePipeline')
createToolTask(
@@ -793,6 +794,11 @@ if (environment == 'alpha') {
mainClass: 'google.registry.beam.rde.RdePipeline',
metaData : 'google/registry/beam/rde_pipeline_metadata.json'
],
validateDatabase :
[
mainClass: 'google.registry.beam.comparedb.ValidateDatabasePipeline',
metaData: 'google/registry/beam/validate_database_pipeline_metadata.json'
],
]
project.tasks.create("stageBeamPipelines") {
doLast {

View File

@@ -21,6 +21,7 @@ import static google.registry.backup.ExportCommitLogDiffAction.UPPER_CHECKPOINT_
import static google.registry.backup.RestoreCommitLogsAction.BUCKET_OVERRIDE_PARAM;
import static google.registry.backup.RestoreCommitLogsAction.FROM_TIME_PARAM;
import static google.registry.backup.RestoreCommitLogsAction.TO_TIME_PARAM;
import static google.registry.backup.SyncDatastoreToSqlSnapshotAction.SQL_SNAPSHOT_ID_PARAM;
import static google.registry.request.RequestParameters.extractOptionalParameter;
import static google.registry.request.RequestParameters.extractRequiredDatetimeParameter;
import static google.registry.request.RequestParameters.extractRequiredParameter;
@@ -98,6 +99,12 @@ public final class BackupModule {
return extractRequiredDatetimeParameter(req, TO_TIME_PARAM);
}
@Provides
@Parameter(SQL_SNAPSHOT_ID_PARAM)
static String provideSqlSnapshotId(HttpServletRequest req) {
return extractRequiredParameter(req, SQL_SNAPSHOT_ID_PARAM);
}
@Provides
@Backups
static ListeningExecutorService provideListeningExecutorService() {

View File

@@ -30,6 +30,7 @@ import google.registry.request.Action.Service;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import java.util.Optional;
import javax.inject.Inject;
import org.joda.time.DateTime;
@@ -64,33 +65,47 @@ public final class CommitLogCheckpointAction implements Runnable {
@Override
public void run() {
createCheckPointAndStartAsyncExport();
}
/**
* Creates a {@link CommitLogCheckpoint} and initiates an asynchronous export task.
*
* @return the {@code CommitLogCheckpoint} to be exported
*/
public Optional<CommitLogCheckpoint> createCheckPointAndStartAsyncExport() {
final CommitLogCheckpoint checkpoint = strategy.computeCheckpoint();
logger.atInfo().log(
"Generated candidate checkpoint for time: %s", checkpoint.getCheckpointTime());
ofyTm()
.transact(
() -> {
DateTime lastWrittenTime = CommitLogCheckpointRoot.loadRoot().getLastWrittenTime();
if (isBeforeOrAt(checkpoint.getCheckpointTime(), lastWrittenTime)) {
logger.atInfo().log(
"Newer checkpoint already written at time: %s", lastWrittenTime);
return;
}
auditedOfy()
.saveIgnoringReadOnlyWithoutBackup()
.entities(
checkpoint, CommitLogCheckpointRoot.create(checkpoint.getCheckpointTime()));
// Enqueue a diff task between previous and current checkpoints.
cloudTasksUtils.enqueue(
QUEUE_NAME,
CloudTasksUtils.createPostTask(
ExportCommitLogDiffAction.PATH,
Service.BACKEND.toString(),
ImmutableMultimap.of(
LOWER_CHECKPOINT_TIME_PARAM,
lastWrittenTime.toString(),
UPPER_CHECKPOINT_TIME_PARAM,
checkpoint.getCheckpointTime().toString())));
});
boolean isCheckPointPersisted =
ofyTm()
.transact(
() -> {
DateTime lastWrittenTime =
CommitLogCheckpointRoot.loadRoot().getLastWrittenTime();
if (isBeforeOrAt(checkpoint.getCheckpointTime(), lastWrittenTime)) {
logger.atInfo().log(
"Newer checkpoint already written at time: %s", lastWrittenTime);
return false;
}
auditedOfy()
.saveIgnoringReadOnlyWithoutBackup()
.entities(
checkpoint,
CommitLogCheckpointRoot.create(checkpoint.getCheckpointTime()));
// Enqueue a diff task between previous and current checkpoints.
cloudTasksUtils.enqueue(
QUEUE_NAME,
cloudTasksUtils.createPostTask(
ExportCommitLogDiffAction.PATH,
Service.BACKEND.toString(),
ImmutableMultimap.of(
LOWER_CHECKPOINT_TIME_PARAM,
lastWrittenTime.toString(),
UPPER_CHECKPOINT_TIME_PARAM,
checkpoint.getCheckpointTime().toString())));
return true;
});
return isCheckPointPersisted ? Optional.of(checkpoint) : Optional.empty();
}
}

View File

@@ -76,7 +76,10 @@ public class ReplayCommitLogsToSqlAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static final Duration LEASE_LENGTH = standardHours(1);
public static final String REPLAY_TO_SQL_LOCK_NAME =
ReplayCommitLogsToSqlAction.class.getSimpleName();
public static final Duration REPLAY_TO_SQL_LOCK_LEASE_LENGTH = standardHours(1);
// Stop / pause where we are if we've been replaying for more than five minutes to avoid GAE
// request timeouts
private static final Duration REPLAY_TIMEOUT_DURATION = Duration.standardMinutes(5);
@@ -115,7 +118,11 @@ public class ReplayCommitLogsToSqlAction implements Runnable {
}
Optional<Lock> lock =
Lock.acquireSql(
this.getClass().getSimpleName(), null, LEASE_LENGTH, requestStatusChecker, false);
REPLAY_TO_SQL_LOCK_NAME,
null,
REPLAY_TO_SQL_LOCK_LEASE_LENGTH,
requestStatusChecker,
false);
if (!lock.isPresent()) {
String message = "Can't acquire SQL commit log replay lock, aborting.";
logger.atSevere().log(message);

View File

@@ -0,0 +1,187 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.backup;
import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_OK;
import com.google.common.flogger.FluentLogger;
import google.registry.beam.comparedb.LatestDatastoreSnapshotFinder;
import google.registry.config.RegistryConfig.Config;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.ofy.CommitLogCheckpoint;
import google.registry.model.replay.ReplicateToDatastoreAction;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.Sleeper;
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.util.Optional;
import javax.inject.Inject;
import org.joda.time.DateTime;
import org.joda.time.Duration;
/**
* Synchronizes Datastore to a given SQL snapshot when SQL is the primary database.
*
* <p>The caller takes the responsibility for:
*
* <ul>
* <li>verifying the current migration stage
* <li>acquiring the {@link ReplicateToDatastoreAction#REPLICATE_TO_DATASTORE_LOCK_NAME
* replication lock}, and
* <li>while holding the lock, creating an SQL snapshot and invoking this action with the snapshot
* id
* </ul>
*
* The caller may release the replication lock upon receiving the response from this action. Please
* refer to {@link google.registry.tools.ValidateDatastoreCommand} for more information on usage.
*
* <p>This action plays SQL transactions up to the user-specified snapshot, creates a new CommitLog
* checkpoint, and exports all CommitLogs to GCS up to this checkpoint. The timestamp of this
* checkpoint can be used to recreate a Datastore snapshot that is equivalent to the given SQL
* snapshot. If this action succeeds, the checkpoint timestamp is included in the response (the
* format of which is defined by {@link #SUCCESS_RESPONSE_TEMPLATE}).
*/
@Action(
service = Service.BACKEND,
path = SyncDatastoreToSqlSnapshotAction.PATH,
method = Action.Method.POST,
auth = Auth.AUTH_INTERNAL_OR_ADMIN)
@DeleteAfterMigration
public class SyncDatastoreToSqlSnapshotAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
public static final String PATH = "/_dr/task/syncDatastoreToSqlSnapshot";
public static final String SUCCESS_RESPONSE_TEMPLATE =
"Datastore is up-to-date with provided SQL snapshot (%s). CommitLog timestamp is (%s).";
static final String SQL_SNAPSHOT_ID_PARAM = "sqlSnapshotId";
private static final int COMMITLOGS_PRESENCE_CHECK_ATTEMPTS = 10;
private static final Duration COMMITLOGS_PRESENCE_CHECK_DELAY = Duration.standardSeconds(6);
private final Response response;
private final Sleeper sleeper;
@Config("commitLogGcsBucket")
private final String gcsBucket;
private final GcsDiffFileLister gcsDiffFileLister;
private final LatestDatastoreSnapshotFinder datastoreSnapshotFinder;
private final CommitLogCheckpointAction commitLogCheckpointAction;
private final String sqlSnapshotId;
@Inject
SyncDatastoreToSqlSnapshotAction(
Response response,
Sleeper sleeper,
@Config("commitLogGcsBucket") String gcsBucket,
GcsDiffFileLister gcsDiffFileLister,
LatestDatastoreSnapshotFinder datastoreSnapshotFinder,
CommitLogCheckpointAction commitLogCheckpointAction,
@Parameter(SQL_SNAPSHOT_ID_PARAM) String sqlSnapshotId) {
this.response = response;
this.sleeper = sleeper;
this.gcsBucket = gcsBucket;
this.gcsDiffFileLister = gcsDiffFileLister;
this.datastoreSnapshotFinder = datastoreSnapshotFinder;
this.commitLogCheckpointAction = commitLogCheckpointAction;
this.sqlSnapshotId = sqlSnapshotId;
}
@Override
public void run() {
logger.atInfo().log("Datastore validation invoked. SqlSnapshotId is %s.", sqlSnapshotId);
try {
CommitLogCheckpoint checkpoint = ensureDatabasesComparable(sqlSnapshotId);
response.setStatus(SC_OK);
response.setPayload(
String.format(SUCCESS_RESPONSE_TEMPLATE, sqlSnapshotId, checkpoint.getCheckpointTime()));
return;
} catch (Throwable e) {
logger.atSevere().withCause(e).log("Failed to sync Datastore to SQL.");
response.setStatus(SC_INTERNAL_SERVER_ERROR);
response.setPayload(getStackTrace(e));
}
}
private static String getStackTrace(Throwable e) {
try {
ByteArrayOutputStream bis = new ByteArrayOutputStream();
PrintStream printStream = new PrintStream(bis);
e.printStackTrace(printStream);
printStream.close();
return bis.toString();
} catch (RuntimeException re) {
return re.getMessage();
}
}
private CommitLogCheckpoint ensureDatabasesComparable(String sqlSnapshotId) {
// Replicate SQL transaction to Datastore, up to when this snapshot is taken.
int playbacks = ReplicateToDatastoreAction.replayAllTransactions(Optional.of(sqlSnapshotId));
logger.atInfo().log("Played %s SQL transactions.", playbacks);
Optional<CommitLogCheckpoint> checkpoint = exportCommitLogs();
if (!checkpoint.isPresent()) {
throw new RuntimeException("Cannot create CommitLog checkpoint");
}
logger.atInfo().log(
"CommitLog checkpoint created at %s.", checkpoint.get().getCheckpointTime());
verifyCommitLogsPersisted(checkpoint.get());
return checkpoint.get();
}
private Optional<CommitLogCheckpoint> exportCommitLogs() {
// Trigger an async CommitLog export to GCS. Will check file availability later.
// Although we could add support for synchronous execution, it could disrupt
// the export cadence when the system is busy.
Optional<CommitLogCheckpoint> checkpoint =
commitLogCheckpointAction.createCheckPointAndStartAsyncExport();
// Failure to create a checkpoint is most likely caused by a race with cron-triggered
// Retry once.
if (!checkpoint.isPresent()) {
commitLogCheckpointAction.createCheckPointAndStartAsyncExport();
}
return checkpoint;
}
private void verifyCommitLogsPersisted(CommitLogCheckpoint checkpoint) {
DateTime exportStartTime =
datastoreSnapshotFinder
.getSnapshotInfo(checkpoint.getCheckpointTime().toInstant())
.exportInterval()
.getStart();
logger.atInfo().log("Found Datastore export at %s", exportStartTime);
for (int attempts = 0; attempts < COMMITLOGS_PRESENCE_CHECK_ATTEMPTS; attempts++) {
try {
gcsDiffFileLister.listDiffFiles(gcsBucket, exportStartTime, checkpoint.getCheckpointTime());
return;
} catch (IllegalStateException e) {
// Gap in commitlog files. Fall through to sleep and retry.
logger.atInfo().log("Commitlog files not yet found on GCS.");
}
sleeper.sleepInterruptibly(COMMITLOGS_PRESENCE_CHECK_DELAY);
}
throw new RuntimeException("Cannot find all commitlog files.");
}
}

View File

@@ -22,15 +22,17 @@ import com.google.appengine.api.taskqueue.TaskOptions;
import com.google.appengine.api.taskqueue.TaskOptions.Method;
import com.google.appengine.api.taskqueue.TransientFailureException;
import com.google.common.base.Joiner;
import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.ImmutableSortedSet;
import com.google.common.collect.Multimap;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryConfig.Config;
import google.registry.model.EppResource;
import google.registry.model.domain.RegistryLock;
import google.registry.model.eppcommon.Trid;
import google.registry.model.host.HostResource;
import google.registry.persistence.VKey;
import google.registry.util.AppEngineServiceUtils;
import google.registry.request.Action.Service;
import google.registry.util.CloudTasksUtils;
import google.registry.util.Retrier;
import javax.inject.Inject;
import javax.inject.Named;
@@ -59,25 +61,23 @@ public final class AsyncTaskEnqueuer {
private static final Duration MAX_ASYNC_ETA = Duration.standardDays(30);
private final Duration asyncDeleteDelay;
private final Queue asyncActionsPushQueue;
private final Queue asyncDeletePullQueue;
private final Queue asyncDnsRefreshPullQueue;
private final AppEngineServiceUtils appEngineServiceUtils;
private final Retrier retrier;
private CloudTasksUtils cloudTasksUtils;
@Inject
public AsyncTaskEnqueuer(
@Named(QUEUE_ASYNC_ACTIONS) Queue asyncActionsPushQueue,
@Named(QUEUE_ASYNC_DELETE) Queue asyncDeletePullQueue,
@Named(QUEUE_ASYNC_HOST_RENAME) Queue asyncDnsRefreshPullQueue,
@Config("asyncDeleteFlowMapreduceDelay") Duration asyncDeleteDelay,
AppEngineServiceUtils appEngineServiceUtils,
CloudTasksUtils cloudTasksUtils,
Retrier retrier) {
this.asyncActionsPushQueue = asyncActionsPushQueue;
this.asyncDeletePullQueue = asyncDeletePullQueue;
this.asyncDnsRefreshPullQueue = asyncDnsRefreshPullQueue;
this.asyncDeleteDelay = asyncDeleteDelay;
this.appEngineServiceUtils = appEngineServiceUtils;
this.cloudTasksUtils = cloudTasksUtils;
this.retrier = retrier;
}
@@ -103,19 +103,17 @@ public final class AsyncTaskEnqueuer {
entityKey, firstResave, MAX_ASYNC_ETA);
return;
}
logger.atInfo().log("Enqueuing async re-save of %s to run at %s.", entityKey, whenToResave);
String backendHostname = appEngineServiceUtils.getServiceHostname("backend");
TaskOptions task =
TaskOptions.Builder.withUrl(ResaveEntityAction.PATH)
.method(Method.POST)
.header("Host", backendHostname)
.countdownMillis(etaDuration.getMillis())
.param(PARAM_RESOURCE_KEY, entityKey.stringify())
.param(PARAM_REQUESTED_TIME, now.toString());
Multimap<String, String> params = ArrayListMultimap.create();
params.put(PARAM_RESOURCE_KEY, entityKey.stringify());
params.put(PARAM_REQUESTED_TIME, now.toString());
if (whenToResave.size() > 1) {
task.param(PARAM_RESAVE_TIMES, Joiner.on(',').join(whenToResave.tailSet(firstResave, false)));
params.put(PARAM_RESAVE_TIMES, Joiner.on(',').join(whenToResave.tailSet(firstResave, false)));
}
addTaskToQueueWithRetry(asyncActionsPushQueue, task);
logger.atInfo().log("Enqueuing async re-save of %s to run at %s.", entityKey, whenToResave);
cloudTasksUtils.enqueue(
QUEUE_ASYNC_ACTIONS,
cloudTasksUtils.createPostTaskWithDelay(
ResaveEntityAction.PATH, Service.BACKEND.toString(), params, etaDuration));
}
/** Enqueues a task to asynchronously delete a contact or host, by key. */
@@ -152,32 +150,6 @@ public final class AsyncTaskEnqueuer {
.param(PARAM_REQUESTED_TIME, now.toString()));
}
/**
* Enqueues a task to asynchronously re-lock a registry-locked domain after it was unlocked.
*
* <p>Note: the relockDuration must be present on the lock object.
*/
public void enqueueDomainRelock(RegistryLock lock) {
checkArgument(
lock.getRelockDuration().isPresent(),
"Lock with ID %s not configured for relock",
lock.getRevisionId());
enqueueDomainRelock(lock.getRelockDuration().get(), lock.getRevisionId(), 0);
}
/** Enqueues a task to asynchronously re-lock a registry-locked domain after it was unlocked. */
void enqueueDomainRelock(Duration countdown, long lockRevisionId, int previousAttempts) {
String backendHostname = appEngineServiceUtils.getServiceHostname("backend");
addTaskToQueueWithRetry(
asyncActionsPushQueue,
TaskOptions.Builder.withUrl(RelockDomainAction.PATH)
.method(Method.POST)
.header("Host", backendHostname)
.param(RelockDomainAction.OLD_UNLOCK_REVISION_ID_PARAM, String.valueOf(lockRevisionId))
.param(RelockDomainAction.PREVIOUS_ATTEMPTS_PARAM, String.valueOf(previousAttempts))
.countdownMillis(countdown.getMillis()));
}
/**
* Adds a task to a queue with retrying, to avoid aborting the entire flow over a transient issue
* enqueuing a task.

View File

@@ -88,7 +88,6 @@ public class RelockDomainAction implements Runnable {
private final SendEmailService sendEmailService;
private final DomainLockUtils domainLockUtils;
private final Response response;
private final AsyncTaskEnqueuer asyncTaskEnqueuer;
@Inject
public RelockDomainAction(
@@ -99,8 +98,7 @@ public class RelockDomainAction implements Runnable {
@Config("supportEmail") String supportEmail,
SendEmailService sendEmailService,
DomainLockUtils domainLockUtils,
Response response,
AsyncTaskEnqueuer asyncTaskEnqueuer) {
Response response) {
this.oldUnlockRevisionId = oldUnlockRevisionId;
this.previousAttempts = previousAttempts;
this.alertRecipientAddress = alertRecipientAddress;
@@ -109,7 +107,6 @@ public class RelockDomainAction implements Runnable {
this.sendEmailService = sendEmailService;
this.domainLockUtils = domainLockUtils;
this.response = response;
this.asyncTaskEnqueuer = asyncTaskEnqueuer;
}
@Override
@@ -245,8 +242,7 @@ public class RelockDomainAction implements Runnable {
}
}
Duration timeBeforeRetry = previousAttempts < ATTEMPTS_BEFORE_SLOWDOWN ? TEN_MINUTES : ONE_HOUR;
asyncTaskEnqueuer.enqueueDomainRelock(
timeBeforeRetry, oldUnlockRevisionId, previousAttempts + 1);
domainLockUtils.enqueueDomainRelock(timeBeforeRetry, oldUnlockRevisionId, previousAttempts + 1);
}
private void sendSuccessEmail(RegistryLock oldLock) {

View File

@@ -53,11 +53,11 @@ public class LatestDatastoreSnapshotFinder {
}
/**
* Finds information of the most recent Datastore snapshot, including the GCS folder of the
* exported data files and the start and stop times of the export. The folder of the CommitLogs is
* also included in the return.
* Finds information of the most recent Datastore snapshot that ends strictly before {@code
* exportEndTimeUpperBound}, including the GCS folder of the exported data files and the start and
* stop times of the export. The folder of the CommitLogs is also included in the return.
*/
public DatastoreSnapshotInfo getSnapshotInfo() {
public DatastoreSnapshotInfo getSnapshotInfo(Instant exportEndTimeUpperBound) {
String bucketName = RegistryConfig.getDatastoreBackupsBucket().substring("gs://".length());
/**
* Find the bucket-relative path to the overall metadata file of the last Datastore export.
@@ -65,7 +65,8 @@ public class LatestDatastoreSnapshotFinder {
* return value is like
* "2021-11-19T06:00:00_76493/2021-11-19T06:00:00_76493.overall_export_metadata".
*/
Optional<String> metaFilePathOptional = findMostRecentExportMetadataFile(bucketName, 2);
Optional<String> metaFilePathOptional =
findNewestExportMetadataFileBeforeTime(bucketName, exportEndTimeUpperBound, 5);
if (!metaFilePathOptional.isPresent()) {
throw new NoSuchElementException("No exports found over the past 2 days.");
}
@@ -85,8 +86,9 @@ public class LatestDatastoreSnapshotFinder {
}
/**
* Finds the bucket-relative path of the overall export metadata file, in the given bucket,
* searching back up to {@code lookBackDays} days, including today.
* Finds the latest Datastore export that ends strictly before {@code endTimeUpperBound} and
* returns the bucket-relative path of the overall export metadata file, in the given bucket. The
* search goes back for up to {@code lookBackDays} days in time, including today.
*
* <p>The overall export metadata file is the last file created during a Datastore export. All
* data has been exported by the creation time of this file. The name of this file, like that of
@@ -95,7 +97,8 @@ public class LatestDatastoreSnapshotFinder {
* <p>An example return value: {@code
* 2021-11-19T06:00:00_76493/2021-11-19T06:00:00_76493.overall_export_metadata}.
*/
private Optional<String> findMostRecentExportMetadataFile(String bucketName, int lookBackDays) {
private Optional<String> findNewestExportMetadataFileBeforeTime(
String bucketName, Instant endTimeUpperBound, int lookBackDays) {
DateTime today = clock.nowUtc();
for (int day = 0; day < lookBackDays; day++) {
String dateString = today.minusDays(day).toString("yyyy-MM-dd");
@@ -107,7 +110,11 @@ public class LatestDatastoreSnapshotFinder {
.sorted(Comparator.<String>naturalOrder().reversed())
.findFirst();
if (metaFilePath.isPresent()) {
return metaFilePath;
BlobInfo blobInfo = gcsUtils.getBlobInfo(BlobId.of(bucketName, metaFilePath.get()));
Instant exportEndTime = new Instant(blobInfo.getCreateTime());
if (exportEndTime.isBefore(endTimeUpperBound)) {
return metaFilePath;
}
}
} catch (IOException ioe) {
throw new RuntimeException(ioe);
@@ -118,12 +125,12 @@ public class LatestDatastoreSnapshotFinder {
/** Holds information about a Datastore snapshot. */
@AutoValue
abstract static class DatastoreSnapshotInfo {
abstract String exportDir();
public abstract static class DatastoreSnapshotInfo {
public abstract String exportDir();
abstract String commitLogDir();
public abstract String commitLogDir();
abstract Interval exportInterval();
public abstract Interval exportInterval();
static DatastoreSnapshotInfo create(
String exportDir, String commitLogDir, Interval exportOperationInterval) {

View File

@@ -0,0 +1,206 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.comparedb;
import static com.google.common.base.Verify.verify;
import static org.apache.beam.sdk.values.TypeDescriptors.strings;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Strings;
import com.google.common.flogger.FluentLogger;
import google.registry.beam.common.RegistryPipelineOptions;
import google.registry.beam.common.RegistryPipelineWorkerInitializer;
import google.registry.beam.comparedb.LatestDatastoreSnapshotFinder.DatastoreSnapshotInfo;
import google.registry.beam.comparedb.ValidateSqlUtils.CompareSqlEntity;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.DomainHistory;
import google.registry.model.replay.SqlEntity;
import google.registry.persistence.PersistenceModule.JpaTransactionManagerType;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.util.SystemClock;
import java.io.Serializable;
import java.util.Optional;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.coders.SerializableCoder;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Flatten;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.WithKeys;
import org.apache.beam.sdk.values.PCollectionList;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TupleTag;
import org.joda.time.DateTime;
import org.joda.time.Duration;
/**
* Validates the asynchronous data replication process between Datastore and Cloud SQL.
*
* <p>This pipeline is to be launched by {@link google.registry.tools.ValidateDatastoreCommand} or
* {@link google.registry.tools.ValidateSqlCommand}.
*/
@DeleteAfterMigration
public class ValidateDatabasePipeline {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
/** Specifies the extra CommitLogs to load before the start of a Database export. */
private static final Duration COMMITLOG_START_TIME_MARGIN = Duration.standardMinutes(10);
private final ValidateDatabasePipelineOptions options;
private final LatestDatastoreSnapshotFinder datastoreSnapshotFinder;
public ValidateDatabasePipeline(
ValidateDatabasePipelineOptions options,
LatestDatastoreSnapshotFinder datastoreSnapshotFinder) {
this.options = options;
this.datastoreSnapshotFinder = datastoreSnapshotFinder;
}
@VisibleForTesting
void run(Pipeline pipeline) {
DateTime latestCommitLogTime = DateTime.parse(options.getLatestCommitLogTimestamp());
DatastoreSnapshotInfo mostRecentExport =
datastoreSnapshotFinder.getSnapshotInfo(latestCommitLogTime.toInstant());
logger.atInfo().log(
"Comparing datastore export at %s and commitlog timestamp %s.",
mostRecentExport.exportDir(), latestCommitLogTime);
Optional<String> outputPath =
Optional.ofNullable(options.getDiffOutputGcsBucket())
.map(
bucket ->
String.format(
"gs://%s/validate_database/%s/diffs.txt",
bucket, new SystemClock().nowUtc()));
outputPath.ifPresent(path -> logger.atInfo().log("Discrepancies will be logged to %s", path));
setupPipeline(
pipeline,
Optional.ofNullable(options.getSqlSnapshotId()),
mostRecentExport,
latestCommitLogTime,
Optional.ofNullable(options.getComparisonStartTimestamp()).map(DateTime::parse),
outputPath);
pipeline.run();
}
static void setupPipeline(
Pipeline pipeline,
Optional<String> sqlSnapshotId,
DatastoreSnapshotInfo mostRecentExport,
DateTime latestCommitLogTime,
Optional<DateTime> compareStartTime,
Optional<String> diffOutputPath) {
pipeline
.getCoderRegistry()
.registerCoderForClass(SqlEntity.class, SerializableCoder.of(Serializable.class));
PCollectionTuple datastoreSnapshot =
DatastoreSnapshots.loadDatastoreSnapshotByKind(
pipeline,
mostRecentExport.exportDir(),
mostRecentExport.commitLogDir(),
mostRecentExport.exportInterval().getStart().minus(COMMITLOG_START_TIME_MARGIN),
// Increase by 1ms since we want to include commit logs at latestCommitLogTime,
// but this parameter is exclusive.
latestCommitLogTime.plusMillis(1),
DatastoreSnapshots.ALL_DATASTORE_KINDS,
compareStartTime);
PCollectionTuple cloudSqlSnapshot =
SqlSnapshots.loadCloudSqlSnapshotByType(
pipeline, SqlSnapshots.ALL_SQL_ENTITIES, sqlSnapshotId, compareStartTime);
verify(
datastoreSnapshot.getAll().keySet().equals(cloudSqlSnapshot.getAll().keySet()),
"Expecting the same set of types in both snapshots.");
PCollectionList<String> diffLogs = PCollectionList.empty(pipeline);
for (Class<? extends SqlEntity> clazz : SqlSnapshots.ALL_SQL_ENTITIES) {
TupleTag<SqlEntity> tag = ValidateSqlUtils.createSqlEntityTupleTag(clazz);
verify(
datastoreSnapshot.has(tag), "Missing %s in Datastore snapshot.", clazz.getSimpleName());
verify(cloudSqlSnapshot.has(tag), "Missing %s in Cloud SQL snapshot.", clazz.getSimpleName());
diffLogs =
diffLogs.and(
PCollectionList.of(datastoreSnapshot.get(tag))
.and(cloudSqlSnapshot.get(tag))
.apply(
"Combine from both snapshots: " + clazz.getSimpleName(),
Flatten.pCollections())
.apply(
"Assign primary key to merged " + clazz.getSimpleName(),
WithKeys.of(ValidateDatabasePipeline::getPrimaryKeyString)
.withKeyType(strings()))
.apply("Group by primary key " + clazz.getSimpleName(), GroupByKey.create())
.apply("Compare " + clazz.getSimpleName(), ParDo.of(new CompareSqlEntity())));
}
if (diffOutputPath.isPresent()) {
diffLogs
.apply("Gather diff logs", Flatten.pCollections())
.apply(
"Output diffs",
TextIO.write()
.to(diffOutputPath.get())
/**
* Output to a single file for ease of use since diffs should be few. If this
* assumption turns out to be false, the user should abort the pipeline and
* investigate why.
*/
.withoutSharding()
.withDelimiter((Strings.repeat("-", 80) + "\n").toCharArray()));
}
}
private static String getPrimaryKeyString(SqlEntity sqlEntity) {
// SqlEntity.getPrimaryKeyString only works with entities registered with Hibernate.
// We are using the BulkQueryJpaTransactionManager, which does not recognize DomainBase and
// DomainHistory. See BulkQueryEntities.java for more information.
if (sqlEntity instanceof DomainBase) {
return "DomainBase_" + ((DomainBase) sqlEntity).getRepoId();
}
if (sqlEntity instanceof DomainHistory) {
return "DomainHistory_" + ((DomainHistory) sqlEntity).getDomainHistoryId().toString();
}
return sqlEntity.getPrimaryKeyString();
}
public static void main(String[] args) {
ValidateDatabasePipelineOptions options =
PipelineOptionsFactory.fromArgs(args)
.withValidation()
.as(ValidateDatabasePipelineOptions.class);
RegistryPipelineOptions.validateRegistryPipelineOptions(options);
// Defensively set important options.
options.setIsolationOverride(TransactionIsolationLevel.TRANSACTION_REPEATABLE_READ);
options.setJpaTransactionManagerType(JpaTransactionManagerType.BULK_QUERY);
// Set up JPA in the pipeline harness (the locally executed part of the main() method). Reuse
// code in RegistryPipelineWorkerInitializer, which only applies to pipeline worker VMs.
new RegistryPipelineWorkerInitializer().beforeProcessing(options);
LatestDatastoreSnapshotFinder datastoreSnapshotFinder =
DaggerLatestDatastoreSnapshotFinder_LatestDatastoreSnapshotFinderFinderComponent.create()
.datastoreSnapshotInfoFinder();
new ValidateDatabasePipeline(options, datastoreSnapshotFinder).run(Pipeline.create(options));
}
}

View File

@@ -1,4 +1,4 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -18,10 +18,25 @@ import google.registry.beam.common.RegistryPipelineOptions;
import google.registry.model.annotations.DeleteAfterMigration;
import javax.annotation.Nullable;
import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.Validation;
/** BEAM pipeline options for {@link ValidateSqlPipeline}. */
/** BEAM pipeline options for {@link ValidateDatabasePipeline}. */
@DeleteAfterMigration
public interface ValidateSqlPipelineOptions extends RegistryPipelineOptions {
public interface ValidateDatabasePipelineOptions extends RegistryPipelineOptions {
@Description(
"The id of the SQL snapshot to be compared with Datastore. "
+ "If null, the current state of the SQL database is used.")
@Nullable
String getSqlSnapshotId();
void setSqlSnapshotId(String snapshotId);
@Description("The latest CommitLogs to load, in ISO8601 format.")
@Validation.Required
String getLatestCommitLogTimestamp();
void setLatestCommitLogTimestamp(String commitLogEndTimestamp);
@Description(
"For history entries and EPP resources, only those modified strictly after this time are "
@@ -31,4 +46,10 @@ public interface ValidateSqlPipelineOptions extends RegistryPipelineOptions {
String getComparisonStartTimestamp();
void setComparisonStartTimestamp(String comparisonStartTimestamp);
@Description("The GCS bucket where discrepancies found during comparison should be logged.")
@Nullable
String getDiffOutputGcsBucket();
void setDiffOutputGcsBucket(String gcsBucket);
}

View File

@@ -1,244 +0,0 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.comparedb;
import static com.google.common.base.Verify.verify;
import static org.apache.beam.sdk.values.TypeDescriptors.strings;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Preconditions;
import com.google.common.base.Stopwatch;
import com.google.common.flogger.FluentLogger;
import google.registry.beam.common.DatabaseSnapshot;
import google.registry.beam.common.RegistryPipelineWorkerInitializer;
import google.registry.beam.comparedb.LatestDatastoreSnapshotFinder.DatastoreSnapshotInfo;
import google.registry.beam.comparedb.ValidateSqlUtils.CompareSqlEntity;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.DomainHistory;
import google.registry.model.replay.SqlEntity;
import google.registry.model.replay.SqlReplayCheckpoint;
import google.registry.model.server.Lock;
import google.registry.persistence.PersistenceModule.JpaTransactionManagerType;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.persistence.transaction.TransactionManagerFactory;
import google.registry.util.RequestStatusChecker;
import java.io.Serializable;
import java.util.Optional;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult.State;
import org.apache.beam.sdk.coders.SerializableCoder;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Flatten;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.WithKeys;
import org.apache.beam.sdk.values.PCollectionList;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TupleTag;
import org.joda.time.DateTime;
import org.joda.time.Duration;
/**
* Validates the asynchronous data replication process from Datastore (primary storage) to Cloud SQL
* (secondary storage).
*/
@DeleteAfterMigration
public class ValidateSqlPipeline {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
/** Specifies the extra CommitLogs to load before the start of a Database export. */
private static final Duration COMMITLOG_START_TIME_MARGIN = Duration.standardMinutes(10);
/**
* Name of the lock used by the commitlog replay process.
*
* <p>See {@link google.registry.backup.ReplayCommitLogsToSqlAction} for more information.
*/
private static final String COMMITLOG_REPLAY_LOCK_NAME = "ReplayCommitLogsToSqlAction";
private static final Duration REPLAY_LOCK_LEASE_LENGTH = Duration.standardHours(1);
private static final java.time.Duration REPLAY_LOCK_ACQUIRE_TIMEOUT =
java.time.Duration.ofMinutes(6);
private static final java.time.Duration REPLAY_LOCK_ACQUIRE_DELAY =
java.time.Duration.ofSeconds(30);
private final ValidateSqlPipelineOptions options;
private final DatastoreSnapshotInfo mostRecentExport;
public ValidateSqlPipeline(
ValidateSqlPipelineOptions options, DatastoreSnapshotInfo mostRecentExport) {
this.options = options;
this.mostRecentExport = mostRecentExport;
}
void run() {
run(Pipeline.create(options));
}
@VisibleForTesting
void run(Pipeline pipeline) {
// TODO(weiminyu): ensure migration stage is DATASTORE_PRIMARY or DATASTORE_PRIMARY_READ_ONLY
Optional<Lock> lock = acquireCommitLogReplayLock();
if (lock.isPresent()) {
logger.atInfo().log("Acquired CommitLog Replay lock.");
} else {
throw new RuntimeException("Failed to acquire CommitLog Replay lock.");
}
try {
DateTime latestCommitLogTime =
TransactionManagerFactory.jpaTm().transact(() -> SqlReplayCheckpoint.get());
Preconditions.checkState(
latestCommitLogTime.isAfter(mostRecentExport.exportInterval().getEnd()),
"Cannot recreate Datastore snapshot since target time is in the middle of an export.");
try (DatabaseSnapshot databaseSnapshot = DatabaseSnapshot.createSnapshot()) {
// Eagerly release the commitlog replay lock so that replay can resume.
lock.ifPresent(Lock::releaseSql);
lock = Optional.empty();
setupPipeline(pipeline, Optional.of(databaseSnapshot.getSnapshotId()), latestCommitLogTime);
State state = pipeline.run().waitUntilFinish();
if (!State.DONE.equals(state)) {
throw new IllegalStateException("Unexpected pipeline state: " + state);
}
}
} finally {
lock.ifPresent(Lock::releaseSql);
}
}
void setupPipeline(
Pipeline pipeline, Optional<String> sqlSnapshotId, DateTime latestCommitLogTime) {
pipeline
.getCoderRegistry()
.registerCoderForClass(SqlEntity.class, SerializableCoder.of(Serializable.class));
Optional<DateTime> compareStartTime =
Optional.ofNullable(options.getComparisonStartTimestamp()).map(DateTime::parse);
PCollectionTuple datastoreSnapshot =
DatastoreSnapshots.loadDatastoreSnapshotByKind(
pipeline,
mostRecentExport.exportDir(),
mostRecentExport.commitLogDir(),
mostRecentExport.exportInterval().getStart().minus(COMMITLOG_START_TIME_MARGIN),
// Increase by 1ms since we want to include commit logs at latestCommitLogTime, but
// this parameter is exclusive.
latestCommitLogTime.plusMillis(1),
DatastoreSnapshots.ALL_DATASTORE_KINDS,
compareStartTime);
PCollectionTuple cloudSqlSnapshot =
SqlSnapshots.loadCloudSqlSnapshotByType(
pipeline, SqlSnapshots.ALL_SQL_ENTITIES, sqlSnapshotId, compareStartTime);
verify(
datastoreSnapshot.getAll().keySet().equals(cloudSqlSnapshot.getAll().keySet()),
"Expecting the same set of types in both snapshots.");
for (Class<? extends SqlEntity> clazz : SqlSnapshots.ALL_SQL_ENTITIES) {
TupleTag<SqlEntity> tag = ValidateSqlUtils.createSqlEntityTupleTag(clazz);
verify(
datastoreSnapshot.has(tag), "Missing %s in Datastore snapshot.", clazz.getSimpleName());
verify(cloudSqlSnapshot.has(tag), "Missing %s in Cloud SQL snapshot.", clazz.getSimpleName());
PCollectionList.of(datastoreSnapshot.get(tag))
.and(cloudSqlSnapshot.get(tag))
.apply("Combine from both snapshots: " + clazz.getSimpleName(), Flatten.pCollections())
.apply(
"Assign primary key to merged " + clazz.getSimpleName(),
WithKeys.of(ValidateSqlPipeline::getPrimaryKeyString).withKeyType(strings()))
.apply("Group by primary key " + clazz.getSimpleName(), GroupByKey.create())
.apply("Compare " + clazz.getSimpleName(), ParDo.of(new CompareSqlEntity()));
}
}
private static String getPrimaryKeyString(SqlEntity sqlEntity) {
// SqlEntity.getPrimaryKeyString only works with entities registered with Hibernate.
// We are using the BulkQueryJpaTransactionManager, which does not recognize DomainBase and
// DomainHistory. See BulkQueryEntities.java for more information.
if (sqlEntity instanceof DomainBase) {
return "DomainBase_" + ((DomainBase) sqlEntity).getRepoId();
}
if (sqlEntity instanceof DomainHistory) {
return "DomainHistory_" + ((DomainHistory) sqlEntity).getDomainHistoryId().toString();
}
return sqlEntity.getPrimaryKeyString();
}
private static Optional<Lock> acquireCommitLogReplayLock() {
Stopwatch stopwatch = Stopwatch.createStarted();
while (stopwatch.elapsed().minus(REPLAY_LOCK_ACQUIRE_TIMEOUT).isNegative()) {
Optional<Lock> lock = tryAcquireCommitLogReplayLock();
if (lock.isPresent()) {
return lock;
}
logger.atInfo().log("Failed to acquired CommitLog Replay lock. Will retry...");
try {
Thread.sleep(REPLAY_LOCK_ACQUIRE_DELAY.toMillis());
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new RuntimeException("Interrupted.");
}
}
return Optional.empty();
}
private static Optional<Lock> tryAcquireCommitLogReplayLock() {
return Lock.acquireSql(
COMMITLOG_REPLAY_LOCK_NAME,
null,
REPLAY_LOCK_LEASE_LENGTH,
getLockingRequestStatusChecker(),
false);
}
/**
* Returns a fake implementation of {@link RequestStatusChecker} that is required for lock
* acquisition. The default implementation is AppEngine-specific and is unusable on GCE.
*/
private static RequestStatusChecker getLockingRequestStatusChecker() {
return new RequestStatusChecker() {
@Override
public String getLogId() {
return "ValidateSqlPipeline";
}
@Override
public boolean isRunning(String requestLogId) {
return true;
}
};
}
public static void main(String[] args) {
ValidateSqlPipelineOptions options =
PipelineOptionsFactory.fromArgs(args).withValidation().as(ValidateSqlPipelineOptions.class);
// Defensively set important options.
options.setIsolationOverride(TransactionIsolationLevel.TRANSACTION_REPEATABLE_READ);
options.setJpaTransactionManagerType(JpaTransactionManagerType.BULK_QUERY);
// Reuse Dataflow worker initialization code to set up JPA in the pipeline harness.
new RegistryPipelineWorkerInitializer().beforeProcessing(options);
DatastoreSnapshotInfo mostRecentExport =
DaggerLatestDatastoreSnapshotFinder_LatestDatastoreSnapshotFinderFinderComponent.create()
.datastoreSnapshotInfoFinder()
.getSnapshotInfo();
new ValidateSqlPipeline(options, mostRecentExport).run(Pipeline.create(options));
}
}

View File

@@ -20,10 +20,8 @@ import static google.registry.persistence.transaction.TransactionManagerFactory.
import com.google.common.base.Preconditions;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.google.common.flogger.FluentLogger;
import google.registry.beam.initsql.Transforms;
import google.registry.config.RegistryEnvironment;
import google.registry.model.BackupGroupRoot;
import google.registry.model.EppResource;
import google.registry.model.ImmutableObject;
import google.registry.model.annotations.DeleteAfterMigration;
@@ -38,6 +36,7 @@ import google.registry.model.poll.PollMessage;
import google.registry.model.registrar.Registrar;
import google.registry.model.replay.SqlEntity;
import google.registry.model.reporting.HistoryEntry;
import google.registry.util.DiffUtils;
import java.lang.reflect.Field;
import java.math.BigInteger;
import java.util.HashMap;
@@ -52,10 +51,9 @@ import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TupleTag;
/** Helpers for use by {@link ValidateSqlPipeline}. */
/** Helpers for use by {@link ValidateDatabasePipeline}. */
@DeleteAfterMigration
final class ValidateSqlUtils {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private ValidateSqlUtils() {}
@@ -67,7 +65,7 @@ final class ValidateSqlUtils {
* Query template for finding the median value of the {@code history_revision_id} column in one of
* the History tables.
*
* <p>The {@link ValidateSqlPipeline} uses this query to parallelize the query to some of the
* <p>The {@link ValidateDatabasePipeline} uses this query to parallelize the query to some of the
* history tables. Although the {@code repo_id} column is the leading column in the primary keys
* of these tables, in practice and with production data, division by {@code history_revision_id}
* works slightly faster for unknown reasons.
@@ -99,13 +97,12 @@ final class ValidateSqlUtils {
return new TupleTag<SqlEntity>(actualType.getSimpleName()) {};
}
static class CompareSqlEntity extends DoFn<KV<String, Iterable<SqlEntity>>, Void> {
static class CompareSqlEntity extends DoFn<KV<String, Iterable<SqlEntity>>, String> {
private final HashMap<String, Counter> totalCounters = new HashMap<>();
private final HashMap<String, Counter> missingCounters = new HashMap<>();
private final HashMap<String, Counter> unequalCounters = new HashMap<>();
private final HashMap<String, Counter> badEntityCounters = new HashMap<>();
private volatile boolean logPrinted = false;
private final HashMap<String, Counter> duplicateEntityCounters = new HashMap<>();
private String getCounterKey(Class<?> clazz) {
return PollMessage.class.isAssignableFrom(clazz) ? "PollMessage" : clazz.getSimpleName();
@@ -120,80 +117,86 @@ final class ValidateSqlUtils {
counterKey, Metrics.counter("CompareDB", "Missing In One DB: " + counterKey));
unequalCounters.put(counterKey, Metrics.counter("CompareDB", "Not Equal:" + counterKey));
badEntityCounters.put(counterKey, Metrics.counter("CompareDB", "Bad Entities:" + counterKey));
duplicateEntityCounters.put(
counterKey, Metrics.counter("CompareDB", "Duplicate Entities:" + counterKey));
}
String duplicateEntityLog(String key, ImmutableList<SqlEntity> entities) {
return String.format("%s: %d entities.", key, entities.size());
}
String unmatchedEntityLog(String key, SqlEntity entry) {
// For a PollMessage only found in Datastore, the key is not enough to query for it.
return String.format("Missing in one DB:\n%s", entry instanceof PollMessage ? entry : key);
}
/**
* A rudimentary debugging helper that prints the first pair of unequal entities in each worker.
* This will be removed when we start exporting such entities to GCS.
*/
void logDiff(String key, Object entry0, Object entry1) {
if (logPrinted) {
return;
}
logPrinted = true;
String unEqualEntityLog(String key, SqlEntity entry0, SqlEntity entry1) {
Map<String, Object> fields0 = ((ImmutableObject) entry0).toDiffableFieldMap();
Map<String, Object> fields1 = ((ImmutableObject) entry1).toDiffableFieldMap();
StringBuilder sb = new StringBuilder();
fields0.forEach(
(field, value) -> {
if (fields1.containsKey(field)) {
if (!Objects.equals(value, fields1.get(field))) {
sb.append(field + " not match: " + value + " -> " + fields1.get(field) + "\n");
}
} else {
sb.append(field + "Not found in entity 2\n");
}
});
fields1.forEach(
(field, value) -> {
if (!fields0.containsKey(field)) {
sb.append(field + "Not found in entity 1\n");
}
});
logger.atWarning().log(key + " " + sb.toString());
return key + " " + DiffUtils.prettyPrintEntityDeepDiff(fields0, fields1);
}
String badEntitiesLog(String key, SqlEntity entry0, SqlEntity entry1) {
Map<String, Object> fields0 = ((ImmutableObject) entry0).toDiffableFieldMap();
Map<String, Object> fields1 = ((ImmutableObject) entry1).toDiffableFieldMap();
return String.format(
"Failed to parse one or both entities for key %s:\n%s\n",
key, DiffUtils.prettyPrintEntityDeepDiff(fields0, fields1));
}
@ProcessElement
public void processElement(@Element KV<String, Iterable<SqlEntity>> kv) {
public void processElement(
@Element KV<String, Iterable<SqlEntity>> kv, OutputReceiver<String> out) {
ImmutableList<SqlEntity> entities = ImmutableList.copyOf(kv.getValue());
verify(!entities.isEmpty(), "Can't happen: no value for key %s.", kv.getKey());
verify(entities.size() <= 2, "Unexpected duplicates for key %s", kv.getKey());
String counterKey = getCounterKey(entities.get(0).getClass());
ensureCounterExists(counterKey);
totalCounters.get(counterKey).inc();
if (entities.size() > 2) {
// Duplicates may happen with Cursors if imported across projects. Their key in Datastore,
// the id field, encodes the project name and is not fixed by the importing job.
duplicateEntityCounters.get(counterKey).inc();
out.output(duplicateEntityLog(kv.getKey(), entities) + "\n");
return;
}
if (entities.size() == 1) {
if (isSpecialCaseProberEntity(entities.get(0))) {
return;
}
missingCounters.get(counterKey).inc();
// Temporary debugging help. See logDiff() above.
if (!logPrinted) {
logPrinted = true;
logger.atWarning().log("Unexpected single entity: %s", kv.getKey());
}
out.output(unmatchedEntityLog(kv.getKey(), entities.get(0)) + "\n");
return;
}
SqlEntity entity0 = entities.get(0);
SqlEntity entity1 = entities.get(1);
if (isSpecialCaseProberEntity(entity0) && isSpecialCaseProberEntity(entity1)) {
// Ignore prober-related data: their deletions are not propagated from Datastore to SQL.
// When code reaches here, in most cases it involves one soft deleted entity in Datastore
// and an SQL entity with its pre-deletion status.
return;
}
SqlEntity entity0;
SqlEntity entity1;
try {
entity0 = normalizeEntity(entities.get(0));
entity1 = normalizeEntity(entities.get(1));
entity0 = normalizeEntity(entity0);
entity1 = normalizeEntity(entity1);
} catch (Exception e) {
// Temporary debugging help. See logDiff() above.
if (!logPrinted) {
logPrinted = true;
badEntityCounters.get(counterKey).inc();
}
badEntityCounters.get(counterKey).inc();
out.output(badEntitiesLog(kv.getKey(), entity0, entity1));
return;
}
if (!Objects.equals(entity0, entity1)) {
unequalCounters.get(counterKey).inc();
logDiff(kv.getKey(), entities.get(0), entities.get(1));
out.output(unEqualEntityLog(kv.getKey(), entities.get(0), entities.get(1)));
}
}
}
@@ -218,15 +221,6 @@ final class ValidateSqlUtils {
*/
static SqlEntity normalizeEppResource(SqlEntity eppResource) {
try {
if (isSpecialCaseProberEntity(eppResource)) {
// Clearing some timestamps. See isSpecialCaseProberEntity() for reasons.
Field lastUpdateTime = BackupGroupRoot.class.getDeclaredField("updateTimestamp");
lastUpdateTime.setAccessible(true);
lastUpdateTime.set(eppResource, null);
Field deletionTime = EppResource.class.getDeclaredField("deletionTime");
deletionTime.setAccessible(true);
deletionTime.set(eppResource, null);
}
Field authField =
eppResource instanceof DomainContent
? DomainContent.class.getDeclaredField("authInfo")

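A note on the CompareSqlEntity change above: widening the DoFn's output type from Void to String is what lets discrepancy reports be routed to a sink (e.g. a text file in the diff-output GCS bucket) instead of only being logged. A minimal, self-contained sketch of that Beam pattern, with hypothetical names rather than the pipeline's actual code:

import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;

/** Emits one report line per key that lacks exactly one entity from each database. */
class ReportMismatchesFn extends DoFn<KV<String, Iterable<String>>, String> {
  @ProcessElement
  public void processElement(
      @Element KV<String, Iterable<String>> kv, OutputReceiver<String> out) {
    int count = 0;
    for (String unused : kv.getValue()) {
      count++;
    }
    if (count != 2) {
      // The resulting PCollection<String> can then be written out, e.g. with TextIO.
      out.output(kv.getKey() + ": " + count + " entities.");
    }
  }
}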
View File

@@ -248,6 +248,9 @@ public class RdeIO {
// Now that we're done, output the key and revision to roll the cursor forward.
if (key.manual()) {
logger.atInfo().log("Manual operation; not advancing cursor or enqueuing upload task.");
// Temporary measure to run RDE in Beam in parallel with the daily MapReduce-based RDE runs.
} else if (tm().isOfy()) {
logger.atInfo().log("Ofy is primary TM; not advancing cursor or enqueuing upload task.");
} else {
outputReceiver.output(KV.of(key, revision));
}
@@ -294,10 +297,14 @@ public class RdeIO {
logger.atInfo().log(
"Rolled forward %s on %s cursor to %s.", key.cursor(), key.tld(), newPosition);
RdeRevision.saveRevision(key.tld(), key.watermark(), key.mode(), revision);
// Enqueueing a task is a side effect that is not undone if the transaction rolls
// back. So this may result in multiple copies of the same task being processed.
// This is fine because the RdeUploadAction is guarded by a lock and tracks progress
// by cursor. The BrdaCopyAction writes a file to GCS, which is an atomic action.
if (key.mode() == RdeMode.FULL) {
cloudTasksUtils.enqueue(
RDE_UPLOAD_QUEUE,
CloudTasksUtils.createPostTask(
cloudTasksUtils.createPostTask(
RdeUploadAction.PATH,
Service.BACKEND.getServiceId(),
ImmutableMultimap.of(
@@ -308,7 +315,7 @@ public class RdeIO {
} else {
cloudTasksUtils.enqueue(
BRDA_QUEUE,
CloudTasksUtils.createPostTask(
cloudTasksUtils.createPostTask(
BrdaCopyAction.PATH,
Service.BACKEND.getServiceId(),
ImmutableMultimap.of(

View File

@@ -21,6 +21,7 @@ import dagger.Module;
import dagger.Provides;
import google.registry.config.CredentialModule.DefaultCredential;
import google.registry.config.RegistryConfig.Config;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import google.registry.util.CloudTasksUtils.GcpCloudTasksClient;
import google.registry.util.CloudTasksUtils.SerializableCloudTasksClient;
@@ -46,8 +47,9 @@ public abstract class CloudTasksUtilsModule {
@Config("projectId") String projectId,
@Config("locationId") String locationId,
SerializableCloudTasksClient client,
Retrier retrier) {
return new CloudTasksUtils(retrier, projectId, locationId, client);
Retrier retrier,
Clock clock) {
return new CloudTasksUtils(retrier, clock, projectId, locationId, client);
}
// Provides a supplier instead of using a Dagger @Provider because the latter is not serializable.
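For illustration, a minimal sketch of why a Supplier is bound here (hypothetical names, not this module's actual code): Dagger's Provider interface is not Serializable, so an object that must ship to Beam workers can instead be provided as a lambda cast to a serializable intersection type.

import java.io.Serializable;
import java.util.function.Supplier;

class SerializableSupplierDemo {
  // A lambda cast to an intersection type yields a Supplier that is also
  // Serializable, unlike a javax.inject.Provider.
  static Supplier<String> projectIdSupplier(String projectId) {
    return (Supplier<String> & Serializable) () -> projectId;
  }

  public static void main(String[] args) {
    System.out.println(projectIdSupplier("my-project").get()); // my-project
  }
}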

View File

@@ -20,7 +20,6 @@ import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import java.util.Optional;
import javax.inject.Inject;
@@ -35,7 +34,6 @@ public final class CommitLogFanoutAction implements Runnable {
public static final String BUCKET_PARAM = "bucket";
@Inject Clock clock;
@Inject CloudTasksUtils cloudTasksUtils;
@Inject @Parameter("endpoint") String endpoint;
@@ -43,18 +41,15 @@ public final class CommitLogFanoutAction implements Runnable {
@Inject @Parameter("jitterSeconds") Optional<Integer> jitterSeconds;
@Inject CommitLogFanoutAction() {}
@Override
public void run() {
for (int bucketId : CommitLogBucket.getBucketIds()) {
cloudTasksUtils.enqueue(
queue,
CloudTasksUtils.createPostTask(
cloudTasksUtils.createPostTaskWithJitter(
endpoint,
Service.BACKEND.toString(),
ImmutableMultimap.of(BUCKET_PARAM, Integer.toString(bucketId)),
clock,
jitterSeconds));
}
}

View File

@@ -45,7 +45,6 @@ import google.registry.request.ParameterMap;
import google.registry.request.RequestParameters;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import java.util.Optional;
import java.util.stream.Stream;
@@ -98,7 +97,6 @@ public final class TldFanoutAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
@Inject Clock clock;
@Inject CloudTasksUtils cloudTasksUtils;
@Inject Response response;
@Inject @Parameter(ENDPOINT_PARAM) String endpoint;
@@ -159,7 +157,7 @@ public final class TldFanoutAction implements Runnable {
params = ArrayListMultimap.create(params);
params.put(RequestParameters.PARAM_TLD, tld);
}
return CloudTasksUtils.createPostTask(
endpoint, Service.BACKEND.toString(), params, clock, jitterSeconds);
return cloudTasksUtils.createPostTaskWithJitter(
endpoint, Service.BACKEND.toString(), params, jitterSeconds);
}
}

View File

@@ -422,6 +422,12 @@ have been in the database for a certain period of time. -->
<url-pattern>/_dr/task/createSyntheticHistoryEntries</url-pattern>
</servlet-mapping>
<!-- Action to sync Datastore to a snapshot of the primary SQL database. -->
<servlet-mapping>
<servlet-name>backend-servlet</servlet-name>
<url-pattern>/_dr/task/syncDatastoreToSqlSnapshot</url-pattern>
</servlet-mapping>
<!-- Security config -->
<security-constraint>
<web-resource-collection>

View File

@@ -36,6 +36,19 @@
<target>backend</target>
</cron>
<cron>
<url>/_dr/task/rdeStaging?beam=true</url>
<description>
This job generates a full RDE escrow deposit as a single gigantic XML
document using the Beam pipeline regardless of the current TM
configuration and streams it to cloud storage. It does not trigger the
subsequent upload tasks and is meant to run in parallel with the main cron
job in order to compare the results from both runs.
</description>
<schedule>every 8 hours from 00:07 to 20:00</schedule>
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/cron/fanout?queue=rde-upload&endpoint=/_dr/task/rdeUpload&forEachRealTld]]></url>
<description>
@@ -240,16 +253,6 @@
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/cron/fanout?queue=retryable-cron-tasks&endpoint=/_dr/task/deleteProberData&runInEmpty]]></url>
<description>
This job clears out data from probers and runs once a week.
</description>
<schedule>every monday 14:00</schedule>
<timezone>UTC</timezone>
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/cron/fanout?queue=retryable-cron-tasks&endpoint=/_dr/task/exportReservedTerms&forEachRealTld]]></url>
<description>

View File

@@ -168,15 +168,6 @@
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/task/sendExpiringCertificateNotificationEmail]]></url>
<description>
This job runs an action that sends emails to partners if their certificates are expiring soon.
</description>
<schedule>every day 04:30</schedule>
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/cron/fanout?queue=export-snapshot&endpoint=/_dr/task/backupDatastore&runInEmpty]]></url>
<description>
@@ -191,16 +182,6 @@
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/cron/fanout?queue=retryable-cron-tasks&endpoint=/_dr/task/deleteProberData&runInEmpty]]></url>
<description>
This job clears out data from probers and runs once a week.
</description>
<schedule>every monday 14:00</schedule>
<timezone>UTC</timezone>
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/cron/fanout?queue=retryable-cron-tasks&endpoint=/_dr/task/exportReservedTerms&forEachRealTld]]></url>
<description>

View File

@@ -14,8 +14,6 @@
package google.registry.export.sheet;
import static com.google.appengine.api.taskqueue.QueueFactory.getQueue;
import static com.google.appengine.api.taskqueue.TaskOptions.Builder.withUrl;
import static com.google.common.net.MediaType.PLAIN_TEXT_UTF_8;
import static google.registry.request.Action.Method.POST;
import static javax.servlet.http.HttpServletResponse.SC_BAD_REQUEST;
@@ -23,7 +21,6 @@ import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_NO_CONTENT;
import static javax.servlet.http.HttpServletResponse.SC_OK;
import com.google.appengine.api.taskqueue.TaskOptions.Method;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryConfig.Config;
import google.registry.request.Action;
@@ -100,7 +97,7 @@ public class SyncRegistrarsSheetAction implements Runnable {
}
public static final String PATH = "/_dr/task/syncRegistrarsSheet";
private static final String QUEUE = "sheet";
public static final String QUEUE = "sheet";
private static final String LOCK_NAME = "Synchronize registrars sheet";
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
@@ -144,11 +141,4 @@ public class SyncRegistrarsSheetAction implements Runnable {
Result.LOCKED.send(response, null);
}
}
/**
* Enqueues a sync registrar sheet task targeting the App Engine service specified by hostname.
*/
public static void enqueueRegistrarSheetSync(String hostname) {
getQueue(QUEUE).add(withUrl(PATH).method(Method.GET).header("Host", hostname));
}
}

View File

@@ -169,6 +169,7 @@ import org.joda.time.Duration;
* @error {@link DomainFlowUtils.FeesMismatchException}
* @error {@link DomainFlowUtils.FeesRequiredDuringEarlyAccessProgramException}
* @error {@link DomainFlowUtils.FeesRequiredForPremiumNameException}
* @error {@link DomainFlowUtils.InvalidDsRecordException}
* @error {@link DomainFlowUtils.InvalidIdnDomainLabelException}
* @error {@link DomainFlowUtils.InvalidPunycodeException}
* @error {@link DomainFlowUtils.InvalidTcnIdChecksumException}

View File

@@ -129,6 +129,7 @@ import google.registry.model.tld.label.ReservedList;
import google.registry.model.tmch.ClaimsListDao;
import google.registry.persistence.VKey;
import google.registry.tldconfig.idn.IdnLabelValidator;
import google.registry.tools.DigestType;
import google.registry.util.Idn;
import java.math.BigDecimal;
import java.util.Collection;
@@ -144,6 +145,7 @@ import org.joda.money.CurrencyUnit;
import org.joda.money.Money;
import org.joda.time.DateTime;
import org.joda.time.Duration;
import org.xbill.DNS.DNSSEC.Algorithm;
/** Static utility functions for domain flows. */
public class DomainFlowUtils {
@@ -293,13 +295,59 @@ public class DomainFlowUtils {
/** Check that the DS data that will be set on a domain is valid. */
static void validateDsData(Set<DelegationSignerData> dsData) throws EppException {
if (dsData != null && dsData.size() > MAX_DS_RECORDS_PER_DOMAIN) {
throw new TooManyDsRecordsException(
String.format(
"A maximum of %s DS records are allowed per domain.", MAX_DS_RECORDS_PER_DOMAIN));
if (dsData != null) {
if (dsData.size() > MAX_DS_RECORDS_PER_DOMAIN) {
throw new TooManyDsRecordsException(
String.format(
"A maximum of %s DS records are allowed per domain.", MAX_DS_RECORDS_PER_DOMAIN));
}
ImmutableList<DelegationSignerData> invalidAlgorithms =
dsData.stream()
.filter(ds -> !validateAlgorithm(ds.getAlgorithm()))
.collect(toImmutableList());
if (!invalidAlgorithms.isEmpty()) {
throw new InvalidDsRecordException(
String.format(
"Domain contains DS record(s) with an invalid algorithm wire value: %s",
invalidAlgorithms));
}
ImmutableList<DelegationSignerData> invalidDigestTypes =
dsData.stream()
.filter(ds -> !DigestType.fromWireValue(ds.getDigestType()).isPresent())
.collect(toImmutableList());
if (!invalidDigestTypes.isEmpty()) {
throw new InvalidDsRecordException(
String.format(
"Domain contains DS record(s) with an invalid digest type: %s",
invalidDigestTypes));
}
ImmutableList<DelegationSignerData> digestsWithInvalidDigestLength =
dsData.stream()
.filter(
ds ->
DigestType.fromWireValue(ds.getDigestType()).isPresent()
&& (ds.getDigest().length
!= DigestType.fromWireValue(ds.getDigestType()).get().getBytes()))
.collect(toImmutableList());
if (!digestsWithInvalidDigestLength.isEmpty()) {
throw new InvalidDsRecordException(
String.format(
"Domain contains DS record(s) with an invalid digest length: %s",
digestsWithInvalidDigestLength));
}
}
}
public static boolean validateAlgorithm(int alg) {
if (alg > 255 || alg < 0) {
return false;
}
// Algorithms that are reserved or unassigned will just return a string representation of their
// integer wire value.
String algorithm = Algorithm.string(alg);
return !algorithm.equals(Integer.toString(alg));
}
/** We only allow specifying years in a period. */
static Period verifyUnitIsYears(Period period) throws EppException {
if (!checkNotNull(period).getUnit().equals(Period.Unit.YEARS)) {
@@ -1077,7 +1125,16 @@ public class DomainFlowUtils {
// Only cancel fields which are cancelable
if (cancelableFields.contains(record.getReportField())) {
int cancelledAmount = -1 * record.getReportAmount();
recordsBuilder.add(record.asBuilder().setReportAmount(cancelledAmount).build());
// NB: It's necessary to create a new DomainTransactionRecord from scratch so that we
// don't retain the ID of the previous record to cancel. If we keep the ID, Hibernate
// will remove that record from the DB entirely as the record will be re-parented on
// this DomainHistory being created now.
recordsBuilder.add(
DomainTransactionRecord.create(
record.getTld(),
record.getReportingTime(),
record.getReportField(),
cancelledAmount));
}
}
}
@@ -1217,6 +1274,13 @@ public class DomainFlowUtils {
}
}
/** Domain has an invalid DS record. */
static class InvalidDsRecordException extends ParameterValuePolicyErrorException {
public InvalidDsRecordException(String message) {
super(message);
}
}
/** Domain name is under tld which doesn't exist. */
static class TldDoesNotExistException extends ParameterValueRangeErrorException {
public TldDoesNotExistException(String tld) {

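For context on validateAlgorithm above: dnsjava's Algorithm.string returns the mnemonic for an assigned algorithm and, for reserved or unassigned values, falls back to the decimal string of the wire value, which is exactly what the inequality detects. A small sketch under that assumption (hypothetical class name):

import org.xbill.DNS.DNSSEC.Algorithm;

class DsAlgorithmCheckDemo {
  public static void main(String[] args) {
    // Assigned value: Algorithm.string(8) is "RSASHA256", so the check passes.
    System.out.println(!Algorithm.string(8).equals(Integer.toString(8))); // true
    // Unassigned value: Algorithm.string(200) falls back to "200", so the check fails.
    System.out.println(!Algorithm.string(200).equals(Integer.toString(200))); // false
  }
}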
View File

@@ -114,6 +114,7 @@ import org.joda.time.DateTime;
* @error {@link DomainFlowUtils.EmptySecDnsUpdateException}
* @error {@link DomainFlowUtils.FeesMismatchException}
* @error {@link DomainFlowUtils.FeesRequiredForNonFreeOperationException}
* @error {@link DomainFlowUtils.InvalidDsRecordException}
* @error {@link DomainFlowUtils.LinkedResourcesDoNotExistException}
* @error {@link DomainFlowUtils.LinkedResourceInPendingDeleteProhibitsOperationException}
* @error {@link DomainFlowUtils.MaxSigLifeChangeNotSupportedException}

View File

@@ -14,8 +14,6 @@
package google.registry.loadtest;
import static com.google.appengine.api.taskqueue.QueueConstants.maxTasksPerAdd;
import static com.google.appengine.api.taskqueue.QueueFactory.getQueue;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static com.google.common.collect.Lists.partition;
@@ -24,16 +22,20 @@ import static google.registry.util.ResourceUtils.readResourceUtf8;
import static java.util.Arrays.asList;
import static org.joda.time.DateTimeZone.UTC;
import com.google.appengine.api.taskqueue.TaskOptions;
import com.google.cloud.tasks.v2.Task;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.collect.Iterators;
import com.google.common.flogger.FluentLogger;
import com.google.protobuf.Timestamp;
import google.registry.config.RegistryEnvironment;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.auth.Auth;
import google.registry.security.XsrfTokenManager;
import google.registry.util.TaskQueueUtils;
import google.registry.util.CloudTasksUtils;
import java.time.Instant;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
@@ -62,6 +64,7 @@ public class LoadTestAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static final int NUM_QUEUES = 10;
private static final int MAX_TASKS_PER_LOAD = 100;
private static final int ARBITRARY_VALID_HOST_LENGTH = 40;
private static final int MAX_CONTACT_LENGTH = 13;
private static final int MAX_DOMAIN_LABEL_LENGTH = 63;
@@ -146,7 +149,7 @@ public class LoadTestAction implements Runnable {
@Parameter("hostInfos")
int hostInfosPerSecond;
@Inject TaskQueueUtils taskQueueUtils;
@Inject CloudTasksUtils cloudTasksUtils;
private final String xmlContactCreateTmpl;
private final String xmlContactCreateFail;
@@ -208,7 +211,7 @@ public class LoadTestAction implements Runnable {
ImmutableList<String> contactNames = contactNamesBuilder.build();
ImmutableList<String> hostPrefixes = hostPrefixesBuilder.build();
ImmutableList.Builder<TaskOptions> tasks = new ImmutableList.Builder<>();
ImmutableList.Builder<Task> tasks = new ImmutableList.Builder<>();
for (int offsetSeconds = 0; offsetSeconds < runSeconds; offsetSeconds++) {
DateTime startSecond = initialStartSecond.plusSeconds(offsetSeconds);
// The first "failed" creates might actually succeed if the object doesn't already exist, but
@@ -254,7 +257,7 @@ public class LoadTestAction implements Runnable {
.collect(toImmutableList()),
startSecond));
}
ImmutableList<TaskOptions> taskOptions = tasks.build();
ImmutableList<Task> taskOptions = tasks.build();
enqueue(taskOptions);
logger.atInfo().log("Added %d total load test tasks.", taskOptions.size());
}
@@ -322,28 +325,51 @@ public class LoadTestAction implements Runnable {
return name.toString();
}
private List<TaskOptions> createTasks(List<String> xmls, DateTime start) {
ImmutableList.Builder<TaskOptions> tasks = new ImmutableList.Builder<>();
private List<Task> createTasks(List<String> xmls, DateTime start) {
ImmutableList.Builder<Task> tasks = new ImmutableList.Builder<>();
for (int i = 0; i < xmls.size(); i++) {
// Space tasks evenly across a second.
int offsetMillis = (int) (1000.0 / xmls.size() * i);
Instant scheduleTime =
Instant.ofEpochMilli(start.plusMillis((int) (1000.0 / xmls.size() * i)).getMillis());
tasks.add(
TaskOptions.Builder.withUrl("/_dr/epptool")
.etaMillis(start.getMillis() + offsetMillis)
.header(X_CSRF_TOKEN, xsrfToken)
.param("clientId", registrarId)
.param("superuser", Boolean.FALSE.toString())
.param("dryRun", Boolean.FALSE.toString())
.param("xml", xmls.get(i)));
Task.newBuilder()
.setAppEngineHttpRequest(
cloudTasksUtils
.createPostTask(
"/_dr/epptool",
Service.TOOLS.toString(),
ImmutableMultimap.of(
"clientId",
registrarId,
"superuser",
Boolean.FALSE.toString(),
"dryRun",
Boolean.FALSE.toString(),
"xml",
xmls.get(i)))
.toBuilder()
.getAppEngineHttpRequest()
.toBuilder()
// Instead of adding the X_CSRF_TOKEN to params, it remains as part of the
// headers because of the existing setup for authentication in {@link
// google.registry.request.auth.LegacyAuthenticationMechanism}
.putHeaders(X_CSRF_TOKEN, xsrfToken)
.build())
.setScheduleTime(
Timestamp.newBuilder()
.setSeconds(scheduleTime.getEpochSecond())
.setNanos(scheduleTime.getNano())
.build())
.build());
}
return tasks.build();
}
private void enqueue(List<TaskOptions> tasks) {
List<List<TaskOptions>> chunks = partition(tasks, maxTasksPerAdd());
private void enqueue(List<Task> tasks) {
List<List<Task>> chunks = partition(tasks, MAX_TASKS_PER_LOAD);
// Farm out tasks to multiple queues to work around queue qps quotas.
for (int i = 0; i < chunks.size(); i++) {
taskQueueUtils.enqueue(getQueue("load" + (i % NUM_QUEUES)), chunks.get(i));
cloudTasksUtils.enqueue("load" + (i % NUM_QUEUES), chunks.get(i));
}
}
}

View File

@@ -60,4 +60,25 @@ public abstract class BackupGroupRoot extends ImmutableObject implements UnsafeS
protected void copyUpdateTimestamp(BackupGroupRoot other) {
this.updateTimestamp = PreconditionsUtils.checkArgumentNotNull(other, "other").updateTimestamp;
}
/**
* Resets the {@link #updateTimestamp} to force Hibernate to persist it.
*
* <p>This method is for use in setters in derived builders that do not result in the derived
* object being persisted.
*/
protected void resetUpdateTimestamp() {
this.updateTimestamp = UpdateAutoTimestamp.create(null);
}
/**
* Sets the {@link #updateTimestamp}.
*
* <p>This method is for use in the few places where we need to restore the update timestamp after
* mutating a collection in order to force the new timestamp to be persisted when it ordinarily
* wouldn't.
*/
protected void setUpdateTimestamp(UpdateAutoTimestamp timestamp) {
updateTimestamp = timestamp;
}
}

View File

@@ -21,6 +21,7 @@ import static com.google.common.collect.Sets.union;
import static google.registry.config.RegistryConfig.getEppResourceCachingDuration;
import static google.registry.config.RegistryConfig.getEppResourceMaxCachedEntries;
import static google.registry.persistence.transaction.TransactionManagerFactory.ofyTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.replicaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.util.CollectionUtils.nullToEmpty;
import static google.registry.util.CollectionUtils.nullToEmptyImmutableCopy;
@@ -361,6 +362,16 @@ public abstract class EppResource extends BackupGroupRoot implements Buildable {
return thisCastToDerived();
}
/**
* Set the update timestamp.
*
* <p>This is provided at EppResource since BackupGroupRoot doesn't have a Builder.
*/
public B setUpdateTimestamp(UpdateAutoTimestamp updateTimestamp) {
getInstance().setUpdateTimestamp(updateTimestamp);
return thisCastToDerived();
}
/** Build the resource, nullifying empty strings and sets and setting defaults. */
@Override
public T build() {
@@ -380,13 +391,13 @@ public abstract class EppResource extends BackupGroupRoot implements Buildable {
@Override
public EppResource load(VKey<? extends EppResource> key) {
return tm().doTransactionless(() -> tm().loadByKey(key));
return replicaTm().doTransactionless(() -> replicaTm().loadByKey(key));
}
@Override
public Map<VKey<? extends EppResource>, EppResource> loadAll(
Iterable<? extends VKey<? extends EppResource>> keys) {
return tm().doTransactionless(() -> tm().loadByKeys(keys));
return replicaTm().doTransactionless(() -> replicaTm().loadByKeys(keys));
}
};

View File

@@ -74,6 +74,8 @@ public class BulkQueryEntities {
builder.setGracePeriods(gracePeriods);
builder.setDsData(delegationSignerData);
builder.setNameservers(nsHosts);
// Restore the original update timestamp (this gets cleared when we set nameservers or DS data).
builder.setUpdateTimestamp(domainBaseLite.getUpdateTimestamp());
return builder.build();
}
@@ -100,6 +102,9 @@ public class BulkQueryEntities {
dsDataHistories.stream()
.map(DelegationSignerData::create)
.collect(toImmutableSet()))
// Restore the original update timestamp (this gets cleared when we set nameservers or
// DS data).
.setUpdateTimestamp(domainHistoryLite.domainContent.getUpdateTimestamp())
.build();
builder.setDomain(newDomainContent);
}

View File

@@ -64,7 +64,7 @@ public class ContactHistory extends HistoryEntry implements SqlEntity, UnsafeSer
// Store ContactBase instead of ContactResource so we don't pick up its @Id
// Nullable for the sake of pre-Registry-3.0 history objects
@Nullable ContactBase contactBase;
@DoNotCompare @Nullable ContactBase contactBase;
@Id
@Access(AccessType.PROPERTY)

View File

@@ -779,6 +779,10 @@ public class DomainContent extends EppResource
*/
void setContactFields(Set<DesignatedContact> contacts, boolean includeRegistrant) {
// Set the individual contact fields.
billingContact = techContact = adminContact = null;
if (includeRegistrant) {
registrantContact = null;
}
for (DesignatedContact contact : contacts) {
switch (contact.getType()) {
case BILLING:
@@ -895,6 +899,7 @@ public class DomainContent extends EppResource
public B setDsData(ImmutableSet<DelegationSignerData> dsData) {
getInstance().dsData = dsData;
getInstance().resetUpdateTimestamp();
return thisCastToDerived();
}
@@ -918,11 +923,13 @@ public class DomainContent extends EppResource
public B setNameservers(VKey<HostResource> nameserver) {
getInstance().nsHosts = ImmutableSet.of(nameserver);
getInstance().resetUpdateTimestamp();
return thisCastToDerived();
}
public B setNameservers(ImmutableSet<VKey<HostResource>> nameservers) {
getInstance().nsHosts = forceEmptyToNull(nameservers);
getInstance().resetUpdateTimestamp();
return thisCastToDerived();
}
@@ -1032,17 +1039,20 @@ public class DomainContent extends EppResource
public B setGracePeriods(ImmutableSet<GracePeriod> gracePeriods) {
getInstance().gracePeriods = gracePeriods;
getInstance().resetUpdateTimestamp();
return thisCastToDerived();
}
public B addGracePeriod(GracePeriod gracePeriod) {
getInstance().gracePeriods = union(getInstance().getGracePeriods(), gracePeriod);
getInstance().resetUpdateTimestamp();
return thisCastToDerived();
}
public B removeGracePeriod(GracePeriod gracePeriod) {
getInstance().gracePeriods =
CollectionUtils.difference(getInstance().getGracePeriods(), gracePeriod);
getInstance().resetUpdateTimestamp();
return thisCastToDerived();
}

View File

@@ -86,7 +86,7 @@ public class DomainHistory extends HistoryEntry implements SqlEntity {
// Store DomainContent instead of DomainBase so we don't pick up its @Id
// Nullable for the sake of pre-Registry-3.0 history objects
@Nullable DomainContent domainContent;
@DoNotCompare @Nullable DomainContent domainContent;
@Id
@Access(AccessType.PROPERTY)
@@ -105,6 +105,7 @@ public class DomainHistory extends HistoryEntry implements SqlEntity {
// We could have reused domainContent.nsHosts here, but Hibernate throws a weird exception after
// we change to use a composite primary key.
// TODO(b/166776754): Investigate if we can reuse domainContent.nsHosts for storing host keys.
@DoNotCompare
@ElementCollection
@JoinTable(
name = "DomainHistoryHost",
@@ -118,6 +119,7 @@ public class DomainHistory extends HistoryEntry implements SqlEntity {
@Column(name = "host_repo_id")
Set<VKey<HostResource>> nsHosts;
@DoNotCompare
@OneToMany(
cascade = {CascadeType.ALL},
fetch = FetchType.EAGER,
@@ -137,6 +139,7 @@ public class DomainHistory extends HistoryEntry implements SqlEntity {
// HashSet rather than ImmutableSet so that Hibernate can fill them out lazily on request
Set<DomainDsDataHistory> dsDataHistories = new HashSet<>();
@DoNotCompare
@OneToMany(
cascade = {CascadeType.ALL},
fetch = FetchType.EAGER,

View File

@@ -66,7 +66,7 @@ public class HostHistory extends HistoryEntry implements SqlEntity, UnsafeSerial
// Store HostBase instead of HostResource so we don't pick up its @Id
// Nullable for the sake of pre-Registry-3.0 history objects
@Nullable HostBase hostBase;
@DoNotCompare @Nullable HostBase hostBase;
@Id
@Access(AccessType.PROPERTY)

View File

@@ -34,6 +34,20 @@ import javax.persistence.AccessType;
@ReportedOn
@Entity
@javax.persistence.Entity(name = "Host")
@javax.persistence.Table(
name = "Host",
/**
* A gin index defined on the inet_addresses field ({@link HostBase#inetAddresses}) cannot be
* declared here because JPA/Hibernate does not support index type specification. As a result,
* the hibernate-generated schema (which is for reference only) does not have this index.
*
* <p>There are Hibernate-specific solutions for adding this index to Hibernate's domain model.
* We could either declare the index in hibernate.cfg.xml or add it to the {@link
* org.hibernate.cfg.Configuration} instance for {@link SessionFactory} instantiation (which
* would prevent us from using JPA standard bootstrapping). For now, there is no obvious benefit
* to doing either.
*/
indexes = {@javax.persistence.Index(columnList = "hostName")})
@ExternalMessagingName("host")
@WithStringVKey
@Access(AccessType.FIELD) // otherwise it'll use the default if the repoId (property)

View File

@@ -21,6 +21,7 @@ import static google.registry.config.RegistryConfig.getEppResourceCachingDuratio
import static google.registry.config.RegistryConfig.getEppResourceMaxCachedEntries;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.replicaJpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.util.CollectionUtils.entriesToImmutableMap;
import static google.registry.util.TypeUtils.instantiate;
@@ -51,6 +52,7 @@ import google.registry.model.host.HostResource;
import google.registry.model.replay.DatastoreOnlyEntity;
import google.registry.persistence.VKey;
import google.registry.persistence.transaction.CriteriaQueryBuilder;
import google.registry.persistence.transaction.JpaTransactionManager;
import google.registry.util.NonFinalForTesting;
import java.util.Collection;
import java.util.Comparator;
@@ -198,7 +200,7 @@ public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroup
*/
public static <E extends EppResource> ImmutableMap<String, ForeignKeyIndex<E>> load(
Class<E> clazz, Collection<String> foreignKeys, final DateTime now) {
return loadIndexesFromStore(clazz, foreignKeys, true).entrySet().stream()
return loadIndexesFromStore(clazz, foreignKeys, true, false).entrySet().stream()
.filter(e -> now.isBefore(e.getValue().getDeletionTime()))
.collect(entriesToImmutableMap());
}
@@ -217,7 +219,10 @@ public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroup
*/
private static <E extends EppResource>
ImmutableMap<String, ForeignKeyIndex<E>> loadIndexesFromStore(
Class<E> clazz, Collection<String> foreignKeys, boolean inTransaction) {
Class<E> clazz,
Collection<String> foreignKeys,
boolean inTransaction,
boolean useReplicaJpaTm) {
if (tm().isOfy()) {
Class<ForeignKeyIndex<E>> fkiClass = mapToFkiClass(clazz);
return ImmutableMap.copyOf(
@@ -226,17 +231,18 @@ public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroup
: tm().doTransactionless(() -> auditedOfy().load().type(fkiClass).ids(foreignKeys)));
} else {
String property = RESOURCE_CLASS_TO_FKI_PROPERTY.get(clazz);
JpaTransactionManager jpaTmToUse = useReplicaJpaTm ? replicaJpaTm() : jpaTm();
ImmutableList<ForeignKeyIndex<E>> indexes =
tm().transact(
() ->
jpaTm()
.criteriaQuery(
CriteriaQueryBuilder.create(clazz)
.whereFieldIsIn(property, foreignKeys)
.build())
.getResultStream()
.map(e -> ForeignKeyIndex.create(e, e.getDeletionTime()))
.collect(toImmutableList()));
jpaTmToUse.transact(
() ->
jpaTmToUse
.criteriaQuery(
CriteriaQueryBuilder.create(clazz)
.whereFieldIsIn(property, foreignKeys)
.build())
.getResultStream()
.map(e -> ForeignKeyIndex.create(e, e.getDeletionTime()))
.collect(toImmutableList()));
// We need to find and return the entities with the maximum deletionTime for each foreign key.
return Multimaps.index(indexes, ForeignKeyIndex::getForeignKey).asMap().entrySet().stream()
.map(
@@ -260,7 +266,8 @@ public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroup
loadIndexesFromStore(
RESOURCE_CLASS_TO_FKI_CLASS.inverse().get(key.getKind()),
ImmutableSet.of(foreignKey),
false)
false,
true)
.get(foreignKey));
}
@@ -276,7 +283,7 @@ public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroup
Streams.stream(keys).map(v -> v.getSqlKey().toString()).collect(toImmutableSet());
ImmutableSet<VKey<ForeignKeyIndex<?>>> typedKeys = ImmutableSet.copyOf(keys);
ImmutableMap<String, ? extends ForeignKeyIndex<? extends EppResource>> existingFkis =
loadIndexesFromStore(resourceClass, foreignKeys, false);
loadIndexesFromStore(resourceClass, foreignKeys, false, true);
// ofy omits keys that don't have values in Datastore, so re-add them
// here with Optional.empty() values.
return Maps.asMap(
@@ -336,7 +343,7 @@ public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroup
// Safe to cast VKey<FKI<E>> to VKey<FKI<?>>
@SuppressWarnings("unchecked")
ImmutableList<VKey<ForeignKeyIndex<?>>> fkiVKeys =
Streams.stream(foreignKeys)
foreignKeys.stream()
.map(fk -> (VKey<ForeignKeyIndex<?>>) VKey.create(fkiClass, fk))
.collect(toImmutableList());
try {

View File

@@ -41,7 +41,7 @@ import org.joda.time.DateTime;
/** Wrapper for {@link Supplier} that associates a time with each attempt. */
@DeleteAfterMigration
class CommitLoggedWork<R> implements Runnable {
public class CommitLoggedWork<R> implements Runnable {
private final Supplier<R> work;
private final Clock clock;

View File

@@ -46,6 +46,7 @@ import static google.registry.util.X509Utils.loadCertificate;
import static java.util.Comparator.comparing;
import static java.util.function.Predicate.isEqual;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Strings;
import com.google.common.base.Supplier;
import com.google.common.collect.ImmutableList;
@@ -991,6 +992,16 @@ public class Registrar extends ImmutableObject
return this;
}
/**
* This lets tests set the update timestamp in cases where setting fields resets the timestamp
* and breaks the verification that an object has not been updated since it was copied.
*/
@VisibleForTesting
public Builder setLastUpdateTime(DateTime timestamp) {
getInstance().lastUpdateTime = UpdateAutoTimestamp.create(timestamp);
return this;
}
/** Build the registrar, nullifying empty fields. */
@Override
public Registrar build() {

View File

@@ -59,6 +59,10 @@ public class ReplicateToDatastoreAction implements Runnable {
public static final String PATH = "/_dr/cron/replicateToDatastore";
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
/** Name of the lock that ensures sequential execution of replays. */
public static final String REPLICATE_TO_DATASTORE_LOCK_NAME =
ReplicateToDatastoreAction.class.getSimpleName();
/**
* Number of transactions to fetch from SQL. The rationale for 200 is that we're processing these
* every minute and our production instance currently does about 2 mutations per second, so this
@@ -66,7 +70,7 @@ public class ReplicateToDatastoreAction implements Runnable {
*/
public static final int BATCH_SIZE = 200;
private static final Duration LEASE_LENGTH = standardHours(1);
public static final Duration REPLICATE_TO_DATASTORE_LOCK_LEASE_LENGTH = standardHours(1);
private final Clock clock;
private final RequestStatusChecker requestStatusChecker;
@@ -81,21 +85,26 @@ public class ReplicateToDatastoreAction implements Runnable {
}
@VisibleForTesting
public List<TransactionEntity> getTransactionBatch() {
public List<TransactionEntity> getTransactionBatchAtSnapshot() {
return getTransactionBatchAtSnapshot(Optional.empty());
}
static List<TransactionEntity> getTransactionBatchAtSnapshot(Optional<String> snapshotId) {
// Get the next batch of transactions that we haven't replicated.
LastSqlTransaction lastSqlTxnBeforeBatch = ofyTm().transact(LastSqlTransaction::load);
try {
return jpaTm()
.transactWithoutBackup(
() ->
jpaTm()
.query(
"SELECT txn FROM TransactionEntity txn WHERE id >"
+ " :lastId ORDER BY id",
TransactionEntity.class)
.setParameter("lastId", lastSqlTxnBeforeBatch.getTransactionId())
.setMaxResults(BATCH_SIZE)
.getResultList());
() -> {
snapshotId.ifPresent(jpaTm()::setDatabaseSnapshot);
return jpaTm()
.query(
"SELECT txn FROM TransactionEntity txn WHERE id >" + " :lastId ORDER BY id",
TransactionEntity.class)
.setParameter("lastId", lastSqlTxnBeforeBatch.getTransactionId())
.setMaxResults(BATCH_SIZE)
.getResultList();
});
} catch (NoResultException e) {
return ImmutableList.of();
}
@@ -108,7 +117,7 @@ public class ReplicateToDatastoreAction implements Runnable {
* <p>Throws an exception if a fatal error occurred and the batch should be aborted
*/
@VisibleForTesting
public void applyTransaction(TransactionEntity txnEntity) {
public static void applyTransaction(TransactionEntity txnEntity) {
logger.atInfo().log("Applying a single transaction Cloud SQL -> Cloud Datastore.");
try (UpdateAutoTimestamp.DisableAutoUpdateResource disabler =
UpdateAutoTimestamp.disableAutoUpdate()) {
@@ -118,15 +127,16 @@ public class ReplicateToDatastoreAction implements Runnable {
// Reload the last transaction id, which could possibly have changed.
LastSqlTransaction lastSqlTxn = LastSqlTransaction.load();
long nextTxnId = lastSqlTxn.getTransactionId() + 1;
if (nextTxnId < txnEntity.getId()) {
// We're missing a transaction. This is bad. Transaction ids are supposed to
// increase monotonically, so we abort rather than applying anything out of
// order.
throw new IllegalStateException(
String.format(
"Missing transaction: last txn id = %s, next available txn = %s",
nextTxnId - 1, txnEntity.getId()));
} else if (nextTxnId > txnEntity.getId()) {
// Skip missing transactions. Missed transactions can happen normally: if a
// transaction gets rolled back, the sequence counter doesn't roll back with it.
while (nextTxnId < txnEntity.getId()) {
logger.atWarning().log(
"Ignoring transaction %s, which does not exist.", nextTxnId);
++nextTxnId;
}
if (nextTxnId > txnEntity.getId()) {
// We've already replayed this transaction. This shouldn't happen, as GAE cron
// is supposed to avoid overruns and this action shouldn't be executed from any
// other context, but it's not harmful as we can just ignore the transaction. Log
@@ -174,7 +184,11 @@ public class ReplicateToDatastoreAction implements Runnable {
}
Optional<Lock> lock =
Lock.acquireSql(
this.getClass().getSimpleName(), null, LEASE_LENGTH, requestStatusChecker, false);
REPLICATE_TO_DATASTORE_LOCK_NAME,
null,
REPLICATE_TO_DATASTORE_LOCK_LEASE_LENGTH,
requestStatusChecker,
false);
if (!lock.isPresent()) {
String message = "Can't acquire ReplicateToDatastoreAction lock, aborting.";
logger.atSevere().log(message);
@@ -203,10 +217,14 @@ public class ReplicateToDatastoreAction implements Runnable {
}
private int replayAllTransactions() {
return replayAllTransactions(Optional.empty());
}
public static int replayAllTransactions(Optional<String> snapshotId) {
int numTransactionsReplayed = 0;
List<TransactionEntity> transactionBatch;
do {
transactionBatch = getTransactionBatch();
transactionBatch = getTransactionBatchAtSnapshot(snapshotId);
for (TransactionEntity transaction : transactionBatch) {
applyTransaction(transaction);
numTransactionsReplayed++;

View File

@@ -324,7 +324,7 @@ public class HistoryEntry extends ImmutableObject
// Note: how we wish to treat this Hibernate setter depends on the current state of the object
// and what's passed in. The key principle is that we wish to maintain the link between parent
// and child objects, meaning that we should keep around whichever of the two sets (the
// parameter vs the class variable and clear/populate that as appropriate.
// parameter vs the class variable) and clear/populate that as appropriate.
//
// If the class variable is a PersistentSet and we overwrite it here, Hibernate will throw
// an exception "A collection with cascade='all-delete-orphan' was no longer referenced by the
@@ -539,7 +539,7 @@ public class HistoryEntry extends ImmutableObject
public B setDomainTransactionRecords(
ImmutableSet<DomainTransactionRecord> domainTransactionRecords) {
getInstance().domainTransactionRecords = domainTransactionRecords;
getInstance().setDomainTransactionRecords(domainTransactionRecords);
return thisCastToDerived();
}
}

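The collection-handling rule described in the comment above is a general Hibernate constraint: a field mapped with cascade = all-delete-orphan must keep pointing at the same managed collection instance. A minimal sketch of the in-place mutation pattern it alludes to, with hypothetical types rather than the HistoryEntry code:

import java.util.HashSet;
import java.util.Set;

class ManagedCollectionDemo {
  private Set<String> records;

  // Mutating the managed collection in place preserves the instance Hibernate
  // tracks, avoiding "collection ... was no longer referenced" errors.
  void setRecords(Set<String> newRecords) {
    if (records == null) {
      records = new HashSet<>(newRecords);
    } else {
      records.clear();
      records.addAll(newRecords);
    }
  }
}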
View File

@@ -14,12 +14,10 @@
package google.registry.model.translators;
import static com.google.common.base.MoreObjects.firstNonNull;
import static google.registry.persistence.transaction.TransactionManagerFactory.ofyTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static org.joda.time.DateTimeZone.UTC;
import google.registry.model.CreateAutoTimestamp;
import google.registry.persistence.transaction.Transaction;
import java.util.Date;
import org.joda.time.DateTime;
@@ -47,13 +45,13 @@ public class CreateAutoTimestampTranslatorFactory
/** Save a timestamp, setting it to the current time if it did not have a previous value. */
@Override
public Date saveValue(CreateAutoTimestamp pojoValue) {
// Don't do this if we're in the course of transaction serialization.
if (Transaction.inSerializationMode()) {
return pojoValue.getTimestamp() == null ? null : pojoValue.getTimestamp().toDate();
}
return firstNonNull(pojoValue.getTimestamp(), ofyTm().getTransactionTime()).toDate();
// Note that we use the current transaction manager -- we may be doing this under JPA when we
// serialize the entity from a Transaction object, and in that case we need to use the JPA
// transaction manager rather than the ofy one.
return (pojoValue.getTimestamp() == null
? tm().getTransactionTime()
: pojoValue.getTimestamp())
.toDate();
}
};
}

View File

@@ -14,6 +14,8 @@
package google.registry.model.translators;
import static com.google.common.base.Preconditions.checkState;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.ofyTm;
import static org.joda.time.DateTimeZone.UTC;
@@ -48,9 +50,16 @@ public class UpdateAutoTimestampTranslatorFactory
@Override
public Date saveValue(UpdateAutoTimestamp pojoValue) {
// Don't do this if we're in the course of transaction serialization.
// If we're in the course of Transaction serialization, we have to use the transaction time
// here and the JPA transaction manager which is what will ultimately be saved during the
// commit.
// Note that this branch doesn't respect "auto update disabled": that state exists
// specifically to address replay, so we add a runtime check to guard against it.
if (Transaction.inSerializationMode()) {
return pojoValue.getTimestamp() == null ? null : pojoValue.getTimestamp().toDate();
checkState(
UpdateAutoTimestamp.autoUpdateEnabled(),
"Auto-update disabled during transaction serialization.");
return jpaTm().getTransactionTime().toDate();
}
return UpdateAutoTimestamp.autoUpdateEnabled()

View File
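
Condensing the branch added above into a standalone sketch (statics are assumed to be imported as in the file; the non-serialization tail of the real method falls outside this hunk, so the final return here is an assumption):

public Date saveValue(UpdateAutoTimestamp pojoValue) {
  if (Transaction.inSerializationMode()) {
    // Replay depends on auto-updated timestamps, so auto-update being
    // disabled here indicates a bug; fail loudly.
    checkState(
        UpdateAutoTimestamp.autoUpdateEnabled(),
        "Auto-update disabled during transaction serialization.");
    // The JPA transaction time is what will ultimately be committed.
    return jpaTm().getTransactionTime().toDate();
  }
  // Assumed: outside serialization, use the Objectify-side transaction time
  // when auto-update is enabled, otherwise keep the stored value.
  return UpdateAutoTimestamp.autoUpdateEnabled()
      ? ofyTm().getTransactionTime().toDate()
      : pojoValue.getTimestamp().toDate();
}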

@@ -43,7 +43,7 @@ import google.registry.rde.JSchModule;
import google.registry.request.Modules.DatastoreServiceModule;
import google.registry.request.Modules.Jackson2Module;
import google.registry.request.Modules.NetHttpTransportModule;
import google.registry.request.Modules.URLFetchServiceModule;
import google.registry.request.Modules.UrlConnectionServiceModule;
import google.registry.request.Modules.UrlFetchTransportModule;
import google.registry.request.Modules.UserServiceModule;
import google.registry.request.auth.AuthModule;
@@ -80,7 +80,7 @@ import javax.inject.Singleton;
ServerTridProviderModule.class,
SheetsServiceModule.class,
StackdriverModule.class,
URLFetchServiceModule.class,
UrlConnectionServiceModule.class,
UrlFetchTransportModule.class,
UserServiceModule.class,
VoidDnsWriterModule.class,

View File

@@ -21,6 +21,7 @@ import google.registry.backup.CommitLogCheckpointAction;
import google.registry.backup.DeleteOldCommitLogsAction;
import google.registry.backup.ExportCommitLogDiffAction;
import google.registry.backup.ReplayCommitLogsToSqlAction;
import google.registry.backup.SyncDatastoreToSqlSnapshotAction;
import google.registry.batch.BatchModule;
import google.registry.batch.DeleteContactsAndHostsAction;
import google.registry.batch.DeleteExpiredDomainsAction;
@@ -199,6 +200,8 @@ interface BackendRequestComponent {
SendExpiringCertificateNotificationEmailAction sendExpiringCertificateNotificationEmailAction();
SyncDatastoreToSqlSnapshotAction syncDatastoreToSqlSnapshotAction();
SyncGroupMembersAction syncGroupMembersAction();
SyncRegistrarsSheetAction syncRegistrarsSheetAction();

View File

@@ -17,6 +17,7 @@ package google.registry.module.frontend;
import com.google.monitoring.metrics.MetricReporter;
import dagger.Component;
import dagger.Lazy;
import google.registry.config.CloudTasksUtilsModule;
import google.registry.config.CredentialModule;
import google.registry.config.RegistryConfig.ConfigModule;
import google.registry.flows.ServerTridProviderModule;
@@ -33,7 +34,6 @@ import google.registry.monitoring.whitebox.StackdriverModule;
import google.registry.privileges.secretmanager.SecretManagerModule;
import google.registry.request.Modules.Jackson2Module;
import google.registry.request.Modules.NetHttpTransportModule;
import google.registry.request.Modules.UrlFetchTransportModule;
import google.registry.request.Modules.UserServiceModule;
import google.registry.request.auth.AuthModule;
import google.registry.ui.ConsoleDebug.ConsoleConfigModule;
@@ -45,10 +45,12 @@ import javax.inject.Singleton;
@Component(
modules = {
AuthModule.class,
CloudTasksUtilsModule.class,
ConfigModule.class,
ConsoleConfigModule.class,
CredentialModule.class,
CustomLogicFactoryModule.class,
CloudTasksUtilsModule.class,
DirectoryModule.class,
DummyKeyringModule.class,
FrontendRequestComponentModule.class,
@@ -62,7 +64,6 @@ import javax.inject.Singleton;
SecretManagerModule.class,
ServerTridProviderModule.class,
StackdriverModule.class,
UrlFetchTransportModule.class,
UserServiceModule.class,
UtilsModule.class
})

View File

@@ -30,10 +30,10 @@ import google.registry.keyring.api.KeyModule;
import google.registry.keyring.kms.KmsModule;
import google.registry.module.pubapi.PubApiRequestComponent.PubApiRequestComponentModule;
import google.registry.monitoring.whitebox.StackdriverModule;
import google.registry.persistence.PersistenceModule;
import google.registry.privileges.secretmanager.SecretManagerModule;
import google.registry.request.Modules.Jackson2Module;
import google.registry.request.Modules.NetHttpTransportModule;
import google.registry.request.Modules.UrlFetchTransportModule;
import google.registry.request.Modules.UserServiceModule;
import google.registry.request.auth.AuthModule;
import google.registry.util.UtilsModule;
@@ -56,11 +56,11 @@ import javax.inject.Singleton;
KeyringModule.class,
KmsModule.class,
NetHttpTransportModule.class,
PersistenceModule.class,
PubApiRequestComponentModule.class,
SecretManagerModule.class,
ServerTridProviderModule.class,
StackdriverModule.class,
UrlFetchTransportModule.class,
UserServiceModule.class,
UtilsModule.class
})

View File

@@ -17,6 +17,7 @@ package google.registry.module.tools;
import com.google.monitoring.metrics.MetricReporter;
import dagger.Component;
import dagger.Lazy;
import google.registry.config.CloudTasksUtilsModule;
import google.registry.config.CredentialModule;
import google.registry.config.RegistryConfig.ConfigModule;
import google.registry.export.DriveModule;
@@ -35,7 +36,6 @@ import google.registry.privileges.secretmanager.SecretManagerModule;
import google.registry.request.Modules.DatastoreServiceModule;
import google.registry.request.Modules.Jackson2Module;
import google.registry.request.Modules.NetHttpTransportModule;
import google.registry.request.Modules.UrlFetchTransportModule;
import google.registry.request.Modules.UserServiceModule;
import google.registry.request.auth.AuthModule;
import google.registry.util.UtilsModule;
@@ -49,6 +49,7 @@ import javax.inject.Singleton;
ConfigModule.class,
CredentialModule.class,
CustomLogicFactoryModule.class,
CloudTasksUtilsModule.class,
DatastoreServiceModule.class,
DirectoryModule.class,
DummyKeyringModule.class,
@@ -64,7 +65,6 @@ import javax.inject.Singleton;
ServerTridProviderModule.class,
StackdriverModule.class,
ToolsRequestComponentModule.class,
UrlFetchTransportModule.class,
UserServiceModule.class,
UtilsModule.class
})

View File

@@ -277,6 +277,8 @@ public abstract class PersistenceModule {
setSqlCredential(credentialStore, new RobotUser(RobotId.NOMULUS), overrides);
replicaInstanceConnectionName.ifPresent(
name -> overrides.put(HIKARI_DS_CLOUD_SQL_INSTANCE, name));
overrides.put(
Environment.ISOLATION, TransactionIsolationLevel.TRANSACTION_READ_COMMITTED.name());
return new JpaTransactionManagerImpl(create(overrides), clock);
}
@@ -291,6 +293,8 @@ public abstract class PersistenceModule {
HashMap<String, String> overrides = Maps.newHashMap(beamCloudSqlConfigs);
replicaInstanceConnectionName.ifPresent(
name -> overrides.put(HIKARI_DS_CLOUD_SQL_INSTANCE, name));
overrides.put(
Environment.ISOLATION, TransactionIsolationLevel.TRANSACTION_READ_COMMITTED.name());
return new JpaTransactionManagerImpl(create(overrides), clock);
}

View File

@@ -18,7 +18,6 @@ import static google.registry.persistence.transaction.TransactionManagerFactory.
import com.google.common.collect.ImmutableList;
import java.util.Collection;
import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Expression;
@@ -42,12 +41,14 @@ public class CriteriaQueryBuilder<T> {
private final CriteriaQuery<T> query;
private final Root<?> root;
private final JpaTransactionManager jpaTm;
private final ImmutableList.Builder<Predicate> predicates = new ImmutableList.Builder<>();
private final ImmutableList.Builder<Order> orders = new ImmutableList.Builder<>();
private CriteriaQueryBuilder(CriteriaQuery<T> query, Root<?> root) {
private CriteriaQueryBuilder(CriteriaQuery<T> query, Root<?> root, JpaTransactionManager jpaTm) {
this.query = query;
this.root = root;
this.jpaTm = jpaTm;
}
/** Adds a WHERE clause to the query, given the specified operation, field, and value. */
@@ -75,18 +76,18 @@ public class CriteriaQueryBuilder<T> {
*/
public <V> CriteriaQueryBuilder<T> whereFieldContains(String fieldName, Object value) {
return where(
jpaTm().getEntityManager().getCriteriaBuilder().isMember(value, root.get(fieldName)));
jpaTm.getEntityManager().getCriteriaBuilder().isMember(value, root.get(fieldName)));
}
/** Orders the result by the given field ascending. */
public CriteriaQueryBuilder<T> orderByAsc(String fieldName) {
orders.add(jpaTm().getEntityManager().getCriteriaBuilder().asc(root.get(fieldName)));
orders.add(jpaTm.getEntityManager().getCriteriaBuilder().asc(root.get(fieldName)));
return this;
}
/** Orders the result by the given field descending. */
public CriteriaQueryBuilder<T> orderByDesc(String fieldName) {
orders.add(jpaTm().getEntityManager().getCriteriaBuilder().desc(root.get(fieldName)));
orders.add(jpaTm.getEntityManager().getCriteriaBuilder().desc(root.get(fieldName)));
return this;
}
@@ -103,23 +104,24 @@ public class CriteriaQueryBuilder<T> {
/** Creates a query builder that will SELECT from the given class. */
public static <T> CriteriaQueryBuilder<T> create(Class<T> clazz) {
return create(jpaTm().getEntityManager(), clazz);
return create(jpaTm(), clazz);
}
/** Creates a query builder for the given entity manager. */
public static <T> CriteriaQueryBuilder<T> create(EntityManager em, Class<T> clazz) {
CriteriaQuery<T> query = em.getCriteriaBuilder().createQuery(clazz);
public static <T> CriteriaQueryBuilder<T> create(JpaTransactionManager jpaTm, Class<T> clazz) {
CriteriaQuery<T> query = jpaTm.getEntityManager().getCriteriaBuilder().createQuery(clazz);
Root<T> root = query.from(clazz);
query = query.select(root);
return new CriteriaQueryBuilder<>(query, root);
return new CriteriaQueryBuilder<>(query, root, jpaTm);
}
/** Creates a "count" query for the table for the class. */
public static <T> CriteriaQueryBuilder<Long> createCount(EntityManager em, Class<T> clazz) {
CriteriaBuilder builder = em.getCriteriaBuilder();
public static <T> CriteriaQueryBuilder<Long> createCount(
JpaTransactionManager jpaTm, Class<T> clazz) {
CriteriaBuilder builder = jpaTm.getEntityManager().getCriteriaBuilder();
CriteriaQuery<Long> query = builder.createQuery(Long.class);
Root<T> root = query.from(clazz);
query = query.select(builder.count(root));
return new CriteriaQueryBuilder<>(query, root);
return new CriteriaQueryBuilder<>(query, root, jpaTm);
}
}
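
A usage sketch of the refactored builder, mirroring how the RDAP actions later in this diff call it (the field name and query are illustrative):

// Build and run a query against an explicitly supplied transaction manager,
// so the same builder code works against either the primary or the replica.
JpaTransactionManager mgr = replicaJpaTm();
List<DomainBase> results =
    mgr.transact(
        () -> {
          CriteriaQueryBuilder<DomainBase> builder =
              CriteriaQueryBuilder.create(mgr, DomainBase.class)
                  .where(
                      "fullyQualifiedDomainName",
                      mgr.getEntityManager().getCriteriaBuilder()::like,
                      "example%")
                  .orderByAsc("fullyQualifiedDomainName");
          return mgr.criteriaQuery(builder.build()).getResultList();
        });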

View File

@@ -30,6 +30,7 @@ import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Streams;
import com.google.common.flogger.FluentLogger;
import com.googlecode.objectify.Key;
import google.registry.model.ImmutableObject;
import google.registry.model.common.DatabaseMigrationStateSchedule;
import google.registry.model.common.DatabaseMigrationStateSchedule.ReplayDirection;
@@ -604,6 +605,13 @@ public class JpaTransactionManagerImpl implements JpaTransactionManager {
managedEntity = getEntityManager().merge(entity);
}
getEntityManager().remove(managedEntity);
// We check shouldReplicate() in TransactionInfo.addDelete(), but we have to check it here as
// well prior to attempting to create a datastore key because a non-replicated entity may not
// have one.
if (shouldReplicate(entity.getClass())) {
transactionInfo.get().addDelete(VKey.from(Key.create(entity)));
}
return managedEntity;
}
@@ -827,6 +835,12 @@ public class JpaTransactionManagerImpl implements JpaTransactionManager {
replaySqlToDatastoreOverrideForTest.set(Optional.empty());
}
/** Returns true if the entity class should be replicated from SQL to datastore. */
private static boolean shouldReplicate(Class<?> entityClass) {
return !NonReplicatedEntity.class.isAssignableFrom(entityClass)
&& !SqlOnlyEntity.class.isAssignableFrom(entityClass);
}
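
For illustration (assuming DomainBase implements neither marker interface):

// Hypothetical uses of the predicate above.
boolean replicated = shouldReplicate(DomainBase.class);      // true: replayed to Datastore
boolean skipped = !shouldReplicate(TransactionEntity.class); // true: a SqlOnlyEntity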
private static class TransactionInfo {
ReadOnlyCheckingEntityManager entityManager;
boolean inTransaction = false;
@@ -883,12 +897,6 @@ public class JpaTransactionManagerImpl implements JpaTransactionManager {
}
}
/** Returns true if the entity class should be replicated from SQL to datastore. */
private boolean shouldReplicate(Class<?> entityClass) {
return !NonReplicatedEntity.class.isAssignableFrom(entityClass)
&& !SqlOnlyEntity.class.isAssignableFrom(entityClass);
}
private void recordTransaction() {
if (contentsBuilder != null) {
Transaction persistedTxn = contentsBuilder.build();
@@ -1131,7 +1139,7 @@ public class JpaTransactionManagerImpl implements JpaTransactionManager {
private TypedQuery<T> buildQuery() {
CriteriaQueryBuilder<T> queryBuilder =
CriteriaQueryBuilder.create(getEntityManager(), entityClass);
CriteriaQueryBuilder.create(JpaTransactionManagerImpl.this, entityClass);
return addCriteria(queryBuilder);
}
@@ -1178,7 +1186,7 @@ public class JpaTransactionManagerImpl implements JpaTransactionManager {
@Override
public long count() {
CriteriaQueryBuilder<Long> queryBuilder =
CriteriaQueryBuilder.createCount(getEntityManager(), entityClass);
CriteriaQueryBuilder.createCount(JpaTransactionManagerImpl.this, entityClass);
return addCriteria(queryBuilder).getSingleResult();
}

View File

@@ -14,6 +14,7 @@
package google.registry.persistence.transaction;
import com.google.common.annotations.VisibleForTesting;
import google.registry.model.ImmutableObject;
import google.registry.model.replay.SqlOnlyEntity;
import javax.persistence.Entity;
@@ -39,7 +40,8 @@ public class TransactionEntity extends ImmutableObject implements SqlOnlyEntity
TransactionEntity() {}
TransactionEntity(byte[] contents) {
@VisibleForTesting
public TransactionEntity(byte[] contents) {
this.contents = contents;
}

View File

@@ -14,8 +14,8 @@
package google.registry.persistence.transaction;
import static com.google.common.base.Preconditions.checkNotNull;
import static com.google.common.base.Preconditions.checkState;
import static google.registry.util.PreconditionsUtils.checkArgumentNotNull;
import static org.joda.time.DateTimeZone.UTC;
import com.google.appengine.api.utils.SystemProperty;
@@ -47,6 +47,10 @@ public final class TransactionManagerFactory {
private static Supplier<JpaTransactionManager> jpaTm =
Suppliers.memoize(TransactionManagerFactory::createJpaTransactionManager);
@NonFinalForTesting
private static Supplier<JpaTransactionManager> replicaJpaTm =
Suppliers.memoize(TransactionManagerFactory::createReplicaJpaTransactionManager);
private static boolean onBeam = false;
private TransactionManagerFactory() {}
@@ -61,6 +65,14 @@ public final class TransactionManagerFactory {
}
}
private static JpaTransactionManager createReplicaJpaTransactionManager() {
if (isInAppEngine()) {
return DaggerPersistenceComponent.create().readOnlyReplicaJpaTransactionManager();
} else {
return DummyJpaTransactionManager.create();
}
}
private static DatastoreTransactionManager createTransactionManager() {
return new DatastoreTransactionManager(null);
}
@@ -108,6 +120,21 @@ public final class TransactionManagerFactory {
return jpaTm.get();
}
/** Returns a read-only {@link JpaTransactionManager} instance if configured. */
public static JpaTransactionManager replicaJpaTm() {
return replicaJpaTm.get();
}
/**
* Returns a {@link TransactionManager} that uses a replica database if one exists.
*
* <p>In Datastore mode, this is unchanged from the regular transaction manager. In SQL mode,
* however, this will be a reference to the read-only replica database if one is configured.
*/
public static TransactionManager replicaTm() {
return tm().isOfy() ? tm() : replicaJpaTm();
}
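
A hedged usage sketch: read-only paths can opt into the replica without special-casing the backend, since replicaTm() falls back to the primary manager in Datastore mode.

// Run a read-only query on the replica when configured.
ImmutableList<HostResource> hosts =
    replicaJpaTm()
        .transact(
            () ->
                ImmutableList.copyOf(
                    replicaJpaTm()
                        .criteriaQuery(
                            CriteriaQueryBuilder.create(replicaJpaTm(), HostResource.class)
                                .build())
                        .getResultList()));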
/** Returns {@link DatastoreTransactionManager} instance. */
@VisibleForTesting
public static DatastoreTransactionManager ofyTm() {
@@ -116,7 +143,7 @@ public final class TransactionManagerFactory {
/** Sets the return of {@link #jpaTm()} to the given instance of {@link JpaTransactionManager}. */
public static void setJpaTm(Supplier<JpaTransactionManager> jpaTmSupplier) {
checkNotNull(jpaTmSupplier, "jpaTmSupplier");
checkArgumentNotNull(jpaTmSupplier, "jpaTmSupplier");
checkState(
RegistryEnvironment.get().equals(RegistryEnvironment.UNITTEST)
|| RegistryToolEnvironment.get() != null,
@@ -124,13 +151,23 @@ public final class TransactionManagerFactory {
jpaTm = Suppliers.memoize(jpaTmSupplier::get);
}
/** Sets the value of {@link #replicaJpaTm()} to the given {@link JpaTransactionManager}. */
public static void setReplicaJpaTm(Supplier<JpaTransactionManager> replicaJpaTmSupplier) {
checkArgumentNotNull(replicaJpaTmSupplier, "replicaJpaTmSupplier");
checkState(
RegistryEnvironment.get().equals(RegistryEnvironment.UNITTEST)
|| RegistryToolEnvironment.get() != null,
"setReplicaJpaTm() should only be called by tools and tests.");
replicaJpaTm = Suppliers.memoize(replicaJpaTmSupplier::get);
}
/**
* Makes {@link #jpaTm()} return the {@link JpaTransactionManager} instance provided by {@code
* jpaTmSupplier} from now on. This method should only be called by an implementor of {@link
* org.apache.beam.sdk.harness.JvmInitializer}.
*/
public static void setJpaTmOnBeamWorker(Supplier<JpaTransactionManager> jpaTmSupplier) {
checkNotNull(jpaTmSupplier, "jpaTmSupplier");
checkArgumentNotNull(jpaTmSupplier, "jpaTmSupplier");
jpaTm = Suppliers.memoize(jpaTmSupplier::get);
onBeam = true;
}

View File

@@ -18,7 +18,7 @@ import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static google.registry.model.EppResourceUtils.loadByForeignKey;
import static google.registry.model.index.ForeignKeyIndex.loadAndGetKey;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.replicaJpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.request.Action.Method.GET;
import static google.registry.request.Action.Method.HEAD;
@@ -91,7 +91,9 @@ public class RdapDomainSearchAction extends RdapSearchActionBase {
@Inject @Parameter("name") Optional<String> nameParam;
@Inject @Parameter("nsLdhName") Optional<String> nsLdhNameParam;
@Inject @Parameter("nsIp") Optional<String> nsIpParam;
@Inject public RdapDomainSearchAction() {
@Inject
public RdapDomainSearchAction() {
super("domain search", EndpointType.DOMAINS);
}
@@ -223,13 +225,13 @@ public class RdapDomainSearchAction extends RdapSearchActionBase {
resultSet = getMatchingResources(query, true, querySizeLimit);
} else {
resultSet =
jpaTm()
replicaJpaTm()
.transact(
() -> {
CriteriaBuilder criteriaBuilder =
jpaTm().getEntityManager().getCriteriaBuilder();
replicaJpaTm().getEntityManager().getCriteriaBuilder();
CriteriaQueryBuilder<DomainBase> queryBuilder =
CriteriaQueryBuilder.create(DomainBase.class)
CriteriaQueryBuilder.create(replicaJpaTm(), DomainBase.class)
.where(
"fullyQualifiedDomainName",
criteriaBuilder::like,
@@ -270,7 +272,7 @@ public class RdapDomainSearchAction extends RdapSearchActionBase {
resultSet = getMatchingResources(query, true, querySizeLimit);
} else {
resultSet =
jpaTm()
replicaJpaTm()
.transact(
() -> {
CriteriaQueryBuilder<DomainBase> builder =
@@ -354,7 +356,7 @@ public class RdapDomainSearchAction extends RdapSearchActionBase {
.map(VKey::from)
.collect(toImmutableSet());
} else {
return jpaTm()
return replicaJpaTm()
.transact(
() -> {
CriteriaQueryBuilder<HostResource> builder =
@@ -368,11 +370,12 @@ public class RdapDomainSearchAction extends RdapSearchActionBase {
builder =
builder.where(
"currentSponsorClientId",
jpaTm().getEntityManager().getCriteriaBuilder()::equal,
replicaJpaTm().getEntityManager().getCriteriaBuilder()::equal,
desiredRegistrar.get());
}
return getMatchingResourcesSql(builder, true, maxNameserversInFirstStage)
.resources().stream()
.resources()
.stream()
.map(HostResource::createVKey)
.collect(toImmutableSet());
});
@@ -509,11 +512,11 @@ public class RdapDomainSearchAction extends RdapSearchActionBase {
parameters.put("desiredRegistrar", desiredRegistrar.get());
}
hostKeys =
jpaTm()
replicaJpaTm()
.transact(
() -> {
javax.persistence.Query query =
jpaTm()
replicaJpaTm()
.getEntityManager()
.createNativeQuery(queryBuilder.toString())
.setMaxResults(maxNameserversInFirstStage);
@@ -568,16 +571,16 @@ public class RdapDomainSearchAction extends RdapSearchActionBase {
}
stream.forEach(domainSetBuilder::add);
} else {
jpaTm()
replicaJpaTm()
.transact(
() -> {
for (VKey<HostResource> hostKey : hostKeys) {
CriteriaQueryBuilder<DomainBase> queryBuilder =
CriteriaQueryBuilder.create(DomainBase.class)
CriteriaQueryBuilder.create(replicaJpaTm(), DomainBase.class)
.whereFieldContains("nsHosts", hostKey)
.orderByAsc("fullyQualifiedDomainName");
CriteriaBuilder criteriaBuilder =
jpaTm().getEntityManager().getCriteriaBuilder();
replicaJpaTm().getEntityManager().getCriteriaBuilder();
if (!shouldIncludeDeleted()) {
queryBuilder =
queryBuilder.where(
@@ -590,7 +593,7 @@ public class RdapDomainSearchAction extends RdapSearchActionBase {
criteriaBuilder::greaterThan,
cursorString.get());
}
jpaTm()
replicaJpaTm()
.criteriaQuery(queryBuilder.build())
.getResultStream()
.filter(this::isAuthorized)

View File

@@ -15,7 +15,7 @@
package google.registry.rdap;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.replicaJpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.persistence.transaction.TransactionManagerUtil.transactIfJpaTm;
import static google.registry.rdap.RdapUtils.getRegistrarByIanaIdentifier;
@@ -277,7 +277,7 @@ public class RdapEntitySearchAction extends RdapSearchActionBase {
resultSet = getMatchingResources(query, false, rdapResultSetMaxSize + 1);
} else {
resultSet =
jpaTm()
replicaJpaTm()
.transact(
() -> {
CriteriaQueryBuilder<ContactResource> builder =
@@ -399,7 +399,7 @@ public class RdapEntitySearchAction extends RdapSearchActionBase {
querySizeLimit);
} else {
contactResultSet =
jpaTm()
replicaJpaTm()
.transact(
() ->
getMatchingResourcesSql(

View File

@@ -15,7 +15,7 @@
package google.registry.rdap;
import static google.registry.model.EppResourceUtils.loadByForeignKey;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.replicaJpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.request.Action.Method.GET;
import static google.registry.request.Action.Method.HEAD;
@@ -233,7 +233,7 @@ public class RdapNameserverSearchAction extends RdapSearchActionBase {
return makeSearchResults(
getMatchingResources(query, shouldIncludeDeleted(), querySizeLimit), CursorType.NAME);
} else {
return jpaTm()
return replicaJpaTm()
.transact(
() -> {
CriteriaQueryBuilder<HostResource> queryBuilder =
@@ -290,11 +290,11 @@ public class RdapNameserverSearchAction extends RdapSearchActionBase {
}
queryBuilder.append(" ORDER BY repo_id ASC");
rdapResultSet =
jpaTm()
replicaJpaTm()
.transact(
() -> {
javax.persistence.Query query =
jpaTm()
replicaJpaTm()
.getEntityManager()
.createNativeQuery(queryBuilder.toString(), HostResource.class)
.setMaxResults(querySizeLimit);

View File

@@ -16,7 +16,7 @@ package google.registry.rdap;
import static com.google.common.base.Charsets.UTF_8;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.replicaJpaTm;
import static google.registry.util.DateTimeUtils.END_OF_TIME;
import com.google.common.collect.ImmutableList;
@@ -193,16 +193,17 @@ public abstract class RdapSearchActionBase extends RdapActionBase {
*/
<T extends EppResource> RdapResultSet<T> getMatchingResourcesSql(
CriteriaQueryBuilder<T> builder, boolean checkForVisibility, int querySizeLimit) {
jpaTm().assertInTransaction();
replicaJpaTm().assertInTransaction();
Optional<String> desiredRegistrar = getDesiredRegistrar();
if (desiredRegistrar.isPresent()) {
builder =
builder.where(
"currentSponsorClientId", jpaTm().getEntityManager().getCriteriaBuilder()::equal,
"currentSponsorClientId",
replicaJpaTm().getEntityManager().getCriteriaBuilder()::equal,
desiredRegistrar.get());
}
List<T> queryResult =
jpaTm().criteriaQuery(builder.build()).setMaxResults(querySizeLimit).getResultList();
replicaJpaTm().criteriaQuery(builder.build()).setMaxResults(querySizeLimit).getResultList();
if (checkForVisibility) {
return filterResourcesByVisibility(queryResult, querySizeLimit);
} else {
@@ -395,7 +396,7 @@ public abstract class RdapSearchActionBase extends RdapActionBase {
RdapSearchPattern partialStringQuery,
Optional<String> cursorString,
DeletedItemHandling deletedItemHandling) {
jpaTm().assertInTransaction();
replicaJpaTm().assertInTransaction();
if (partialStringQuery.getInitialString().length()
< RdapSearchPattern.MIN_INITIAL_STRING_LENGTH) {
throw new UnprocessableEntityException(
@@ -403,8 +404,8 @@ public abstract class RdapSearchActionBase extends RdapActionBase {
"Initial search string must be at least %d characters",
RdapSearchPattern.MIN_INITIAL_STRING_LENGTH));
}
CriteriaBuilder criteriaBuilder = jpaTm().getEntityManager().getCriteriaBuilder();
CriteriaQueryBuilder<T> builder = CriteriaQueryBuilder.create(clazz);
CriteriaBuilder criteriaBuilder = replicaJpaTm().getEntityManager().getCriteriaBuilder();
CriteriaQueryBuilder<T> builder = CriteriaQueryBuilder.create(replicaJpaTm(), clazz);
if (partialStringQuery.getHasWildcard()) {
builder =
builder.where(
@@ -493,9 +494,9 @@ public abstract class RdapSearchActionBase extends RdapActionBase {
"Initial search string must be at least %d characters",
RdapSearchPattern.MIN_INITIAL_STRING_LENGTH));
}
jpaTm().assertInTransaction();
CriteriaQueryBuilder<T> builder = CriteriaQueryBuilder.create(clazz);
CriteriaBuilder criteriaBuilder = jpaTm().getEntityManager().getCriteriaBuilder();
replicaJpaTm().assertInTransaction();
CriteriaQueryBuilder<T> builder = CriteriaQueryBuilder.create(replicaJpaTm(), clazz);
CriteriaBuilder criteriaBuilder = replicaJpaTm().getEntityManager().getCriteriaBuilder();
builder = builder.where(filterField, criteriaBuilder::equal, queryString);
if (cursorString.isPresent()) {
if (cursorField.isPresent()) {
@@ -544,7 +545,7 @@ public abstract class RdapSearchActionBase extends RdapActionBase {
RdapSearchPattern partialStringQuery,
Optional<String> cursorString,
DeletedItemHandling deletedItemHandling) {
jpaTm().assertInTransaction();
replicaJpaTm().assertInTransaction();
return queryItemsSql(clazz, "repoId", partialStringQuery, cursorString, deletedItemHandling);
}
@@ -553,7 +554,9 @@ public abstract class RdapSearchActionBase extends RdapActionBase {
if (!Objects.equals(deletedItemHandling, DeletedItemHandling.INCLUDE)) {
builder =
builder.where(
"deletionTime", jpaTm().getEntityManager().getCriteriaBuilder()::equal, END_OF_TIME);
"deletionTime",
replicaJpaTm().getEntityManager().getCriteriaBuilder()::equal,
END_OF_TIME);
}
return builder;
}

View File

@@ -24,6 +24,7 @@ import google.registry.config.RegistryConfig.Config;
import google.registry.gcs.GcsUtils;
import google.registry.keyring.api.KeyModule.Key;
import google.registry.model.rde.RdeNamingUtils;
import google.registry.model.rde.RdeRevision;
import google.registry.request.Action;
import google.registry.request.Parameter;
import google.registry.request.RequestParameters;
@@ -86,7 +87,13 @@ public final class BrdaCopyAction implements Runnable {
}
private void copyAsRyde() throws IOException {
String nameWithoutPrefix = RdeNamingUtils.makeRydeFilename(tld, watermark, THIN, 1, 0);
// TODO(b/217772483): consider guarding this action with a lock and check if there is work.
// Not urgent since file writes on GCS are atomic.
int revision =
RdeRevision.getCurrentRevision(tld, watermark, THIN)
.orElseThrow(
() -> new IllegalStateException("RdeRevision was not set on generated deposit"));
String nameWithoutPrefix = RdeNamingUtils.makeRydeFilename(tld, watermark, THIN, 1, revision);
String name = prefix.orElse("") + nameWithoutPrefix;
BlobId xmlFilename = BlobId.of(stagingBucket, name + ".xml.ghostryde");
BlobId xmlLengthFilename = BlobId.of(stagingBucket, name + ".xml.length");

View File

@@ -14,26 +14,24 @@
package google.registry.rde;
import static com.google.appengine.api.urlfetch.FetchOptions.Builder.validateCertificate;
import static com.google.appengine.api.urlfetch.HTTPMethod.PUT;
import static com.google.common.io.BaseEncoding.base64;
import static com.google.common.net.HttpHeaders.AUTHORIZATION;
import static com.google.common.net.HttpHeaders.CONTENT_TYPE;
import static google.registry.request.UrlConnectionUtils.getResponseBytes;
import static google.registry.request.UrlConnectionUtils.setBasicAuth;
import static google.registry.request.UrlConnectionUtils.setPayload;
import static google.registry.util.DomainNameUtils.canonicalizeDomainName;
import static java.nio.charset.StandardCharsets.UTF_8;
import static javax.servlet.http.HttpServletResponse.SC_BAD_REQUEST;
import static javax.servlet.http.HttpServletResponse.SC_OK;
import com.google.appengine.api.urlfetch.HTTPHeader;
import com.google.appengine.api.urlfetch.HTTPRequest;
import com.google.api.client.http.HttpMethods;
import com.google.appengine.api.urlfetch.HTTPResponse;
import com.google.appengine.api.urlfetch.URLFetchService;
import com.google.common.flogger.FluentLogger;
import com.google.common.net.MediaType;
import google.registry.config.RegistryConfig.Config;
import google.registry.keyring.api.KeyModule.Key;
import google.registry.request.HttpException.InternalServerErrorException;
import google.registry.request.UrlConnectionService;
import google.registry.util.Retrier;
import google.registry.util.UrlFetchException;
import google.registry.util.UrlConnectionException;
import google.registry.xjc.XjcXmlTransformer;
import google.registry.xjc.iirdea.XjcIirdeaResponseElement;
import google.registry.xjc.iirdea.XjcIirdeaResult;
@@ -41,6 +39,7 @@ import google.registry.xjc.rdeheader.XjcRdeHeader;
import google.registry.xjc.rdereport.XjcRdeReportReport;
import google.registry.xml.XmlException;
import java.io.ByteArrayInputStream;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.SocketTimeoutException;
import java.net.URL;
@@ -55,12 +54,15 @@ public class RdeReporter {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
/** @see <a href="http://tools.ietf.org/html/draft-lozano-icann-registry-interfaces-05#section-4">
* ICANN Registry Interfaces - Interface details</a>*/
private static final String REPORT_MIME = "text/xml";
/**
* @see <a href="http://tools.ietf.org/html/draft-lozano-icann-registry-interfaces-05#section-4">
* ICANN Registry Interfaces - Interface details</a>
*/
private static final MediaType MEDIA_TYPE = MediaType.XML_UTF_8;
@Inject Retrier retrier;
@Inject URLFetchService urlFetchService;
@Inject UrlConnectionService urlConnectionService;
@Inject @Config("rdeReportUrlPrefix") String reportUrlPrefix;
@Inject @Key("icannReportingPassword") String password;
@Inject RdeReporter() {}
@@ -74,29 +76,24 @@ public class RdeReporter {
// Send a PUT request to ICANN's HTTPS server.
URL url = makeReportUrl(header.getTld(), report.getId());
String username = header.getTld() + "_ry";
String token = base64().encode(String.format("%s:%s", username, password).getBytes(UTF_8));
final HTTPRequest req = new HTTPRequest(url, PUT, validateCertificate().setDeadline(60d));
req.addHeader(new HTTPHeader(CONTENT_TYPE, REPORT_MIME));
req.addHeader(new HTTPHeader(AUTHORIZATION, "Basic " + token));
req.setPayload(reportBytes);
logger.atInfo().log("Sending report:\n%s", new String(reportBytes, UTF_8));
HTTPResponse rsp =
byte[] responseBytes =
retrier.callWithRetry(
() -> {
HTTPResponse rsp1 = urlFetchService.fetch(req);
switch (rsp1.getResponseCode()) {
case SC_OK:
case SC_BAD_REQUEST:
break;
default:
throw new UrlFetchException("PUT failed", req, rsp1);
HttpURLConnection connection = urlConnectionService.createConnection(url);
connection.setRequestMethod(HttpMethods.PUT);
setBasicAuth(connection, username, password);
setPayload(connection, reportBytes, MEDIA_TYPE.toString());
int responseCode = connection.getResponseCode();
if (responseCode == SC_OK || responseCode == SC_BAD_REQUEST) {
return getResponseBytes(connection);
}
return rsp1;
throw new UrlConnectionException("PUT failed", connection);
},
SocketTimeoutException.class);
// Ensure the XML response is valid.
XjcIirdeaResult result = parseResult(rsp);
XjcIirdeaResult result = parseResult(responseBytes);
if (result.getCode().getValue() != 1000) {
logger.atWarning().log(
"PUT rejected: %d %s\n%s",
@@ -108,11 +105,11 @@ public class RdeReporter {
/**
* Unmarshals IIRDEA XML result object from {@link HTTPResponse} payload.
*
* @see <a href="http://tools.ietf.org/html/draft-lozano-icann-registry-interfaces-05#section-4.1">
* @see <a
* href="http://tools.ietf.org/html/draft-lozano-icann-registry-interfaces-05#section-4.1">
* ICANN Registry Interfaces - IIRDEA Result Object</a>
*/
private XjcIirdeaResult parseResult(HTTPResponse rsp) throws XmlException {
byte[] responseBytes = rsp.getContent();
private XjcIirdeaResult parseResult(byte[] responseBytes) throws XmlException {
logger.atInfo().log("Received response:\n%s", new String(responseBytes, UTF_8));
XjcIirdeaResponseElement response = XjcXmlTransformer.unmarshal(
XjcIirdeaResponseElement.class, new ByteArrayInputStream(responseBytes));

View File

@@ -391,9 +391,6 @@ public final class RdeStagingAction implements Runnable {
if (revision.isPresent()) {
throw new BadRequestException("Revision parameter not allowed in standard operation");
}
if (beam) {
throw new BadRequestException("Beam parameter not allowed in standard operation");
}
return ImmutableSetMultimap.copyOf(
Multimaps.filterValues(

View File

@@ -14,8 +14,6 @@
package google.registry.rde;
import static com.google.appengine.api.taskqueue.QueueFactory.getQueue;
import static com.google.appengine.api.taskqueue.TaskOptions.Builder.withUrl;
import static com.google.common.base.Preconditions.checkState;
import static com.google.common.base.Verify.verify;
import static google.registry.model.common.Cursor.getCursorTimeOrStartOfTime;
@@ -26,6 +24,7 @@ import static java.nio.charset.StandardCharsets.UTF_8;
import com.google.appengine.tools.mapreduce.Reducer;
import com.google.appengine.tools.mapreduce.ReducerInput;
import com.google.cloud.storage.BlobId;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryConfig.Config;
import google.registry.gcs.GcsUtils;
@@ -36,10 +35,11 @@ import google.registry.model.rde.RdeMode;
import google.registry.model.rde.RdeNamingUtils;
import google.registry.model.rde.RdeRevision;
import google.registry.model.tld.Registry;
import google.registry.request.Action.Service;
import google.registry.request.RequestParameters;
import google.registry.request.lock.LockHandler;
import google.registry.tldconfig.idn.IdnTableEnum;
import google.registry.util.TaskQueueUtils;
import google.registry.util.CloudTasksUtils;
import google.registry.xjc.rdeheader.XjcRdeHeader;
import google.registry.xjc.rdeheader.XjcRdeHeaderElement;
import google.registry.xml.ValidationMode;
@@ -65,7 +65,7 @@ public final class RdeStagingReducer extends Reducer<PendingDeposit, DepositFrag
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private final TaskQueueUtils taskQueueUtils;
private final CloudTasksUtils cloudTasksUtils;
private final LockHandler lockHandler;
private final String bucket;
private final Duration lockTimeout;
@@ -74,14 +74,14 @@ public final class RdeStagingReducer extends Reducer<PendingDeposit, DepositFrag
private final GcsUtils gcsUtils;
RdeStagingReducer(
TaskQueueUtils taskQueueUtils,
CloudTasksUtils cloudTasksUtils,
LockHandler lockHandler,
String bucket,
Duration lockTimeout,
byte[] stagingKeyBytes,
ValidationMode validationMode,
GcsUtils gcsUtils) {
this.taskQueueUtils = taskQueueUtils;
this.cloudTasksUtils = cloudTasksUtils;
this.lockHandler = lockHandler;
this.bucket = bucket;
this.lockTimeout = lockTimeout;
@@ -226,23 +226,35 @@ public final class RdeStagingReducer extends Reducer<PendingDeposit, DepositFrag
logger.atInfo().log(
"Rolled forward %s on %s cursor to %s.", key.cursor(), tld, newPosition);
RdeRevision.saveRevision(tld, watermark, mode, revision);
// Enqueueing a task is a side effect that is not undone if the transaction rolls
// back. So this may result in multiple copies of the same task being processed. This
// is fine because the RdeUploadAction is guarded by a lock and tracks progress by
// cursor. The BrdaCopyAction writes a file to GCS, which is an atomic action.
if (mode == RdeMode.FULL) {
taskQueueUtils.enqueue(
getQueue("rde-upload"),
withUrl(RdeUploadAction.PATH).param(RequestParameters.PARAM_TLD, tld));
cloudTasksUtils.enqueue(
"rde-upload",
cloudTasksUtils.createPostTask(
RdeUploadAction.PATH,
Service.BACKEND.toString(),
ImmutableMultimap.of(RequestParameters.PARAM_TLD, tld)));
} else {
taskQueueUtils.enqueue(
getQueue("brda"),
withUrl(BrdaCopyAction.PATH)
.param(RequestParameters.PARAM_TLD, tld)
.param(RdeModule.PARAM_WATERMARK, watermark.toString()));
cloudTasksUtils.enqueue(
"brda",
cloudTasksUtils.createPostTask(
BrdaCopyAction.PATH,
Service.BACKEND.toString(),
ImmutableMultimap.of(
RequestParameters.PARAM_TLD,
tld,
RdeModule.PARAM_WATERMARK,
watermark.toString())));
}
});
}
/** Injectable factory for creating {@link RdeStagingReducer}. */
static class Factory {
@Inject TaskQueueUtils taskQueueUtils;
@Inject CloudTasksUtils cloudTasksUtils;
@Inject LockHandler lockHandler;
@Inject @Config("rdeBucket") String bucket;
@Inject @Config("rdeStagingLockTimeout") Duration lockTimeout;
@@ -252,7 +264,7 @@ public final class RdeStagingReducer extends Reducer<PendingDeposit, DepositFrag
RdeStagingReducer create(ValidationMode validationMode, GcsUtils gcsUtils) {
return new RdeStagingReducer(
taskQueueUtils,
cloudTasksUtils,
lockHandler,
bucket,
lockTimeout,

View File

@@ -134,7 +134,7 @@ public final class RdeUploadAction implements Runnable, EscrowTask {
}
cloudTasksUtils.enqueue(
RDE_REPORT_QUEUE,
CloudTasksUtils.createPostTask(
cloudTasksUtils.createPostTask(
RdeReportAction.PATH, Service.BACKEND.getServiceId(), params));
}

View File

@@ -40,6 +40,8 @@ public class ReportingModule {
public static final String BEAM_QUEUE = "beam-reporting";
/** The amount of time expected for the Dataflow jobs to complete. */
public static final int ENQUEUE_DELAY_MINUTES = 10;
/**
* The request parameter name used by reporting actions that take a year/month parameter, which
* defaults to the last month.

View File
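
ENQUEUE_DELAY_MINUTES replaces the hard-coded delay in the now-deleted ReportingUtils (next file); callers pass it to CloudTasksUtils.createPostTaskWithDelay instead, as in this sketch condensed from GenerateInvoicesAction later in this diff (cloudTasksUtils and jobId come from the enclosing action):

// Give the Dataflow job time to complete before the publish action runs.
cloudTasksUtils.enqueue(
    ReportingModule.BEAM_QUEUE,
    cloudTasksUtils.createPostTaskWithDelay(
        PublishInvoicesAction.PATH,
        Service.BACKEND.toString(),
        ImmutableMultimap.of(ReportingModule.PARAM_JOB_ID, jobId),
        Duration.standardMinutes(ReportingModule.ENQUEUE_DELAY_MINUTES)));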

@@ -1,38 +0,0 @@
// Copyright 2018 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.reporting;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;
import java.util.Map;
import org.joda.time.Duration;
import org.joda.time.YearMonth;
/** Static methods common to various reporting tasks. */
public class ReportingUtils {
private static final int ENQUEUE_DELAY_MINUTES = 10;
/** Enqueues a task that takes a Beam jobId and the {@link YearMonth} as parameters. */
public static void enqueueBeamReportingTask(String path, Map<String, String> parameters) {
TaskOptions publishTask =
TaskOptions.Builder.withUrl(path)
.method(TaskOptions.Method.POST)
// Dataflow jobs tend to take about 10 minutes to complete.
.countdownMillis(Duration.standardMinutes(ENQUEUE_DELAY_MINUTES).getMillis());
parameters.forEach(publishTask::param);
QueueFactory.getQueue(ReportingModule.BEAM_QUEUE).add(publishTask);
}
}

View File

@@ -17,7 +17,6 @@ package google.registry.reporting.billing;
import static google.registry.beam.BeamUtils.createJobName;
import static google.registry.model.common.DatabaseMigrationStateSchedule.PrimaryDatabase.CLOUD_SQL;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.reporting.ReportingUtils.enqueueBeamReportingTask;
import static google.registry.request.Action.Method.POST;
import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_OK;
@@ -27,21 +26,25 @@ import com.google.api.services.dataflow.model.LaunchFlexTemplateParameter;
import com.google.api.services.dataflow.model.LaunchFlexTemplateRequest;
import com.google.api.services.dataflow.model.LaunchFlexTemplateResponse;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.flogger.FluentLogger;
import com.google.common.net.MediaType;
import google.registry.config.RegistryConfig.Config;
import google.registry.config.RegistryEnvironment;
import google.registry.model.common.DatabaseMigrationStateSchedule.PrimaryDatabase;
import google.registry.persistence.PersistenceModule;
import google.registry.reporting.ReportingModule;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.RequestParameters;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import java.io.IOException;
import java.util.Map;
import javax.inject.Inject;
import org.joda.time.Duration;
import org.joda.time.YearMonth;
/**
@@ -75,6 +78,7 @@ public class GenerateInvoicesAction implements Runnable {
private final Response response;
private final Dataflow dataflow;
private final PrimaryDatabase database;
private final CloudTasksUtils cloudTasksUtils;
@Inject
GenerateInvoicesAction(
@@ -87,6 +91,7 @@ public class GenerateInvoicesAction implements Runnable {
@Parameter(RequestParameters.PARAM_DATABASE) PrimaryDatabase database,
YearMonth yearMonth,
BillingEmailUtils emailUtils,
CloudTasksUtils cloudTasksUtils,
Clock clock,
Response response,
Dataflow dataflow) {
@@ -104,6 +109,7 @@ public class GenerateInvoicesAction implements Runnable {
this.database = database;
this.yearMonth = yearMonth;
this.emailUtils = emailUtils;
this.cloudTasksUtils = cloudTasksUtils;
this.clock = clock;
this.response = response;
this.dataflow = dataflow;
@@ -120,17 +126,16 @@ public class GenerateInvoicesAction implements Runnable {
.setContainerSpecGcsPath(
String.format("%s/%s_metadata.json", stagingBucketUrl, PIPELINE_NAME))
.setParameters(
ImmutableMap.of(
"yearMonth",
yearMonth.toString("yyyy-MM"),
"invoiceFilePrefix",
invoiceFilePrefix,
"database",
database.name(),
"billingBucketUrl",
billingBucketUrl,
"registryEnvironment",
RegistryEnvironment.get().name()));
new ImmutableMap.Builder<String, String>()
.put("yearMonth", yearMonth.toString("yyyy-MM"))
.put("invoiceFilePrefix", invoiceFilePrefix)
.put("database", database.name())
.put("billingBucketUrl", billingBucketUrl)
.put("registryEnvironment", RegistryEnvironment.get().name())
.put(
"jpaTransactionManagerType",
PersistenceModule.JpaTransactionManagerType.READ_ONLY_REPLICA.toString())
.build());
LaunchFlexTemplateResponse launchResponse =
dataflow
.projects()
@@ -144,13 +149,17 @@ public class GenerateInvoicesAction implements Runnable {
logger.atInfo().log("Got response: %s", launchResponse.getJob().toPrettyString());
String jobId = launchResponse.getJob().getId();
if (shouldPublish) {
Map<String, String> beamTaskParameters =
ImmutableMap.of(
ReportingModule.PARAM_JOB_ID,
jobId,
ReportingModule.PARAM_YEAR_MONTH,
yearMonth.toString());
enqueueBeamReportingTask(PublishInvoicesAction.PATH, beamTaskParameters);
cloudTasksUtils.enqueue(
ReportingModule.BEAM_QUEUE,
cloudTasksUtils.createPostTaskWithDelay(
PublishInvoicesAction.PATH,
Service.BACKEND.toString(),
ImmutableMultimap.of(
ReportingModule.PARAM_JOB_ID,
jobId,
ReportingModule.PARAM_YEAR_MONTH,
yearMonth.toString()),
Duration.standardMinutes(ReportingModule.ENQUEUE_DELAY_MINUTES)));
}
response.setStatus(SC_OK);
response.setPayload(String.format("Launched invoicing pipeline: %s", jobId));

View File

@@ -125,7 +125,7 @@ public class PublishInvoicesAction implements Runnable {
private void enqueueCopyDetailReportsTask() {
cloudTasksUtils.enqueue(
BillingModule.CRON_QUEUE,
CloudTasksUtils.createPostTask(
cloudTasksUtils.createPostTask(
CopyDetailReportsAction.PATH,
Service.BACKEND.toString(),
ImmutableMultimap.of(PARAM_YEAR_MONTH, yearMonth.toString())));

View File

@@ -21,9 +21,6 @@ import static google.registry.request.Action.Method.POST;
import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_OK;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;
import com.google.appengine.api.taskqueue.TaskOptions.Method;
import com.google.common.base.Joiner;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
@@ -33,9 +30,11 @@ import google.registry.bigquery.BigqueryJobFailureException;
import google.registry.config.RegistryConfig.Config;
import google.registry.reporting.icann.IcannReportingModule.ReportType;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.CloudTasksUtils;
import google.registry.util.EmailMessage;
import google.registry.util.Retrier;
import google.registry.util.SendEmailService;
@@ -86,6 +85,7 @@ public final class IcannReportingStagingAction implements Runnable {
@Inject @Config("gSuiteOutgoingEmailAddress") InternetAddress sender;
@Inject @Config("alertRecipientEmailAddress") InternetAddress recipient;
@Inject SendEmailService emailService;
@Inject CloudTasksUtils cloudTasksUtils;
@Inject IcannReportingStagingAction() {}
@@ -119,11 +119,13 @@ public final class IcannReportingStagingAction implements Runnable {
response.setPayload("Completed staging action.");
logger.atInfo().log("Enqueueing report upload.");
TaskOptions uploadTask =
TaskOptions.Builder.withUrl(IcannReportingUploadAction.PATH)
.method(Method.POST)
.countdownMillis(Duration.standardMinutes(2).getMillis());
QueueFactory.getQueue(CRON_QUEUE).add(uploadTask);
cloudTasksUtils.enqueue(
CRON_QUEUE,
cloudTasksUtils.createPostTaskWithDelay(
IcannReportingUploadAction.PATH,
Service.BACKEND.toString(),
null,
Duration.standardMinutes(2)));
return null;
},
BigqueryJobFailureException.class);

View File

@@ -16,7 +16,6 @@ package google.registry.reporting.spec11;
import static google.registry.beam.BeamUtils.createJobName;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.reporting.ReportingUtils.enqueueBeamReportingTask;
import static google.registry.request.Action.Method.POST;
import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_OK;
@@ -26,6 +25,7 @@ import com.google.api.services.dataflow.model.LaunchFlexTemplateParameter;
import com.google.api.services.dataflow.model.LaunchFlexTemplateRequest;
import com.google.api.services.dataflow.model.LaunchFlexTemplateResponse;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.flogger.FluentLogger;
import com.google.common.net.MediaType;
import google.registry.config.RegistryConfig.Config;
@@ -34,14 +34,16 @@ import google.registry.keyring.api.KeyModule.Key;
import google.registry.model.common.DatabaseMigrationStateSchedule.PrimaryDatabase;
import google.registry.reporting.ReportingModule;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.RequestParameters;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import java.io.IOException;
import java.util.Map;
import javax.inject.Inject;
import org.joda.time.Duration;
import org.joda.time.LocalDate;
/**
@@ -73,6 +75,7 @@ public class GenerateSpec11ReportAction implements Runnable {
private final Dataflow dataflow;
private final PrimaryDatabase database;
private final boolean sendEmail;
private final CloudTasksUtils cloudTasksUtils;
@Inject
GenerateSpec11ReportAction(
@@ -86,7 +89,8 @@ public class GenerateSpec11ReportAction implements Runnable {
@Parameter(ReportingModule.SEND_EMAIL) boolean sendEmail,
Clock clock,
Response response,
Dataflow dataflow) {
Dataflow dataflow,
CloudTasksUtils cloudTasksUtils) {
this.projectId = projectId;
this.jobRegion = jobRegion;
this.stagingBucketUrl = stagingBucketUrl;
@@ -101,6 +105,7 @@ public class GenerateSpec11ReportAction implements Runnable {
this.response = response;
this.dataflow = dataflow;
this.sendEmail = sendEmail;
this.cloudTasksUtils = cloudTasksUtils;
}
@Override
@@ -136,11 +141,18 @@ public class GenerateSpec11ReportAction implements Runnable {
.execute();
logger.atInfo().log("Got response: %s", launchResponse.getJob().toPrettyString());
String jobId = launchResponse.getJob().getId();
Map<String, String> beamTaskParameters =
ImmutableMap.of(
ReportingModule.PARAM_JOB_ID, jobId, ReportingModule.PARAM_DATE, date.toString());
if (sendEmail) {
enqueueBeamReportingTask(PublishSpec11ReportAction.PATH, beamTaskParameters);
cloudTasksUtils.enqueue(
ReportingModule.BEAM_QUEUE,
cloudTasksUtils.createPostTaskWithDelay(
PublishSpec11ReportAction.PATH,
Service.BACKEND.toString(),
ImmutableMultimap.of(
ReportingModule.PARAM_JOB_ID,
jobId,
ReportingModule.PARAM_DATE,
date.toString()),
Duration.standardMinutes(ReportingModule.ENQUEUE_DELAY_MINUTES)));
}
response.setStatus(SC_OK);
response.setPayload(String.format("Launched Spec11 pipeline: %s", jobId));

View File

@@ -23,12 +23,11 @@ import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.urlfetch.URLFetchService;
import com.google.appengine.api.urlfetch.URLFetchServiceFactory;
import com.google.appengine.api.users.UserService;
import com.google.appengine.api.users.UserServiceFactory;
import dagger.Module;
import dagger.Provides;
import java.net.HttpURLConnection;
import javax.inject.Singleton;
/** Dagger modules for App Engine services and other vendor classes. */
@@ -45,14 +44,12 @@ public final class Modules {
}
}
/** Dagger module for {@link URLFetchService}. */
/** Dagger module for {@link UrlConnectionService}. */
@Module
public static final class URLFetchServiceModule {
private static final URLFetchService fetchService = URLFetchServiceFactory.getURLFetchService();
public static final class UrlConnectionServiceModule {
@Provides
static URLFetchService provideURLFetchService() {
return fetchService;
static UrlConnectionService provideUrlConnectionService() {
return url -> (HttpURLConnection) url.openConnection();
}
}

View File

@@ -0,0 +1,25 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.request;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
/** Functional interface for opening a connection from a URL, injectable for testing. */
public interface UrlConnectionService {
HttpURLConnection createConnection(URL url) throws IOException;
}
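
Because the interface has a single method, the production binding can be the one-line lambda seen in UrlConnectionServiceModule above, while tests can substitute their own; a sketch:

// Production: defer to the JDK.
UrlConnectionService prod = url -> (HttpURLConnection) url.openConnection();

// Test (hypothetical): refuse network access, or return a prepared mock.
UrlConnectionService fake =
    url -> {
      throw new IOException("no network in tests");
    };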

View File

@@ -1,4 +1,4 @@
// Copyright 2017 The Nomulus Authors. All Rights Reserved.
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.util;
package google.registry.request;
import static com.google.common.base.Preconditions.checkState;
import static com.google.common.io.BaseEncoding.base64;
@@ -22,36 +22,41 @@ import static com.google.common.net.HttpHeaders.CONTENT_LENGTH;
import static com.google.common.net.HttpHeaders.CONTENT_TYPE;
import static java.nio.charset.StandardCharsets.UTF_8;
import com.google.appengine.api.urlfetch.HTTPHeader;
import com.google.appengine.api.urlfetch.HTTPRequest;
import com.google.appengine.api.urlfetch.HTTPResponse;
import com.google.common.base.Ascii;
import com.google.common.base.Strings;
import com.google.common.io.ByteStreams;
import com.google.common.net.MediaType;
import java.util.Optional;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.URLConnection;
import java.util.Random;
/** Helper methods for the App Engine URL fetch service. */
public final class UrlFetchUtils {
/** Utilities for common functionality relating to {@link java.net.URLConnection}s. */
public class UrlConnectionUtils {
/** Returns value of first header matching {@code name}. */
public static Optional<String> getHeaderFirst(HTTPResponse rsp, String name) {
return getHeaderFirstInternal(rsp.getHeadersUncombined(), name);
/** Retrieves the response from the given connection as a byte array. */
public static byte[] getResponseBytes(URLConnection connection) throws IOException {
return ByteStreams.toByteArray(connection.getInputStream());
}
/** Returns value of first header matching {@code name}. */
public static Optional<String> getHeaderFirst(HTTPRequest req, String name) {
return getHeaderFirstInternal(req.getHeaders(), name);
/** Sets auth on the given connection with the given username/password. */
public static void setBasicAuth(URLConnection connection, String username, String password) {
setBasicAuth(connection, String.format("%s:%s", username, password));
}
private static Optional<String> getHeaderFirstInternal(Iterable<HTTPHeader> hdrs, String name) {
name = Ascii.toLowerCase(name);
for (HTTPHeader header : hdrs) {
if (Ascii.toLowerCase(header.getName()).equals(name)) {
return Optional.of(header.getValue());
}
/** Sets auth on the given connection with the given string, formatted "username:password". */
public static void setBasicAuth(URLConnection connection, String usernameAndPassword) {
String token = base64().encode(usernameAndPassword.getBytes(UTF_8));
connection.setRequestProperty(AUTHORIZATION, "Basic " + token);
}
/** Sets the given byte[] payload on the given connection with a particular content type. */
public static void setPayload(URLConnection connection, byte[] bytes, String contentType)
throws IOException {
connection.setRequestProperty(CONTENT_TYPE, contentType);
connection.setDoOutput(true);
try (DataOutputStream dataStream = new DataOutputStream(connection.getOutputStream())) {
dataStream.write(bytes);
}
return Optional.empty();
}
@@ -62,16 +67,16 @@ public final class UrlFetchUtils
  /**
   * @see <a href="http://www.ietf.org/rfc/rfc2388.txt">RFC2388 - Returning Values from Forms</a>
   */
  public static void setPayloadMultipart(
      URLConnection connection,
      String name,
      String filename,
      MediaType contentType,
      String data,
      Random random)
      throws IOException {
    String boundary = createMultipartBoundary(random);
    checkState(
        !data.contains(boundary), "Multipart data contains autogenerated boundary: %s", boundary);
    String multipart =
        String.format("--%s\r\n", boundary)
            + String.format(
@@ -83,11 +88,9 @@ public final class UrlFetchUtils
            + "\r\n"
            + String.format("--%s--\r\n", boundary);
    byte[] payload = multipart.getBytes(UTF_8);
    connection.setRequestProperty(CONTENT_LENGTH, Integer.toString(payload.length));
    setPayload(
        connection, payload, String.format("multipart/form-data;" + " boundary=\"%s\"", boundary));
  }
private static String createMultipartBoundary(Random random) {
@@ -98,12 +101,4 @@ public final class UrlFetchUtils {
// See https://tools.ietf.org/html/rfc2046#section-5.1.1
return Strings.repeat("-", 30) + base64().encode(rand);
}
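For orientation, here is a minimal usage sketch of the connection-based multipart helper above. The endpoint URL, CSV content, and the wrapper method are hypothetical; the helper call and its parameters are the ones shown in this diff.

  // Sketch only: POST a small CSV as multipart/form-data over a plain HttpURLConnection.
  static void uploadSketch(String csvData) throws IOException {
    // Hypothetical MarksDB-style endpoint, not a real URL.
    HttpURLConnection connection =
        (HttpURLConnection) new URL("https://marksdb.example/LORDN/tld/claims").openConnection();
    connection.setRequestMethod("POST");
    UrlConnectionUtils.setPayloadMultipart(
        connection, "file", "claims.csv", MediaType.CSV_UTF_8, csvData, new SecureRandom());
    if (connection.getResponseCode() != 202) { // SC_ACCEPTED; getResponseCode() sends the request
      throw new IOException("Upload failed with HTTP " + connection.getResponseCode());
    }
  }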
/** Sets the HTTP Basic Authentication header on an {@link HTTPRequest}. */
public static void setAuthorizationHeader(HTTPRequest req, Optional<String> login) {
if (login.isPresent()) {
String token = base64().encode(login.get().getBytes(UTF_8));
req.addHeader(new HTTPHeader(AUTHORIZATION, "Basic " + token));
}
}
}


@@ -15,12 +15,12 @@
package google.registry.tmch;
import static com.google.common.base.Verify.verifyNotNull;
import static google.registry.util.UrlFetchUtils.setAuthorizationHeader;
import com.google.appengine.api.urlfetch.HTTPRequest;
import com.google.common.flogger.FluentLogger;
import google.registry.keyring.api.KeyModule.Key;
import google.registry.model.tld.Registry;
import google.registry.request.UrlConnectionUtils;
import java.net.HttpURLConnection;
import java.util.Optional;
import javax.inject.Inject;
@@ -37,8 +37,9 @@ final class LordnRequestInitializer {
}
/** Initializes a URL connection for talking to the MarksDB server. */
void initialize(HttpURLConnection connection, String tld) {
  getMarksDbLordnCredentials(tld)
      .ifPresent(login -> UrlConnectionUtils.setBasicAuth(connection, login));
}
/** Returns the username and password for the current TLD to login to the MarksDB server. */


@@ -14,25 +14,23 @@
package google.registry.tmch;
import static com.google.appengine.api.urlfetch.FetchOptions.Builder.validateCertificate;
import static com.google.appengine.api.urlfetch.HTTPMethod.GET;
import static com.google.common.base.Preconditions.checkArgument;
import static google.registry.request.UrlConnectionUtils.getResponseBytes;
import static google.registry.request.UrlConnectionUtils.setBasicAuth;
import static google.registry.util.HexDumper.dumpHex;
import static google.registry.util.UrlFetchUtils.setAuthorizationHeader;
import static java.nio.charset.StandardCharsets.US_ASCII;
import static javax.servlet.http.HttpServletResponse.SC_OK;
import com.google.appengine.api.urlfetch.HTTPRequest;
import com.google.appengine.api.urlfetch.HTTPResponse;
import com.google.appengine.api.urlfetch.URLFetchService;
import com.google.common.collect.ImmutableList;
import com.google.common.flogger.FluentLogger;
import com.google.common.io.ByteSource;
import google.registry.config.RegistryConfig.Config;
import google.registry.keyring.api.KeyModule.Key;
import google.registry.util.UrlFetchException;
import google.registry.request.UrlConnectionService;
import google.registry.util.UrlConnectionException;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.security.Security;
import java.security.SignatureException;
@@ -57,7 +55,8 @@ public final class Marksdb {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static final int MAX_DNL_LOGGING_LENGTH = 500;
@Inject URLFetchService fetchService;
@Inject UrlConnectionService urlConnectionService;
@Inject @Config("tmchMarksdbUrl") String tmchMarksdbUrl;
@Inject @Key("marksdbPublicKey") PGPPublicKey marksdbPublicKey;
@Inject Marksdb() {}
@@ -112,19 +111,16 @@ public final class Marksdb {
}
byte[] fetch(URL url, Optional<String> loginAndPassword) throws IOException {
  HttpURLConnection connection = urlConnectionService.createConnection(url);
  loginAndPassword.ifPresent(auth -> setBasicAuth(connection, auth));
  try {
    if (connection.getResponseCode() != SC_OK) {
      throw new UrlConnectionException("Failed to fetch from MarksDB", connection);
    }
    return getResponseBytes(connection);
  } catch (IOException e) {
    throw new IOException(String.format("Error connecting to MarksDB at URL %s", url), e);
  }
}
List<String> fetchSignedCsv(Optional<String> loginAndPassword, String csvPath, String sigPath)


@@ -16,46 +16,46 @@ package google.registry.tmch;
import static com.google.appengine.api.taskqueue.QueueFactory.getQueue;
import static com.google.appengine.api.taskqueue.TaskOptions.Builder.withUrl;
import static com.google.appengine.api.urlfetch.FetchOptions.Builder.validateCertificate;
import static com.google.appengine.api.urlfetch.HTTPMethod.POST;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Preconditions.checkNotNull;
import static com.google.common.net.HttpHeaders.LOCATION;
import static com.google.common.net.MediaType.CSV_UTF_8;
import static google.registry.request.UrlConnectionUtils.getResponseBytes;
import static google.registry.tmch.LordnTaskUtils.COLUMNS_CLAIMS;
import static google.registry.tmch.LordnTaskUtils.COLUMNS_SUNRISE;
import static google.registry.util.UrlFetchUtils.getHeaderFirst;
import static google.registry.util.UrlFetchUtils.setPayloadMultipart;
import static java.nio.charset.StandardCharsets.US_ASCII;
import static java.nio.charset.StandardCharsets.UTF_8;
import static javax.servlet.http.HttpServletResponse.SC_ACCEPTED;
import com.google.api.client.http.HttpMethods;
import com.google.appengine.api.taskqueue.LeaseOptions;
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.TaskHandle;
import com.google.appengine.api.taskqueue.TaskOptions;
import com.google.appengine.api.taskqueue.TransientFailureException;
import com.google.appengine.api.urlfetch.HTTPRequest;
import com.google.appengine.api.urlfetch.HTTPResponse;
import com.google.appengine.api.urlfetch.URLFetchService;
import com.google.apphosting.api.DeadlineExceededException;
import com.google.common.base.Joiner;
import com.google.common.base.Strings;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSortedSet;
import com.google.common.collect.Ordering;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryConfig.Config;
import google.registry.request.Action;
import google.registry.request.Parameter;
import google.registry.request.RequestParameters;
import google.registry.request.UrlConnectionService;
import google.registry.request.UrlConnectionUtils;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import google.registry.util.Retrier;
import google.registry.util.TaskQueueUtils;
import google.registry.util.UrlFetchException;
import google.registry.util.UrlConnectionException;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.security.SecureRandom;
import java.util.List;
import java.util.Optional;
import java.util.Random;
import java.util.concurrent.TimeUnit;
import javax.inject.Inject;
@@ -94,7 +94,8 @@ public final class NordnUploadAction implements Runnable {
@Inject Retrier retrier;
@Inject SecureRandom random;
@Inject LordnRequestInitializer lordnRequestInitializer;
@Inject URLFetchService fetchService;
@Inject UrlConnectionService urlConnectionService;
@Inject @Config("tmchMarksdbUrl") String tmchMarksdbUrl;
@Inject @Parameter(LORDN_PHASE_PARAM) String phase;
@Inject @Parameter(RequestParameters.PARAM_TLD) String tld;
@@ -125,15 +126,19 @@ public final class NordnUploadAction implements Runnable {
* delimited String.
*/
static String convertTasksToCsv(List<TaskHandle> tasks, DateTime now, String columns) {
  // Use a Set for deduping purposes so we can be idempotent in case tasks happened to be
  // enqueued multiple times for a given domain create.
  ImmutableSortedSet.Builder<String> builder =
      new ImmutableSortedSet.Builder<>(Ordering.natural());
  for (TaskHandle task : checkNotNull(tasks)) {
    String payload = new String(task.getPayload(), UTF_8);
    if (!Strings.isNullOrEmpty(payload)) {
      builder.add(payload + '\n');
    }
  }
  ImmutableSortedSet<String> csvLines = builder.build();
  String header = String.format("1,%s,%d\n%s\n", now, csvLines.size(), columns);
  return header + Joiner.on("").join(csvLines);
}
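The sorted-set dedup above is what makes the upload idempotent when the same domain create gets enqueued twice. A standalone sketch with hypothetical payload lines:

  // Duplicate payloads collapse to one CSV line, so the header's line count stays correct.
  ImmutableSortedSet<String> csvLines =
      new ImmutableSortedSet.Builder<String>(Ordering.natural())
          .add("example.tld,2-ROID\n")
          .add("example.tld,2-ROID\n") // same task enqueued twice
          .add("other.tld,3-ROID\n")
          .build();
  // csvLines.size() == 2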
/** Leases and returns all tasks from the queue with the specified tag tld, in batches. */
@@ -168,6 +173,11 @@ public final class NordnUploadAction implements Runnable {
: LordnTaskUtils.QUEUE_CLAIMS);
String columns = phase.equals(PARAM_LORDN_PHASE_SUNRISE) ? COLUMNS_SUNRISE : COLUMNS_CLAIMS;
List<TaskHandle> tasks = loadAllTasks(queue, tld);
// Note: This upload/task deletion isn't done atomically (it's not clear how one would do so
// anyway). As a result, it is possible that the upload might succeed yet the deletion of
// enqueued tasks might fail. If so, this would result in the same lines being uploaded to NORDN
across multiple uploads. This is probably OK; all that we really cannot have is a missing
// line.
if (!tasks.isEmpty()) {
String csvData = convertTasksToCsv(tasks, now, columns);
uploadCsvToLordn(String.format("/LORDN/%s/%s", tld, phase), csvData);
@@ -181,47 +191,48 @@ public final class NordnUploadAction implements Runnable {
* <p>Idempotency: If the exact same LORDN report is uploaded twice, the MarksDB server will
* return the same confirmation number.
*
* @see <a href="http://tools.ietf.org/html/draft-lozano-tmch-func-spec-08#section-6.3">TMCH
*     functional specifications - LORDN File</a>
*/
private void uploadCsvToLordn(String urlPath, String csvData) throws IOException {
  String url = tmchMarksdbUrl + urlPath;
  logger.atInfo().log(
      "LORDN upload task %s: Sending to URL: %s ; data: %s", actionLogId, url, csvData);
  HttpURLConnection connection = urlConnectionService.createConnection(new URL(url));
  connection.setRequestMethod(HttpMethods.POST);
  lordnRequestInitializer.initialize(connection, tld);
  UrlConnectionUtils.setPayloadMultipart(
      connection, "file", "claims.csv", CSV_UTF_8, csvData, random);
  try {
    int responseCode = connection.getResponseCode();
    if (logger.atInfo().isEnabled()) {
      String responseContent = new String(getResponseBytes(connection), US_ASCII);
      if (responseContent.isEmpty()) {
        responseContent = "(null)";
      }
      logger.atInfo().log(
          "LORDN upload task %s response: HTTP response code %d, response data: %s",
          actionLogId, responseCode, responseContent);
    }
    if (responseCode != SC_ACCEPTED) {
      throw new UrlConnectionException(
          String.format(
              "LORDN upload task %s error: Failed to upload LORDN claims to MarksDB",
              actionLogId),
          connection);
    }
    String location = connection.getHeaderField(LOCATION);
    if (location == null) {
      throw new UrlConnectionException(
          String.format(
              "LORDN upload task %s error: MarksDB failed to provide a Location header",
              actionLogId),
          connection);
    }
    getQueue(NordnVerifyAction.QUEUE).add(makeVerifyTask(new URL(location)));
  } catch (IOException e) {
    throw new IOException(String.format("Error connecting to MarksDB at URL %s", url), e);
  }
}
private TaskOptions makeVerifyTask(URL url) {


@@ -14,15 +14,11 @@
package google.registry.tmch;
import static com.google.appengine.api.urlfetch.FetchOptions.Builder.validateCertificate;
import static com.google.appengine.api.urlfetch.HTTPMethod.GET;
import static google.registry.request.UrlConnectionUtils.getResponseBytes;
import static java.nio.charset.StandardCharsets.UTF_8;
import static javax.servlet.http.HttpServletResponse.SC_NO_CONTENT;
import static javax.servlet.http.HttpServletResponse.SC_OK;
import com.google.appengine.api.urlfetch.HTTPRequest;
import com.google.appengine.api.urlfetch.HTTPResponse;
import com.google.appengine.api.urlfetch.URLFetchService;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.flogger.FluentLogger;
import com.google.common.io.ByteSource;
@@ -32,9 +28,11 @@ import google.registry.request.HttpException.ConflictException;
import google.registry.request.Parameter;
import google.registry.request.RequestParameters;
import google.registry.request.Response;
import google.registry.request.UrlConnectionService;
import google.registry.request.auth.Auth;
import google.registry.util.UrlFetchException;
import google.registry.util.UrlConnectionException;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Map.Entry;
import javax.inject.Inject;
@@ -68,7 +66,8 @@ public final class NordnVerifyAction implements Runnable {
@Inject LordnRequestInitializer lordnRequestInitializer;
@Inject Response response;
@Inject URLFetchService fetchService;
@Inject UrlConnectionService urlConnectionService;
@Inject @Header(URL_HEADER) URL url;
@Inject @Header(HEADER_ACTION_LOG_ID) String actionLogId;
@Inject @Parameter(RequestParameters.PARAM_TLD) String tld;
@@ -96,51 +95,49 @@ public final class NordnVerifyAction implements Runnable {
@VisibleForTesting
LordnLog verify() throws IOException {
  logger.atInfo().log("LORDN verify task %s: Sending request to URL %s", actionLogId, url);
  HttpURLConnection connection = urlConnectionService.createConnection(url);
  lordnRequestInitializer.initialize(connection, tld);
  try {
    int responseCode = connection.getResponseCode();
    logger.atInfo().log(
        "LORDN verify task %s response: HTTP response code %d", actionLogId, responseCode);
    if (responseCode == SC_NO_CONTENT) {
      // Send a 400+ status code so App Engine will retry the task.
      throw new ConflictException("Not ready");
    }
    if (responseCode != SC_OK) {
      throw new UrlConnectionException(
          String.format(
              "LORDN verify task %s: Failed to verify LORDN upload to MarksDB.", actionLogId),
          connection);
    }
    LordnLog log =
        LordnLog.parse(
            ByteSource.wrap(getResponseBytes(connection)).asCharSource(UTF_8).readLines());
    if (log.getStatus() == LordnLog.Status.ACCEPTED) {
      logger.atInfo().log("LORDN verify task %s: Upload accepted.", actionLogId);
    } else {
      logger.atSevere().log(
          "LORDN verify task %s: Upload rejected with reason: %s", actionLogId, log);
    }
    for (Entry<String, LordnLog.Result> result : log) {
      switch (result.getValue().getOutcome()) {
        case OK:
          break;
        case WARNING:
          // fall through
        case ERROR:
          logger.atWarning().log(result.toString());
          break;
        default:
          logger.atWarning().log(
              "LORDN verify task %s: Unexpected outcome: %s", actionLogId, result);
          break;
      }
    }
    return log;
  } catch (IOException e) {
    throw new IOException(String.format("Error connecting to MarksDB at URL %s", url), e);
  }
}
}


@@ -0,0 +1,65 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.tools;
import java.util.Optional;
/**
* Enumerates the DNSSEC digest types for use with Delegation Signer records.
*
* <p>This also enforces the set of types that are valid for use with Cloud DNS. Customers cannot
* create DS records containing any other digest type.
*
* <p>The complete list can be found here:
* https://www.iana.org/assignments/ds-rr-types/ds-rr-types.xhtml
*/
public enum DigestType {
SHA1(1, 20),
SHA256(2, 32),
// Algorithm number 3 is GOST R 34.11-94 and is deliberately NOT SUPPORTED.
// This algorithm was reviewed by ise-crypto and deemed academically broken (b/207029800).
// In addition, RFC 8624 specifies that this algorithm MUST NOT be used for DNSSEC delegations.
// TODO(sarhabot@): Add note in Cloud DNS code to notify the Registry of any new changes to
// supported digest types.
SHA384(4, 48);
private final int wireValue;
private final int bytes;
DigestType(int wireValue, int bytes) {
this.wireValue = wireValue;
this.bytes = bytes;
}
/** Fetches a DigestType enumeration constant by its IANA assigned value. */
public static Optional<DigestType> fromWireValue(int wireValue) {
for (DigestType alg : DigestType.values()) {
if (alg.getWireValue() == wireValue) {
return Optional.of(alg);
}
}
return Optional.empty();
}
/** Fetches a value in the range [0, 255] that encodes this DS digest type on the wire. */
public int getWireValue() {
return wireValue;
}
/** Returns the expected length in bytes of the digest. */
public int getBytes() {
return bytes;
}
}
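As a quick illustration (a hypothetical helper, not part of this change), a caller can pair fromWireValue() with getBytes() to validate an incoming DS digest:

  // Sketch: a digest is acceptable only if its type is known and its length matches.
  static boolean isValidDsDigest(int wireValue, byte[] digest) {
    return DigestType.fromWireValue(wireValue)
        .map(type -> type.getBytes() == digest.length)
        .orElse(false);
  }
  // isValidDsDigest(2, new byte[32]) -> true (SHA256)
  // isValidDsDigest(3, new byte[32]) -> false (GOST is deliberately unsupported)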

View File

@@ -15,14 +15,16 @@
package google.registry.tools;
import static com.google.common.base.Preconditions.checkArgument;
import static google.registry.batch.AsyncTaskEnqueuer.QUEUE_ASYNC_ACTIONS;
import static google.registry.model.EppResourceUtils.loadByForeignKeyCached;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.tools.LockOrUnlockDomainCommand.REGISTRY_LOCK_STATUSES;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Sets;
import google.registry.batch.AsyncTaskEnqueuer;
import google.registry.batch.RelockDomainAction;
import google.registry.config.RegistryConfig.Config;
import google.registry.model.billing.BillingEvent;
import google.registry.model.billing.BillingEvent.Reason;
@@ -32,6 +34,8 @@ import google.registry.model.domain.RegistryLock;
import google.registry.model.reporting.HistoryEntry;
import google.registry.model.tld.Registry;
import google.registry.model.tld.RegistryLockDao;
import google.registry.request.Action.Service;
import google.registry.util.CloudTasksUtils;
import google.registry.util.StringGenerator;
import java.util.Optional;
import javax.annotation.Nullable;
@@ -53,16 +57,16 @@ public final class DomainLockUtils {
private final StringGenerator stringGenerator;
private final String registryAdminRegistrarId;
private CloudTasksUtils cloudTasksUtils;
@Inject
public DomainLockUtils(
    @Named("base58StringGenerator") StringGenerator stringGenerator,
    @Config("registryAdminClientId") String registryAdminRegistrarId,
    CloudTasksUtils cloudTasksUtils) {
  this.stringGenerator = stringGenerator;
  this.registryAdminRegistrarId = registryAdminRegistrarId;
  this.cloudTasksUtils = cloudTasksUtils;
}
/**
@@ -203,10 +207,38 @@ public final class DomainLockUtils {
private void submitRelockIfNecessary(RegistryLock lock) {
if (lock.getRelockDuration().isPresent()) {
enqueueDomainRelock(lock);
}
}
/**
* Enqueues a task to asynchronously re-lock a registry-locked domain after it was unlocked.
*
* <p>Note: the relockDuration must be present on the lock object.
*/
public void enqueueDomainRelock(RegistryLock lock) {
checkArgument(
lock.getRelockDuration().isPresent(),
"Lock with ID %s not configured for relock",
lock.getRevisionId());
enqueueDomainRelock(lock.getRelockDuration().get(), lock.getRevisionId(), 0);
}
/** Enqueues a task to asynchronously re-lock a registry-locked domain after it was unlocked. */
public void enqueueDomainRelock(Duration countdown, long lockRevisionId, int previousAttempts) {
cloudTasksUtils.enqueue(
QUEUE_ASYNC_ACTIONS,
cloudTasksUtils.createPostTaskWithDelay(
RelockDomainAction.PATH,
Service.BACKEND.toString(),
ImmutableMultimap.of(
RelockDomainAction.OLD_UNLOCK_REVISION_ID_PARAM,
String.valueOf(lockRevisionId),
RelockDomainAction.PREVIOUS_ATTEMPTS_PARAM,
String.valueOf(previousAttempts)),
countdown));
}
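A hedged usage sketch of the new entry point (the utils instance, duration, revision ID, and attempt count are all illustrative values):

  // Re-lock the domain six hours after the unlock, starting from attempt 0.
  domainLockUtils.enqueueDomainRelock(Duration.standardHours(6), 12345L, 0);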
private void setAsRelock(RegistryLock newLock) {
jpaTm()
.transact(


@@ -16,6 +16,7 @@ package google.registry.tools;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.util.PreconditionsUtils.checkArgumentPresent;
import com.beust.jcommander.IStringConverter;
import com.google.auto.value.AutoValue;
@@ -25,6 +26,7 @@ import com.google.common.base.Splitter;
import com.google.common.io.BaseEncoding;
import com.google.template.soy.data.SoyListData;
import com.google.template.soy.data.SoyMapData;
import google.registry.flows.domain.DomainFlowUtils;
import java.util.List;
@AutoValue
@@ -46,6 +48,20 @@ abstract class DsRecord {
"digest should be even-lengthed hex, but is %s (length %s)",
digest,
digest.length());
checkArgumentPresent(
DigestType.fromWireValue(digestType),
String.format("DS record uses an unrecognized digest type: %d", digestType));
if (DigestType.fromWireValue(digestType).get().getBytes()
!= BaseEncoding.base16().decode(digest).length) {
throw new IllegalArgumentException(
String.format("DS record has an invalid digest length: %s", digest));
}
if (!DomainFlowUtils.validateAlgorithm(alg)) {
throw new IllegalArgumentException(
String.format("DS record uses an unrecognized algorithm: %d", alg));
}
return new AutoValue_DsRecord(keyTag, alg, digestType, digest);
}
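To make the added digest-length check concrete, a standalone sketch with a hypothetical SHA-256 digest of the correct length:

  // DS digest type 2 (SHA256) must decode to 32 bytes, i.e. 64 hex characters.
  String digest = Strings.repeat("AB", 32); // hypothetical uppercase hex digest
  boolean lengthOk =
      DigestType.fromWireValue(2).get().getBytes()
          == BaseEncoding.base16().decode(digest).length; // true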


@@ -18,6 +18,7 @@ import static google.registry.util.DomainNameUtils.canonicalizeDomainName;
import com.beust.jcommander.Parameter;
import com.beust.jcommander.Parameters;
import google.registry.model.rde.RdeMode;
import google.registry.tools.params.PathParameter;
import java.nio.file.Path;
import java.nio.file.Paths;
@@ -46,11 +47,20 @@ class EncryptEscrowDepositCommand implements CommandWithRemoteApi {
validateWith = PathParameter.OutputDirectory.class)
private Path outdir = Paths.get(".");
@Parameter(
names = {"-m", "--mode"},
description = "Specify the escrow mode, FULL for RDE and THIN for BRDA.")
private RdeMode mode = RdeMode.FULL;
@Parameter(
names = {"-r", "--revision"},
description = "Specify the revision.")
private int revision = 0;
@Inject EscrowDepositEncryptor encryptor;
@Override
public final void run() throws Exception {
encryptor.encrypt(mode, canonicalizeDomainName(tld), revision, input, outdir);
}
}


@@ -18,6 +18,7 @@ import static google.registry.model.rde.RdeMode.FULL;
import com.google.common.io.ByteStreams;
import google.registry.keyring.api.KeyModule.Key;
import google.registry.model.rde.RdeMode;
import google.registry.model.rde.RdeNamingUtils;
import google.registry.rde.RdeUtil;
import google.registry.rde.RydeEncoder;
@@ -42,26 +43,44 @@ final class EscrowDepositEncryptor {
@Inject @Key("rdeSigningKey") Provider<PGPKeyPair> rdeSigningKey;
@Inject @Key("rdeReceiverKey") Provider<PGPPublicKey> rdeReceiverKey;
@Inject
@Key("brdaSigningKey")
Provider<PGPKeyPair> brdaSigningKey;
@Inject
@Key("brdaReceiverKey")
Provider<PGPPublicKey> brdaReceiverKey;
@Inject EscrowDepositEncryptor() {}
/** Creates a {@code .ryde} and {@code .sig} file, provided an XML deposit file. */
void encrypt(RdeMode mode, String tld, Integer revision, Path xmlFile, Path outdir)
throws IOException, XmlException {
try (InputStream xmlFileInput = Files.newInputStream(xmlFile);
BufferedInputStream xmlInput = new BufferedInputStream(xmlFileInput, PEEK_BUFFER_SIZE)) {
DateTime watermark = RdeUtil.peekWatermark(xmlInput);
String name = RdeNamingUtils.makeRydeFilename(tld, watermark, mode, 1, revision);
Path rydePath = outdir.resolve(name + ".ryde");
Path sigPath = outdir.resolve(name + ".sig");
Path pubPath = outdir.resolve(tld + ".pub");
PGPKeyPair signingKey;
PGPPublicKey receiverKey;
if (mode == FULL) {
signingKey = rdeSigningKey.get();
receiverKey = rdeReceiverKey.get();
} else {
signingKey = brdaSigningKey.get();
receiverKey = brdaReceiverKey.get();
}
try (OutputStream rydeOutput = Files.newOutputStream(rydePath);
OutputStream sigOutput = Files.newOutputStream(sigPath);
RydeEncoder rydeEncoder =
new RydeEncoder.Builder()
.setRydeOutput(rydeOutput, receiverKey)
.setSignatureOutput(sigOutput, signingKey)
.setFileMetadata(name, Files.size(xmlFile), watermark)
.build()) {
ByteStreams.copy(xmlInput, rydeEncoder);
}
try (OutputStream pubOutput = Files.newOutputStream(pubPath);


@@ -133,7 +133,7 @@ final class GenerateEscrowDepositCommand implements CommandWithRemoteApi {
}
cloudTasksUtils.enqueue(
RDE_REPORT_QUEUE,
cloudTasksUtils.createPostTask(
RdeStagingAction.PATH, Service.BACKEND.toString(), paramsBuilder.build()));
}


@@ -27,9 +27,9 @@ import google.registry.model.tld.Registries;
class LoadTestCommand extends ConfirmingCommand
implements CommandWithConnection, CommandWithRemoteApi {
// This is a mostly arbitrary value, roughly two and a half hours. It served as a generous
// timespan for initial backup/restore testing, but has no other special significance.
private static final int DEFAULT_RUN_SECONDS = 9200;
@Parameter(
names = {"--tld"},


@@ -15,14 +15,8 @@
package google.registry.tools;
import com.google.common.collect.ImmutableMap;
import google.registry.tools.javascrap.BackfillRegistryLocksCommand;
import google.registry.tools.javascrap.BackfillSpec11ThreatMatchesCommand;
import google.registry.tools.javascrap.CompareEscrowDepositsCommand;
import google.registry.tools.javascrap.DeleteContactByRoidCommand;
import google.registry.tools.javascrap.HardDeleteHostCommand;
import google.registry.tools.javascrap.PopulateNullRegistrarFieldsCommand;
import google.registry.tools.javascrap.RemoveIpAddressCommand;
import google.registry.tools.javascrap.ResaveAllTldsCommand;
/** Container class to create and run remote commands against a Datastore instance. */
public final class RegistryTool {
@@ -36,8 +30,6 @@ public final class RegistryTool {
public static final ImmutableMap<String, Class<? extends Command>> COMMAND_MAP =
new ImmutableMap.Builder<String, Class<? extends Command>>()
.put("ack_poll_messages", AckPollMessagesCommand.class)
.put("backfill_registry_locks", BackfillRegistryLocksCommand.class)
.put("backfill_spec11_threat_matches", BackfillSpec11ThreatMatchesCommand.class)
.put("canonicalize_labels", CanonicalizeLabelsCommand.class)
.put("check_domain", CheckDomainCommand.class)
.put("check_domain_claims", CheckDomainClaimsCommand.class)
@@ -57,7 +49,6 @@ public final class RegistryTool {
.put("curl", CurlCommand.class)
.put("dedupe_one_time_billing_event_ids", DedupeOneTimeBillingEventIdsCommand.class)
.put("delete_allocation_tokens", DeleteAllocationTokensCommand.class)
.put("delete_contact_by_roid", DeleteContactByRoidCommand.class)
.put("delete_domain", DeleteDomainCommand.class)
.put("delete_host", DeleteHostCommand.class)
.put("delete_premium_list", DeletePremiumListCommand.class)
@@ -107,12 +98,9 @@ public final class RegistryTool {
.put("login", LoginCommand.class)
.put("logout", LogoutCommand.class)
.put("pending_escrow", PendingEscrowCommand.class)
.put("populate_null_registrar_fields", PopulateNullRegistrarFieldsCommand.class)
.put("registrar_contact", RegistrarContactCommand.class)
.put("remove_ip_address", RemoveIpAddressCommand.class)
.put("remove_registry_one_key", RemoveRegistryOneKeyCommand.class)
.put("renew_domain", RenewDomainCommand.class)
.put("resave_all_tlds", ResaveAllTldsCommand.class)
.put("resave_entities", ResaveEntitiesCommand.class)
.put("resave_environment_entities", ResaveEnvironmentEntitiesCommand.class)
.put("resave_epp_resource", ResaveEppResourceCommand.class)
@@ -135,8 +123,10 @@ public final class RegistryTool {
.put("update_server_locks", UpdateServerLocksCommand.class)
.put("update_tld", UpdateTldCommand.class)
.put("upload_claims_list", UploadClaimsListCommand.class)
.put("validate_datastore", ValidateDatastoreCommand.class)
.put("validate_escrow_deposit", ValidateEscrowDepositCommand.class)
.put("validate_login_credentials", ValidateLoginCredentialsCommand.class)
.put("validate_sql", ValidateSqlCommand.class)
.put("verify_ote", VerifyOteCommand.class)
.put("whois_query", WhoisQueryCommand.class)
.build();


@@ -38,13 +38,10 @@ import google.registry.privileges.secretmanager.SecretManagerModule;
import google.registry.rde.RdeModule;
import google.registry.request.Modules.DatastoreServiceModule;
import google.registry.request.Modules.Jackson2Module;
import google.registry.request.Modules.URLFetchServiceModule;
import google.registry.request.Modules.UrlFetchTransportModule;
import google.registry.request.Modules.UrlConnectionServiceModule;
import google.registry.request.Modules.UserServiceModule;
import google.registry.tools.AuthModule.LocalCredentialModule;
import google.registry.tools.javascrap.BackfillRegistryLocksCommand;
import google.registry.tools.javascrap.CompareEscrowDepositsCommand;
import google.registry.tools.javascrap.DeleteContactByRoidCommand;
import google.registry.tools.javascrap.HardDeleteHostCommand;
import google.registry.util.UtilsModule;
import google.registry.whois.NonCachingWhoisModule;
@@ -78,10 +75,10 @@ import javax.inject.Singleton;
LocalCredentialModule.class,
PersistenceModule.class,
RdeModule.class,
RegistryToolDataflowModule.class,
RequestFactoryModule.class,
SecretManagerModule.class,
URLFetchServiceModule.class,
UrlFetchTransportModule.class,
UrlConnectionServiceModule.class,
UserServiceModule.class,
UtilsModule.class,
VoidDnsWriterModule.class,
@@ -90,8 +87,6 @@ import javax.inject.Singleton;
interface RegistryToolComponent {
void inject(AckPollMessagesCommand command);
void inject(BackfillRegistryLocksCommand command);
void inject(CheckDomainClaimsCommand command);
void inject(CheckDomainCommand command);
@@ -112,8 +107,6 @@ interface RegistryToolComponent {
void inject(CreateTldCommand command);
void inject(DeleteContactByRoidCommand command);
void inject(EncryptEscrowDepositCommand command);
void inject(EnqueuePollMessageCommand command);
@@ -176,10 +169,14 @@ interface RegistryToolComponent {
void inject(UpdateTldCommand command);
void inject(ValidateDatastoreCommand command);
void inject(ValidateEscrowDepositCommand command);
void inject(ValidateLoginCredentialsCommand command);
void inject(ValidateSqlCommand command);
void inject(WhoisQueryCommand command);
AppEngineConnection appEngineConnection();


@@ -0,0 +1,39 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.tools;
import com.google.api.services.dataflow.Dataflow;
import dagger.Module;
import dagger.Provides;
import google.registry.config.CredentialModule.LocalCredential;
import google.registry.config.RegistryConfig.Config;
import google.registry.util.GoogleCredentialsBundle;
/** Provides a {@link Dataflow} API client for use in {@link RegistryTool}. */
@Module
public class RegistryToolDataflowModule {
@Provides
static Dataflow provideDataflow(
@LocalCredential GoogleCredentialsBundle credentialsBundle,
@Config("projectId") String projectId) {
return new Dataflow.Builder(
credentialsBundle.getHttpTransport(),
credentialsBundle.getJsonFactory(),
credentialsBundle.getHttpRequestInitializer())
.setApplicationName(String.format("%s nomulus", projectId))
.build();
}
}


@@ -0,0 +1,232 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.tools;
import static google.registry.beam.BeamUtils.createJobName;
import com.beust.jcommander.Parameter;
import com.google.api.services.dataflow.Dataflow;
import com.google.api.services.dataflow.model.Job;
import com.google.api.services.dataflow.model.LaunchFlexTemplateParameter;
import com.google.api.services.dataflow.model.LaunchFlexTemplateRequest;
import com.google.api.services.dataflow.model.LaunchFlexTemplateResponse;
import com.google.common.base.Ascii;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import google.registry.beam.common.DatabaseSnapshot;
import google.registry.config.RegistryConfig.Config;
import google.registry.tools.params.DateTimeParameter;
import google.registry.util.Clock;
import google.registry.util.RequestStatusChecker;
import google.registry.util.Sleeper;
import java.io.IOException;
import java.util.Optional;
import java.util.UUID;
import javax.inject.Inject;
import org.joda.time.DateTime;
import org.joda.time.Duration;
/** Shared setup for commands that validate the data replication between Datastore and Cloud SQL. */
abstract class ValidateDatabaseMigrationCommand
implements CommandWithConnection, CommandWithRemoteApi {
private static final String PIPELINE_NAME = "validate_database_pipeline";
private static final String MANUAL_PIPELINE_LAUNCH_COMMAND_TEMPLATE =
"gcloud dataflow flex-template run "
+ "\"%s-${USER}-$(date +%%Y%%m%%dt%%H%%M%%S)\" "
+ "--template-file-gcs-location %s "
+ "--project %s "
+ "--region=%s "
+ "--worker-machine-type=n2-standard-8 --num-workers=8 "
+ "--parameters registryEnvironment=%s "
+ "--parameters sqlSnapshotId=%s "
+ "--parameters latestCommitLogTimestamp=%s "
+ "--parameters diffOutputGcsBucket=%s ";
// States indicating a job is not finished yet.
static final ImmutableSet<String> DATAFLOW_JOB_RUNNING_STATES =
ImmutableSet.of(
"JOB_STATE_UNKNOWN",
"JOB_STATE_RUNNING",
"JOB_STATE_STOPPED",
"JOB_STATE_PENDING",
"JOB_STATE_QUEUED");
static final Duration JOB_POLLING_INTERVAL = Duration.standardSeconds(60);
@Parameter(
names = {"-m", "--manual"},
description =
"If true, let user launch the comparison pipeline manually out of band. "
+ "Command will wait for user key-press to exit after syncing Datastore.")
boolean manualLaunchPipeline;
@Parameter(
names = {"-r", "--release"},
description = "The release tag of the BEAM pipeline to run. It defaults to 'live'.")
String release = "live";
@Parameter(
names = {"-c", "--comparisonStartTimestamp"},
description =
"When comparing History and Epp Resource entities, ignore those that have not"
+ " changed since this time.",
converter = DateTimeParameter.class)
DateTime comparisonStartTimestamp;
@Parameter(
names = {"-o", "--outputBucket"},
description =
"The GCS bucket where data discrepancies are logged. "
+ "It defaults to ${projectId}-beam")
String outputBucket;
@Inject Clock clock;
@Inject Dataflow dataflow;
@Inject
@Config("defaultJobRegion")
String jobRegion;
@Inject
@Config("beamStagingBucketUrl")
String stagingBucketUrl;
@Inject
@Config("projectId")
String projectId;
@Inject Sleeper sleeper;
AppEngineConnection connection;
@Override
public void setConnection(AppEngineConnection connection) {
this.connection = connection;
}
String getDataflowJobStatus(String jobId) {
try {
return dataflow
.projects()
.locations()
.jobs()
.get(projectId, jobRegion, jobId)
.execute()
.getCurrentState();
} catch (IOException e) {
throw new RuntimeException(e);
}
}
void launchPipelineAndWaitUntilFinish(
String pipelineName, DatabaseSnapshot snapshot, String latestCommitTimestamp) {
Job pipelineJob =
launchComparisonPipeline(pipelineName, snapshot.getSnapshotId(), latestCommitTimestamp)
.getJob();
String jobId = pipelineJob.getId();
System.out.printf("Launched comparison pipeline %s (%s).\n", pipelineJob.getName(), jobId);
while (DATAFLOW_JOB_RUNNING_STATES.contains(getDataflowJobStatus(jobId))) {
sleeper.sleepInterruptibly(JOB_POLLING_INTERVAL);
}
System.out.printf(
"Pipeline ended with %s state. Please check counters for results.\n",
getDataflowJobStatus(jobId));
}
String getOutputBucket() {
return Optional.ofNullable(outputBucket).orElse(projectId + "-beam");
}
String getContainerSpecGcsPath() {
return String.format(
"%s/%s_metadata.json", stagingBucketUrl.replace("live", release), PIPELINE_NAME);
}
String getManualLaunchCommand(
String jobName, String snapshotId, String latestCommitLogTimestamp) {
String baseCommand =
String.format(
MANUAL_PIPELINE_LAUNCH_COMMAND_TEMPLATE,
jobName,
getContainerSpecGcsPath(),
projectId,
jobRegion,
RegistryToolEnvironment.get().name(),
snapshotId,
latestCommitLogTimestamp,
getOutputBucket());
if (comparisonStartTimestamp == null) {
return baseCommand;
}
return baseCommand + "--parameters comparisonStartTimestamp=" + comparisonStartTimestamp;
}
LaunchFlexTemplateResponse launchComparisonPipeline(
String jobName, String sqlSnapshotId, String latestCommitLogTimestamp) {
try {
// Hardcode machine type and initial workers to force a quick start.
ImmutableMap.Builder<String, String> paramsBuilder =
new ImmutableMap.Builder()
.put("workerMachineType", "n2-standard-8")
.put("numWorkers", "8")
.put("sqlSnapshotId", sqlSnapshotId)
.put("latestCommitLogTimestamp", latestCommitLogTimestamp)
.put("registryEnvironment", RegistryToolEnvironment.get().name())
.put("diffOutputGcsBucket", getOutputBucket());
if (comparisonStartTimestamp != null) {
paramsBuilder.put("comparisonStartTimestamp", comparisonStartTimestamp.toString());
}
LaunchFlexTemplateParameter parameter =
new LaunchFlexTemplateParameter()
.setJobName(createJobName(Ascii.toLowerCase(jobName).replace('_', '-'), clock))
.setContainerSpecGcsPath(getContainerSpecGcsPath())
.setParameters(paramsBuilder.build());
return dataflow
.projects()
.locations()
.flexTemplates()
.launch(
projectId, jobRegion, new LaunchFlexTemplateRequest().setLaunchParameter(parameter))
.execute();
} catch (IOException e) {
throw new RuntimeException(e);
}
}
/**
* A fake implementation of {@link RequestStatusChecker} for managing SQL-backed locks from
* non-AppEngine platforms. This is only required until the Nomulus server is migrated off
* AppEngine.
*/
static class FakeRequestStatusChecker implements RequestStatusChecker {
private final String logId =
ValidateDatastoreCommand.class.getSimpleName() + "-" + UUID.randomUUID();
@Override
public String getLogId() {
return logId;
}
@Override
public boolean isRunning(String requestLogId) {
return false;
}
}
}


@@ -0,0 +1,105 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.tools;
import static google.registry.model.replay.ReplicateToDatastoreAction.REPLICATE_TO_DATASTORE_LOCK_LEASE_LENGTH;
import static google.registry.model.replay.ReplicateToDatastoreAction.REPLICATE_TO_DATASTORE_LOCK_NAME;
import static java.nio.charset.StandardCharsets.UTF_8;
import com.beust.jcommander.Parameters;
import com.google.common.collect.ImmutableMap;
import com.google.common.net.MediaType;
import google.registry.backup.SyncDatastoreToSqlSnapshotAction;
import google.registry.beam.common.DatabaseSnapshot;
import google.registry.model.common.DatabaseMigrationStateSchedule;
import google.registry.model.common.DatabaseMigrationStateSchedule.MigrationState;
import google.registry.model.common.DatabaseMigrationStateSchedule.ReplayDirection;
import google.registry.model.server.Lock;
import google.registry.request.Action.Service;
import java.util.Optional;
/**
* Validates asynchronously replicated data from the primary Cloud SQL database to Datastore.
*
* <p>This command suspends the replication process (by acquiring the replication lock), takes a
* snapshot of the Cloud SQL database, invokes a Nomulus server action to sync Datastore to this
* snapshot (See {@link SyncDatastoreToSqlSnapshotAction} for details), and finally launches a BEAM
* pipeline to compare Datastore with the given SQL snapshot.
*
* <p>This command does not lock up the SQL database. Normal processing can proceed.
*/
@Parameters(commandDescription = "Validates Datastore with the primary Cloud SQL database.")
public class ValidateDatastoreCommand extends ValidateDatabaseMigrationCommand {
private static final Service NOMULUS_SERVICE = Service.BACKEND;
@Override
public void run() throws Exception {
MigrationState state = DatabaseMigrationStateSchedule.getValueAtTime(clock.nowUtc());
if (!state.getReplayDirection().equals(ReplayDirection.SQL_TO_DATASTORE)) {
throw new IllegalStateException("Cannot validate Datastore in migration step " + state);
}
Optional<Lock> lock =
Lock.acquireSql(
REPLICATE_TO_DATASTORE_LOCK_NAME,
null,
REPLICATE_TO_DATASTORE_LOCK_LEASE_LENGTH,
new FakeRequestStatusChecker(),
false);
if (!lock.isPresent()) {
throw new IllegalStateException("Cannot acquire the async propagation lock.");
}
try {
try (DatabaseSnapshot snapshot = DatabaseSnapshot.createSnapshot()) {
System.out.printf("Obtained snapshot %s\n", snapshot.getSnapshotId());
AppEngineConnection connectionToService = connection.withService(NOMULUS_SERVICE);
String response =
connectionToService.sendPostRequest(
getNomulusEndpoint(snapshot.getSnapshotId()),
ImmutableMap.<String, String>of(),
MediaType.PLAIN_TEXT_UTF_8,
"".getBytes(UTF_8));
System.out.println(response);
lock.ifPresent(Lock::releaseSql);
lock = Optional.empty();
// See SyncDatastoreToSqlSnapshotAction for response format.
String latestCommitTimestamp =
response.substring(response.lastIndexOf('(') + 1, response.lastIndexOf(')'));
if (manualLaunchPipeline) {
System.out.printf(
"To launch the pipeline manually, use the following command:\n%s\n",
getManualLaunchCommand(
"validate-datastore", snapshot.getSnapshotId(), latestCommitTimestamp));
System.out.print("\nEnter any key to continue when the pipeline ends:");
System.in.read();
} else {
launchPipelineAndWaitUntilFinish("validate-datastore", snapshot, latestCommitTimestamp);
}
}
} finally {
lock.ifPresent(Lock::releaseSql);
}
}
private static String getNomulusEndpoint(String sqlSnapshotId) {
return String.format(
"%s?sqlSnapshotId=%s", SyncDatastoreToSqlSnapshotAction.PATH, sqlSnapshotId);
}
}
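The latestCommitTimestamp parsing above relies on the server response ending with a parenthesized timestamp. A tiny sketch with a hypothetical response string (the exact wording may differ; see SyncDatastoreToSqlSnapshotAction for the real format):

  String response = "Datastore synced to snapshot 3a6f (2022-03-09T20:47:24.000Z)"; // hypothetical
  String latestCommitTimestamp =
      response.substring(response.lastIndexOf('(') + 1, response.lastIndexOf(')'));
  // latestCommitTimestamp -> "2022-03-09T20:47:24.000Z"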


@@ -0,0 +1,91 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.tools;
import static google.registry.backup.ReplayCommitLogsToSqlAction.REPLAY_TO_SQL_LOCK_LEASE_LENGTH;
import static google.registry.backup.ReplayCommitLogsToSqlAction.REPLAY_TO_SQL_LOCK_NAME;
import com.beust.jcommander.Parameters;
import google.registry.beam.common.DatabaseSnapshot;
import google.registry.model.common.DatabaseMigrationStateSchedule;
import google.registry.model.common.DatabaseMigrationStateSchedule.MigrationState;
import google.registry.model.common.DatabaseMigrationStateSchedule.ReplayDirection;
import google.registry.model.replay.SqlReplayCheckpoint;
import google.registry.model.server.Lock;
import google.registry.persistence.transaction.TransactionManagerFactory;
import java.util.Optional;
import org.joda.time.DateTime;
/**
* Validates asynchronously replicated data from the primary Datastore to Cloud SQL.
*
* <p>This command suspends the replication process (by acquiring the replication lock), takes a
* snapshot of the Cloud SQL database, finds the corresponding Datastore snapshot, and finally
* launches a BEAM pipeline to compare the two snapshots.
*
* <p>This command does not lock up either database. Normal processing can proceed.
*/
@Parameters(commandDescription = "Validates Cloud SQL with the primary Datastore.")
public class ValidateSqlCommand extends ValidateDatabaseMigrationCommand {
@Override
public void run() throws Exception {
MigrationState state = DatabaseMigrationStateSchedule.getValueAtTime(clock.nowUtc());
if (!state.getReplayDirection().equals(ReplayDirection.DATASTORE_TO_SQL)) {
throw new IllegalStateException("Cannot validate SQL in migration step " + state);
}
Optional<Lock> lock =
Lock.acquireSql(
REPLAY_TO_SQL_LOCK_NAME,
null,
REPLAY_TO_SQL_LOCK_LEASE_LENGTH,
new FakeRequestStatusChecker(),
false);
if (!lock.isPresent()) {
throw new IllegalStateException("Cannot acquire the async propagation lock.");
}
try {
DateTime latestCommitLogTime =
TransactionManagerFactory.jpaTm().transact(() -> SqlReplayCheckpoint.get());
try (DatabaseSnapshot databaseSnapshot = DatabaseSnapshot.createSnapshot()) {
// Eagerly release the commitlog replay lock so that replay can resume.
lock.ifPresent(Lock::releaseSql);
lock = Optional.empty();
System.out.printf(
"Start comparison with SQL snapshot (%s) and CommitLog timestamp (%s).\n",
databaseSnapshot.getSnapshotId(), latestCommitLogTime);
if (manualLaunchPipeline) {
System.out.printf(
"To launch the pipeline manually, use the following command:\n%s\n",
getManualLaunchCommand(
"validate-sql",
databaseSnapshot.getSnapshotId(),
latestCommitLogTime.toString()));
System.out.print("\nEnter any key to continue when the pipeline ends:");
System.in.read();
} else {
launchPipelineAndWaitUntilFinish(
"validate-sql", databaseSnapshot, latestCommitLogTime.toString());
}
}
} finally {
lock.ifPresent(Lock::releaseSql);
}
}
}


@@ -1,157 +0,0 @@
// Copyright 2020 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.tools.javascrap;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.persistence.transaction.TransactionManagerUtil.transactIfJpaTm;
import static google.registry.tools.LockOrUnlockDomainCommand.REGISTRY_LOCK_STATUSES;
import com.beust.jcommander.Parameter;
import com.beust.jcommander.Parameters;
import com.google.common.collect.ImmutableCollection;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryConfig.Config;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.RegistryLock;
import google.registry.model.reporting.HistoryEntry;
import google.registry.model.reporting.HistoryEntryDao;
import google.registry.model.tld.RegistryLockDao;
import google.registry.persistence.VKey;
import google.registry.tools.CommandWithRemoteApi;
import google.registry.tools.ConfirmingCommand;
import google.registry.util.Clock;
import google.registry.util.StringGenerator;
import java.util.Comparator;
import java.util.List;
import javax.inject.Inject;
import javax.inject.Named;
import org.joda.time.DateTime;
/**
* Scrap tool to backfill {@link RegistryLock}s for domains previously locked.
*
* <p>This will save new objects for all existing domains that are locked but don't have any
* corresponding lock objects already in the database.
*/
@Parameters(
separators = " =",
commandDescription =
"Backfills RegistryLock objects for specified domain resource IDs that are locked but don't"
+ " already have a corresponding RegistryLock object.")
public class BackfillRegistryLocksCommand extends ConfirmingCommand
implements CommandWithRemoteApi {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static final int VERIFICATION_CODE_LENGTH = 32;
@Parameter(
names = {"--domain_roids"},
description = "Comma-separated list of domain roids to check")
protected List<String> roids;
// Inject here so that we can create the command automatically for tests
@Inject Clock clock;
@Inject
@Config("registryAdminClientId")
String registryAdminClientId;
@Inject
@Named("base58StringGenerator")
StringGenerator stringGenerator;
private ImmutableList<DomainBase> lockedDomains;
@Override
protected String prompt() {
checkArgument(
roids != null && !roids.isEmpty(), "Must provide non-empty domain_roids argument");
lockedDomains =
jpaTm().transact(() -> getLockedDomainsWithoutLocks(jpaTm().getTransactionTime()));
ImmutableList<String> lockedDomainNames =
lockedDomains.stream().map(DomainBase::getDomainName).collect(toImmutableList());
return String.format(
"Locked domains for which there does not exist a RegistryLock object: %s",
lockedDomainNames);
}
@Override
protected String execute() {
ImmutableSet.Builder<String> failedDomainsBuilder = new ImmutableSet.Builder<>();
jpaTm()
.transact(
() -> {
for (DomainBase domainBase : lockedDomains) {
try {
RegistryLockDao.save(
new RegistryLock.Builder()
.isSuperuser(true)
.setRegistrarId(registryAdminClientId)
.setRepoId(domainBase.getRepoId())
.setDomainName(domainBase.getDomainName())
.setLockCompletionTime(
getLockCompletionTimestamp(domainBase, jpaTm().getTransactionTime()))
.setVerificationCode(
stringGenerator.createString(VERIFICATION_CODE_LENGTH))
.build());
} catch (Throwable t) {
logger.atSevere().withCause(t).log(
"Error when creating lock object for domain '%s'.",
domainBase.getDomainName());
failedDomainsBuilder.add(domainBase.getDomainName());
}
}
});
ImmutableSet<String> failedDomains = failedDomainsBuilder.build();
if (failedDomains.isEmpty()) {
return String.format(
"Successfully created lock objects for %d domains.", lockedDomains.size());
} else {
return String.format(
"Successfully created lock objects for %d domains. We failed to create locks "
+ "for the following domains: %s",
lockedDomains.size() - failedDomains.size(), failedDomains);
}
}
private DateTime getLockCompletionTimestamp(DomainBase domainBase, DateTime now) {
// Best-effort, if a domain was URS-locked we should use that time
// If we can't find that, return now.
return HistoryEntryDao.loadHistoryObjectsForResource(domainBase.createVKey()).stream()
// sort by modification time descending so we get the most recent one if it was locked twice
.sorted(Comparator.comparing(HistoryEntry::getModificationTime).reversed())
.filter(entry -> "Uniform Rapid Suspension".equals(entry.getReason()))
.findFirst()
.map(HistoryEntry::getModificationTime)
.orElse(now);
}
private ImmutableList<DomainBase> getLockedDomainsWithoutLocks(DateTime now) {
ImmutableList<VKey<DomainBase>> domainKeys =
roids.stream().map(roid -> VKey.create(DomainBase.class, roid)).collect(toImmutableList());
ImmutableCollection<DomainBase> domains =
transactIfJpaTm(() -> tm().loadByKeys(domainKeys)).values();
return domains.stream()
.filter(d -> d.getDeletionTime().isAfter(now))
.filter(d -> d.getStatusValues().containsAll(REGISTRY_LOCK_STATUSES))
.filter(d -> !RegistryLockDao.getMostRecentByRepoId(d.getRepoId()).isPresent())
.collect(toImmutableList());
}
}


@@ -1,223 +0,0 @@
// Copyright 2020 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.tools.javascrap;
import static com.google.common.base.Preconditions.checkState;
import static com.google.common.collect.ImmutableListMultimap.flatteningToImmutableListMultimap;
import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.persistence.transaction.TransactionManagerUtil.transactIfJpaTm;
import com.beust.jcommander.Parameter;
import com.beust.jcommander.Parameters;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableListMultimap;
import com.google.common.collect.ImmutableSet;
import google.registry.beam.spec11.ThreatMatch;
import google.registry.model.domain.DomainBase;
import google.registry.model.reporting.Spec11ThreatMatch;
import google.registry.model.reporting.Spec11ThreatMatch.ThreatType;
import google.registry.model.reporting.Spec11ThreatMatchDao;
import google.registry.persistence.transaction.QueryComposer;
import google.registry.reporting.spec11.RegistrarThreatMatches;
import google.registry.reporting.spec11.Spec11RegistrarThreatMatchesParser;
import google.registry.tools.CommandWithRemoteApi;
import google.registry.tools.ConfirmingCommand;
import google.registry.util.Clock;
import java.io.IOException;
import java.util.Comparator;
import java.util.function.Function;
import javax.inject.Inject;
import org.joda.time.LocalDate;
/**
* Scrap tool to backfill {@link Spec11ThreatMatch} objects from prior days.
*
* <p>This will load the previously-existing Spec11 files from GCS, looking back to 2019-01-01 (a
* rough estimate of when we started using this format), and convert those RegistrarThreatMatches
* objects into the new Spec11ThreatMatch format. It will then insert these entries into SQL.
*
* <p>Note that the script will attempt to find the corresponding {@link DomainBase} object for each
* domain name on the day of the scan. It will fail if it cannot find a corresponding domain object,
* or if the domain objects were not active at the time of the scan.
*/
@Parameters(
commandDescription =
"Backfills Spec11 threat match entries from the old and deprecated GCS JSON files to the "
+ "Cloud SQL database.")
public class BackfillSpec11ThreatMatchesCommand extends ConfirmingCommand
implements CommandWithRemoteApi {
private static final LocalDate START_DATE = new LocalDate(2019, 1, 1);
@Parameter(
names = {"-o", "--overwrite_existing_dates"},
description =
"Whether the command will overwrite data that already exists for dates that exist in the "
+ "GCS bucket. Defaults to false.")
private boolean overrideExistingDates;
@Inject Spec11RegistrarThreatMatchesParser threatMatchesParser;
// Inject the clock for testing purposes
@Inject Clock clock;
@Override
protected String prompt() {
return String.format("Backfill Spec11 results from %d files?", getDatesToBackfill().size());
}
@Override
protected String execute() {
ImmutableList<LocalDate> dates = getDatesToBackfill();
ImmutableListMultimap.Builder<LocalDate, RegistrarThreatMatches> threatMatchesBuilder =
new ImmutableListMultimap.Builder<>();
for (LocalDate date : dates) {
try {
// It's OK if the file doesn't exist for a particular date; the result will be empty.
threatMatchesBuilder.putAll(date, threatMatchesParser.getRegistrarThreatMatches(date));
} catch (IOException e) {
throw new RuntimeException(
String.format("Error parsing through file with date %s.", date), e);
}
}
ImmutableListMultimap<LocalDate, RegistrarThreatMatches> threatMatches =
threatMatchesBuilder.build();
// Look up all possible DomainBases for these domain names, any of which can be in the past
ImmutableListMultimap<String, DomainBase> domainsByDomainName =
getDomainsByDomainName(threatMatches);
// For each date, convert all threat matches with the proper domain repo ID
int totalNumThreats = 0;
for (LocalDate date : threatMatches.keySet()) {
ImmutableList.Builder<Spec11ThreatMatch> spec11ThreatsBuilder = new ImmutableList.Builder<>();
for (RegistrarThreatMatches rtm : threatMatches.get(date)) {
rtm.threatMatches().stream()
.map(
threatMatch ->
threatMatchToCloudSqlObject(
threatMatch, date, rtm.clientId(), domainsByDomainName))
.forEach(spec11ThreatsBuilder::add);
}
ImmutableList<Spec11ThreatMatch> spec11Threats = spec11ThreatsBuilder.build();
jpaTm()
.transact(
() -> {
Spec11ThreatMatchDao.deleteEntriesByDate(jpaTm(), date);
jpaTm().putAll(spec11Threats);
});
totalNumThreats += spec11Threats.size();
}
return String.format(
"Successfully parsed through %d files with %d threats.", dates.size(), totalNumThreats);
}
/** Returns a per-domain list of possible DomainBase objects, starting with the most recent. */
private ImmutableListMultimap<String, DomainBase> getDomainsByDomainName(
ImmutableListMultimap<LocalDate, RegistrarThreatMatches> threatMatchesByDate) {
return threatMatchesByDate.values().stream()
.map(RegistrarThreatMatches::threatMatches)
.flatMap(ImmutableList::stream)
.map(ThreatMatch::fullyQualifiedDomainName)
.distinct()
.collect(
flatteningToImmutableListMultimap(
Function.identity(),
(domainName) -> {
ImmutableList<DomainBase> domains = loadDomainsForFqdn(domainName);
checkState(
!domains.isEmpty(),
"Domain name %s had no associated DomainBase objects.",
domainName);
return domains.stream()
.sorted(Comparator.comparing(DomainBase::getCreationTime).reversed());
}));
}
/** Loads in all {@link DomainBase} objects for a given FQDN. */
private ImmutableList<DomainBase> loadDomainsForFqdn(String fullyQualifiedDomainName) {
return transactIfJpaTm(
() ->
tm().createQueryComposer(DomainBase.class)
.where(
"fullyQualifiedDomainName",
QueryComposer.Comparator.EQ,
fullyQualifiedDomainName)
.list());
}
/** Converts the previous {@link ThreatMatch} object to {@link Spec11ThreatMatch}. */
private Spec11ThreatMatch threatMatchToCloudSqlObject(
ThreatMatch threatMatch,
LocalDate date,
String registrarId,
ImmutableListMultimap<String, DomainBase> domainsByDomainName) {
DomainBase domain =
findDomainAsOfDateOrThrow(
threatMatch.fullyQualifiedDomainName(), date, domainsByDomainName);
return new Spec11ThreatMatch.Builder()
.setThreatTypes(ImmutableSet.of(ThreatType.valueOf(threatMatch.threatType())))
.setCheckDate(date)
.setRegistrarId(registrarId)
.setDomainName(threatMatch.fullyQualifiedDomainName())
.setDomainRepoId(domain.getRepoId())
.build();
}
/** Returns the DomainBase object as of the particular date, which is likely in the past. */
private DomainBase findDomainAsOfDateOrThrow(
String domainName,
LocalDate date,
ImmutableListMultimap<String, DomainBase> domainsByDomainName) {
ImmutableList<DomainBase> domains = domainsByDomainName.get(domainName);
for (DomainBase domain : domains) {
// We only know the date (not datetime) of the threat scan, so we approximate
LocalDate creationDate = domain.getCreationTime().toLocalDate();
LocalDate deletionDate = domain.getDeletionTime().toLocalDate();
if (!date.isBefore(creationDate) && !date.isAfter(deletionDate)) {
return domain;
}
}
throw new IllegalStateException(
String.format("Could not find a DomainBase valid for %s on day %s.", domainName, date));
}
/** Returns the list of dates between {@link #START_DATE} and now (UTC), inclusive. */
private ImmutableList<LocalDate> getDatesToBackfill() {
ImmutableSet<LocalDate> datesToSkip =
overrideExistingDates ? ImmutableSet.of() : getExistingDates();
ImmutableList.Builder<LocalDate> result = new ImmutableList.Builder<>();
LocalDate endDate = clock.nowUtc().toLocalDate();
for (LocalDate currentDate = START_DATE;
!currentDate.isAfter(endDate);
currentDate = currentDate.plusDays(1)) {
if (!datesToSkip.contains(currentDate)) {
result.add(currentDate);
}
}
return result.build();
}
private ImmutableSet<LocalDate> getExistingDates() {
return jpaTm()
.transact(
() ->
jpaTm()
.query(
"SELECT DISTINCT stm.checkDate FROM Spec11ThreatMatch stm", LocalDate.class)
.getResultStream()
.collect(toImmutableSet()));
}
}


@@ -1,115 +0,0 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.tools.javascrap;
import static com.google.common.base.Verify.verify;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import com.beust.jcommander.Parameter;
import com.beust.jcommander.Parameters;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.googlecode.objectify.Key;
import google.registry.model.contact.ContactResource;
import google.registry.model.eppcommon.StatusValue;
import google.registry.model.index.EppResourceIndex;
import google.registry.model.index.ForeignKeyIndex;
import google.registry.tools.CommandWithRemoteApi;
import google.registry.tools.ConfirmingCommand;
import google.registry.util.SystemClock;
import java.util.List;
import java.util.Objects;
/**
* Deletes a {@link google.registry.model.contact.ContactResource} by its ROID.
*
* <p>This is a short-term tool for race-condition cleanup while the bug is being fixed.
*/
@Parameters(separators = " =", commandDescription = "Delete a contact by its ROID.")
public class DeleteContactByRoidCommand extends ConfirmingCommand implements CommandWithRemoteApi {
@Parameter(names = "--roid", description = "The roid of the contact to be deleted.")
String roid;
@Parameter(
names = "--contact_id",
description = "The user provided contactId, for verification purpose.")
String contactId;
ImmutableList<Key<?>> toDelete;
@Override
protected void init() {
System.out.printf("Deleting %s, which refers to %s.\n", roid, contactId);
tm().transact(
() -> {
Key<ContactResource> targetKey = Key.create(ContactResource.class, roid);
ContactResource targetContact = auditedOfy().load().key(targetKey).now();
verify(
Objects.equals(targetContact.getContactId(), contactId),
"contactId does not match.");
verify(
Objects.equals(targetContact.getStatusValues(), ImmutableSet.of(StatusValue.OK)));
System.out.println("Target contact has the expected contactId");
String canonicalResource =
ForeignKeyIndex.load(ContactResource.class, contactId, new SystemClock().nowUtc())
.getResourceKey()
.getOfyKey()
.getName();
verify(!Objects.equals(canonicalResource, roid), "Contact still in ForeignKeyIndex.");
System.out.printf(
"It is safe to delete %s, since the contactId is mapped to a different entry in"
+ " the Foreign key index (%s).\n\n",
roid, canonicalResource);
List<Object> ancestors =
auditedOfy().load().ancestor(Key.create(ContactResource.class, roid)).list();
System.out.println("Ancestor query returns: ");
for (Object entity : ancestors) {
System.out.println(Key.create(entity));
}
ImmutableSet<String> deletableKinds =
ImmutableSet.of("HistoryEntry", "ContactResource");
toDelete =
ancestors.stream()
.map(Key::create)
.filter(key -> deletableKinds.contains(key.getKind()))
.collect(ImmutableList.toImmutableList());
EppResourceIndex eppResourceIndex =
auditedOfy().load().entity(EppResourceIndex.create(targetKey)).now();
verify(eppResourceIndex.getKey().equals(targetKey), "Wrong EppResource Index loaded");
System.out.printf("\n\nEppResourceIndex found (%s).\n", Key.create(eppResourceIndex));
toDelete =
new ImmutableList.Builder<Key<?>>()
.addAll(toDelete)
.add(Key.create(eppResourceIndex))
.build();
System.out.printf("\n\nAbout to delete %s entities:\n", toDelete.size());
toDelete.forEach(System.out::println);
});
}
@Override
protected String execute() {
tm().transact(() -> auditedOfy().delete().keys(toDelete).now());
return "Done";
}
}


@@ -1,70 +0,0 @@
// Copyright 2017 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.tools.javascrap;
import static com.google.common.base.MoreObjects.firstNonNull;
import com.beust.jcommander.Parameters;
import com.google.common.collect.ImmutableList;
import google.registry.model.registrar.Registrar;
import google.registry.model.registrar.RegistrarAddress;
import google.registry.tools.MutatingCommand;
import java.util.Objects;
/**
* Scrap tool to update Registrars with null registrarName or localizedAddress fields.
*
* <p>This sets a null registrarName to the registrar ID (the entity's key name), and null localizedAddress fields to fake data.
*/
@Parameters(
separators = " =",
commandDescription = "Populate previously null required registrar fields."
)
public class PopulateNullRegistrarFieldsCommand extends MutatingCommand {
@Override
protected void init() {
for (Registrar registrar : Registrar.loadAll()) {
Registrar.Builder changeBuilder = registrar.asBuilder();
changeBuilder.setRegistrarName(
firstNonNull(registrar.getRegistrarName(), registrar.getRegistrarId()));
RegistrarAddress address = registrar.getLocalizedAddress();
if (address == null) {
changeBuilder.setLocalizedAddress(
new RegistrarAddress.Builder()
.setCity("Fakington")
.setCountryCode("US")
.setState("FL")
.setZip("12345")
.setStreet(ImmutableList.of("123 Fake Street"))
.build());
} else {
changeBuilder.setLocalizedAddress(
new RegistrarAddress.Builder()
.setCity(firstNonNull(address.getCity(), "Fakington"))
.setCountryCode(firstNonNull(address.getCountryCode(), "US"))
.setState(firstNonNull(address.getState(), "FL"))
.setZip(firstNonNull(address.getZip(), "12345"))
.setStreet(firstNonNull(address.getStreet(), ImmutableList.of("123 Fake Street")))
.build());
}
Registrar changedRegistrar = changeBuilder.build();
if (!Objects.equals(registrar, changedRegistrar)) {
stageEntityChange(registrar, changedRegistrar);
}
}
}
}


@@ -1,88 +0,0 @@
// Copyright 2017 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.tools.javascrap;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.persistence.transaction.TransactionManagerUtil.transactIfJpaTm;
import static java.nio.charset.StandardCharsets.UTF_8;
import com.beust.jcommander.Parameter;
import com.beust.jcommander.Parameters;
import com.google.template.soy.data.SoyMapData;
import google.registry.model.host.HostResource;
import google.registry.persistence.VKey;
import google.registry.tools.MutatingEppToolCommand;
import google.registry.tools.params.PathParameter;
import google.registry.tools.soy.RemoveIpAddressSoyInfo;
import java.net.Inet6Address;
import java.net.InetAddress;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
/**
* Command to remove external IP Addresses from HostResources identified by text file listing
* resource ids, one per line.
*
* <p>Written for b/23757755 so we can clean up records with IP addresses that should always be
* resolved by hostname.
*
* <p>The input file is plain text and should contain one host roid per line.
*/
@Parameters(separators = " =", commandDescription = "Remove all IP Addresses.")
public class RemoveIpAddressCommand extends MutatingEppToolCommand {
public static String registrarId = "CharlestonRoad";
@Parameter(names = "--roids_file",
description = "Text file containing a list of HostResource roids to remove",
required = true,
validateWith = PathParameter.InputFile.class)
private Path roidsFilePath;
@Override
protected void initMutatingEppToolCommand() throws Exception {
List<String> roids = Files.readAllLines(roidsFilePath, UTF_8);
for (String roid : roids) {
// Look up the HostResource from its roid.
Optional<HostResource> host =
transactIfJpaTm(() -> tm().loadByKeyIfPresent(VKey.create(HostResource.class, roid)));
if (!host.isPresent()) {
System.err.printf("Record for %s not found.\n", roid);
continue;
}
ArrayList<SoyMapData> ipAddresses = new ArrayList<>();
for (InetAddress address : host.get().getInetAddresses()) {
SoyMapData dataMap = new SoyMapData(
"address", address.getHostAddress(),
"version", address instanceof Inet6Address ? "v6" : "v4");
ipAddresses.add(dataMap);
}
// Build and execute the EPP command.
setSoyTemplate(
RemoveIpAddressSoyInfo.getInstance(), RemoveIpAddressSoyInfo.REMOVE_IP_ADDRESS);
addSoyRecord(
registrarId,
new SoyMapData(
"name", host.get().getHostName(),
"ipAddresses", ipAddresses,
"requestedByRegistrar", registrarId));
}
}
}
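For illustration, the --roids_file input described above is a plain text file with one host roid per line; the roid values below are hypothetical:

HOSTXYZ123-EXAMPLE
HOSTABC456-EXAMPLE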


@@ -1,30 +0,0 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.tools.javascrap;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import com.beust.jcommander.Parameters;
import google.registry.model.tld.Registry;
import google.registry.tools.CommandWithRemoteApi;
/** Scrap command to resave all Registry entities. */
@Parameters(commandDescription = "Resave all TLDs")
public class ResaveAllTldsCommand implements CommandWithRemoteApi {
@Override
public void run() throws Exception {
tm().transact(() -> tm().putAll(tm().loadAllOf(Registry.class)));
}
}


@@ -19,7 +19,6 @@ import static com.google.common.collect.ImmutableList.toImmutableList;
import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static com.google.common.collect.Sets.difference;
import static google.registry.config.RegistryEnvironment.PRODUCTION;
- import static google.registry.export.sheet.SyncRegistrarsSheetAction.enqueueRegistrarSheetSync;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.security.JsonResponseHelper.Status.ERROR;
@@ -32,18 +31,21 @@ import com.google.common.base.Strings;
import com.google.common.collect.HashMultimap;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
+ import com.google.common.collect.ImmutableMultimap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Multimap;
import com.google.common.collect.Sets;
import com.google.common.collect.Streams;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryEnvironment;
+ import google.registry.export.sheet.SyncRegistrarsSheetAction;
import google.registry.flows.certs.CertificateChecker;
import google.registry.flows.certs.CertificateChecker.InsecureCertificateException;
import google.registry.model.registrar.Registrar;
import google.registry.model.registrar.RegistrarContact;
import google.registry.model.registrar.RegistrarContact.Type;
import google.registry.request.Action;
+ import google.registry.request.Action.Service;
import google.registry.request.HttpException.BadRequestException;
import google.registry.request.HttpException.ForbiddenException;
import google.registry.request.JsonActionRunner;
@@ -58,6 +60,7 @@ import google.registry.ui.forms.FormFieldException;
import google.registry.ui.server.RegistrarFormFields;
import google.registry.ui.server.SendEmailUtils;
import google.registry.util.AppEngineServiceUtils;
+ import google.registry.util.CloudTasksUtils;
import google.registry.util.CollectionUtils;
import google.registry.util.DiffUtils;
import java.util.HashSet;
@@ -88,6 +91,22 @@ public class RegistrarSettingsAction implements Runnable, JsonActionRunner.JsonA
static final String ARGS_PARAM = "args";
static final String ID_PARAM = "id";
+ /**
+ * Allows task enqueueing to be disabled when executing registrar console test cases.
+ *
+ * <p>The existing workflow in UI test cases triggers task enqueueing. This was not a problem with
+ * Task Queue, a native App Engine feature that the App Engine SDK simulates in the test
+ * environment. With Cloud Tasks, however, the test server enqueues against the actual Cloud Tasks
+ * endpoint and fails to deliver due to lack of permission.
+ *
+ * <p>One way to allow enqueueing in backend tests while avoiding it in UI tests is to disable
+ * enqueueing when the test server starts and re-enable it once the test server stops. This is
+ * done with a {@code ThreadLocal<Boolean>} variable {@code isInTestDriver}, which defaults to
+ * false. Enqueueing is allowed only while the value is false; it is set to true in start() and
+ * back to false in stop() inside TestDriver.java, a class used in testing.
+ */
+ private static ThreadLocal<Boolean> isInTestDriver = ThreadLocal.withInitial(() -> false);
@Inject JsonActionRunner jsonActionRunner;
@Inject AppEngineServiceUtils appEngineServiceUtils;
@Inject RegistrarConsoleMetrics registrarConsoleMetrics;
@@ -95,6 +114,7 @@ public class RegistrarSettingsAction implements Runnable, JsonActionRunner.JsonA
@Inject AuthenticatedRegistrarAccessor registrarAccessor;
@Inject AuthResult authResult;
@Inject CertificateChecker certificateChecker;
+ @Inject CloudTasksUtils cloudTasksUtils;
@Inject RegistrarSettingsAction() {}
@@ -102,6 +122,14 @@ public class RegistrarSettingsAction implements Runnable, JsonActionRunner.JsonA
return contact.getPhoneNumber() != null;
}
+ public static void setIsInTestDriverToFalse() {
+ isInTestDriver.set(false);
+ }
+ public static void setIsInTestDriverToTrue() {
+ isInTestDriver.set(true);
+ }
@Override
public void run() {
jsonActionRunner.run(this);
@@ -170,6 +198,26 @@ public class RegistrarSettingsAction implements Runnable, JsonActionRunner.JsonA
}
}
+ @AutoValue
+ abstract static class EmailInfo {
+ abstract Registrar registrar();
+ abstract Registrar updatedRegistrar();
+ abstract ImmutableSet<RegistrarContact> contacts();
+ abstract ImmutableSet<RegistrarContact> updatedContacts();
+ static EmailInfo create(
+ Registrar registrar,
+ Registrar updatedRegistrar,
+ ImmutableSet<RegistrarContact> contacts,
+ ImmutableSet<RegistrarContact> updatedContacts) {
+ return new AutoValue_RegistrarSettingsAction_EmailInfo(
+ registrar, updatedRegistrar, contacts, updatedContacts);
+ }
+ }
private RegistrarResult read(String registrarId) {
return RegistrarResult.create("Success", loadRegistrarUnchecked(registrarId));
}
@@ -183,72 +231,69 @@ public class RegistrarSettingsAction implements Runnable, JsonActionRunner.JsonA
}
private RegistrarResult update(final Map<String, ?> args, String registrarId) {
- tm().transact(
- () -> {
- // We load the registrar here rather than outside of the transaction - to make
- // sure we have the latest version. This one is loaded inside the transaction, so it's
- // guaranteed to not change before we update it.
- Registrar registrar = loadRegistrarUnchecked(registrarId);
- // Detach the registrar to avoid Hibernate object-updates, since we wish to email
- // out the diffs between the existing and updated registrar objects
- if (!tm().isOfy()) {
- jpaTm().getEntityManager().detach(registrar);
- }
- // Verify that the registrar hasn't been changed.
- // To do that - we find the latest update time (or null if the registrar has been
- // deleted) and compare to the update time from the args. The update time in the args
- // comes from the read that gave the UI the data - if it's out of date, then the UI
- // had out of date data.
- DateTime latest = registrar.getLastUpdateTime();
- DateTime latestFromArgs =
- RegistrarFormFields.LAST_UPDATE_TIME.extractUntyped(args).get();
- if (!latestFromArgs.equals(latest)) {
- logger.atWarning().log(
- "Registrar changed since reading the data!"
- + " Last updated at %s, but args data last updated at %s.",
- latest, latestFromArgs);
- throw new IllegalStateException(
- "Registrar has been changed by someone else. Please reload and retry.");
- }
- // Keep the current contacts so we can later check that no required contact was
- // removed, email the changes to the contacts
- ImmutableSet<RegistrarContact> contacts = registrar.getContacts();
- Registrar updatedRegistrar = registrar;
- // Do OWNER only updates to the registrar from the request.
- updatedRegistrar = checkAndUpdateOwnerControlledFields(updatedRegistrar, args);
- // Do ADMIN only updates to the registrar from the request.
- updatedRegistrar = checkAndUpdateAdminControlledFields(updatedRegistrar, args);
- // read the contacts from the request.
- ImmutableSet<RegistrarContact> updatedContacts =
- readContacts(registrar, contacts, args);
- // Save the updated contacts
- if (!updatedContacts.equals(contacts)) {
- if (!registrarAccessor.hasRoleOnRegistrar(Role.OWNER, registrar.getRegistrarId())) {
- throw new ForbiddenException("Only OWNERs can update the contacts");
- }
- checkContactRequirements(contacts, updatedContacts);
- RegistrarContact.updateContacts(updatedRegistrar, updatedContacts);
- updatedRegistrar =
- updatedRegistrar.asBuilder().setContactsRequireSyncing(true).build();
- }
- // Save the updated registrar
- if (!updatedRegistrar.equals(registrar)) {
- tm().put(updatedRegistrar);
- }
- // Email the updates
- sendExternalUpdatesIfNecessary(
- registrar, contacts, updatedRegistrar, updatedContacts);
- });
+ // Email the updates
+ sendExternalUpdatesIfNecessary(tm().transact(() -> saveUpdates(args, registrarId)));
// Reload the result outside of the transaction to get the most recent version
return RegistrarResult.create("Saved " + registrarId, loadRegistrarUnchecked(registrarId));
}
+ /** Saves the updates and returns info needed for the update email */
+ private EmailInfo saveUpdates(final Map<String, ?> args, String registrarId) {
+ // We load the registrar here rather than outside of the transaction - to make
+ // sure we have the latest version. This one is loaded inside the transaction, so it's
+ // guaranteed to not change before we update it.
+ Registrar registrar = loadRegistrarUnchecked(registrarId);
+ // Detach the registrar to avoid Hibernate object-updates, since we wish to email
+ // out the diffs between the existing and updated registrar objects
+ if (!tm().isOfy()) {
+ jpaTm().getEntityManager().detach(registrar);
+ }
+ // Verify that the registrar hasn't been changed.
+ // To do that - we find the latest update time (or null if the registrar has been
+ // deleted) and compare to the update time from the args. The update time in the args
+ // comes from the read that gave the UI the data - if it's out of date, then the UI
+ // had out of date data.
+ DateTime latest = registrar.getLastUpdateTime();
+ DateTime latestFromArgs = RegistrarFormFields.LAST_UPDATE_TIME.extractUntyped(args).get();
+ if (!latestFromArgs.equals(latest)) {
+ logger.atWarning().log(
+ "Registrar changed since reading the data!"
+ + " Last updated at %s, but args data last updated at %s.",
+ latest, latestFromArgs);
+ throw new IllegalStateException(
+ "Registrar has been changed by someone else. Please reload and retry.");
+ }
+ // Keep the current contacts so we can later check that no required contact was
+ // removed, email the changes to the contacts
+ ImmutableSet<RegistrarContact> contacts = registrar.getContacts();
+ Registrar updatedRegistrar = registrar;
+ // Do OWNER only updates to the registrar from the request.
+ updatedRegistrar = checkAndUpdateOwnerControlledFields(updatedRegistrar, args);
+ // Do ADMIN only updates to the registrar from the request.
+ updatedRegistrar = checkAndUpdateAdminControlledFields(updatedRegistrar, args);
+ // read the contacts from the request.
+ ImmutableSet<RegistrarContact> updatedContacts = readContacts(registrar, contacts, args);
+ // Save the updated contacts
+ if (!updatedContacts.equals(contacts)) {
+ if (!registrarAccessor.hasRoleOnRegistrar(Role.OWNER, registrar.getRegistrarId())) {
+ throw new ForbiddenException("Only OWNERs can update the contacts");
+ }
+ checkContactRequirements(contacts, updatedContacts);
+ RegistrarContact.updateContacts(updatedRegistrar, updatedContacts);
+ updatedRegistrar = updatedRegistrar.asBuilder().setContactsRequireSyncing(true).build();
+ }
+ // Save the updated registrar
+ if (!updatedRegistrar.equals(registrar)) {
+ tm().put(updatedRegistrar);
+ }
+ return EmailInfo.create(registrar, updatedRegistrar, contacts, updatedContacts);
+ }
private Map<String, Object> expandRegistrarWithContacts(
Iterable<RegistrarContact> contacts, Registrar registrar) {
ImmutableSet<Map<String, Object>> expandedContacts =
@@ -408,6 +453,13 @@ public class RegistrarSettingsAction implements Runnable, JsonActionRunner.JsonA
Map<?, ?> diffs =
DiffUtils.deepDiff(
originalRegistrar.toDiffableFieldMap(), updatedRegistrar.toDiffableFieldMap(), true);
+ // It's expected that the update timestamp will be changed, as it gets reset whenever we change
+ // nested collections. If it's the only change, just return the original registrar.
+ if (diffs.keySet().equals(ImmutableSet.of("lastUpdateTime"))) {
+ return originalRegistrar;
+ }
throw new ForbiddenException(
String.format("Unauthorized: only %s can change fields %s", allowedRole, diffs.keySet()));
}
@@ -575,26 +627,30 @@ public class RegistrarSettingsAction implements Runnable, JsonActionRunner.JsonA
* sends an email with a diff of the changes to the configured notification email address and all
* contact addresses and enqueues a task to re-sync the registrar sheet.
*/
- private void sendExternalUpdatesIfNecessary(
- Registrar existingRegistrar,
- ImmutableSet<RegistrarContact> existingContacts,
- Registrar updatedRegistrar,
- ImmutableSet<RegistrarContact> updatedContacts) {
+ private void sendExternalUpdatesIfNecessary(EmailInfo emailInfo) {
+ ImmutableSet<RegistrarContact> existingContacts = emailInfo.contacts();
if (!sendEmailUtils.hasRecipients() && existingContacts.isEmpty()) {
return;
}
+ Registrar existingRegistrar = emailInfo.registrar();
Map<?, ?> diffs =
DiffUtils.deepDiff(
expandRegistrarWithContacts(existingContacts, existingRegistrar),
- expandRegistrarWithContacts(updatedContacts, updatedRegistrar),
+ expandRegistrarWithContacts(emailInfo.updatedContacts(), emailInfo.updatedRegistrar()),
true);
@SuppressWarnings("unchecked")
Set<String> changedKeys = (Set<String>) diffs.keySet();
if (CollectionUtils.difference(changedKeys, "lastUpdateTime").isEmpty()) {
return;
}
- enqueueRegistrarSheetSync(appEngineServiceUtils.getCurrentVersionHostname("backend"));
+ if (!isInTestDriver.get()) {
+ // Enqueues a sync registrar sheet task if enqueuing is not triggered by console tests and
+ // there's an update besides the lastUpdateTime
+ cloudTasksUtils.enqueue(
+ SyncRegistrarsSheetAction.QUEUE,
+ cloudTasksUtils.createGetTask(
+ SyncRegistrarsSheetAction.PATH, Service.BACKEND.toString(), ImmutableMultimap.of()));
+ }
String environment = Ascii.toLowerCase(String.valueOf(RegistryEnvironment.get()));
sendEmailUtils.sendEmail(
String.format(

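The TestDriver wiring referenced in the isInTestDriver comment above is not shown in this diff. Below is a minimal, hypothetical Java sketch of how a test driver might flip the toggle around the test-server lifecycle; the startServer/stopServer helpers are assumptions, and only the two setIsInTestDriver* calls come from the code above:

public final class TestDriverSketch {
  public void start() {
    // Disable task enqueueing before the test server begins serving UI test requests.
    RegistrarSettingsAction.setIsInTestDriverToTrue();
    startServer(); // hypothetical helper that boots the test server
  }

  public void stop() {
    stopServer(); // hypothetical helper that shuts the test server down
    // Re-enable task enqueueing once the test server has stopped.
    RegistrarSettingsAction.setIsInTestDriverToFalse();
  }

  private void startServer() {}

  private void stopServer() {}
}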

@@ -64,11 +64,17 @@ final class NameserverLookupByIpCommand implements WhoisCommand {
jpaTm()
.transact(
() ->
- // We cannot query @Convert-ed fields in HQL so we must use native Postgres
+ // We cannot query @Convert-ed fields in HQL so we must use native Postgres.
jpaTm()
.getEntityManager()
+ /**
+ * Using array operator <@ (contained-by) with the gin index on inet_addresses.
+ * Without the gin index, this is slightly slower than the alternative form of
+ * ':address = ANY(inet_addresses)'.
+ */
.createNativeQuery(
"SELECT * From \"Host\" WHERE :address = ANY(inet_addresses) AND "
"SELECT * From \"Host\" WHERE "
+ "ARRAY[ CAST(:address AS TEXT) ] <@ inet_addresses AND "
+ "deletion_time > CAST(:now AS timestamptz)",
HostResource.class)
.setParameter("address", InetAddresses.toAddrString(ipAddress))

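For context on the gin-index comment above: the contained-by operator <@ can be served by a GIN index on the array column, while the = ANY(...) form cannot. A minimal SQL sketch of the index and the two query shapes follows; the index name is hypothetical, the actual migration script is not shown in this diff, and :address/:now are the query parameters bound above:

-- Hypothetical GIN index on the array column.
CREATE INDEX IF NOT EXISTS idx_host_inet_addresses
    ON "Host" USING gin (inet_addresses);

-- Can use the GIN index:
SELECT * FROM "Host"
WHERE ARRAY[CAST(:address AS TEXT)] <@ inet_addresses
  AND deletion_time > CAST(:now AS timestamptz);

-- Equivalent filter, but not served by the GIN index:
SELECT * FROM "Host"
WHERE :address = ANY(inet_addresses)
  AND deletion_time > CAST(:now AS timestamptz);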

@@ -69,6 +69,12 @@
"regexes": [
"^DATASTORE|CLOUD_SQL$"
]
},
+ {
+ "name": "jpaTransactionManagerType",
+ "label": "The type of JPA transaction manager to use if using SQL",
+ "helpText": "The standard SQL instance or a read-only replica may be used.",
+ "regexes": ["^REGULAR|READ_ONLY_REPLICA$"]
+ }
]
}


@@ -0,0 +1,48 @@
{
"name": "Validate Datastore and Cloud SQL",
"description": "An Apache Beam batch pipeline that compares the data in Datastore and Cloud SQL database.",
"parameters": [
{
"name": "registryEnvironment",
"label": "The Registry environment.",
"helpText": "The Registry environment.",
"is_optional": false,
"regexes": [
"^PRODUCTION|SANDBOX|CRASH|QA|ALPHA$"
]
},
{
"name": "isolationOverride",
"label": "The desired SQL transaction isolation level.",
"helpText": "The desired SQL transaction isolation level.",
"is_optional": true,
"regexes": [
"^[0-9A-Z_]+$"
]
},
{
"name": "sqlSnapshotId",
"label": "The ID of an exported Cloud SQL (Postgresql) snapshot.",
"helpText": "The ID of an exported Cloud SQL (Postgresql) snapshot.",
"is_optional": false
},
{
"name": "latestCommitLogTimestamp",
"label": "Nomulus CommitLog start time",
"helpText": "The latest entity update time allowed for inclusion in validation, in ISO8601 format.",
"is_optional": false
},
{
"name": "comparisonStartTimestamp",
"label": "Only entities updated at or after this time are included for validation.",
"helpText": "The earliest entity update time allowed for inclusion in validation, in ISO8601 format.",
"is_optional": true
},
{
"name": "diffOutputGcsBucket",
"label": "The GCS bucket where data discrepancies should be output to.",
"helpText": "The GCS bucket where data discrepancies should be output to.",
"is_optional": true
}
]
}
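For illustration, a set of parameter values satisfying the constraints above might look like the following; every value here is a hypothetical placeholder:

{
  "registryEnvironment": "SANDBOX",
  "sqlSnapshotId": "example-snapshot-id",
  "latestCommitLogTimestamp": "2022-03-01T00:00:00.000Z",
  "comparisonStartTimestamp": "2022-02-01T00:00:00.000Z",
  "diffOutputGcsBucket": "example-diff-bucket"
}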


@@ -15,7 +15,6 @@
package google.registry.batch;
import static com.google.appengine.api.taskqueue.QueueFactory.getQueue;
- import static com.google.common.truth.Truth.assertThat;
import static google.registry.batch.AsyncTaskEnqueuer.PARAM_REQUESTED_TIME;
import static google.registry.batch.AsyncTaskEnqueuer.PARAM_RESAVE_TIMES;
import static google.registry.batch.AsyncTaskEnqueuer.PARAM_RESOURCE_KEY;
@@ -23,26 +22,20 @@ import static google.registry.batch.AsyncTaskEnqueuer.QUEUE_ASYNC_ACTIONS;
import static google.registry.batch.AsyncTaskEnqueuer.QUEUE_ASYNC_DELETE;
import static google.registry.batch.AsyncTaskEnqueuer.QUEUE_ASYNC_HOST_RENAME;
import static google.registry.testing.DatabaseHelper.persistActiveContact;
- import static google.registry.testing.SqlHelper.saveRegistryLock;
- import static google.registry.testing.TaskQueueHelper.assertNoTasksEnqueued;
- import static google.registry.testing.TaskQueueHelper.assertTasksEnqueued;
import static google.registry.testing.TestLogHandlerUtils.assertLogMessage;
- import static org.joda.time.Duration.standardDays;
- import static org.joda.time.Duration.standardHours;
import static org.joda.time.Duration.standardSeconds;
- import static org.junit.Assert.assertThrows;
import static org.mockito.Mockito.when;
+ import com.google.cloud.tasks.v2.HttpMethod;
import com.google.common.collect.ImmutableSortedSet;
import google.registry.model.contact.ContactResource;
- import google.registry.model.domain.RegistryLock;
import google.registry.testing.AppEngineExtension;
+ import google.registry.testing.CloudTasksHelper;
+ import google.registry.testing.CloudTasksHelper.TaskMatcher;
import google.registry.testing.FakeClock;
import google.registry.testing.FakeSleeper;
import google.registry.testing.InjectExtension;
- import google.registry.testing.TaskQueueHelper.TaskMatcher;
- import google.registry.util.AppEngineServiceUtils;
import google.registry.util.CapturingLogHandler;
+ import google.registry.util.CloudTasksUtils;
import google.registry.util.JdkLoggerConfig;
import google.registry.util.Retrier;
import java.util.logging.Level;
@@ -52,7 +45,6 @@ import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.junit.jupiter.api.extension.RegisterExtension;
- import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;
import org.mockito.junit.jupiter.MockitoSettings;
import org.mockito.quality.Strictness;
@@ -67,27 +59,25 @@ public class AsyncTaskEnqueuerTest {
@RegisterExtension public final InjectExtension inject = new InjectExtension();
- @Mock private AppEngineServiceUtils appEngineServiceUtils;
private AsyncTaskEnqueuer asyncTaskEnqueuer;
private final CapturingLogHandler logHandler = new CapturingLogHandler();
private final FakeClock clock = new FakeClock(DateTime.parse("2015-05-18T12:34:56Z"));
+ private CloudTasksHelper cloudTasksHelper = new CloudTasksHelper(clock);
@BeforeEach
void beforeEach() {
JdkLoggerConfig.getConfig(AsyncTaskEnqueuer.class).addHandler(logHandler);
- when(appEngineServiceUtils.getServiceHostname("backend")).thenReturn("backend.hostname.fake");
- asyncTaskEnqueuer = createForTesting(appEngineServiceUtils, clock, standardSeconds(90));
+ asyncTaskEnqueuer =
+ createForTesting(cloudTasksHelper.getTestCloudTasksUtils(), clock, standardSeconds(90));
}
public static AsyncTaskEnqueuer createForTesting(
- AppEngineServiceUtils appEngineServiceUtils, FakeClock clock, Duration asyncDeleteDelay) {
+ CloudTasksUtils cloudTasksUtils, FakeClock clock, Duration asyncDeleteDelay) {
return new AsyncTaskEnqueuer(
getQueue(QUEUE_ASYNC_ACTIONS),
getQueue(QUEUE_ASYNC_DELETE),
getQueue(QUEUE_ASYNC_HOST_RENAME),
asyncDeleteDelay,
- appEngineServiceUtils,
+ cloudTasksUtils,
new Retrier(new FakeSleeper(clock), 1));
}
@@ -96,18 +86,16 @@ public class AsyncTaskEnqueuerTest {
ContactResource contact = persistActiveContact("jd23456");
asyncTaskEnqueuer.enqueueAsyncResave(
contact.createVKey(), clock.nowUtc(), clock.nowUtc().plusDays(5));
- assertTasksEnqueued(
+ cloudTasksHelper.assertTasksEnqueued(
QUEUE_ASYNC_ACTIONS,
- new TaskMatcher()
+ new CloudTasksHelper.TaskMatcher()
.url(ResaveEntityAction.PATH)
.method("POST")
.header("Host", "backend.hostname.fake")
.method(HttpMethod.POST)
.service("backend")
.header("content-type", "application/x-www-form-urlencoded")
.param(PARAM_RESOURCE_KEY, contact.createVKey().stringify())
.param(PARAM_REQUESTED_TIME, clock.nowUtc().toString())
- .etaDelta(
- standardDays(5).minus(standardSeconds(30)),
- standardDays(5).plus(standardSeconds(30))));
+ .scheduleTime(clock.nowUtc().plus(Duration.standardDays(5))));
}
@Test
@@ -118,19 +106,17 @@ public class AsyncTaskEnqueuerTest {
contact.createVKey(),
now,
ImmutableSortedSet.of(now.plusHours(24), now.plusHours(50), now.plusHours(75)));
- assertTasksEnqueued(
+ cloudTasksHelper.assertTasksEnqueued(
QUEUE_ASYNC_ACTIONS,
new TaskMatcher()
.url(ResaveEntityAction.PATH)
.method("POST")
.header("Host", "backend.hostname.fake")
.method(HttpMethod.POST)
.service("backend")
.header("content-type", "application/x-www-form-urlencoded")
.param(PARAM_RESOURCE_KEY, contact.createVKey().stringify())
.param(PARAM_REQUESTED_TIME, now.toString())
.param(PARAM_RESAVE_TIMES, "2015-05-20T14:34:56.000Z,2015-05-21T15:34:56.000Z")
- .etaDelta(
- standardHours(24).minus(standardSeconds(30)),
- standardHours(24).plus(standardSeconds(30))));
+ .scheduleTime(clock.nowUtc().plus(Duration.standardHours(24))));
}
@MockitoSettings(strictness = Strictness.LENIENT)
@@ -139,62 +125,7 @@ public class AsyncTaskEnqueuerTest {
ContactResource contact = persistActiveContact("jd23456");
asyncTaskEnqueuer.enqueueAsyncResave(
contact.createVKey(), clock.nowUtc(), clock.nowUtc().plusDays(31));
- assertNoTasksEnqueued(QUEUE_ASYNC_ACTIONS);
+ cloudTasksHelper.assertNoTasksEnqueued(QUEUE_ASYNC_ACTIONS);
assertLogMessage(logHandler, Level.INFO, "Ignoring async re-save");
}
- @Test
- void testEnqueueRelock() {
- RegistryLock lock =
- saveRegistryLock(
- new RegistryLock.Builder()
- .setLockCompletionTime(clock.nowUtc())
- .setUnlockRequestTime(clock.nowUtc())
- .setUnlockCompletionTime(clock.nowUtc())
- .isSuperuser(false)
- .setDomainName("example.tld")
- .setRepoId("repoId")
- .setRelockDuration(standardHours(6))
- .setRegistrarId("TheRegistrar")
- .setRegistrarPocId("someone@example.com")
- .setVerificationCode("hi")
- .build());
- asyncTaskEnqueuer.enqueueDomainRelock(lock.getRelockDuration().get(), lock.getRevisionId(), 0);
- assertTasksEnqueued(
- QUEUE_ASYNC_ACTIONS,
- new TaskMatcher()
- .url(RelockDomainAction.PATH)
- .method("POST")
- .header("Host", "backend.hostname.fake")
- .param(
- RelockDomainAction.OLD_UNLOCK_REVISION_ID_PARAM,
- String.valueOf(lock.getRevisionId()))
- .param(RelockDomainAction.PREVIOUS_ATTEMPTS_PARAM, "0")
- .etaDelta(
- standardHours(6).minus(standardSeconds(30)),
- standardHours(6).plus(standardSeconds(30))));
- }
- @MockitoSettings(strictness = Strictness.LENIENT)
- @Test
- void testFailure_enqueueRelock_noDuration() {
- RegistryLock lockWithoutDuration =
- saveRegistryLock(
- new RegistryLock.Builder()
- .isSuperuser(false)
- .setDomainName("example.tld")
- .setRepoId("repoId")
- .setRegistrarId("TheRegistrar")
- .setRegistrarPocId("someone@example.com")
- .setVerificationCode("hi")
- .build());
- assertThat(
- assertThrows(
- IllegalArgumentException.class,
- () -> asyncTaskEnqueuer.enqueueDomainRelock(lockWithoutDuration)))
- .hasMessageThat()
- .isEqualTo(
- String.format(
- "Lock with ID %s not configured for relock", lockWithoutDuration.getRevisionId()));
- }
}


@@ -91,13 +91,13 @@ import google.registry.model.transfer.ContactTransferData;
import google.registry.model.transfer.TransferData;
import google.registry.model.transfer.TransferResponse;
import google.registry.model.transfer.TransferStatus;
+ import google.registry.testing.CloudTasksHelper;
import google.registry.testing.FakeClock;
import google.registry.testing.FakeResponse;
import google.registry.testing.FakeSleeper;
import google.registry.testing.InjectExtension;
import google.registry.testing.TaskQueueHelper.TaskMatcher;
import google.registry.testing.mapreduce.MapreduceTestCase;
- import google.registry.util.AppEngineServiceUtils;
import google.registry.util.RequestStatusChecker;
import google.registry.util.Retrier;
import google.registry.util.Sleeper;
@@ -148,7 +148,7 @@ public class DeleteContactsAndHostsActionTest
inject.setStaticField(Ofy.class, "clock", clock);
enqueuer =
AsyncTaskEnqueuerTest.createForTesting(
- mock(AppEngineServiceUtils.class), clock, Duration.ZERO);
+ new CloudTasksHelper(clock).getTestCloudTasksUtils(), clock, Duration.ZERO);
AsyncTaskMetrics asyncTaskMetricsMock = mock(AsyncTaskMetrics.class);
action = new DeleteContactsAndHostsAction();
action.asyncTaskMetrics = asyncTaskMetricsMock;


@@ -39,7 +39,6 @@ import google.registry.model.domain.DomainHistory;
import google.registry.model.ofy.Ofy;
import google.registry.model.poll.PollMessage;
import google.registry.model.reporting.HistoryEntry;
- import google.registry.monitoring.whitebox.EppMetric;
import google.registry.persistence.transaction.QueryComposer.Comparator;
import google.registry.testing.AppEngineExtension;
import google.registry.testing.DualDatabaseTest;
@@ -78,8 +77,7 @@ class DeleteExpiredDomainsActionTest {
createTld("tld");
EppController eppController =
DaggerEppTestComponent.builder()
- .fakesAndMocksModule(
- FakesAndMocksModule.create(clock, EppMetric.builderForRequest(clock)))
+ .fakesAndMocksModule(FakesAndMocksModule.create(clock))
.build()
.startRequest()
.eppController();


@@ -48,7 +48,6 @@ import google.registry.model.common.Cursor;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.DomainHistory;
import google.registry.model.domain.Period;
- import google.registry.model.ofy.Ofy;
import google.registry.model.reporting.DomainTransactionRecord;
import google.registry.model.reporting.DomainTransactionRecord.TransactionReportField;
import google.registry.model.reporting.HistoryEntry;
@@ -56,7 +55,6 @@ import google.registry.model.tld.Registry;
import google.registry.testing.DualDatabaseTest;
import google.registry.testing.FakeClock;
import google.registry.testing.FakeResponse;
- import google.registry.testing.InjectExtension;
import google.registry.testing.ReplayExtension;
import google.registry.testing.TestOfyAndSql;
import google.registry.testing.TestOfyOnly;
@@ -78,11 +76,6 @@ public class ExpandRecurringBillingEventsActionTest
private DateTime currentTestTime = DateTime.parse("1999-01-05T00:00:00Z");
private final FakeClock clock = new FakeClock(currentTestTime);
- @Order(Order.DEFAULT - 1)
- @RegisterExtension
- public final InjectExtension inject =
- new InjectExtension().withStaticFieldOverride(Ofy.class, "clock", clock);
@Order(Order.DEFAULT - 2)
@RegisterExtension
public final ReplayExtension replayExtension = ReplayExtension.createWithDoubleReplay(clock);

Some files were not shown because too many files have changed in this diff.