mirror of https://github.com/google/nomulus synced 2026-02-02 19:12:27 +00:00

Compare commits


27 Commits

Author SHA1 Message Date
Lai Jiang
59bca1a9ed Disable sending cert expiration emails on sandbox (#1528) 2022-02-22 14:46:27 -05:00
Michael Muller
f8198fa590 Do full database comparison during replay tests (#1524)
* Fix entity delete replication, compare db @ replay

Replay tests currently only verify that the contents of a transaction
can be successfully replicated to the other database.  They do not verify that
the contents of both databases are equivalent.  As a result, we miss any
changes omitted from the transaction (as was the case with entity deletions).

This change adds a final database comparison to ReplayExtension so we can
safely say that the databases are in the same state.

This comparison is introduced in part as a unit test for the one-line fix for
replication of an "entity delete" operation (where we delete using an entity
object instead of the object's key) which so far has only affected PollMessage
deletion.  The fix is also included in this commit within
JpaTransactionManagerImpl.
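The "entity delete" distinction described above can be pictured with a toy transaction journal (all names here are hypothetical, not Nomulus code): a delete performed via an entity object must be recorded under the entity's key just like a delete-by-key, or the operation never appears in the replay stream.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a transaction journal that must record deletes by key,
// whether the caller passes a key or a full entity object.
public class DeleteJournalSketch {
  record Entity(String key, String payload) {}

  static class Journal {
    final List<String> deletedKeys = new ArrayList<>();

    void deleteByKey(String key) {
      deletedKeys.add(key); // always journaled
    }

    // The bug pattern being fixed: deleting via an entity object skipped the
    // journal. The one-line style of fix routes it through the key-based path.
    void deleteByEntity(Entity entity) {
      deleteByKey(entity.key());
    }
  }

  public static void main(String[] args) {
    Journal journal = new Journal();
    journal.deleteByEntity(new Entity("PollMessage/42", "..."));
    System.out.println(journal.deletedKeys); // [PollMessage/42]
  }
}
```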

* Exclude tests and entities with failing comparisons

* Get all tests to pass and fix more timestamps

Fix most of the unit tests that were broken by this change.

- Fix timestamp updates after grace period changes in DomainContent and for
  TLD changes in Registrar.
- Reenable full database comparison for most DomainCreateFlowTest's.
- Make some test entities NonReplicated so they don't break when used with
  jpaTm().delete()
- Disable checking of a few more entity types that are failing comparisons.
- Add some formatting fixes.

* Remove unnecessary "NoDatabaseCompare"

It turns out that after other fixes/elisions we no longer need these for
any tests in DomainCreateFlowTest.

* Changes for review

* Remove old "compare" flag.

* Reformatted.
2022-02-22 10:49:57 -05:00
Lai Jiang
bbac81996b Make a few quality-of-life improvements in CloudTasksUtils (#1521)
* Make a few quality-of-life improvements in CloudTasksUtils

1. Update the method names. There are too many overloaded methods and it
   is hard to figure out which one does which without checking the
   javadoc.

2. Added a method in the task matcher to specify the delay time in
   DateTime, so the caller does not need to convert it to Timestamp.

3. Remove the explicit dependency on a clock when enqueueing a task with
   a delay; the clock is now injected directly into the util instance
   itself.
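The conversion that item 2 spares the caller boils down to splitting epoch milliseconds into the seconds/nanos pair a protobuf Timestamp carries. A minimal sketch using only java.time (not the Joda or protobuf types the real helper deals with):

```java
import java.time.Instant;

// Sketch of the millis -> (seconds, nanos) split that a protobuf-style
// Timestamp needs. Floor semantics handle pre-1970 instants correctly too.
public class TimestampSplit {
  static long[] split(long epochMillis) {
    long seconds = Math.floorDiv(epochMillis, 1000);
    long nanos = Math.floorMod(epochMillis, 1000) * 1_000_000L;
    return new long[] {seconds, nanos};
  }

  public static void main(String[] args) {
    long millis = Instant.parse("2022-02-18T20:21:56Z").toEpochMilli();
    long[] ts = split(millis + 500);
    System.out.println(ts[0] + "s " + ts[1] + "ns");
  }
}
```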
2022-02-18 20:21:56 -05:00
Ben McIlwain
52c759d1db Disable prober data deletion cron job in prod & sandbox (#1525)
* Disable prober data deletion cron job in prod & sandbox

This is going to unnecessarily make the database migration more complex, and we
don't need them that badly. We'll re-enable these cron jobs once we've written
the new version of this action that handles Cloud SQL correctly (the current
version only does Datastore anyway).
2022-02-17 08:46:40 -08:00
Weimin Yu
453af87615 Ignore prober data when comparing databases (#1523)
* Ignore prober data when comparing databases

Completely ignore prober data when comparing Datastore and SQL.

Prober data deletions are not propagated from Datastore to SQL. It is
difficult to distinguish soft-deletes from normal updates, therefore
difficult to avoid false positives when looking for differences.
2022-02-15 12:01:20 -05:00
Ben McIlwain
d0d7515c0a Make NordnUploadAction resilient to duplicate task queue tasks (#1516)
This is necessary because the Cloud Tasks API is not transactionally enrolled,
so it's possible that multiple tasks might end up being enqueued. We need to be
able to handle them.
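Duplicate-tolerant handling of this sort usually means making the action a no-op when its cursor already covers the work. A toy version (hypothetical, not the actual NordnUploadAction logic):

```java
// Hypothetical sketch of duplicate-tolerant task handling: a task is applied
// only if it advances the cursor, so replayed or duplicated tasks are no-ops.
public class IdempotentTaskSketch {
  static class Handler {
    long cursor = 0;
    int uploads = 0;

    void handle(long taskWatermark) {
      if (taskWatermark <= cursor) {
        return; // duplicate or stale task: work already done
      }
      uploads++;
      cursor = taskWatermark;
    }
  }

  public static void main(String[] args) {
    Handler h = new Handler();
    h.handle(100);
    h.handle(100); // duplicate enqueue from a non-transactional API
    System.out.println(h.uploads + " upload(s), cursor=" + h.cursor); // 1 upload(s), cursor=100
  }
}
```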
2022-02-14 14:59:46 -05:00
Michael Muller
2c70127573 Fix update timestamps for DomainContent types (#1517)
* Fix update timestamps for DomainContent types

We expect update timestamps to be updated whenever a containing entity is
modified and persisted, but unfortunately Hibernate doesn't seem to do this --
instead it appears to regard such an entity as unchanged.

To work around this, we explicitly reset the update timestamp whenever a
nested collection is modified in the Builder.
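The workaround can be pictured with a minimal builder (hypothetical types, not DomainContent itself): any setter that swaps out a nested collection also clears the stored timestamp, so the entity is re-stamped on the next persist.

```java
import java.util.List;

// Hypothetical sketch of the builder-side workaround: mutating a nested
// collection explicitly resets the update timestamp so the ORM re-stamps it.
public class TimestampResetSketch {
  static class Domain {
    List<String> gracePeriods = List.of();
    Long updateTimestampMillis = 1L; // non-null means "already stamped"
  }

  static class Builder {
    final Domain domain = new Domain();

    Builder setGracePeriods(List<String> gracePeriods) {
      domain.gracePeriods = gracePeriods;
      domain.updateTimestampMillis = null; // force re-stamp on persist
      return this;
    }

    Domain build() {
      return domain;
    }
  }

  public static void main(String[] args) {
    Domain d = new Builder().setGracePeriods(List.of("autoRenew")).build();
    System.out.println(d.updateTimestampMillis); // null -> will be re-stamped
  }
}
```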

Note that this change only solves the problem for DomainContent.  All other
entities containing UpdateAutoTimestamp will need to be audited and
instrumented with a similar change.

* Fix a handful of tests broken by this change

* Reformatted.
2022-02-14 11:31:03 -05:00
Rachel Guan
d3fc6063c9 Use CloudTasksUtils to enqueue in RegistrarSettingsAction (#1467)
* Use CloudTasksUtils to enqueue

* Add CloudTasksUtilsModule to FrontendComponent

* Fix Uri query issue

* Remove header and check service in matcher

* Use a ThreadLocal boolean in TestServer to determine enqueueing

* Extract enqueuing and email sending from tm().transact()
2022-02-10 11:16:28 -05:00
Weimin Yu
82802ec85c Compare datastore to sql action (#1507)
* Add action to DB comparison pipeline

Add a backend Action in the Nomulus server that launches the pipeline for
comparing datastore (secondary) with Cloud SQL (primary).

* Save progress

* Revert test changes

* Add pipeline launching
2022-02-10 10:43:36 -05:00
Rachel Guan
e53594a626 Fix protobuf-java-util dependency (#1518) 2022-02-09 14:11:09 -05:00
Rachel Guan
e6577e3f23 Use CloudTasksUtil to enqueue task in IcannReportingStagingAction (#1489)
* Use CloudTasksUtils to enqueue a task

* Use schedule time helper and add schedule time comparison
2022-02-09 12:33:56 -05:00
Michael Muller
c9da36be9f Fix create/update timestamp replay problems (#1515)
* Fix create/update timestamp replay problems

When CreateAutoTimestamp and UpdateAutoTimestamp are inserted into a
Transaction, their values are not populated in the same way as when they are
stored in the course of an SQL commit.  This results in different timestamp
values between SQL and datastore during the SQL -> DS replay.

Fix this by providing these values from the JPA transaction time when we're
doing transaction serialization.

This change also removes the initialization of the Ofy clock in
ExpandRecurringBillingEventsActionTest.  It's not necessary as the
ReplayExtension already takes care of this and doing it after the
ReplayExtension as we were breaks a test now that the update timestamps are
correct.
2022-02-09 08:48:51 -05:00
Rachel Guan
2ccae00dae Remove ReportingUtils and use CloudTasksUtil to enqueue tasks in GenerateInvoicesAction and GenerateSpec11ReportAction (#1491)
* Remove ReportingUtils and use CloudTasksUtils to enqueue

* Use schedule time helper to enqueue and update schedule time comparison

* Fix comment, indentation in gradle file and improve time comparison
2022-02-08 17:48:47 -05:00
Rachel Guan
00c8b6a76d Change from TaskQueueUtils to CloudTasksUtils in LoadTestAction (#1468)
* Change from TaskQueueUtils to CloudTasksUtils in LoadTestAction

* Put X_CSRF_TOKEN in task headers

* Fix schedule time and gradle issue

* Remove TaskQueue constant dependency

* Double run seconds

* Add comment for X_CSRF_TOKEN
2022-02-08 17:44:24 -05:00
Lai Jiang
09dca28122 Make EscrowDepositEncryptor work with BRDA deposits (#1512)
Also make it possible to specify a revision number.

2022-02-07 12:40:00 -05:00
Weimin Yu
b412bdef9f Fix flaky RdeStagingActionDatastoreTest (#1514)
* Fix flaky RdeStagingActionDatastoreTest

Fixed the most common cause of flakiness in one method (a Clock and
timestamp problem). Added a TODO to rethink the test case.

Also added notes on tasks potentially enqueued multiple times.
2022-02-04 10:40:52 -05:00
Rachel Guan
62e5de8a3a Add support for delay of duration when scheduling a task (#1493)
* Add support for delay by duration when scheduling task

* Fix comments

* Add test for negative duration

* Change delay parameter type to duration
2022-02-03 22:25:39 -05:00
Lai Jiang
fa9b784c5c Correctly delete all stopped versions except for the most recent 3 (#1511)
The gcloud command does some weird stuff with sorting when a custom format
is used. Here we instead rely on the Linux sort and head commands to sort
the versions list.

2022-02-03 16:04:58 -05:00
Weimin Yu
e2bd72a74e Add an index on Host.host_name column (#1510)
* Add an index on Host.host_name column

This field is queried during host creation and needs an index to speed
up the query.

Since Hibernate does not explicitly refer to indexes, we can change the
code and schema in one PR.
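On the code side, such a change is typically just an index declaration on the entity's table mapping (illustrative only; the actual Host entity's annotations are not reproduced here):

```java
@Entity
@Table(
    name = "Host",
    indexes = @Index(name = "idx_host_host_name", columnList = "host_name"))
public class Host { ... }
```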
2022-02-03 15:57:15 -05:00
gbrodman
28d41488b1 Use the built-in replicaJpaTm() in RDAP (#1506)
* Use the built-in replicaJpaTm() in RDAP

This includes a test for the replica-simulating transaction manager and
removal of any replica-specific code in RDAP tests, because it's
unnecessary due to the existing tests.
2022-02-03 11:14:26 -05:00
Weimin Yu
1107b9f2e3 Count duplicates when comparing Databases (#1509)
* Count duplicates when comparing Databases

Cursors may have duplicates in Datastore if imported across projects.
Count them instead of throwing.
2022-02-03 10:59:03 -05:00
Lai Jiang
9624b483d4 Copy the latest revision of BRDA during upload (#1508)
The revision was hardcoded to 0, which caused problems when we needed to
re-run BRDA.

2022-02-02 21:54:42 -05:00
Rachel Guan
365937f22d Change from TaskQueueUtils to CloudTasksUtils in RdeStaging (#1411)
* Change from TaskQueueUtils to CloudTasksUtils in RdeStaging
2022-02-01 20:41:56 -05:00
sarahcaseybot
d5db6c16bc Add DS validation to match Cloud DNS (#1487)
* Add DS validation to match Cloud DNS

* Add checks to flows

* Add some flow tests

* Add tests for DomainCreateFlow

* Add tests for UpdateDomainCommand

* Fix docs test

* Small fixes

* Remove builder from tests
2022-02-01 15:25:00 -05:00
Lai Jiang
c1ad06afd1 Allow the beam parameter in RDE standard mode (#1505)
Standard mode will determine the watermarks based on the cursors and
kick off subsequent uploading steps. In order to run both the Beam and
the Mapreduce pipeline in parallel, we need to allow setting the beam
parameter when in standard mode. This change should have been part of
https://github.com/google/nomulus/pull/1500.

2022-01-31 14:20:23 -05:00
gbrodman
b24670f33a Use the replica jpaTm in FKI and EppResource cache methods (#1503)
The cached methods are only used in situations where we don't really
care about being 100% synchronously up to date (e.g. whois), and they're
not used frequently anyway, so it's safe to use the replica in these
locations.
2022-01-28 18:05:18 -05:00
Weimin Yu
1253fa479a Release ValidateSqlPipeline as container image (#1504)
* Release ValidateSqlPipeline as container image
2022-01-28 14:57:31 -05:00
124 changed files with 3047 additions and 1354 deletions

View File

@@ -41,4 +41,20 @@ public interface Sleeper {
* @see com.google.common.util.concurrent.Uninterruptibles#sleepUninterruptibly
*/
void sleepUninterruptibly(ReadableDuration duration);
/**
* Puts the current thread to interruptible sleep.
*
* <p>This is a convenience method for {@link #sleep} that properly converts an {@link
* InterruptedException} to a {@link RuntimeException}.
*/
default void sleepInterruptibly(ReadableDuration duration) {
try {
sleep(duration);
} catch (InterruptedException e) {
// Restore current thread's interrupted state.
Thread.currentThread().interrupt();
throw new RuntimeException("Interrupted.", e);
}
}
}
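The default method above converts the checked InterruptedException into an unchecked one while restoring the interrupted flag. A self-contained replica (for illustration only, not the Nomulus interface itself) shows that an interrupt both surfaces as a RuntimeException and leaves the thread's interrupted state set:

```java
public class SleeperSketch {
  // Replica of the interrupt-restoring idiom from the diff above.
  static void sleepInterruptibly(long millis) {
    try {
      Thread.sleep(millis);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // restore interrupted state
      throw new RuntimeException("Interrupted.", e);
    }
  }

  public static void main(String[] args) {
    Thread.currentThread().interrupt(); // simulate an interrupt mid-sleep
    try {
      sleepInterruptibly(10);
    } catch (RuntimeException e) {
      // The flag survives, so callers up the stack can still observe it.
      System.out.println("caught, interrupted=" + Thread.currentThread().isInterrupted());
    }
  }
}
```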

View File

@@ -319,6 +319,7 @@ dependencies {
testCompile deps['com.google.appengine:appengine-testing']
testCompile deps['com.google.guava:guava-testlib']
testCompile deps['com.google.monitoring-client:contrib']
testCompile deps['com.google.protobuf:protobuf-java-util']
testCompile deps['com.google.truth:truth']
testCompile deps['com.google.truth.extensions:truth-java8-extension']
testCompile deps['org.checkerframework:checker-qual']
@@ -801,6 +802,11 @@ if (environment == 'alpha') {
mainClass: 'google.registry.beam.comparedb.ValidateDatastorePipeline',
metaData: 'google/registry/beam/validate_datastore_pipeline_metadata.json'
],
validateSql :
[
mainClass: 'google.registry.beam.comparedb.ValidateSqlPipeline',
metaData: 'google/registry/beam/validate_sql_pipeline_metadata.json'
],
]
project.tasks.create("stageBeamPipelines") {
doLast {

View File

@@ -21,6 +21,7 @@ import static google.registry.backup.ExportCommitLogDiffAction.UPPER_CHECKPOINT_
import static google.registry.backup.RestoreCommitLogsAction.BUCKET_OVERRIDE_PARAM;
import static google.registry.backup.RestoreCommitLogsAction.FROM_TIME_PARAM;
import static google.registry.backup.RestoreCommitLogsAction.TO_TIME_PARAM;
import static google.registry.backup.SyncDatastoreToSqlSnapshotAction.SQL_SNAPSHOT_ID_PARAM;
import static google.registry.request.RequestParameters.extractOptionalParameter;
import static google.registry.request.RequestParameters.extractRequiredDatetimeParameter;
import static google.registry.request.RequestParameters.extractRequiredParameter;
@@ -98,6 +99,12 @@ public final class BackupModule {
return extractRequiredDatetimeParameter(req, TO_TIME_PARAM);
}
@Provides
@Parameter(SQL_SNAPSHOT_ID_PARAM)
static String provideSqlSnapshotId(HttpServletRequest req) {
return extractRequiredParameter(req, SQL_SNAPSHOT_ID_PARAM);
}
@Provides
@Backups
static ListeningExecutorService provideListeningExecutorService() {

View File

@@ -30,6 +30,7 @@ import google.registry.request.Action.Service;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import java.util.Optional;
import javax.inject.Inject;
import org.joda.time.DateTime;
@@ -64,33 +65,47 @@ public final class CommitLogCheckpointAction implements Runnable {
@Override
public void run() {
createCheckPointAndStartAsyncExport();
}
/**
* Creates a {@link CommitLogCheckpoint} and initiates an asynchronous export task.
*
* @return the {@code CommitLogCheckpoint} to be exported
*/
public Optional<CommitLogCheckpoint> createCheckPointAndStartAsyncExport() {
final CommitLogCheckpoint checkpoint = strategy.computeCheckpoint();
logger.atInfo().log(
"Generated candidate checkpoint for time: %s", checkpoint.getCheckpointTime());
-    ofyTm()
-        .transact(
-            () -> {
-              DateTime lastWrittenTime = CommitLogCheckpointRoot.loadRoot().getLastWrittenTime();
-              if (isBeforeOrAt(checkpoint.getCheckpointTime(), lastWrittenTime)) {
-                logger.atInfo().log(
-                    "Newer checkpoint already written at time: %s", lastWrittenTime);
-                return;
-              }
-              auditedOfy()
-                  .saveIgnoringReadOnlyWithoutBackup()
-                  .entities(
-                      checkpoint, CommitLogCheckpointRoot.create(checkpoint.getCheckpointTime()));
-              // Enqueue a diff task between previous and current checkpoints.
-              cloudTasksUtils.enqueue(
-                  QUEUE_NAME,
-                  CloudTasksUtils.createPostTask(
-                      ExportCommitLogDiffAction.PATH,
-                      Service.BACKEND.toString(),
-                      ImmutableMultimap.of(
-                          LOWER_CHECKPOINT_TIME_PARAM,
-                          lastWrittenTime.toString(),
-                          UPPER_CHECKPOINT_TIME_PARAM,
-                          checkpoint.getCheckpointTime().toString())));
-            });
+    boolean isCheckPointPersisted =
+        ofyTm()
+            .transact(
+                () -> {
+                  DateTime lastWrittenTime =
+                      CommitLogCheckpointRoot.loadRoot().getLastWrittenTime();
+                  if (isBeforeOrAt(checkpoint.getCheckpointTime(), lastWrittenTime)) {
+                    logger.atInfo().log(
+                        "Newer checkpoint already written at time: %s", lastWrittenTime);
+                    return false;
+                  }
+                  auditedOfy()
+                      .saveIgnoringReadOnlyWithoutBackup()
+                      .entities(
+                          checkpoint,
+                          CommitLogCheckpointRoot.create(checkpoint.getCheckpointTime()));
+                  // Enqueue a diff task between previous and current checkpoints.
+                  cloudTasksUtils.enqueue(
+                      QUEUE_NAME,
+                      cloudTasksUtils.createPostTask(
+                          ExportCommitLogDiffAction.PATH,
+                          Service.BACKEND.toString(),
+                          ImmutableMultimap.of(
+                              LOWER_CHECKPOINT_TIME_PARAM,
+                              lastWrittenTime.toString(),
+                              UPPER_CHECKPOINT_TIME_PARAM,
+                              checkpoint.getCheckpointTime().toString())));
+                  return true;
+                });
+    return isCheckPointPersisted ? Optional.of(checkpoint) : Optional.empty();
}
}

View File

@@ -0,0 +1,173 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.backup;
import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_OK;
import com.google.common.flogger.FluentLogger;
import google.registry.beam.comparedb.LatestDatastoreSnapshotFinder;
import google.registry.config.RegistryConfig.Config;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.ofy.CommitLogCheckpoint;
import google.registry.model.replay.ReplicateToDatastoreAction;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.Sleeper;
import java.util.Optional;
import javax.inject.Inject;
import org.joda.time.DateTime;
import org.joda.time.Duration;
/**
* Synchronizes Datastore to a given SQL snapshot when SQL is the primary database.
*
* <p>The caller takes the responsibility for:
*
* <ul>
* <li>verifying the current migration stage
* <li>acquiring the {@link ReplicateToDatastoreAction#REPLICATE_TO_DATASTORE_LOCK_NAME
* replication lock}, and
* <li>while holding the lock, creating an SQL snapshot and invoking this action with the snapshot
* id
* </ul>
*
* The caller may release the replication lock upon receiving the response from this action. Please
* refer to {@link google.registry.tools.ValidateDatastoreWithSqlCommand} for more information on
* usage.
*
* <p>This action plays SQL transactions up to the user-specified snapshot, creates a new CommitLog
* checkpoint, and exports all CommitLogs to GCS up to this checkpoint. The timestamp of this
* checkpoint can be used to recreate a Datastore snapshot that is equivalent to the given SQL
* snapshot. If this action succeeds, the checkpoint timestamp is included in the response (the
* format of which is defined by {@link #SUCCESS_RESPONSE_TEMPLATE}).
*/
@Action(
service = Service.BACKEND,
path = SyncDatastoreToSqlSnapshotAction.PATH,
method = Action.Method.POST,
auth = Auth.AUTH_INTERNAL_OR_ADMIN)
@DeleteAfterMigration
public class SyncDatastoreToSqlSnapshotAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
public static final String PATH = "/_dr/task/syncDatastoreToSqlSnapshot";
public static final String SUCCESS_RESPONSE_TEMPLATE =
"Datastore is up-to-date with provided SQL snapshot (%s). CommitLog timestamp is (%s).";
static final String SQL_SNAPSHOT_ID_PARAM = "sqlSnapshotId";
private static final int COMMITLOGS_PRESENCE_CHECK_ATTEMPTS = 10;
private static final Duration COMMITLOGS_PRESENCE_CHECK_DELAY = Duration.standardSeconds(6);
private final Response response;
private final Sleeper sleeper;
@Config("commitLogGcsBucket")
private final String gcsBucket;
private final GcsDiffFileLister gcsDiffFileLister;
private final LatestDatastoreSnapshotFinder datastoreSnapshotFinder;
private final CommitLogCheckpointAction commitLogCheckpointAction;
private final String sqlSnapshotId;
@Inject
SyncDatastoreToSqlSnapshotAction(
Response response,
Sleeper sleeper,
@Config("commitLogGcsBucket") String gcsBucket,
GcsDiffFileLister gcsDiffFileLister,
LatestDatastoreSnapshotFinder datastoreSnapshotFinder,
CommitLogCheckpointAction commitLogCheckpointAction,
@Parameter(SQL_SNAPSHOT_ID_PARAM) String sqlSnapshotId) {
this.response = response;
this.sleeper = sleeper;
this.gcsBucket = gcsBucket;
this.gcsDiffFileLister = gcsDiffFileLister;
this.datastoreSnapshotFinder = datastoreSnapshotFinder;
this.commitLogCheckpointAction = commitLogCheckpointAction;
this.sqlSnapshotId = sqlSnapshotId;
}
@Override
public void run() {
logger.atInfo().log("Datastore validation invoked. SqlSnapshotId is %s.", sqlSnapshotId);
try {
CommitLogCheckpoint checkpoint = ensureDatabasesComparable(sqlSnapshotId);
response.setStatus(SC_OK);
response.setPayload(
String.format(SUCCESS_RESPONSE_TEMPLATE, sqlSnapshotId, checkpoint.getCheckpointTime()));
return;
} catch (Exception e) {
response.setStatus(SC_INTERNAL_SERVER_ERROR);
response.setPayload(e.getMessage());
}
}
private CommitLogCheckpoint ensureDatabasesComparable(String sqlSnapshotId) {
// Replicate SQL transaction to Datastore, up to when this snapshot is taken.
int playbacks = ReplicateToDatastoreAction.replayAllTransactions(Optional.of(sqlSnapshotId));
logger.atInfo().log("Played %s SQL transactions.", playbacks);
Optional<CommitLogCheckpoint> checkpoint = exportCommitLogs();
if (!checkpoint.isPresent()) {
throw new RuntimeException("Cannot create CommitLog checkpoint");
}
logger.atInfo().log(
"CommitLog checkpoint created at %s.", checkpoint.get().getCheckpointTime());
verifyCommitLogsPersisted(checkpoint.get());
return checkpoint.get();
}
private Optional<CommitLogCheckpoint> exportCommitLogs() {
// Trigger an async CommitLog export to GCS. Will check file availability later.
// Although we could add support for synchronous execution, it can disrupt the export
// cadence when the system is busy.
Optional<CommitLogCheckpoint> checkpoint =
commitLogCheckpointAction.createCheckPointAndStartAsyncExport();
// Failure to create a checkpoint is most likely caused by a race with cron-triggered
// checkpointing. Retry once.
if (!checkpoint.isPresent()) {
commitLogCheckpointAction.createCheckPointAndStartAsyncExport();
}
return checkpoint;
}
private void verifyCommitLogsPersisted(CommitLogCheckpoint checkpoint) {
DateTime exportStartTime =
datastoreSnapshotFinder
.getSnapshotInfo(checkpoint.getCheckpointTime().toInstant())
.exportInterval()
.getStart();
logger.atInfo().log("Found Datastore export at %s", exportStartTime);
for (int attempts = 0; attempts < COMMITLOGS_PRESENCE_CHECK_ATTEMPTS; attempts++) {
try {
gcsDiffFileLister.listDiffFiles(gcsBucket, exportStartTime, checkpoint.getCheckpointTime());
return;
} catch (IllegalStateException e) {
// Gap in commitlog files. Fall through to sleep and retry.
logger.atInfo().log("Commitlog files not yet found on GCS.");
}
sleeper.sleepInterruptibly(COMMITLOGS_PRESENCE_CHECK_DELAY);
}
throw new RuntimeException("Cannot find all commitlog files.");
}
}

View File

@@ -66,7 +66,7 @@ public class LatestDatastoreSnapshotFinder {
* "2021-11-19T06:00:00_76493/2021-11-19T06:00:00_76493.overall_export_metadata".
*/
Optional<String> metaFilePathOptional =
-        findNewestExportMetadataFileBeforeTime(bucketName, exportEndTimeUpperBound, 2);
+        findNewestExportMetadataFileBeforeTime(bucketName, exportEndTimeUpperBound, 5);
if (!metaFilePathOptional.isPresent()) {
throw new NoSuchElementException("No exports found over the past 2 days.");
}
@@ -125,12 +125,12 @@ public class LatestDatastoreSnapshotFinder {
/** Holds information about a Datastore snapshot. */
@AutoValue
-  abstract static class DatastoreSnapshotInfo {
-    abstract String exportDir();
+  public abstract static class DatastoreSnapshotInfo {
+    public abstract String exportDir();
-    abstract String commitLogDir();
+    public abstract String commitLogDir();
-    abstract Interval exportInterval();
+    public abstract Interval exportInterval();
static DatastoreSnapshotInfo create(
String exportDir, String commitLogDir, Interval exportOperationInterval) {

View File

@@ -23,7 +23,6 @@ import com.google.common.collect.ImmutableSet;
import com.google.common.flogger.FluentLogger;
import google.registry.beam.initsql.Transforms;
import google.registry.config.RegistryEnvironment;
-import google.registry.model.BackupGroupRoot;
import google.registry.model.EppResource;
import google.registry.model.ImmutableObject;
import google.registry.model.annotations.DeleteAfterMigration;
@@ -104,6 +103,7 @@ final class ValidateSqlUtils {
private final HashMap<String, Counter> missingCounters = new HashMap<>();
private final HashMap<String, Counter> unequalCounters = new HashMap<>();
private final HashMap<String, Counter> badEntityCounters = new HashMap<>();
private final HashMap<String, Counter> duplicateEntityCounters = new HashMap<>();
private volatile boolean logPrinted = false;
@@ -120,6 +120,8 @@ final class ValidateSqlUtils {
counterKey, Metrics.counter("CompareDB", "Missing In One DB: " + counterKey));
unequalCounters.put(counterKey, Metrics.counter("CompareDB", "Not Equal:" + counterKey));
badEntityCounters.put(counterKey, Metrics.counter("CompareDB", "Bad Entities:" + counterKey));
duplicateEntityCounters.put(
counterKey, Metrics.counter("CompareDB", "Duplicate Entities:" + counterKey));
}
/**
@@ -158,12 +160,18 @@ final class ValidateSqlUtils {
ImmutableList<SqlEntity> entities = ImmutableList.copyOf(kv.getValue());
verify(!entities.isEmpty(), "Can't happen: no value for key %s.", kv.getKey());
-      verify(entities.size() <= 2, "Unexpected duplicates for key %s", kv.getKey());
String counterKey = getCounterKey(entities.get(0).getClass());
ensureCounterExists(counterKey);
totalCounters.get(counterKey).inc();
if (entities.size() > 2) {
// Duplicates may happen with Cursors if imported across projects. Their key in Datastore
// (the id field) encodes the project name and is not fixed by the importing job.
duplicateEntityCounters.get(counterKey).inc();
return;
}
if (entities.size() == 1) {
if (isSpecialCaseProberEntity(entities.get(0))) {
return;
@@ -176,12 +184,19 @@ final class ValidateSqlUtils {
}
return;
}
-      SqlEntity entity0;
-      SqlEntity entity1;
+      SqlEntity entity0 = entities.get(0);
+      SqlEntity entity1 = entities.get(1);
if (isSpecialCaseProberEntity(entity0) && isSpecialCaseProberEntity(entity1)) {
// Ignore prober-related data: their deletions are not propagated from Datastore to SQL.
// When code reaches here, in most cases it involves one soft deleted entity in Datastore
// and an SQL entity with its pre-deletion status.
return;
}
try {
-        entity0 = normalizeEntity(entities.get(0));
-        entity1 = normalizeEntity(entities.get(1));
+        entity0 = normalizeEntity(entity0);
+        entity1 = normalizeEntity(entity1);
} catch (Exception e) {
// Temporary debugging help. See logDiff() above.
if (!logPrinted) {
@@ -218,15 +233,6 @@ final class ValidateSqlUtils {
*/
static SqlEntity normalizeEppResource(SqlEntity eppResource) {
try {
-      if (isSpecialCaseProberEntity(eppResource)) {
-        // Clearing some timestamps. See isSpecialCaseProberEntity() for reasons.
-        Field lastUpdateTime = BackupGroupRoot.class.getDeclaredField("updateTimestamp");
-        lastUpdateTime.setAccessible(true);
-        lastUpdateTime.set(eppResource, null);
-        Field deletionTime = EppResource.class.getDeclaredField("deletionTime");
-        deletionTime.setAccessible(true);
-        deletionTime.set(eppResource, null);
-      }
Field authField =
eppResource instanceof DomainContent
? DomainContent.class.getDeclaredField("authInfo")

View File

@@ -297,10 +297,14 @@ public class RdeIO {
logger.atInfo().log(
"Rolled forward %s on %s cursor to %s.", key.cursor(), key.tld(), newPosition);
RdeRevision.saveRevision(key.tld(), key.watermark(), key.mode(), revision);
// Enqueueing a task is a side effect that is not undone if the transaction rolls
// back. So this may result in multiple copies of the same task being processed.
// This is fine because the RdeUploadAction is guarded by a lock and tracks progress
// by cursor. The BrdaCopyAction writes a file to GCS, which is an atomic action.
if (key.mode() == RdeMode.FULL) {
cloudTasksUtils.enqueue(
RDE_UPLOAD_QUEUE,
-              CloudTasksUtils.createPostTask(
+              cloudTasksUtils.createPostTask(
RdeUploadAction.PATH,
Service.BACKEND.getServiceId(),
ImmutableMultimap.of(
@@ -311,7 +315,7 @@ public class RdeIO {
} else {
cloudTasksUtils.enqueue(
BRDA_QUEUE,
-              CloudTasksUtils.createPostTask(
+              cloudTasksUtils.createPostTask(
BrdaCopyAction.PATH,
Service.BACKEND.getServiceId(),
ImmutableMultimap.of(

View File

@@ -21,6 +21,7 @@ import dagger.Module;
import dagger.Provides;
import google.registry.config.CredentialModule.DefaultCredential;
import google.registry.config.RegistryConfig.Config;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import google.registry.util.CloudTasksUtils.GcpCloudTasksClient;
import google.registry.util.CloudTasksUtils.SerializableCloudTasksClient;
@@ -46,8 +47,9 @@ public abstract class CloudTasksUtilsModule {
@Config("projectId") String projectId,
@Config("locationId") String locationId,
SerializableCloudTasksClient client,
-      Retrier retrier) {
-    return new CloudTasksUtils(retrier, projectId, locationId, client);
+      Retrier retrier,
+      Clock clock) {
+    return new CloudTasksUtils(retrier, clock, projectId, locationId, client);
}
// Provides a supplier instead of using a Dagger @Provider because the latter is not serializable.

View File

@@ -20,7 +20,6 @@ import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.auth.Auth;
-import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import java.util.Optional;
import javax.inject.Inject;
@@ -35,7 +34,6 @@ public final class CommitLogFanoutAction implements Runnable {
public static final String BUCKET_PARAM = "bucket";
-  @Inject Clock clock;
@Inject CloudTasksUtils cloudTasksUtils;
@Inject @Parameter("endpoint") String endpoint;
@@ -43,18 +41,15 @@ public final class CommitLogFanoutAction implements Runnable {
@Inject @Parameter("jitterSeconds") Optional<Integer> jitterSeconds;
@Inject CommitLogFanoutAction() {}
@Override
public void run() {
for (int bucketId : CommitLogBucket.getBucketIds()) {
cloudTasksUtils.enqueue(
queue,
-          CloudTasksUtils.createPostTask(
+          cloudTasksUtils.createPostTaskWithJitter(
              endpoint,
              Service.BACKEND.toString(),
              ImmutableMultimap.of(BUCKET_PARAM, Integer.toString(bucketId)),
-              clock,
              jitterSeconds));
}
}

View File

@@ -45,7 +45,6 @@ import google.registry.request.ParameterMap;
import google.registry.request.RequestParameters;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
-import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import java.util.Optional;
import java.util.stream.Stream;
@@ -98,7 +97,6 @@ public final class TldFanoutAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
-  @Inject Clock clock;
@Inject CloudTasksUtils cloudTasksUtils;
@Inject Response response;
@Inject @Parameter(ENDPOINT_PARAM) String endpoint;
@@ -159,7 +157,7 @@ public final class TldFanoutAction implements Runnable {
params = ArrayListMultimap.create(params);
params.put(RequestParameters.PARAM_TLD, tld);
}
-    return CloudTasksUtils.createPostTask(
-        endpoint, Service.BACKEND.toString(), params, clock, jitterSeconds);
+    return cloudTasksUtils.createPostTaskWithJitter(
+        endpoint, Service.BACKEND.toString(), params, jitterSeconds);
}
}

View File

@@ -422,6 +422,12 @@ have been in the database for a certain period of time. -->
<url-pattern>/_dr/task/createSyntheticHistoryEntries</url-pattern>
</servlet-mapping>
<!-- Action to sync Datastore to a snapshot of the primary SQL database. -->
<servlet-mapping>
<servlet-name>backend-servlet</servlet-name>
<url-pattern>/_dr/task/syncDatastoreToSqlSnapshot</url-pattern>
</servlet-mapping>
<!-- Security config -->
<security-constraint>
<web-resource-collection>

View File

@@ -253,16 +253,6 @@
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/cron/fanout?queue=retryable-cron-tasks&endpoint=/_dr/task/deleteProberData&runInEmpty]]></url>
<description>
This job clears out data from probers and runs once a week.
</description>
<schedule>every monday 14:00</schedule>
<timezone>UTC</timezone>
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/cron/fanout?queue=retryable-cron-tasks&endpoint=/_dr/task/exportReservedTerms&forEachRealTld]]></url>
<description>

View File

@@ -168,15 +168,6 @@
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/task/sendExpiringCertificateNotificationEmail]]></url>
<description>
This job runs an action that sends emails to partners if their certificates are expiring soon.
</description>
<schedule>every day 04:30</schedule>
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/cron/fanout?queue=export-snapshot&endpoint=/_dr/task/backupDatastore&runInEmpty]]></url>
<description>
@@ -191,16 +182,6 @@
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/cron/fanout?queue=retryable-cron-tasks&endpoint=/_dr/task/deleteProberData&runInEmpty]]></url>
<description>
This job clears out data from probers and runs once a week.
</description>
<schedule>every monday 14:00</schedule>
<timezone>UTC</timezone>
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/cron/fanout?queue=retryable-cron-tasks&endpoint=/_dr/task/exportReservedTerms&forEachRealTld]]></url>
<description>

View File

@@ -14,8 +14,6 @@
package google.registry.export.sheet;
import static com.google.appengine.api.taskqueue.QueueFactory.getQueue;
import static com.google.appengine.api.taskqueue.TaskOptions.Builder.withUrl;
import static com.google.common.net.MediaType.PLAIN_TEXT_UTF_8;
import static google.registry.request.Action.Method.POST;
import static javax.servlet.http.HttpServletResponse.SC_BAD_REQUEST;
@@ -23,7 +21,6 @@ import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_NO_CONTENT;
import static javax.servlet.http.HttpServletResponse.SC_OK;
import com.google.appengine.api.taskqueue.TaskOptions.Method;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryConfig.Config;
import google.registry.request.Action;
@@ -100,7 +97,7 @@ public class SyncRegistrarsSheetAction implements Runnable {
}
public static final String PATH = "/_dr/task/syncRegistrarsSheet";
private static final String QUEUE = "sheet";
public static final String QUEUE = "sheet";
private static final String LOCK_NAME = "Synchronize registrars sheet";
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
@@ -144,11 +141,4 @@ public class SyncRegistrarsSheetAction implements Runnable {
Result.LOCKED.send(response, null);
}
}
/**
* Enqueues a sync registrar sheet task targeting the App Engine service specified by hostname.
*/
public static void enqueueRegistrarSheetSync(String hostname) {
getQueue(QUEUE).add(withUrl(PATH).method(Method.GET).header("Host", hostname));
}
}

View File

@@ -169,6 +169,7 @@ import org.joda.time.Duration;
* @error {@link DomainFlowUtils.FeesMismatchException}
* @error {@link DomainFlowUtils.FeesRequiredDuringEarlyAccessProgramException}
* @error {@link DomainFlowUtils.FeesRequiredForPremiumNameException}
* @error {@link DomainFlowUtils.InvalidDsRecordException}
* @error {@link DomainFlowUtils.InvalidIdnDomainLabelException}
* @error {@link DomainFlowUtils.InvalidPunycodeException}
* @error {@link DomainFlowUtils.InvalidTcnIdChecksumException}

View File

@@ -129,6 +129,7 @@ import google.registry.model.tld.label.ReservedList;
import google.registry.model.tmch.ClaimsListDao;
import google.registry.persistence.VKey;
import google.registry.tldconfig.idn.IdnLabelValidator;
import google.registry.tools.DigestType;
import google.registry.util.Idn;
import java.math.BigDecimal;
import java.util.Collection;
@@ -144,6 +145,7 @@ import org.joda.money.CurrencyUnit;
import org.joda.money.Money;
import org.joda.time.DateTime;
import org.joda.time.Duration;
import org.xbill.DNS.DNSSEC.Algorithm;
/** Static utility functions for domain flows. */
public class DomainFlowUtils {
@@ -293,13 +295,46 @@ public class DomainFlowUtils {
/** Check that the DS data that will be set on a domain is valid. */
static void validateDsData(Set<DelegationSignerData> dsData) throws EppException {
if (dsData != null && dsData.size() > MAX_DS_RECORDS_PER_DOMAIN) {
throw new TooManyDsRecordsException(
String.format(
"A maximum of %s DS records are allowed per domain.", MAX_DS_RECORDS_PER_DOMAIN));
if (dsData != null) {
if (dsData.size() > MAX_DS_RECORDS_PER_DOMAIN) {
throw new TooManyDsRecordsException(
String.format(
"A maximum of %s DS records are allowed per domain.", MAX_DS_RECORDS_PER_DOMAIN));
}
// TODO(sarahbot@): Add signature length verification
ImmutableList<DelegationSignerData> invalidAlgorithms =
dsData.stream()
.filter(ds -> !validateAlgorithm(ds.getAlgorithm()))
.collect(toImmutableList());
if (!invalidAlgorithms.isEmpty()) {
throw new InvalidDsRecordException(
String.format(
"Domain contains DS record(s) with an invalid algorithm wire value: %s",
invalidAlgorithms));
}
ImmutableList<DelegationSignerData> invalidDigestTypes =
dsData.stream()
.filter(ds -> !DigestType.fromWireValue(ds.getDigestType()).isPresent())
.collect(toImmutableList());
if (!invalidDigestTypes.isEmpty()) {
throw new InvalidDsRecordException(
String.format(
"Domain contains DS record(s) with an invalid digest type: %s",
invalidDigestTypes));
}
}
}
public static boolean validateAlgorithm(int alg) {
if (alg > 255 || alg < 0) {
return false;
}
// Algorithms that are reserved or unassigned will just return a string representation of their
// integer wire value.
String algorithm = Algorithm.string(alg);
return !algorithm.equals(Integer.toString(alg));
}
/** We only allow specifying years in a period. */
static Period verifyUnitIsYears(Period period) throws EppException {
if (!checkNotNull(period).getUnit().equals(Period.Unit.YEARS)) {
@@ -1217,6 +1252,13 @@ public class DomainFlowUtils {
}
}
/** Domain has an invalid DS record. */
static class InvalidDsRecordException extends ParameterValuePolicyErrorException {
public InvalidDsRecordException(String message) {
super(message);
}
}
/** Domain name is under tld which doesn't exist. */
static class TldDoesNotExistException extends ParameterValueRangeErrorException {
public TldDoesNotExistException(String tld) {

View File
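The `validateAlgorithm` method above relies on dnsjava's `Algorithm.string(int)`, which returns a mnemonic for assigned DNSSEC algorithms and just the number itself for reserved/unassigned values. A minimal self-contained sketch of the same check, substituting a small hand-written subset of the IANA registry for the dnsjava lookup (the map below is illustrative, not complete):

```java
import java.util.Map;

/**
 * Sketch of the DS-record algorithm wire-value check: the value must fit in
 * one octet and map to an assigned algorithm. The real code delegates the
 * mnemonic lookup to org.xbill.DNS.DNSSEC.Algorithm.string().
 */
public class DsAlgorithmCheck {
  // Illustrative subset of assigned DNSSEC algorithm mnemonics.
  private static final Map<Integer, String> MNEMONICS =
      Map.of(
          5, "RSASHA1",
          7, "RSASHA1-NSEC3-SHA1",
          8, "RSASHA256",
          10, "RSASHA512",
          13, "ECDSAP256SHA256",
          14, "ECDSAP384SHA384",
          15, "ED25519",
          16, "ED448");

  /** Returns true if the wire value is in range and assigned a mnemonic. */
  public static boolean isValidWireValue(int alg) {
    if (alg < 0 || alg > 255) {
      return false; // DS algorithm wire values are a single octet.
    }
    // Stands in for the Algorithm.string() trick: unassigned values have no mnemonic.
    return MNEMONICS.containsKey(alg);
  }
}
```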

@@ -114,6 +114,7 @@ import org.joda.time.DateTime;
* @error {@link DomainFlowUtils.EmptySecDnsUpdateException}
* @error {@link DomainFlowUtils.FeesMismatchException}
* @error {@link DomainFlowUtils.FeesRequiredForNonFreeOperationException}
* @error {@link DomainFlowUtils.InvalidDsRecordException}
* @error {@link DomainFlowUtils.LinkedResourcesDoNotExistException}
* @error {@link DomainFlowUtils.LinkedResourceInPendingDeleteProhibitsOperationException}
* @error {@link DomainFlowUtils.MaxSigLifeChangeNotSupportedException}

View File

@@ -14,8 +14,6 @@
package google.registry.loadtest;
import static com.google.appengine.api.taskqueue.QueueConstants.maxTasksPerAdd;
import static com.google.appengine.api.taskqueue.QueueFactory.getQueue;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static com.google.common.collect.Lists.partition;
@@ -24,16 +22,20 @@ import static google.registry.util.ResourceUtils.readResourceUtf8;
import static java.util.Arrays.asList;
import static org.joda.time.DateTimeZone.UTC;
import com.google.appengine.api.taskqueue.TaskOptions;
import com.google.cloud.tasks.v2.Task;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.collect.Iterators;
import com.google.common.flogger.FluentLogger;
import com.google.protobuf.Timestamp;
import google.registry.config.RegistryEnvironment;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.auth.Auth;
import google.registry.security.XsrfTokenManager;
import google.registry.util.TaskQueueUtils;
import google.registry.util.CloudTasksUtils;
import java.time.Instant;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
@@ -62,6 +64,7 @@ public class LoadTestAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static final int NUM_QUEUES = 10;
private static final int MAX_TASKS_PER_LOAD = 100;
private static final int ARBITRARY_VALID_HOST_LENGTH = 40;
private static final int MAX_CONTACT_LENGTH = 13;
private static final int MAX_DOMAIN_LABEL_LENGTH = 63;
@@ -146,7 +149,7 @@ public class LoadTestAction implements Runnable {
@Parameter("hostInfos")
int hostInfosPerSecond;
@Inject TaskQueueUtils taskQueueUtils;
@Inject CloudTasksUtils cloudTasksUtils;
private final String xmlContactCreateTmpl;
private final String xmlContactCreateFail;
@@ -208,7 +211,7 @@ public class LoadTestAction implements Runnable {
ImmutableList<String> contactNames = contactNamesBuilder.build();
ImmutableList<String> hostPrefixes = hostPrefixesBuilder.build();
ImmutableList.Builder<TaskOptions> tasks = new ImmutableList.Builder<>();
ImmutableList.Builder<Task> tasks = new ImmutableList.Builder<>();
for (int offsetSeconds = 0; offsetSeconds < runSeconds; offsetSeconds++) {
DateTime startSecond = initialStartSecond.plusSeconds(offsetSeconds);
// The first "failed" creates might actually succeed if the object doesn't already exist, but
@@ -254,7 +257,7 @@ public class LoadTestAction implements Runnable {
.collect(toImmutableList()),
startSecond));
}
ImmutableList<TaskOptions> taskOptions = tasks.build();
ImmutableList<Task> taskOptions = tasks.build();
enqueue(taskOptions);
logger.atInfo().log("Added %d total load test tasks.", taskOptions.size());
}
@@ -322,28 +325,51 @@ public class LoadTestAction implements Runnable {
return name.toString();
}
private List<TaskOptions> createTasks(List<String> xmls, DateTime start) {
ImmutableList.Builder<TaskOptions> tasks = new ImmutableList.Builder<>();
private List<Task> createTasks(List<String> xmls, DateTime start) {
ImmutableList.Builder<Task> tasks = new ImmutableList.Builder<>();
for (int i = 0; i < xmls.size(); i++) {
// Space tasks evenly across a second.
int offsetMillis = (int) (1000.0 / xmls.size() * i);
Instant scheduleTime =
Instant.ofEpochMilli(start.plusMillis((int) (1000.0 / xmls.size() * i)).getMillis());
tasks.add(
TaskOptions.Builder.withUrl("/_dr/epptool")
.etaMillis(start.getMillis() + offsetMillis)
.header(X_CSRF_TOKEN, xsrfToken)
.param("clientId", registrarId)
.param("superuser", Boolean.FALSE.toString())
.param("dryRun", Boolean.FALSE.toString())
.param("xml", xmls.get(i)));
Task.newBuilder()
.setAppEngineHttpRequest(
cloudTasksUtils
.createPostTask(
"/_dr/epptool",
Service.TOOLS.toString(),
ImmutableMultimap.of(
"clientId",
registrarId,
"superuser",
Boolean.FALSE.toString(),
"dryRun",
Boolean.FALSE.toString(),
"xml",
xmls.get(i)))
.toBuilder()
.getAppEngineHttpRequest()
.toBuilder()
// The X_CSRF_TOKEN stays in the headers (rather than the params) because
// of the existing authentication setup in {@link
// google.registry.request.auth.LegacyAuthenticationMechanism}.
.putHeaders(X_CSRF_TOKEN, xsrfToken)
.build())
.setScheduleTime(
Timestamp.newBuilder()
.setSeconds(scheduleTime.getEpochSecond())
.setNanos(scheduleTime.getNano())
.build())
.build());
}
return tasks.build();
}
private void enqueue(List<TaskOptions> tasks) {
List<List<TaskOptions>> chunks = partition(tasks, maxTasksPerAdd());
private void enqueue(List<Task> tasks) {
List<List<Task>> chunks = partition(tasks, MAX_TASKS_PER_LOAD);
// Farm out tasks to multiple queues to work around queue qps quotas.
for (int i = 0; i < chunks.size(); i++) {
taskQueueUtils.enqueue(getQueue("load" + (i % NUM_QUEUES)), chunks.get(i));
cloudTasksUtils.enqueue("load" + (i % NUM_QUEUES), chunks.get(i));
}
}
}

View File
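The LoadTestAction hunk above spaces N tasks evenly across one second by turning a task index into a millisecond offset, `(int) (1000.0 / xmls.size() * i)`. The same arithmetic sketched standalone (class and method names are mine):

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

/**
 * Standalone sketch of the per-second spacing used by the load test: task i
 * of N is offset by floor(1000/N * i) milliseconds from the start second.
 */
public class EvenSpacing {
  public static List<Instant> scheduleTimes(Instant startSecond, int numTasks) {
    List<Instant> times = new ArrayList<>();
    for (int i = 0; i < numTasks; i++) {
      // Same arithmetic as the diff: (int) (1000.0 / numTasks * i).
      int offsetMillis = (int) (1000.0 / numTasks * i);
      times.add(startSecond.plusMillis(offsetMillis));
    }
    return times;
  }
}
```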

@@ -60,4 +60,25 @@ public abstract class BackupGroupRoot extends ImmutableObject implements UnsafeS
protected void copyUpdateTimestamp(BackupGroupRoot other) {
this.updateTimestamp = PreconditionsUtils.checkArgumentNotNull(other, "other").updateTimestamp;
}
/**
* Resets the {@link #updateTimestamp} to force Hibernate to persist it.
*
* <p>This method is for use in setters in derived builders that do not result in the derived
* object being persisted.
*/
protected void resetUpdateTimestamp() {
this.updateTimestamp = UpdateAutoTimestamp.create(null);
}
/**
* Sets the {@link #updateTimestamp}.
*
* <p>This method is for use in the few places where we need to restore the update timestamp after
* mutating a collection in order to force the new timestamp to be persisted when it ordinarily
* wouldn't.
*/
protected void setUpdateTimestamp(UpdateAutoTimestamp timestamp) {
updateTimestamp = timestamp;
}
}

View File

@@ -21,6 +21,7 @@ import static com.google.common.collect.Sets.union;
import static google.registry.config.RegistryConfig.getEppResourceCachingDuration;
import static google.registry.config.RegistryConfig.getEppResourceMaxCachedEntries;
import static google.registry.persistence.transaction.TransactionManagerFactory.ofyTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.replicaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.util.CollectionUtils.nullToEmpty;
import static google.registry.util.CollectionUtils.nullToEmptyImmutableCopy;
@@ -361,6 +362,16 @@ public abstract class EppResource extends BackupGroupRoot implements Buildable {
return thisCastToDerived();
}
/**
* Sets the update timestamp.
*
* <p>This is provided at EppResource since BackupGroupRoot doesn't have a Builder.
*/
public B setUpdateTimestamp(UpdateAutoTimestamp updateTimestamp) {
getInstance().setUpdateTimestamp(updateTimestamp);
return thisCastToDerived();
}
/** Build the resource, nullifying empty strings and sets and setting defaults. */
@Override
public T build() {
@@ -380,13 +391,13 @@ public abstract class EppResource extends BackupGroupRoot implements Buildable {
@Override
public EppResource load(VKey<? extends EppResource> key) {
return tm().doTransactionless(() -> tm().loadByKey(key));
return replicaTm().doTransactionless(() -> replicaTm().loadByKey(key));
}
@Override
public Map<VKey<? extends EppResource>, EppResource> loadAll(
Iterable<? extends VKey<? extends EppResource>> keys) {
return tm().doTransactionless(() -> tm().loadByKeys(keys));
return replicaTm().doTransactionless(() -> replicaTm().loadByKeys(keys));
}
};

View File

@@ -74,6 +74,8 @@ public class BulkQueryEntities {
builder.setGracePeriods(gracePeriods);
builder.setDsData(delegationSignerData);
builder.setNameservers(nsHosts);
// Restore the original update timestamp (this gets cleared when we set nameservers or DS data).
builder.setUpdateTimestamp(domainBaseLite.getUpdateTimestamp());
return builder.build();
}
@@ -100,6 +102,9 @@ public class BulkQueryEntities {
dsDataHistories.stream()
.map(DelegationSignerData::create)
.collect(toImmutableSet()))
// Restore the original update timestamp (this gets cleared when we set nameservers or
// DS data).
.setUpdateTimestamp(domainHistoryLite.domainContent.getUpdateTimestamp())
.build();
builder.setDomain(newDomainContent);
}

View File

@@ -895,6 +895,7 @@ public class DomainContent extends EppResource
public B setDsData(ImmutableSet<DelegationSignerData> dsData) {
getInstance().dsData = dsData;
getInstance().resetUpdateTimestamp();
return thisCastToDerived();
}
@@ -918,11 +919,13 @@ public class DomainContent extends EppResource
public B setNameservers(VKey<HostResource> nameserver) {
getInstance().nsHosts = ImmutableSet.of(nameserver);
getInstance().resetUpdateTimestamp();
return thisCastToDerived();
}
public B setNameservers(ImmutableSet<VKey<HostResource>> nameservers) {
getInstance().nsHosts = forceEmptyToNull(nameservers);
getInstance().resetUpdateTimestamp();
return thisCastToDerived();
}
@@ -1032,17 +1035,20 @@ public class DomainContent extends EppResource
public B setGracePeriods(ImmutableSet<GracePeriod> gracePeriods) {
getInstance().gracePeriods = gracePeriods;
getInstance().resetUpdateTimestamp();
return thisCastToDerived();
}
public B addGracePeriod(GracePeriod gracePeriod) {
getInstance().gracePeriods = union(getInstance().getGracePeriods(), gracePeriod);
getInstance().resetUpdateTimestamp();
return thisCastToDerived();
}
public B removeGracePeriod(GracePeriod gracePeriod) {
getInstance().gracePeriods =
CollectionUtils.difference(getInstance().getGracePeriods(), gracePeriod);
getInstance().resetUpdateTimestamp();
return thisCastToDerived();
}

View File

@@ -34,6 +34,9 @@ import javax.persistence.AccessType;
@ReportedOn
@Entity
@javax.persistence.Entity(name = "Host")
@javax.persistence.Table(
name = "Host",
indexes = {@javax.persistence.Index(columnList = "hostName")})
@ExternalMessagingName("host")
@WithStringVKey
@Access(AccessType.FIELD) // otherwise it'll use the default if the repoId (property)

View File

@@ -21,6 +21,7 @@ import static google.registry.config.RegistryConfig.getEppResourceCachingDuratio
import static google.registry.config.RegistryConfig.getEppResourceMaxCachedEntries;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.replicaJpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.util.CollectionUtils.entriesToImmutableMap;
import static google.registry.util.TypeUtils.instantiate;
@@ -51,6 +52,7 @@ import google.registry.model.host.HostResource;
import google.registry.model.replay.DatastoreOnlyEntity;
import google.registry.persistence.VKey;
import google.registry.persistence.transaction.CriteriaQueryBuilder;
import google.registry.persistence.transaction.JpaTransactionManager;
import google.registry.util.NonFinalForTesting;
import java.util.Collection;
import java.util.Comparator;
@@ -198,7 +200,7 @@ public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroup
*/
public static <E extends EppResource> ImmutableMap<String, ForeignKeyIndex<E>> load(
Class<E> clazz, Collection<String> foreignKeys, final DateTime now) {
return loadIndexesFromStore(clazz, foreignKeys, true).entrySet().stream()
return loadIndexesFromStore(clazz, foreignKeys, true, false).entrySet().stream()
.filter(e -> now.isBefore(e.getValue().getDeletionTime()))
.collect(entriesToImmutableMap());
}
@@ -217,7 +219,10 @@ public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroup
*/
private static <E extends EppResource>
ImmutableMap<String, ForeignKeyIndex<E>> loadIndexesFromStore(
Class<E> clazz, Collection<String> foreignKeys, boolean inTransaction) {
Class<E> clazz,
Collection<String> foreignKeys,
boolean inTransaction,
boolean useReplicaJpaTm) {
if (tm().isOfy()) {
Class<ForeignKeyIndex<E>> fkiClass = mapToFkiClass(clazz);
return ImmutableMap.copyOf(
@@ -226,17 +231,18 @@ public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroup
: tm().doTransactionless(() -> auditedOfy().load().type(fkiClass).ids(foreignKeys)));
} else {
String property = RESOURCE_CLASS_TO_FKI_PROPERTY.get(clazz);
JpaTransactionManager jpaTmToUse = useReplicaJpaTm ? replicaJpaTm() : jpaTm();
ImmutableList<ForeignKeyIndex<E>> indexes =
tm().transact(
() ->
jpaTm()
.criteriaQuery(
CriteriaQueryBuilder.create(clazz)
.whereFieldIsIn(property, foreignKeys)
.build())
.getResultStream()
.map(e -> ForeignKeyIndex.create(e, e.getDeletionTime()))
.collect(toImmutableList()));
jpaTmToUse.transact(
() ->
jpaTmToUse
.criteriaQuery(
CriteriaQueryBuilder.create(clazz)
.whereFieldIsIn(property, foreignKeys)
.build())
.getResultStream()
.map(e -> ForeignKeyIndex.create(e, e.getDeletionTime()))
.collect(toImmutableList()));
// We need to find and return the entities with the maximum deletionTime for each foreign key.
return Multimaps.index(indexes, ForeignKeyIndex::getForeignKey).asMap().entrySet().stream()
.map(
@@ -260,7 +266,8 @@ public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroup
loadIndexesFromStore(
RESOURCE_CLASS_TO_FKI_CLASS.inverse().get(key.getKind()),
ImmutableSet.of(foreignKey),
false)
false,
true)
.get(foreignKey));
}
@@ -276,7 +283,7 @@ public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroup
Streams.stream(keys).map(v -> v.getSqlKey().toString()).collect(toImmutableSet());
ImmutableSet<VKey<ForeignKeyIndex<?>>> typedKeys = ImmutableSet.copyOf(keys);
ImmutableMap<String, ? extends ForeignKeyIndex<? extends EppResource>> existingFkis =
loadIndexesFromStore(resourceClass, foreignKeys, false);
loadIndexesFromStore(resourceClass, foreignKeys, false, true);
// ofy omits keys that don't have values in Datastore, so re-add them here
// with Optional.empty() values.
return Maps.asMap(
@@ -336,7 +343,7 @@ public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroup
// Safe to cast VKey<FKI<E>> to VKey<FKI<?>>
@SuppressWarnings("unchecked")
ImmutableList<VKey<ForeignKeyIndex<?>>> fkiVKeys =
Streams.stream(foreignKeys)
foreignKeys.stream()
.map(fk -> (VKey<ForeignKeyIndex<?>>) VKey.create(fkiClass, fk))
.collect(toImmutableList());
try {

View File

@@ -41,7 +41,7 @@ import org.joda.time.DateTime;
/** Wrapper for {@link Supplier} that associates a time with each attempt. */
@DeleteAfterMigration
class CommitLoggedWork<R> implements Runnable {
public class CommitLoggedWork<R> implements Runnable {
private final Supplier<R> work;
private final Clock clock;

View File

@@ -46,6 +46,7 @@ import static google.registry.util.X509Utils.loadCertificate;
import static java.util.Comparator.comparing;
import static java.util.function.Predicate.isEqual;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Strings;
import com.google.common.base.Supplier;
import com.google.common.collect.ImmutableList;
@@ -824,6 +825,7 @@ public class Registrar extends ImmutableObject
public Builder setAllowedTlds(Set<String> allowedTlds) {
getInstance().allowedTlds = ImmutableSortedSet.copyOf(assertTldsExist(allowedTlds));
getInstance().lastUpdateTime = UpdateAutoTimestamp.create(null);
return this;
}
@@ -991,6 +993,16 @@ public class Registrar extends ImmutableObject
return this;
}
/**
* This lets tests set the update timestamp in cases where setting fields resets the timestamp
* and breaks the verification that an object has not been updated since it was copied.
*/
@VisibleForTesting
public Builder setLastUpdateTime(DateTime timestamp) {
getInstance().lastUpdateTime = UpdateAutoTimestamp.create(timestamp);
return this;
}
/** Build the registrar, nullifying empty fields. */
@Override
public Registrar build() {

View File

@@ -59,6 +59,10 @@ public class ReplicateToDatastoreAction implements Runnable {
public static final String PATH = "/_dr/cron/replicateToDatastore";
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
/** Name of the lock that ensures sequential execution of replays. */
public static final String REPLICATE_TO_DATASTORE_LOCK_NAME =
ReplicateToDatastoreAction.class.getSimpleName();
/**
* Number of transactions to fetch from SQL. The rationale for 200 is that we're processing these
* every minute and our production instance currently does about 2 mutations per second, so this
@@ -66,7 +70,7 @@ public class ReplicateToDatastoreAction implements Runnable {
*/
public static final int BATCH_SIZE = 200;
private static final Duration LEASE_LENGTH = standardHours(1);
public static final Duration REPLICATE_TO_DATASTORE_LOCK_LEASE_LENGTH = standardHours(1);
private final Clock clock;
private final RequestStatusChecker requestStatusChecker;
@@ -81,21 +85,26 @@ public class ReplicateToDatastoreAction implements Runnable {
}
@VisibleForTesting
public List<TransactionEntity> getTransactionBatch() {
public List<TransactionEntity> getTransactionBatchAtSnapshot() {
return getTransactionBatchAtSnapshot(Optional.empty());
}
static List<TransactionEntity> getTransactionBatchAtSnapshot(Optional<String> snapshotId) {
// Get the next batch of transactions that we haven't replicated.
LastSqlTransaction lastSqlTxnBeforeBatch = ofyTm().transact(LastSqlTransaction::load);
try {
return jpaTm()
.transactWithoutBackup(
() ->
jpaTm()
.query(
"SELECT txn FROM TransactionEntity txn WHERE id >"
+ " :lastId ORDER BY id",
TransactionEntity.class)
.setParameter("lastId", lastSqlTxnBeforeBatch.getTransactionId())
.setMaxResults(BATCH_SIZE)
.getResultList());
() -> {
snapshotId.ifPresent(jpaTm()::setDatabaseSnapshot);
return jpaTm()
.query(
"SELECT txn FROM TransactionEntity txn WHERE id >" + " :lastId ORDER BY id",
TransactionEntity.class)
.setParameter("lastId", lastSqlTxnBeforeBatch.getTransactionId())
.setMaxResults(BATCH_SIZE)
.getResultList();
});
} catch (NoResultException e) {
return ImmutableList.of();
}
@@ -108,7 +117,7 @@ public class ReplicateToDatastoreAction implements Runnable {
* <p>Throws an exception if a fatal error occurred and the batch should be aborted
*/
@VisibleForTesting
public void applyTransaction(TransactionEntity txnEntity) {
public static void applyTransaction(TransactionEntity txnEntity) {
logger.atInfo().log("Applying a single transaction Cloud SQL -> Cloud Datastore.");
try (UpdateAutoTimestamp.DisableAutoUpdateResource disabler =
UpdateAutoTimestamp.disableAutoUpdate()) {
@@ -174,7 +183,11 @@ public class ReplicateToDatastoreAction implements Runnable {
}
Optional<Lock> lock =
Lock.acquireSql(
this.getClass().getSimpleName(), null, LEASE_LENGTH, requestStatusChecker, false);
REPLICATE_TO_DATASTORE_LOCK_NAME,
null,
REPLICATE_TO_DATASTORE_LOCK_LEASE_LENGTH,
requestStatusChecker,
false);
if (!lock.isPresent()) {
String message = "Can't acquire ReplicateToDatastoreAction lock, aborting.";
logger.atSevere().log(message);
@@ -203,10 +216,14 @@ public class ReplicateToDatastoreAction implements Runnable {
}
private int replayAllTransactions() {
return replayAllTransactions(Optional.empty());
}
public static int replayAllTransactions(Optional<String> snapshotId) {
int numTransactionsReplayed = 0;
List<TransactionEntity> transactionBatch;
do {
transactionBatch = getTransactionBatch();
transactionBatch = getTransactionBatchAtSnapshot(snapshotId);
for (TransactionEntity transaction : transactionBatch) {
applyTransaction(transaction);
numTransactionsReplayed++;

View File

@@ -14,12 +14,10 @@
package google.registry.model.translators;
import static com.google.common.base.MoreObjects.firstNonNull;
import static google.registry.persistence.transaction.TransactionManagerFactory.ofyTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static org.joda.time.DateTimeZone.UTC;
import google.registry.model.CreateAutoTimestamp;
import google.registry.persistence.transaction.Transaction;
import java.util.Date;
import org.joda.time.DateTime;
@@ -47,13 +45,13 @@ public class CreateAutoTimestampTranslatorFactory
/** Save a timestamp, setting it to the current time if it did not have a previous value. */
@Override
public Date saveValue(CreateAutoTimestamp pojoValue) {
// Don't do this if we're in the course of transaction serialization.
if (Transaction.inSerializationMode()) {
return pojoValue.getTimestamp() == null ? null : pojoValue.getTimestamp().toDate();
}
return firstNonNull(pojoValue.getTimestamp(), ofyTm().getTransactionTime()).toDate();
// Note that we use the current transaction manager -- we need to do this under JPA when we
// serialize the entity from a Transaction object, but we need to use the JPA transaction
// manager in that case.
return (pojoValue.getTimestamp() == null
? tm().getTransactionTime()
: pojoValue.getTimestamp())
.toDate();
}
};
}

View File

@@ -14,6 +14,8 @@
package google.registry.model.translators;
import static com.google.common.base.Preconditions.checkState;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.ofyTm;
import static org.joda.time.DateTimeZone.UTC;
@@ -48,9 +50,16 @@ public class UpdateAutoTimestampTranslatorFactory
@Override
public Date saveValue(UpdateAutoTimestamp pojoValue) {
// Don't do this if we're in the course of transaction serialization.
// If we're in the course of Transaction serialization, we have to use the transaction time
// here and the JPA transaction manager which is what will ultimately be saved during the
// commit.
// Note that this branch doesn't respect "auto update disabled", as this state is
// specifically to address replay, so we add a runtime check for this.
if (Transaction.inSerializationMode()) {
return pojoValue.getTimestamp() == null ? null : pojoValue.getTimestamp().toDate();
checkState(
UpdateAutoTimestamp.autoUpdateEnabled(),
"Auto-update disabled during transaction serialization.");
return jpaTm().getTransactionTime().toDate();
}
return UpdateAutoTimestamp.autoUpdateEnabled()

View File

@@ -21,6 +21,7 @@ import google.registry.backup.CommitLogCheckpointAction;
import google.registry.backup.DeleteOldCommitLogsAction;
import google.registry.backup.ExportCommitLogDiffAction;
import google.registry.backup.ReplayCommitLogsToSqlAction;
import google.registry.backup.SyncDatastoreToSqlSnapshotAction;
import google.registry.batch.BatchModule;
import google.registry.batch.DeleteContactsAndHostsAction;
import google.registry.batch.DeleteExpiredDomainsAction;
@@ -199,6 +200,8 @@ interface BackendRequestComponent {
SendExpiringCertificateNotificationEmailAction sendExpiringCertificateNotificationEmailAction();
SyncDatastoreToSqlSnapshotAction syncDatastoreToSqlSnapshotAction();
SyncGroupMembersAction syncGroupMembersAction();
SyncRegistrarsSheetAction syncRegistrarsSheetAction();

View File

@@ -17,6 +17,7 @@ package google.registry.module.frontend;
import com.google.monitoring.metrics.MetricReporter;
import dagger.Component;
import dagger.Lazy;
import google.registry.config.CloudTasksUtilsModule;
import google.registry.config.CredentialModule;
import google.registry.config.RegistryConfig.ConfigModule;
import google.registry.flows.ServerTridProviderModule;
@@ -49,6 +50,7 @@ import javax.inject.Singleton;
ConsoleConfigModule.class,
CredentialModule.class,
CustomLogicFactoryModule.class,
CloudTasksUtilsModule.class,
DirectoryModule.class,
DummyKeyringModule.class,
FrontendRequestComponentModule.class,

View File

@@ -17,6 +17,7 @@ package google.registry.module.tools;
import com.google.monitoring.metrics.MetricReporter;
import dagger.Component;
import dagger.Lazy;
import google.registry.config.CloudTasksUtilsModule;
import google.registry.config.CredentialModule;
import google.registry.config.RegistryConfig.ConfigModule;
import google.registry.export.DriveModule;
@@ -49,6 +50,7 @@ import javax.inject.Singleton;
ConfigModule.class,
CredentialModule.class,
CustomLogicFactoryModule.class,
CloudTasksUtilsModule.class,
DatastoreServiceModule.class,
DirectoryModule.class,
DummyKeyringModule.class,

View File

@@ -18,7 +18,6 @@ import static google.registry.persistence.transaction.TransactionManagerFactory.
import com.google.common.collect.ImmutableList;
import java.util.Collection;
import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Expression;
@@ -42,12 +41,14 @@ public class CriteriaQueryBuilder<T> {
private final CriteriaQuery<T> query;
private final Root<?> root;
private final JpaTransactionManager jpaTm;
private final ImmutableList.Builder<Predicate> predicates = new ImmutableList.Builder<>();
private final ImmutableList.Builder<Order> orders = new ImmutableList.Builder<>();
private CriteriaQueryBuilder(CriteriaQuery<T> query, Root<?> root) {
private CriteriaQueryBuilder(CriteriaQuery<T> query, Root<?> root, JpaTransactionManager jpaTm) {
this.query = query;
this.root = root;
this.jpaTm = jpaTm;
}
/** Adds a WHERE clause to the query, given the specified operation, field, and value. */
@@ -75,18 +76,18 @@ public class CriteriaQueryBuilder<T> {
*/
public <V> CriteriaQueryBuilder<T> whereFieldContains(String fieldName, Object value) {
return where(
jpaTm().getEntityManager().getCriteriaBuilder().isMember(value, root.get(fieldName)));
jpaTm.getEntityManager().getCriteriaBuilder().isMember(value, root.get(fieldName)));
}
/** Orders the result by the given field ascending. */
public CriteriaQueryBuilder<T> orderByAsc(String fieldName) {
orders.add(jpaTm().getEntityManager().getCriteriaBuilder().asc(root.get(fieldName)));
orders.add(jpaTm.getEntityManager().getCriteriaBuilder().asc(root.get(fieldName)));
return this;
}
/** Orders the result by the given field descending. */
public CriteriaQueryBuilder<T> orderByDesc(String fieldName) {
orders.add(jpaTm().getEntityManager().getCriteriaBuilder().desc(root.get(fieldName)));
orders.add(jpaTm.getEntityManager().getCriteriaBuilder().desc(root.get(fieldName)));
return this;
}
@@ -103,23 +104,24 @@ public class CriteriaQueryBuilder<T> {
/** Creates a query builder that will SELECT from the given class. */
public static <T> CriteriaQueryBuilder<T> create(Class<T> clazz) {
return create(jpaTm().getEntityManager(), clazz);
return create(jpaTm(), clazz);
}
/** Creates a query builder for the given entity manager. */
public static <T> CriteriaQueryBuilder<T> create(EntityManager em, Class<T> clazz) {
CriteriaQuery<T> query = em.getCriteriaBuilder().createQuery(clazz);
public static <T> CriteriaQueryBuilder<T> create(JpaTransactionManager jpaTm, Class<T> clazz) {
CriteriaQuery<T> query = jpaTm.getEntityManager().getCriteriaBuilder().createQuery(clazz);
Root<T> root = query.from(clazz);
query = query.select(root);
return new CriteriaQueryBuilder<>(query, root);
return new CriteriaQueryBuilder<>(query, root, jpaTm);
}
/** Creates a "count" query for the table for the class. */
public static <T> CriteriaQueryBuilder<Long> createCount(EntityManager em, Class<T> clazz) {
CriteriaBuilder builder = em.getCriteriaBuilder();
public static <T> CriteriaQueryBuilder<Long> createCount(
JpaTransactionManager jpaTm, Class<T> clazz) {
CriteriaBuilder builder = jpaTm.getEntityManager().getCriteriaBuilder();
CriteriaQuery<Long> query = builder.createQuery(Long.class);
Root<T> root = query.from(clazz);
query = query.select(builder.count(root));
return new CriteriaQueryBuilder<>(query, root);
return new CriteriaQueryBuilder<>(query, root, jpaTm);
}
}
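The CriteriaQueryBuilder changes above keep the same fluent shape while threading a `JpaTransactionManager` through instead of a raw `EntityManager`. A self-contained sketch of that fluent pattern (plain `java.util.function.Predicate` standing in for JPA predicates; not Nomulus code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

/**
 * Generic sketch of the fluent pattern CriteriaQueryBuilder uses:
 * accumulate WHERE clauses, then combine them all at build time.
 */
public class WhereBuilderDemo<T> {
  private final List<Predicate<T>> predicates = new ArrayList<>();

  /** Adds one WHERE clause; returns this for chaining, as in the diff. */
  public WhereBuilderDemo<T> where(Predicate<T> predicate) {
    predicates.add(predicate);
    return this;
  }

  /** ANDs every accumulated clause together, mirroring query.where(...). */
  public Predicate<T> build() {
    return value -> predicates.stream().allMatch(p -> p.test(value));
  }

  public static void main(String[] args) {
    Predicate<String> query =
        new WhereBuilderDemo<String>()
            .where(s -> s.startsWith("ex")) // analogous to LIKE 'ex%'
            .where(s -> s.length() > 6)     // an extra filter clause
            .build();
    System.out.println(query.test("example.tld")); // both clauses match
    System.out.println(query.test("exam"));        // fails the length clause
  }
}
```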

View File

@@ -30,6 +30,7 @@ import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Streams;
import com.google.common.flogger.FluentLogger;
import com.googlecode.objectify.Key;
import google.registry.model.ImmutableObject;
import google.registry.model.common.DatabaseMigrationStateSchedule;
import google.registry.model.common.DatabaseMigrationStateSchedule.ReplayDirection;
@@ -604,6 +605,13 @@ public class JpaTransactionManagerImpl implements JpaTransactionManager {
managedEntity = getEntityManager().merge(entity);
}
getEntityManager().remove(managedEntity);
// We check shouldReplicate() in TransactionInfo.addDelete(), but we have to check it here as
// well prior to attempting to create a datastore key because a non-replicated entity may not
// have one.
if (shouldReplicate(entity.getClass())) {
transactionInfo.get().addDelete(VKey.from(Key.create(entity)));
}
return managedEntity;
}
@@ -827,6 +835,12 @@ public class JpaTransactionManagerImpl implements JpaTransactionManager {
replaySqlToDatastoreOverrideForTest.set(Optional.empty());
}
/** Returns true if the entity class should be replicated from SQL to datastore. */
private static boolean shouldReplicate(Class<?> entityClass) {
return !NonReplicatedEntity.class.isAssignableFrom(entityClass)
&& !SqlOnlyEntity.class.isAssignableFrom(entityClass);
}
private static class TransactionInfo {
ReadOnlyCheckingEntityManager entityManager;
boolean inTransaction = false;
@@ -883,12 +897,6 @@ public class JpaTransactionManagerImpl implements JpaTransactionManager {
}
}
/** Returns true if the entity class should be replicated from SQL to datastore. */
private boolean shouldReplicate(Class<?> entityClass) {
return !NonReplicatedEntity.class.isAssignableFrom(entityClass)
&& !SqlOnlyEntity.class.isAssignableFrom(entityClass);
}
private void recordTransaction() {
if (contentsBuilder != null) {
Transaction persistedTxn = contentsBuilder.build();
@@ -1131,7 +1139,7 @@ public class JpaTransactionManagerImpl implements JpaTransactionManager {
private TypedQuery<T> buildQuery() {
CriteriaQueryBuilder<T> queryBuilder =
CriteriaQueryBuilder.create(getEntityManager(), entityClass);
CriteriaQueryBuilder.create(JpaTransactionManagerImpl.this, entityClass);
return addCriteria(queryBuilder);
}
@@ -1178,7 +1186,7 @@ public class JpaTransactionManagerImpl implements JpaTransactionManager {
@Override
public long count() {
CriteriaQueryBuilder<Long> queryBuilder =
CriteriaQueryBuilder.createCount(getEntityManager(), entityClass);
CriteriaQueryBuilder.createCount(JpaTransactionManagerImpl.this, entityClass);
return addCriteria(queryBuilder).getSingleResult();
}

View File

@@ -14,8 +14,8 @@
package google.registry.persistence.transaction;
import static com.google.common.base.Preconditions.checkNotNull;
import static com.google.common.base.Preconditions.checkState;
import static google.registry.util.PreconditionsUtils.checkArgumentNotNull;
import static org.joda.time.DateTimeZone.UTC;
import com.google.appengine.api.utils.SystemProperty;
@@ -47,6 +47,10 @@ public final class TransactionManagerFactory {
private static Supplier<JpaTransactionManager> jpaTm =
Suppliers.memoize(TransactionManagerFactory::createJpaTransactionManager);
@NonFinalForTesting
private static Supplier<JpaTransactionManager> replicaJpaTm =
Suppliers.memoize(TransactionManagerFactory::createReplicaJpaTransactionManager);
private static boolean onBeam = false;
private TransactionManagerFactory() {}
@@ -61,6 +65,14 @@ public final class TransactionManagerFactory {
}
}
private static JpaTransactionManager createReplicaJpaTransactionManager() {
if (isInAppEngine()) {
return DaggerPersistenceComponent.create().readOnlyReplicaJpaTransactionManager();
} else {
return DummyJpaTransactionManager.create();
}
}
private static DatastoreTransactionManager createTransactionManager() {
return new DatastoreTransactionManager(null);
}
@@ -108,6 +120,21 @@ public final class TransactionManagerFactory {
return jpaTm.get();
}
/** Returns a read-only {@link JpaTransactionManager} instance if configured. */
public static JpaTransactionManager replicaJpaTm() {
return replicaJpaTm.get();
}
/**
* Returns a {@link TransactionManager} that uses a replica database if one exists.
*
* <p>In Datastore mode, this is unchanged from the regular transaction manager. In SQL mode,
* however, this will be a reference to the read-only replica database if one is configured.
*/
public static TransactionManager replicaTm() {
return tm().isOfy() ? tm() : replicaJpaTm();
}
/** Returns {@link DatastoreTransactionManager} instance. */
@VisibleForTesting
public static DatastoreTransactionManager ofyTm() {
@@ -116,7 +143,7 @@ public final class TransactionManagerFactory {
/** Sets the return of {@link #jpaTm()} to the given instance of {@link JpaTransactionManager}. */
public static void setJpaTm(Supplier<JpaTransactionManager> jpaTmSupplier) {
checkNotNull(jpaTmSupplier, "jpaTmSupplier");
checkArgumentNotNull(jpaTmSupplier, "jpaTmSupplier");
checkState(
RegistryEnvironment.get().equals(RegistryEnvironment.UNITTEST)
|| RegistryToolEnvironment.get() != null,
@@ -124,13 +151,23 @@ public final class TransactionManagerFactory {
jpaTm = Suppliers.memoize(jpaTmSupplier::get);
}
/** Sets the value of {@link #replicaJpaTm()} to the given {@link JpaTransactionManager}. */
public static void setReplicaJpaTm(Supplier<JpaTransactionManager> replicaJpaTmSupplier) {
checkArgumentNotNull(replicaJpaTmSupplier, "replicaJpaTmSupplier");
checkState(
RegistryEnvironment.get().equals(RegistryEnvironment.UNITTEST)
|| RegistryToolEnvironment.get() != null,
"setReplicaJpaTm() should only be called by tools and tests.");
replicaJpaTm = Suppliers.memoize(replicaJpaTmSupplier::get);
}
/**
* Makes {@link #jpaTm()} return the {@link JpaTransactionManager} instance provided by {@code
* jpaTmSupplier} from now on. This method should only be called by an implementor of {@link
* org.apache.beam.sdk.harness.JvmInitializer}.
*/
public static void setJpaTmOnBeamWorker(Supplier<JpaTransactionManager> jpaTmSupplier) {
checkNotNull(jpaTmSupplier, "jpaTmSupplier");
checkArgumentNotNull(jpaTmSupplier, "jpaTmSupplier");
jpaTm = Suppliers.memoize(jpaTmSupplier::get);
onBeam = true;
}

View File

@@ -18,6 +18,7 @@ import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static google.registry.model.EppResourceUtils.loadByForeignKey;
import static google.registry.model.index.ForeignKeyIndex.loadAndGetKey;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.persistence.transaction.TransactionManagerFactory.replicaJpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.request.Action.Method.GET;
import static google.registry.request.Action.Method.HEAD;
@@ -37,10 +38,8 @@ import com.google.common.primitives.Booleans;
import com.googlecode.objectify.cmd.Query;
import google.registry.model.domain.DomainBase;
import google.registry.model.host.HostResource;
import google.registry.persistence.PersistenceModule.ReadOnlyReplicaJpaTm;
import google.registry.persistence.VKey;
import google.registry.persistence.transaction.CriteriaQueryBuilder;
import google.registry.persistence.transaction.JpaTransactionManager;
import google.registry.rdap.RdapJsonFormatter.OutputDataType;
import google.registry.rdap.RdapMetrics.EndpointType;
import google.registry.rdap.RdapMetrics.SearchType;
@@ -93,8 +92,6 @@ public class RdapDomainSearchAction extends RdapSearchActionBase {
@Inject @Parameter("nsLdhName") Optional<String> nsLdhNameParam;
@Inject @Parameter("nsIp") Optional<String> nsIpParam;
@Inject @ReadOnlyReplicaJpaTm JpaTransactionManager readOnlyJpaTm;
@Inject
public RdapDomainSearchAction() {
super("domain search", EndpointType.DOMAINS);
@@ -228,31 +225,32 @@ public class RdapDomainSearchAction extends RdapSearchActionBase {
resultSet = getMatchingResources(query, true, querySizeLimit);
} else {
resultSet =
readOnlyJpaTm.transact(
() -> {
CriteriaBuilder criteriaBuilder =
readOnlyJpaTm.getEntityManager().getCriteriaBuilder();
CriteriaQueryBuilder<DomainBase> queryBuilder =
CriteriaQueryBuilder.create(DomainBase.class)
.where(
"fullyQualifiedDomainName",
criteriaBuilder::like,
String.format("%s%%", partialStringQuery.getInitialString()))
.orderByAsc("fullyQualifiedDomainName");
if (cursorString.isPresent()) {
queryBuilder =
queryBuilder.where(
"fullyQualifiedDomainName",
criteriaBuilder::greaterThan,
cursorString.get());
}
if (partialStringQuery.getSuffix() != null) {
queryBuilder =
queryBuilder.where(
"tld", criteriaBuilder::equal, partialStringQuery.getSuffix());
}
return getMatchingResourcesSql(queryBuilder, true, querySizeLimit);
});
replicaJpaTm()
.transact(
() -> {
CriteriaBuilder criteriaBuilder =
replicaJpaTm().getEntityManager().getCriteriaBuilder();
CriteriaQueryBuilder<DomainBase> queryBuilder =
CriteriaQueryBuilder.create(replicaJpaTm(), DomainBase.class)
.where(
"fullyQualifiedDomainName",
criteriaBuilder::like,
String.format("%s%%", partialStringQuery.getInitialString()))
.orderByAsc("fullyQualifiedDomainName");
if (cursorString.isPresent()) {
queryBuilder =
queryBuilder.where(
"fullyQualifiedDomainName",
criteriaBuilder::greaterThan,
cursorString.get());
}
if (partialStringQuery.getSuffix() != null) {
queryBuilder =
queryBuilder.where(
"tld", criteriaBuilder::equal, partialStringQuery.getSuffix());
}
return getMatchingResourcesSql(queryBuilder, true, querySizeLimit);
});
}
return makeSearchResults(resultSet);
}
@@ -274,19 +272,20 @@ public class RdapDomainSearchAction extends RdapSearchActionBase {
resultSet = getMatchingResources(query, true, querySizeLimit);
} else {
resultSet =
readOnlyJpaTm.transact(
() -> {
CriteriaQueryBuilder<DomainBase> builder =
queryItemsSql(
DomainBase.class,
"tld",
tld,
Optional.of("fullyQualifiedDomainName"),
cursorString,
DeletedItemHandling.INCLUDE)
.orderByAsc("fullyQualifiedDomainName");
return getMatchingResourcesSql(builder, true, querySizeLimit);
});
replicaJpaTm()
.transact(
() -> {
CriteriaQueryBuilder<DomainBase> builder =
queryItemsSql(
DomainBase.class,
"tld",
tld,
Optional.of("fullyQualifiedDomainName"),
cursorString,
DeletedItemHandling.INCLUDE)
.orderByAsc("fullyQualifiedDomainName");
return getMatchingResourcesSql(builder, true, querySizeLimit);
});
}
return makeSearchResults(resultSet);
}
@@ -357,28 +356,29 @@ public class RdapDomainSearchAction extends RdapSearchActionBase {
.map(VKey::from)
.collect(toImmutableSet());
} else {
return readOnlyJpaTm.transact(
() -> {
CriteriaQueryBuilder<HostResource> builder =
queryItemsSql(
HostResource.class,
"fullyQualifiedHostName",
partialStringQuery,
Optional.empty(),
DeletedItemHandling.EXCLUDE);
if (desiredRegistrar.isPresent()) {
builder =
builder.where(
"currentSponsorClientId",
readOnlyJpaTm.getEntityManager().getCriteriaBuilder()::equal,
desiredRegistrar.get());
}
return getMatchingResourcesSql(builder, true, maxNameserversInFirstStage)
.resources()
.stream()
.map(HostResource::createVKey)
.collect(toImmutableSet());
});
return replicaJpaTm()
.transact(
() -> {
CriteriaQueryBuilder<HostResource> builder =
queryItemsSql(
HostResource.class,
"fullyQualifiedHostName",
partialStringQuery,
Optional.empty(),
DeletedItemHandling.EXCLUDE);
if (desiredRegistrar.isPresent()) {
builder =
builder.where(
"currentSponsorClientId",
replicaJpaTm().getEntityManager().getCriteriaBuilder()::equal,
desiredRegistrar.get());
}
return getMatchingResourcesSql(builder, true, maxNameserversInFirstStage)
.resources()
.stream()
.map(HostResource::createVKey)
.collect(toImmutableSet());
});
}
}
@@ -512,20 +512,21 @@ public class RdapDomainSearchAction extends RdapSearchActionBase {
parameters.put("desiredRegistrar", desiredRegistrar.get());
}
hostKeys =
readOnlyJpaTm.transact(
() -> {
javax.persistence.Query query =
readOnlyJpaTm
.getEntityManager()
.createNativeQuery(queryBuilder.toString())
.setMaxResults(maxNameserversInFirstStage);
parameters.build().forEach(query::setParameter);
@SuppressWarnings("unchecked")
Stream<String> resultStream = query.getResultStream();
return resultStream
.map(repoId -> VKey.create(HostResource.class, repoId))
.collect(toImmutableSet());
});
replicaJpaTm()
.transact(
() -> {
javax.persistence.Query query =
replicaJpaTm()
.getEntityManager()
.createNativeQuery(queryBuilder.toString())
.setMaxResults(maxNameserversInFirstStage);
parameters.build().forEach(query::setParameter);
@SuppressWarnings("unchecked")
Stream<String> resultStream = query.getResultStream();
return resultStream
.map(repoId -> VKey.create(HostResource.class, repoId))
.collect(toImmutableSet());
});
}
return searchByNameserverRefs(hostKeys);
}
@@ -570,38 +571,39 @@ public class RdapDomainSearchAction extends RdapSearchActionBase {
}
stream.forEach(domainSetBuilder::add);
} else {
readOnlyJpaTm.transact(
() -> {
for (VKey<HostResource> hostKey : hostKeys) {
CriteriaQueryBuilder<DomainBase> queryBuilder =
CriteriaQueryBuilder.create(DomainBase.class)
.whereFieldContains("nsHosts", hostKey)
.orderByAsc("fullyQualifiedDomainName");
CriteriaBuilder criteriaBuilder =
readOnlyJpaTm.getEntityManager().getCriteriaBuilder();
if (!shouldIncludeDeleted()) {
queryBuilder =
queryBuilder.where(
"deletionTime", criteriaBuilder::greaterThan, getRequestTime());
}
if (cursorString.isPresent()) {
queryBuilder =
queryBuilder.where(
"fullyQualifiedDomainName",
criteriaBuilder::greaterThan,
cursorString.get());
}
readOnlyJpaTm
.criteriaQuery(queryBuilder.build())
.getResultStream()
.filter(this::isAuthorized)
.forEach(
(domain) -> {
Hibernate.initialize(domain.getDsData());
domainSetBuilder.add(domain);
});
}
});
replicaJpaTm()
.transact(
() -> {
for (VKey<HostResource> hostKey : hostKeys) {
CriteriaQueryBuilder<DomainBase> queryBuilder =
CriteriaQueryBuilder.create(replicaJpaTm(), DomainBase.class)
.whereFieldContains("nsHosts", hostKey)
.orderByAsc("fullyQualifiedDomainName");
CriteriaBuilder criteriaBuilder =
replicaJpaTm().getEntityManager().getCriteriaBuilder();
if (!shouldIncludeDeleted()) {
queryBuilder =
queryBuilder.where(
"deletionTime", criteriaBuilder::greaterThan, getRequestTime());
}
if (cursorString.isPresent()) {
queryBuilder =
queryBuilder.where(
"fullyQualifiedDomainName",
criteriaBuilder::greaterThan,
cursorString.get());
}
replicaJpaTm()
.criteriaQuery(queryBuilder.build())
.getResultStream()
.filter(this::isAuthorized)
.forEach(
(domain) -> {
Hibernate.initialize(domain.getDsData());
domainSetBuilder.add(domain);
});
}
});
}
}
List<DomainBase> domains = domainSetBuilder.build().asList();

View File

@@ -15,7 +15,7 @@
package google.registry.rdap;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.replicaJpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.persistence.transaction.TransactionManagerUtil.transactIfJpaTm;
import static google.registry.rdap.RdapUtils.getRegistrarByIanaIdentifier;
@@ -277,7 +277,7 @@ public class RdapEntitySearchAction extends RdapSearchActionBase {
resultSet = getMatchingResources(query, false, rdapResultSetMaxSize + 1);
} else {
resultSet =
jpaTm()
replicaJpaTm()
.transact(
() -> {
CriteriaQueryBuilder<ContactResource> builder =
@@ -399,7 +399,7 @@ public class RdapEntitySearchAction extends RdapSearchActionBase {
querySizeLimit);
} else {
contactResultSet =
jpaTm()
replicaJpaTm()
.transact(
() ->
getMatchingResourcesSql(

View File

@@ -15,7 +15,7 @@
package google.registry.rdap;
import static google.registry.model.EppResourceUtils.loadByForeignKey;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.replicaJpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.request.Action.Method.GET;
import static google.registry.request.Action.Method.HEAD;
@@ -233,7 +233,7 @@ public class RdapNameserverSearchAction extends RdapSearchActionBase {
return makeSearchResults(
getMatchingResources(query, shouldIncludeDeleted(), querySizeLimit), CursorType.NAME);
} else {
return jpaTm()
return replicaJpaTm()
.transact(
() -> {
CriteriaQueryBuilder<HostResource> queryBuilder =
@@ -290,11 +290,11 @@ public class RdapNameserverSearchAction extends RdapSearchActionBase {
}
queryBuilder.append(" ORDER BY repo_id ASC");
rdapResultSet =
jpaTm()
replicaJpaTm()
.transact(
() -> {
javax.persistence.Query query =
jpaTm()
replicaJpaTm()
.getEntityManager()
.createNativeQuery(queryBuilder.toString(), HostResource.class)
.setMaxResults(querySizeLimit);

View File

@@ -16,7 +16,7 @@ package google.registry.rdap;
import static com.google.common.base.Charsets.UTF_8;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.replicaJpaTm;
import static google.registry.util.DateTimeUtils.END_OF_TIME;
import com.google.common.collect.ImmutableList;
@@ -193,16 +193,17 @@ public abstract class RdapSearchActionBase extends RdapActionBase {
*/
<T extends EppResource> RdapResultSet<T> getMatchingResourcesSql(
CriteriaQueryBuilder<T> builder, boolean checkForVisibility, int querySizeLimit) {
jpaTm().assertInTransaction();
replicaJpaTm().assertInTransaction();
Optional<String> desiredRegistrar = getDesiredRegistrar();
if (desiredRegistrar.isPresent()) {
builder =
builder.where(
"currentSponsorClientId", jpaTm().getEntityManager().getCriteriaBuilder()::equal,
"currentSponsorClientId",
replicaJpaTm().getEntityManager().getCriteriaBuilder()::equal,
desiredRegistrar.get());
}
List<T> queryResult =
jpaTm().criteriaQuery(builder.build()).setMaxResults(querySizeLimit).getResultList();
replicaJpaTm().criteriaQuery(builder.build()).setMaxResults(querySizeLimit).getResultList();
if (checkForVisibility) {
return filterResourcesByVisibility(queryResult, querySizeLimit);
} else {
@@ -395,7 +396,7 @@ public abstract class RdapSearchActionBase extends RdapActionBase {
RdapSearchPattern partialStringQuery,
Optional<String> cursorString,
DeletedItemHandling deletedItemHandling) {
jpaTm().assertInTransaction();
replicaJpaTm().assertInTransaction();
if (partialStringQuery.getInitialString().length()
< RdapSearchPattern.MIN_INITIAL_STRING_LENGTH) {
throw new UnprocessableEntityException(
@@ -403,8 +404,8 @@ public abstract class RdapSearchActionBase extends RdapActionBase {
"Initial search string must be at least %d characters",
RdapSearchPattern.MIN_INITIAL_STRING_LENGTH));
}
CriteriaBuilder criteriaBuilder = jpaTm().getEntityManager().getCriteriaBuilder();
CriteriaQueryBuilder<T> builder = CriteriaQueryBuilder.create(clazz);
CriteriaBuilder criteriaBuilder = replicaJpaTm().getEntityManager().getCriteriaBuilder();
CriteriaQueryBuilder<T> builder = CriteriaQueryBuilder.create(replicaJpaTm(), clazz);
if (partialStringQuery.getHasWildcard()) {
builder =
builder.where(
@@ -493,9 +494,9 @@ public abstract class RdapSearchActionBase extends RdapActionBase {
"Initial search string must be at least %d characters",
RdapSearchPattern.MIN_INITIAL_STRING_LENGTH));
}
jpaTm().assertInTransaction();
CriteriaQueryBuilder<T> builder = CriteriaQueryBuilder.create(clazz);
CriteriaBuilder criteriaBuilder = jpaTm().getEntityManager().getCriteriaBuilder();
replicaJpaTm().assertInTransaction();
CriteriaQueryBuilder<T> builder = CriteriaQueryBuilder.create(replicaJpaTm(), clazz);
CriteriaBuilder criteriaBuilder = replicaJpaTm().getEntityManager().getCriteriaBuilder();
builder = builder.where(filterField, criteriaBuilder::equal, queryString);
if (cursorString.isPresent()) {
if (cursorField.isPresent()) {
@@ -544,7 +545,7 @@ public abstract class RdapSearchActionBase extends RdapActionBase {
RdapSearchPattern partialStringQuery,
Optional<String> cursorString,
DeletedItemHandling deletedItemHandling) {
jpaTm().assertInTransaction();
replicaJpaTm().assertInTransaction();
return queryItemsSql(clazz, "repoId", partialStringQuery, cursorString, deletedItemHandling);
}
@@ -553,7 +554,9 @@ public abstract class RdapSearchActionBase extends RdapActionBase {
if (!Objects.equals(deletedItemHandling, DeletedItemHandling.INCLUDE)) {
builder =
builder.where(
"deletionTime", jpaTm().getEntityManager().getCriteriaBuilder()::equal, END_OF_TIME);
"deletionTime",
replicaJpaTm().getEntityManager().getCriteriaBuilder()::equal,
END_OF_TIME);
}
return builder;
}

View File

@@ -24,6 +24,7 @@ import google.registry.config.RegistryConfig.Config;
import google.registry.gcs.GcsUtils;
import google.registry.keyring.api.KeyModule.Key;
import google.registry.model.rde.RdeNamingUtils;
import google.registry.model.rde.RdeRevision;
import google.registry.request.Action;
import google.registry.request.Parameter;
import google.registry.request.RequestParameters;
@@ -86,7 +87,13 @@ public final class BrdaCopyAction implements Runnable {
}
private void copyAsRyde() throws IOException {
String nameWithoutPrefix = RdeNamingUtils.makeRydeFilename(tld, watermark, THIN, 1, 0);
// TODO(b/217772483): consider guarding this action with a lock and check if there is work.
// Not urgent since file writes on GCS are atomic.
int revision =
RdeRevision.getCurrentRevision(tld, watermark, THIN)
.orElseThrow(
() -> new IllegalStateException("RdeRevision was not set on generated deposit"));
String nameWithoutPrefix = RdeNamingUtils.makeRydeFilename(tld, watermark, THIN, 1, revision);
String name = prefix.orElse("") + nameWithoutPrefix;
BlobId xmlFilename = BlobId.of(stagingBucket, name + ".xml.ghostryde");
BlobId xmlLengthFilename = BlobId.of(stagingBucket, name + ".xml.length");

View File

@@ -391,9 +391,6 @@ public final class RdeStagingAction implements Runnable {
if (revision.isPresent()) {
throw new BadRequestException("Revision parameter not allowed in standard operation");
}
if (beam) {
throw new BadRequestException("Beam parameter not allowed in standard operation");
}
return ImmutableSetMultimap.copyOf(
Multimaps.filterValues(

View File

@@ -14,8 +14,6 @@
package google.registry.rde;
import static com.google.appengine.api.taskqueue.QueueFactory.getQueue;
import static com.google.appengine.api.taskqueue.TaskOptions.Builder.withUrl;
import static com.google.common.base.Preconditions.checkState;
import static com.google.common.base.Verify.verify;
import static google.registry.model.common.Cursor.getCursorTimeOrStartOfTime;
@@ -26,6 +24,7 @@ import static java.nio.charset.StandardCharsets.UTF_8;
import com.google.appengine.tools.mapreduce.Reducer;
import com.google.appengine.tools.mapreduce.ReducerInput;
import com.google.cloud.storage.BlobId;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryConfig.Config;
import google.registry.gcs.GcsUtils;
@@ -36,10 +35,11 @@ import google.registry.model.rde.RdeMode;
import google.registry.model.rde.RdeNamingUtils;
import google.registry.model.rde.RdeRevision;
import google.registry.model.tld.Registry;
import google.registry.request.Action.Service;
import google.registry.request.RequestParameters;
import google.registry.request.lock.LockHandler;
import google.registry.tldconfig.idn.IdnTableEnum;
import google.registry.util.TaskQueueUtils;
import google.registry.util.CloudTasksUtils;
import google.registry.xjc.rdeheader.XjcRdeHeader;
import google.registry.xjc.rdeheader.XjcRdeHeaderElement;
import google.registry.xml.ValidationMode;
@@ -65,7 +65,7 @@ public final class RdeStagingReducer extends Reducer<PendingDeposit, DepositFrag
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private final TaskQueueUtils taskQueueUtils;
private final CloudTasksUtils cloudTasksUtils;
private final LockHandler lockHandler;
private final String bucket;
private final Duration lockTimeout;
@@ -74,14 +74,14 @@ public final class RdeStagingReducer extends Reducer<PendingDeposit, DepositFrag
private final GcsUtils gcsUtils;
RdeStagingReducer(
TaskQueueUtils taskQueueUtils,
CloudTasksUtils cloudTasksUtils,
LockHandler lockHandler,
String bucket,
Duration lockTimeout,
byte[] stagingKeyBytes,
ValidationMode validationMode,
GcsUtils gcsUtils) {
this.taskQueueUtils = taskQueueUtils;
this.cloudTasksUtils = cloudTasksUtils;
this.lockHandler = lockHandler;
this.bucket = bucket;
this.lockTimeout = lockTimeout;
@@ -226,23 +226,35 @@ public final class RdeStagingReducer extends Reducer<PendingDeposit, DepositFrag
logger.atInfo().log(
"Rolled forward %s on %s cursor to %s.", key.cursor(), tld, newPosition);
RdeRevision.saveRevision(tld, watermark, mode, revision);
// Enqueueing a task is a side effect that is not undone if the transaction rolls
// back. So this may result in multiple copies of the same task being processed. This
// is fine because the RdeUploadAction is guarded by a lock and tracks progress by
// cursor. The BrdaCopyAction writes a file to GCS, which is an atomic action.
if (mode == RdeMode.FULL) {
taskQueueUtils.enqueue(
getQueue("rde-upload"),
withUrl(RdeUploadAction.PATH).param(RequestParameters.PARAM_TLD, tld));
cloudTasksUtils.enqueue(
"rde-upload",
cloudTasksUtils.createPostTask(
RdeUploadAction.PATH,
Service.BACKEND.toString(),
ImmutableMultimap.of(RequestParameters.PARAM_TLD, tld)));
} else {
taskQueueUtils.enqueue(
getQueue("brda"),
withUrl(BrdaCopyAction.PATH)
.param(RequestParameters.PARAM_TLD, tld)
.param(RdeModule.PARAM_WATERMARK, watermark.toString()));
cloudTasksUtils.enqueue(
"brda",
cloudTasksUtils.createPostTask(
BrdaCopyAction.PATH,
Service.BACKEND.toString(),
ImmutableMultimap.of(
RequestParameters.PARAM_TLD,
tld,
RdeModule.PARAM_WATERMARK,
watermark.toString())));
}
});
}
/** Injectable factory for creating {@link RdeStagingReducer}. */
static class Factory {
@Inject TaskQueueUtils taskQueueUtils;
@Inject CloudTasksUtils cloudTasksUtils;
@Inject LockHandler lockHandler;
@Inject @Config("rdeBucket") String bucket;
@Inject @Config("rdeStagingLockTimeout") Duration lockTimeout;
@@ -252,7 +264,7 @@ public final class RdeStagingReducer extends Reducer<PendingDeposit, DepositFrag
RdeStagingReducer create(ValidationMode validationMode, GcsUtils gcsUtils) {
return new RdeStagingReducer(
taskQueueUtils,
cloudTasksUtils,
lockHandler,
bucket,
lockTimeout,

View File

@@ -134,7 +134,7 @@ public final class RdeUploadAction implements Runnable, EscrowTask {
}
cloudTasksUtils.enqueue(
RDE_REPORT_QUEUE,
CloudTasksUtils.createPostTask(
cloudTasksUtils.createPostTask(
RdeReportAction.PATH, Service.BACKEND.getServiceId(), params));
}
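The reducer comment above notes that enqueueing a Cloud Task is a side effect that survives a transaction rollback, so the same task may be delivered more than once; this is safe because the downstream actions track progress (by lock and cursor) and so are idempotent. A self-contained sketch of that cursor-guard argument (not Nomulus code):

```java
/**
 * Sketch of cursor-guarded idempotency: a duplicate task delivery at or
 * behind the cursor performs no work, so double-enqueueing is harmless.
 */
public class CursorGuardDemo {
  static long cursor = 0; // last watermark already processed
  static int uploads = 0; // count of real side effects performed

  static void handleUpload(long watermark) {
    if (watermark <= cursor) {
      return; // duplicate or stale delivery: work already done
    }
    uploads++;          // perform the upload exactly once
    cursor = watermark; // roll the cursor forward
  }

  public static void main(String[] args) {
    handleUpload(5); // first delivery does the work
    handleUpload(5); // duplicate delivery is a no-op
    System.out.println(uploads); // 1
  }
}
```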

View File

@@ -40,6 +40,8 @@ public class ReportingModule {
public static final String BEAM_QUEUE = "beam-reporting";
/** The amount of time expected for the Dataflow jobs to complete. */
public static final int ENQUEUE_DELAY_MINUTES = 10;
/**
* The request parameter name used by reporting actions that take a year/month parameter, which
* defaults to the last month.

View File

@@ -1,38 +0,0 @@
// Copyright 2018 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.reporting;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;
import java.util.Map;
import org.joda.time.Duration;
import org.joda.time.YearMonth;
/** Static methods common to various reporting tasks. */
public class ReportingUtils {
private static final int ENQUEUE_DELAY_MINUTES = 10;
/** Enqueues a task that takes a Beam jobId and the {@link YearMonth} as parameters. */
public static void enqueueBeamReportingTask(String path, Map<String, String> parameters) {
TaskOptions publishTask =
TaskOptions.Builder.withUrl(path)
.method(TaskOptions.Method.POST)
// Dataflow jobs tend to take about 10 minutes to complete.
.countdownMillis(Duration.standardMinutes(ENQUEUE_DELAY_MINUTES).getMillis());
parameters.forEach(publishTask::param);
QueueFactory.getQueue(ReportingModule.BEAM_QUEUE).add(publishTask);
}
}

View File

@@ -17,7 +17,6 @@ package google.registry.reporting.billing;
import static google.registry.beam.BeamUtils.createJobName;
import static google.registry.model.common.DatabaseMigrationStateSchedule.PrimaryDatabase.CLOUD_SQL;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.reporting.ReportingUtils.enqueueBeamReportingTask;
import static google.registry.request.Action.Method.POST;
import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_OK;
@@ -27,6 +26,7 @@ import com.google.api.services.dataflow.model.LaunchFlexTemplateParameter;
import com.google.api.services.dataflow.model.LaunchFlexTemplateRequest;
import com.google.api.services.dataflow.model.LaunchFlexTemplateResponse;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.flogger.FluentLogger;
import com.google.common.net.MediaType;
import google.registry.config.RegistryConfig.Config;
@@ -35,14 +35,16 @@ import google.registry.model.common.DatabaseMigrationStateSchedule.PrimaryDataba
import google.registry.persistence.PersistenceModule;
import google.registry.reporting.ReportingModule;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.RequestParameters;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import java.io.IOException;
import java.util.Map;
import javax.inject.Inject;
import org.joda.time.Duration;
import org.joda.time.YearMonth;
/**
@@ -76,6 +78,7 @@ public class GenerateInvoicesAction implements Runnable {
private final Response response;
private final Dataflow dataflow;
private final PrimaryDatabase database;
private final CloudTasksUtils cloudTasksUtils;
@Inject
GenerateInvoicesAction(
@@ -88,6 +91,7 @@ public class GenerateInvoicesAction implements Runnable {
@Parameter(RequestParameters.PARAM_DATABASE) PrimaryDatabase database,
YearMonth yearMonth,
BillingEmailUtils emailUtils,
CloudTasksUtils cloudTasksUtils,
Clock clock,
Response response,
Dataflow dataflow) {
@@ -105,6 +109,7 @@ public class GenerateInvoicesAction implements Runnable {
this.database = database;
this.yearMonth = yearMonth;
this.emailUtils = emailUtils;
this.cloudTasksUtils = cloudTasksUtils;
this.clock = clock;
this.response = response;
this.dataflow = dataflow;
@@ -144,13 +149,17 @@ public class GenerateInvoicesAction implements Runnable {
logger.atInfo().log("Got response: %s", launchResponse.getJob().toPrettyString());
String jobId = launchResponse.getJob().getId();
if (shouldPublish) {
Map<String, String> beamTaskParameters =
ImmutableMap.of(
ReportingModule.PARAM_JOB_ID,
jobId,
ReportingModule.PARAM_YEAR_MONTH,
yearMonth.toString());
enqueueBeamReportingTask(PublishInvoicesAction.PATH, beamTaskParameters);
cloudTasksUtils.enqueue(
ReportingModule.BEAM_QUEUE,
cloudTasksUtils.createPostTaskWithDelay(
PublishInvoicesAction.PATH,
Service.BACKEND.toString(),
ImmutableMultimap.of(
ReportingModule.PARAM_JOB_ID,
jobId,
ReportingModule.PARAM_YEAR_MONTH,
yearMonth.toString()),
Duration.standardMinutes(ReportingModule.ENQUEUE_DELAY_MINUTES)));
}
response.setStatus(SC_OK);
response.setPayload(String.format("Launched invoicing pipeline: %s", jobId));


@@ -125,7 +125,7 @@ public class PublishInvoicesAction implements Runnable {
private void enqueueCopyDetailReportsTask() {
cloudTasksUtils.enqueue(
BillingModule.CRON_QUEUE,
CloudTasksUtils.createPostTask(
cloudTasksUtils.createPostTask(
CopyDetailReportsAction.PATH,
Service.BACKEND.toString(),
ImmutableMultimap.of(PARAM_YEAR_MONTH, yearMonth.toString())));


@@ -21,9 +21,6 @@ import static google.registry.request.Action.Method.POST;
import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_OK;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;
import com.google.appengine.api.taskqueue.TaskOptions.Method;
import com.google.common.base.Joiner;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
@@ -33,9 +30,11 @@ import google.registry.bigquery.BigqueryJobFailureException;
import google.registry.config.RegistryConfig.Config;
import google.registry.reporting.icann.IcannReportingModule.ReportType;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.CloudTasksUtils;
import google.registry.util.EmailMessage;
import google.registry.util.Retrier;
import google.registry.util.SendEmailService;
@@ -86,6 +85,7 @@ public final class IcannReportingStagingAction implements Runnable {
@Inject @Config("gSuiteOutgoingEmailAddress") InternetAddress sender;
@Inject @Config("alertRecipientEmailAddress") InternetAddress recipient;
@Inject SendEmailService emailService;
@Inject CloudTasksUtils cloudTasksUtils;
@Inject IcannReportingStagingAction() {}
@@ -119,11 +119,13 @@ public final class IcannReportingStagingAction implements Runnable {
response.setPayload("Completed staging action.");
logger.atInfo().log("Enqueueing report upload.");
TaskOptions uploadTask =
TaskOptions.Builder.withUrl(IcannReportingUploadAction.PATH)
.method(Method.POST)
.countdownMillis(Duration.standardMinutes(2).getMillis());
QueueFactory.getQueue(CRON_QUEUE).add(uploadTask);
cloudTasksUtils.enqueue(
CRON_QUEUE,
cloudTasksUtils.createPostTaskWithDelay(
IcannReportingUploadAction.PATH,
Service.BACKEND.toString(),
null,
Duration.standardMinutes(2)));
return null;
},
BigqueryJobFailureException.class);


@@ -16,7 +16,6 @@ package google.registry.reporting.spec11;
import static google.registry.beam.BeamUtils.createJobName;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.reporting.ReportingUtils.enqueueBeamReportingTask;
import static google.registry.request.Action.Method.POST;
import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_OK;
@@ -26,6 +25,7 @@ import com.google.api.services.dataflow.model.LaunchFlexTemplateParameter;
import com.google.api.services.dataflow.model.LaunchFlexTemplateRequest;
import com.google.api.services.dataflow.model.LaunchFlexTemplateResponse;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.flogger.FluentLogger;
import com.google.common.net.MediaType;
import google.registry.config.RegistryConfig.Config;
@@ -34,14 +34,16 @@ import google.registry.keyring.api.KeyModule.Key;
import google.registry.model.common.DatabaseMigrationStateSchedule.PrimaryDatabase;
import google.registry.reporting.ReportingModule;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.RequestParameters;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import java.io.IOException;
import java.util.Map;
import javax.inject.Inject;
import org.joda.time.Duration;
import org.joda.time.LocalDate;
/**
@@ -73,6 +75,7 @@ public class GenerateSpec11ReportAction implements Runnable {
private final Dataflow dataflow;
private final PrimaryDatabase database;
private final boolean sendEmail;
private final CloudTasksUtils cloudTasksUtils;
@Inject
GenerateSpec11ReportAction(
@@ -86,7 +89,8 @@ public class GenerateSpec11ReportAction implements Runnable {
@Parameter(ReportingModule.SEND_EMAIL) boolean sendEmail,
Clock clock,
Response response,
Dataflow dataflow) {
Dataflow dataflow,
CloudTasksUtils cloudTasksUtils) {
this.projectId = projectId;
this.jobRegion = jobRegion;
this.stagingBucketUrl = stagingBucketUrl;
@@ -101,6 +105,7 @@ public class GenerateSpec11ReportAction implements Runnable {
this.response = response;
this.dataflow = dataflow;
this.sendEmail = sendEmail;
this.cloudTasksUtils = cloudTasksUtils;
}
@Override
@@ -136,11 +141,18 @@ public class GenerateSpec11ReportAction implements Runnable {
.execute();
logger.atInfo().log("Got response: %s", launchResponse.getJob().toPrettyString());
String jobId = launchResponse.getJob().getId();
Map<String, String> beamTaskParameters =
ImmutableMap.of(
ReportingModule.PARAM_JOB_ID, jobId, ReportingModule.PARAM_DATE, date.toString());
if (sendEmail) {
enqueueBeamReportingTask(PublishSpec11ReportAction.PATH, beamTaskParameters);
cloudTasksUtils.enqueue(
ReportingModule.BEAM_QUEUE,
cloudTasksUtils.createPostTaskWithDelay(
PublishSpec11ReportAction.PATH,
Service.BACKEND.toString(),
ImmutableMultimap.of(
ReportingModule.PARAM_JOB_ID,
jobId,
ReportingModule.PARAM_DATE,
date.toString()),
Duration.standardMinutes(ReportingModule.ENQUEUE_DELAY_MINUTES)));
}
response.setStatus(SC_OK);
response.setPayload(String.format("Launched Spec11 pipeline: %s", jobId));


@@ -39,8 +39,11 @@ import com.google.appengine.api.urlfetch.HTTPRequest;
import com.google.appengine.api.urlfetch.HTTPResponse;
import com.google.appengine.api.urlfetch.URLFetchService;
import com.google.apphosting.api.DeadlineExceededException;
import com.google.common.base.Joiner;
import com.google.common.base.Strings;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSortedSet;
import com.google.common.collect.Ordering;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryConfig.Config;
import google.registry.request.Action;
@@ -125,15 +128,19 @@ public final class NordnUploadAction implements Runnable {
* delimited String.
*/
static String convertTasksToCsv(List<TaskHandle> tasks, DateTime now, String columns) {
String header = String.format("1,%s,%d\n%s\n", now, tasks.size(), columns);
StringBuilder csv = new StringBuilder(header);
// Use a Set for deduping purposes so we can be idempotent in case tasks happened to be
// enqueued multiple times for a given domain create.
ImmutableSortedSet.Builder<String> builder =
new ImmutableSortedSet.Builder<>(Ordering.natural());
for (TaskHandle task : checkNotNull(tasks)) {
String payload = new String(task.getPayload(), UTF_8);
if (!Strings.isNullOrEmpty(payload)) {
csv.append(payload).append("\n");
builder.add(payload + '\n');
}
}
return csv.toString();
ImmutableSortedSet<String> csvLines = builder.build();
String header = String.format("1,%s,%d\n%s\n", now, csvLines.size(), columns);
return header + Joiner.on("").join(csvLines);
}
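The dedup-and-sort behavior introduced above can be exercised in isolation. The sketch below (with made-up payload lines, not real LORDN rows) shows why the header count must be computed after deduplication, so that re-enqueued duplicates collapse to a single CSV row:

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class CsvDedup {
  // Builds a header followed by sorted, de-duplicated lines, mirroring the
  // idempotency fix above: duplicate payloads collapse to one CSV row, and the
  // count in the header reflects the deduplicated total.
  static String toCsv(List<String> payloads, String timestamp, String columns) {
    Set<String> lines = new TreeSet<>();
    for (String payload : payloads) {
      if (payload != null && !payload.isEmpty()) {
        lines.add(payload + "\n");
      }
    }
    String header = String.format("1,%s,%d\n%s\n", timestamp, lines.size(), columns);
    return header + String.join("", lines);
  }

  public static void main(String[] args) {
    // "b,2" appears twice but is emitted once; the header count is 2, not 3.
    System.out.print(toCsv(List.of("b,2", "a,1", "b,2"), "2022-02-22T00:00:00Z", "domain,ts"));
  }
}
```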
/** Leases and returns all tasks from the queue with the specified tag tld, in batches. */
@@ -168,6 +175,11 @@ public final class NordnUploadAction implements Runnable {
: LordnTaskUtils.QUEUE_CLAIMS);
String columns = phase.equals(PARAM_LORDN_PHASE_SUNRISE) ? COLUMNS_SUNRISE : COLUMNS_CLAIMS;
List<TaskHandle> tasks = loadAllTasks(queue, tld);
// Note: This upload/task deletion isn't done atomically (it's not clear how one would do so
// anyway). As a result, it is possible that the upload might succeed yet the deletion of
// enqueued tasks might fail. If so, this would result in the same lines being uploaded to NORDN
// across multiple uploads. This is probably OK; all that we really cannot have is a missing
// line.
if (!tasks.isEmpty()) {
String csvData = convertTasksToCsv(tasks, now, columns);
uploadCsvToLordn(String.format("/LORDN/%s/%s", tld, phase), csvData);


@@ -0,0 +1,58 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.tools;
import java.util.Optional;
/**
* Enumerates the DNSSEC digest types for use with Delegation Signer records.
*
* <p>This also enforces the set of types that are valid for use with Cloud DNS. Customers cannot
* create DS records containing any other digest type.
*
* <p>The complete list can be found here:
* https://www.iana.org/assignments/ds-rr-types/ds-rr-types.xhtml
*/
public enum DigestType {
SHA1(1),
SHA256(2),
// Algorithm number 3 is GOST R 34.11-94 and is deliberately NOT SUPPORTED.
// This algorithm was reviewed by ise-crypto and deemed academically broken (b/207029800).
// In addition, RFC 8624 specifies that this algorithm MUST NOT be used for DNSSEC delegations.
// TODO(sarhabot@): Add note in Cloud DNS code to notify the Registry of any new changes to
// supported digest types.
SHA384(4);
private final int wireValue;
DigestType(int wireValue) {
this.wireValue = wireValue;
}
/** Fetches a DigestType enumeration constant by its IANA assigned value. */
public static Optional<DigestType> fromWireValue(int wireValue) {
for (DigestType alg : DigestType.values()) {
if (alg.getWireValue() == wireValue) {
return Optional.of(alg);
}
}
return Optional.empty();
}
/** Fetches a value in the range [0, 255] that encodes this DS digest type on the wire. */
public int getWireValue() {
return wireValue;
}
}
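The wire-value lookup above can be demonstrated with a trimmed standalone copy of the enum (same values as in the source; the demo class is illustrative only):

```java
import java.util.Optional;

enum DigestType {
  SHA1(1),
  SHA256(2),
  // Wire value 3 (GOST R 34.11-94) is deliberately unsupported, so lookup returns empty.
  SHA384(4);

  private final int wireValue;

  DigestType(int wireValue) {
    this.wireValue = wireValue;
  }

  // Linear scan over the enum constants, matching on the IANA-assigned value.
  static Optional<DigestType> fromWireValue(int wireValue) {
    for (DigestType alg : values()) {
      if (alg.wireValue == wireValue) {
        return Optional.of(alg);
      }
    }
    return Optional.empty();
  }
}

public class DigestTypeDemo {
  public static void main(String[] args) {
    System.out.println(DigestType.fromWireValue(2)); // Optional[SHA256]
    System.out.println(DigestType.fromWireValue(3)); // Optional.empty
  }
}
```

Returning `Optional` rather than throwing lets callers such as `DsRecord.create` decide how to reject unknown values.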


@@ -16,6 +16,7 @@ package google.registry.tools;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.util.PreconditionsUtils.checkArgumentPresent;
import com.beust.jcommander.IStringConverter;
import com.google.auto.value.AutoValue;
@@ -25,6 +26,7 @@ import com.google.common.base.Splitter;
import com.google.common.io.BaseEncoding;
import com.google.template.soy.data.SoyListData;
import com.google.template.soy.data.SoyMapData;
import google.registry.flows.domain.DomainFlowUtils;
import java.util.List;
@AutoValue
@@ -46,6 +48,15 @@ abstract class DsRecord {
"digest should be even-lengthed hex, but is %s (length %s)",
digest,
digest.length());
checkArgumentPresent(
DigestType.fromWireValue(digestType),
String.format("DS record uses an unrecognized digest type: %d", digestType));
if (!DomainFlowUtils.validateAlgorithm(alg)) {
throw new IllegalArgumentException(
String.format("DS record uses an unrecognized algorithm: %d", alg));
}
return new AutoValue_DsRecord(keyTag, alg, digestType, digest);
}


@@ -18,6 +18,7 @@ import static google.registry.util.DomainNameUtils.canonicalizeDomainName;
import com.beust.jcommander.Parameter;
import com.beust.jcommander.Parameters;
import google.registry.model.rde.RdeMode;
import google.registry.tools.params.PathParameter;
import java.nio.file.Path;
import java.nio.file.Paths;
@@ -46,11 +47,20 @@ class EncryptEscrowDepositCommand implements CommandWithRemoteApi {
validateWith = PathParameter.OutputDirectory.class)
private Path outdir = Paths.get(".");
@Inject
EscrowDepositEncryptor encryptor;
@Parameter(
names = {"-m", "--mode"},
description = "Specify the escrow mode, FULL for RDE and THIN for BRDA.")
private RdeMode mode = RdeMode.FULL;
@Parameter(
names = {"-r", "--revision"},
description = "Specify the revision.")
private int revision = 0;
@Inject EscrowDepositEncryptor encryptor;
@Override
public final void run() throws Exception {
encryptor.encrypt(canonicalizeDomainName(tld), input, outdir);
encryptor.encrypt(mode, canonicalizeDomainName(tld), revision, input, outdir);
}
}


@@ -18,6 +18,7 @@ import static google.registry.model.rde.RdeMode.FULL;
import com.google.common.io.ByteStreams;
import google.registry.keyring.api.KeyModule.Key;
import google.registry.model.rde.RdeMode;
import google.registry.model.rde.RdeNamingUtils;
import google.registry.rde.RdeUtil;
import google.registry.rde.RydeEncoder;
@@ -42,26 +43,44 @@ final class EscrowDepositEncryptor {
@Inject @Key("rdeSigningKey") Provider<PGPKeyPair> rdeSigningKey;
@Inject @Key("rdeReceiverKey") Provider<PGPPublicKey> rdeReceiverKey;
@Inject
@Key("brdaSigningKey")
Provider<PGPKeyPair> brdaSigningKey;
@Inject
@Key("brdaReceiverKey")
Provider<PGPPublicKey> brdaReceiverKey;
@Inject EscrowDepositEncryptor() {}
/** Creates a {@code .ryde} and {@code .sig} file, provided an XML deposit file. */
void encrypt(String tld, Path xmlFile, Path outdir)
void encrypt(RdeMode mode, String tld, Integer revision, Path xmlFile, Path outdir)
throws IOException, XmlException {
try (InputStream xmlFileInput = Files.newInputStream(xmlFile);
BufferedInputStream xmlInput = new BufferedInputStream(xmlFileInput, PEEK_BUFFER_SIZE)) {
DateTime watermark = RdeUtil.peekWatermark(xmlInput);
String name = RdeNamingUtils.makeRydeFilename(tld, watermark, FULL, 1, 0);
String name = RdeNamingUtils.makeRydeFilename(tld, watermark, mode, 1, revision);
Path rydePath = outdir.resolve(name + ".ryde");
Path sigPath = outdir.resolve(name + ".sig");
Path pubPath = outdir.resolve(tld + ".pub");
PGPKeyPair signingKey = rdeSigningKey.get();
PGPKeyPair signingKey;
PGPPublicKey receiverKey;
if (mode == FULL) {
signingKey = rdeSigningKey.get();
receiverKey = rdeReceiverKey.get();
} else {
signingKey = brdaSigningKey.get();
receiverKey = brdaReceiverKey.get();
}
try (OutputStream rydeOutput = Files.newOutputStream(rydePath);
OutputStream sigOutput = Files.newOutputStream(sigPath);
RydeEncoder rydeEncoder = new RydeEncoder.Builder()
.setRydeOutput(rydeOutput, rdeReceiverKey.get())
.setSignatureOutput(sigOutput, signingKey)
.setFileMetadata(name, Files.size(xmlFile), watermark)
.build()) {
RydeEncoder rydeEncoder =
new RydeEncoder.Builder()
.setRydeOutput(rydeOutput, receiverKey)
.setSignatureOutput(sigOutput, signingKey)
.setFileMetadata(name, Files.size(xmlFile), watermark)
.build()) {
ByteStreams.copy(xmlInput, rydeEncoder);
}
try (OutputStream pubOutput = Files.newOutputStream(pubPath);


@@ -133,7 +133,7 @@ final class GenerateEscrowDepositCommand implements CommandWithRemoteApi {
}
cloudTasksUtils.enqueue(
RDE_REPORT_QUEUE,
CloudTasksUtils.createPostTask(
cloudTasksUtils.createPostTask(
RdeStagingAction.PATH, Service.BACKEND.toString(), paramsBuilder.build()));
}


@@ -27,9 +27,9 @@ import google.registry.model.tld.Registries;
class LoadTestCommand extends ConfirmingCommand
implements CommandWithConnection, CommandWithRemoteApi {
// This is a mostly arbitrary value, roughly an hour and a quarter. It served as a generous
// This is a mostly arbitrary value, roughly two and a half hours. It served as a generous
// timespan for initial backup/restore testing, but has no other special significance.
private static final int DEFAULT_RUN_SECONDS = 4600;
private static final int DEFAULT_RUN_SECONDS = 9200;
@Parameter(
names = {"--tld"},


@@ -123,6 +123,7 @@ public final class RegistryTool {
.put("update_server_locks", UpdateServerLocksCommand.class)
.put("update_tld", UpdateTldCommand.class)
.put("upload_claims_list", UploadClaimsListCommand.class)
.put("validate_datastore_with_sql", ValidateDatastoreWithSqlCommand.class)
.put("validate_escrow_deposit", ValidateEscrowDepositCommand.class)
.put("validate_login_credentials", ValidateLoginCredentialsCommand.class)
.put("verify_ote", VerifyOteCommand.class)


@@ -76,6 +76,7 @@ import javax.inject.Singleton;
LocalCredentialModule.class,
PersistenceModule.class,
RdeModule.class,
RegistryToolDataflowModule.class,
RequestFactoryModule.class,
SecretManagerModule.class,
URLFetchServiceModule.class,
@@ -170,6 +171,8 @@ interface RegistryToolComponent {
void inject(UpdateTldCommand command);
void inject(ValidateDatastoreWithSqlCommand command);
void inject(ValidateEscrowDepositCommand command);
void inject(ValidateLoginCredentialsCommand command);


@@ -0,0 +1,39 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.tools;
import com.google.api.services.dataflow.Dataflow;
import dagger.Module;
import dagger.Provides;
import google.registry.config.CredentialModule.LocalCredential;
import google.registry.config.RegistryConfig.Config;
import google.registry.util.GoogleCredentialsBundle;
/** Provides a {@link Dataflow} API client for use in {@link RegistryTool}. */
@Module
public class RegistryToolDataflowModule {
@Provides
static Dataflow provideDataflow(
@LocalCredential GoogleCredentialsBundle credentialsBundle,
@Config("projectId") String projectId) {
return new Dataflow.Builder(
credentialsBundle.getHttpTransport(),
credentialsBundle.getJsonFactory(),
credentialsBundle.getHttpRequestInitializer())
.setApplicationName(String.format("%s nomulus", projectId))
.build();
}
}


@@ -0,0 +1,229 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.tools;
import static google.registry.beam.BeamUtils.createJobName;
import static google.registry.model.replay.ReplicateToDatastoreAction.REPLICATE_TO_DATASTORE_LOCK_NAME;
import static java.nio.charset.StandardCharsets.UTF_8;
import com.beust.jcommander.Parameter;
import com.beust.jcommander.Parameters;
import com.google.api.services.dataflow.Dataflow;
import com.google.api.services.dataflow.model.Job;
import com.google.api.services.dataflow.model.LaunchFlexTemplateParameter;
import com.google.api.services.dataflow.model.LaunchFlexTemplateRequest;
import com.google.api.services.dataflow.model.LaunchFlexTemplateResponse;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.google.common.net.MediaType;
import google.registry.backup.SyncDatastoreToSqlSnapshotAction;
import google.registry.beam.common.DatabaseSnapshot;
import google.registry.config.RegistryConfig.Config;
import google.registry.model.common.DatabaseMigrationStateSchedule;
import google.registry.model.common.DatabaseMigrationStateSchedule.MigrationState;
import google.registry.model.common.DatabaseMigrationStateSchedule.ReplayDirection;
import google.registry.model.replay.ReplicateToDatastoreAction;
import google.registry.model.server.Lock;
import google.registry.request.Action.Service;
import google.registry.util.Clock;
import google.registry.util.RequestStatusChecker;
import google.registry.util.Sleeper;
import java.io.IOException;
import java.util.Optional;
import java.util.UUID;
import javax.inject.Inject;
import org.joda.time.Duration;
/**
* Validates asynchronously replicated data from the primary Cloud SQL database to Datastore.
*
* <p>This command suspends the replication process (by acquiring the replication lock), takes a
* snapshot of the Cloud SQL database, invokes a Nomulus server action to sync Datastore to this
* snapshot (See {@link SyncDatastoreToSqlSnapshotAction} for details), and finally launches a BEAM
* pipeline to compare Datastore with the given SQL snapshot.
*
* <p>This command does not lock up the SQL database. Normal processing can proceed.
*/
@Parameters(commandDescription = "Validates Datastore with Cloud SQL.")
public class ValidateDatastoreWithSqlCommand
implements CommandWithConnection, CommandWithRemoteApi {
private static final Service NOMULUS_SERVICE = Service.BACKEND;
private static final String PIPELINE_NAME = "validate_datastore_pipeline";
// States indicating a job is not finished yet.
private static final ImmutableSet<String> DATAFLOW_JOB_RUNNING_STATES =
ImmutableSet.of(
"JOB_STATE_RUNNING", "JOB_STATE_STOPPED", "JOB_STATE_PENDING", "JOB_STATE_QUEUED");
private static final Duration JOB_POLLING_INTERVAL = Duration.standardSeconds(60);
@Parameter(
names = {"-m", "--manual"},
description =
"If true, let user launch the comparison pipeline manually out of band. "
+ "Command will wait for user key-press to exit after syncing Datastore.")
boolean manualLaunchPipeline;
@Inject Clock clock;
@Inject Dataflow dataflow;
@Inject
@Config("defaultJobRegion")
String jobRegion;
@Inject
@Config("beamStagingBucketUrl")
String stagingBucketUrl;
@Inject
@Config("projectId")
String projectId;
@Inject Sleeper sleeper;
private AppEngineConnection connection;
@Override
public void setConnection(AppEngineConnection connection) {
this.connection = connection;
}
@Override
public void run() throws Exception {
MigrationState state = DatabaseMigrationStateSchedule.getValueAtTime(clock.nowUtc());
if (!state.getReplayDirection().equals(ReplayDirection.SQL_TO_DATASTORE)) {
throw new IllegalStateException("Cannot sync Datastore to SQL in migration step " + state);
}
Optional<Lock> lock =
Lock.acquireSql(
REPLICATE_TO_DATASTORE_LOCK_NAME,
null,
ReplicateToDatastoreAction.REPLICATE_TO_DATASTORE_LOCK_LEASE_LENGTH,
new FakeRequestStatusChecker(),
false);
if (!lock.isPresent()) {
throw new IllegalStateException("Cannot acquire the async propagation lock.");
}
try {
try (DatabaseSnapshot snapshot = DatabaseSnapshot.createSnapshot()) {
System.out.printf("Obtained snapshot %s\n", snapshot.getSnapshotId());
AppEngineConnection connectionToService = connection.withService(NOMULUS_SERVICE);
String response =
connectionToService.sendPostRequest(
getNomulusEndpoint(snapshot.getSnapshotId()),
ImmutableMap.<String, String>of(),
MediaType.PLAIN_TEXT_UTF_8,
"".getBytes(UTF_8));
System.out.println(response);
lock.ifPresent(Lock::releaseSql);
lock = Optional.empty();
// See SyncDatastoreToSqlSnapshotAction for response format.
String latestCommitTimestamp =
response.substring(response.lastIndexOf('(') + 1, response.lastIndexOf(')'));
if (manualLaunchPipeline) {
System.out.print("\nEnter any key to continue when the pipeline ends:");
System.in.read();
} else {
Job pipelineJob =
launchComparisonPipeline(snapshot.getSnapshotId(), latestCommitTimestamp).getJob();
String jobId = pipelineJob.getId();
System.out.printf(
"Launched comparison pipeline %s (%s).\n", pipelineJob.getName(), jobId);
while (DATAFLOW_JOB_RUNNING_STATES.contains(getDataflowJobStatus(jobId))) {
sleeper.sleepInterruptibly(JOB_POLLING_INTERVAL);
}
System.out.printf(
"Pipeline ended with %s state. Please check counters for results.\n",
getDataflowJobStatus(jobId));
}
}
} finally {
lock.ifPresent(Lock::releaseSql);
}
}
private static String getNomulusEndpoint(String sqlSnapshotId) {
return String.format(
"%s?sqlSnapshotId=%s", SyncDatastoreToSqlSnapshotAction.PATH, sqlSnapshotId);
}
private LaunchFlexTemplateResponse launchComparisonPipeline(
String sqlSnapshotId, String latestCommitLogTimestamp) {
try {
LaunchFlexTemplateParameter parameter =
new LaunchFlexTemplateParameter()
.setJobName(createJobName("validate-datastore", clock))
.setContainerSpecGcsPath(
String.format("%s/%s_metadata.json", stagingBucketUrl, PIPELINE_NAME))
.setParameters(
ImmutableMap.of(
"sqlSnapshotId",
sqlSnapshotId,
"latestCommitLogTimestamp",
latestCommitLogTimestamp,
"registryEnvironment",
RegistryToolEnvironment.get().name()));
return dataflow
.projects()
.locations()
.flexTemplates()
.launch(
projectId, jobRegion, new LaunchFlexTemplateRequest().setLaunchParameter(parameter))
.execute();
} catch (IOException e) {
throw new RuntimeException(e);
}
}
private String getDataflowJobStatus(String jobId) {
try {
return dataflow
.projects()
.locations()
.jobs()
.get(projectId, jobRegion, jobId)
.execute()
.getCurrentState();
} catch (IOException e) {
throw new RuntimeException(e);
}
}
/**
* A fake implementation of {@link RequestStatusChecker} for managing SQL-backed locks from
* non-AppEngine platforms. This is only required until the Nomulus server is migrated off
* AppEngine.
*/
static class FakeRequestStatusChecker implements RequestStatusChecker {
@Override
public String getLogId() {
return ValidateDatastoreWithSqlCommand.class.getSimpleName() + "-" + UUID.randomUUID();
}
@Override
public boolean isRunning(String requestLogId) {
return false;
}
}
}
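The wait loop in the command above reduces to a small polling pattern: keep sampling the job state until it leaves the set of running states. A standalone sketch follows (the statuses iterator stands in for repeated Dataflow `jobs.get` calls; the sleep between polls is omitted for brevity):

```java
import java.util.Iterator;
import java.util.List;
import java.util.Set;

public class PollSketch {
  // Same non-terminal states the command checks before declaring the job done.
  private static final Set<String> RUNNING =
      Set.of("JOB_STATE_RUNNING", "JOB_STATE_STOPPED", "JOB_STATE_PENDING", "JOB_STATE_QUEUED");

  // Polls a status supplier until the job leaves a running state; the real
  // command sleeps JOB_POLLING_INTERVAL between calls to the Dataflow API.
  static String waitForTerminalState(Iterator<String> statuses) {
    String state = statuses.next();
    while (RUNNING.contains(state)) {
      state = statuses.next();
    }
    return state;
  }

  public static void main(String[] args) {
    Iterator<String> fake =
        List.of("JOB_STATE_PENDING", "JOB_STATE_RUNNING", "JOB_STATE_DONE").iterator();
    System.out.println(waitForTerminalState(fake)); // JOB_STATE_DONE
  }
}
```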


@@ -19,7 +19,6 @@ import static com.google.common.collect.ImmutableList.toImmutableList;
import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static com.google.common.collect.Sets.difference;
import static google.registry.config.RegistryEnvironment.PRODUCTION;
import static google.registry.export.sheet.SyncRegistrarsSheetAction.enqueueRegistrarSheetSync;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.security.JsonResponseHelper.Status.ERROR;
@@ -32,18 +31,21 @@ import com.google.common.base.Strings;
import com.google.common.collect.HashMultimap;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Multimap;
import com.google.common.collect.Sets;
import com.google.common.collect.Streams;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryEnvironment;
import google.registry.export.sheet.SyncRegistrarsSheetAction;
import google.registry.flows.certs.CertificateChecker;
import google.registry.flows.certs.CertificateChecker.InsecureCertificateException;
import google.registry.model.registrar.Registrar;
import google.registry.model.registrar.RegistrarContact;
import google.registry.model.registrar.RegistrarContact.Type;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.HttpException.BadRequestException;
import google.registry.request.HttpException.ForbiddenException;
import google.registry.request.JsonActionRunner;
@@ -58,6 +60,7 @@ import google.registry.ui.forms.FormFieldException;
import google.registry.ui.server.RegistrarFormFields;
import google.registry.ui.server.SendEmailUtils;
import google.registry.util.AppEngineServiceUtils;
import google.registry.util.CloudTasksUtils;
import google.registry.util.CollectionUtils;
import google.registry.util.DiffUtils;
import java.util.HashSet;
@@ -88,6 +91,22 @@ public class RegistrarSettingsAction implements Runnable, JsonActionRunner.JsonA
static final String ARGS_PARAM = "args";
static final String ID_PARAM = "id";
/**
* Allows task enqueueing to be disabled when executing registrar console test cases.
*
* <p>The existing workflow in UI test cases triggers task enqueueing, which was not an issue with
* Task Queue since it's a native App Engine feature simulated by the App Engine SDK's
* environment. However, with Cloud Tasks, the server enqueues and fails to deliver to the actual
* Cloud Tasks endpoint due to lack of permission.
*
* <p>One way to allow enqueueing in backend tests while avoiding it in UI tests is to disable
* enqueueing when the test server starts and re-enable it once the test server stops. This is
* done with a {@code ThreadLocal<Boolean>} variable, isInTestDriver, which defaults to false.
* Enqueueing is allowed only while isInTestDriver is false; it is set to true in start() and back
* to false in stop() inside TestDriver.java, a class used in testing.
*/
private static ThreadLocal<Boolean> isInTestDriver = ThreadLocal.withInitial(() -> false);
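The gate described in the Javadoc above can be sketched in isolation. A minimal, self-contained version follows; `TaskGate` and `enqueueIfAllowed` are illustrative names, not actual Nomulus classes, and the real action consults the flag inline rather than through a helper.

```java
// Minimal sketch of the ThreadLocal enqueue gate described above.
// TaskGate and enqueueIfAllowed are illustrative names, not Nomulus code.
public class TaskGate {
  private static final ThreadLocal<Boolean> isInTestDriver =
      ThreadLocal.withInitial(() -> false);

  static void setIsInTestDriverToTrue() {
    isInTestDriver.set(true);
  }

  static void setIsInTestDriverToFalse() {
    isInTestDriver.set(false);
  }

  /** Runs the enqueue callback and returns true, unless suppressed by the gate. */
  static boolean enqueueIfAllowed(Runnable enqueue) {
    if (isInTestDriver.get()) {
      return false; // UI test server is running; skip real Cloud Tasks delivery.
    }
    enqueue.run();
    return true;
  }

  public static void main(String[] args) {
    check(enqueueIfAllowed(() -> {})); // default: enqueueing allowed
    setIsInTestDriverToTrue();
    check(!enqueueIfAllowed(() -> {})); // suppressed while the test driver runs
    setIsInTestDriverToFalse();
    check(enqueueIfAllowed(() -> {})); // allowed again after stop()
    System.out.println("ok");
  }

  private static void check(boolean condition) {
    if (!condition) {
      throw new AssertionError();
    }
  }
}
```

Because the flag is thread-local, gating only affects the test server's request threads; backend tests running on other threads still enqueue normally.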
@Inject JsonActionRunner jsonActionRunner;
@Inject AppEngineServiceUtils appEngineServiceUtils;
@Inject RegistrarConsoleMetrics registrarConsoleMetrics;
@@ -95,6 +114,7 @@ public class RegistrarSettingsAction implements Runnable, JsonActionRunner.JsonA
@Inject AuthenticatedRegistrarAccessor registrarAccessor;
@Inject AuthResult authResult;
@Inject CertificateChecker certificateChecker;
@Inject CloudTasksUtils cloudTasksUtils;
@Inject RegistrarSettingsAction() {}
@@ -102,6 +122,14 @@ public class RegistrarSettingsAction implements Runnable, JsonActionRunner.JsonA
return contact.getPhoneNumber() != null;
}
public static void setIsInTestDriverToFalse() {
isInTestDriver.set(false);
}
public static void setIsInTestDriverToTrue() {
isInTestDriver.set(true);
}
@Override
public void run() {
jsonActionRunner.run(this);
@@ -170,6 +198,26 @@ public class RegistrarSettingsAction implements Runnable, JsonActionRunner.JsonA
}
}
@AutoValue
abstract static class EmailInfo {
abstract Registrar registrar();
abstract Registrar updatedRegistrar();
abstract ImmutableSet<RegistrarContact> contacts();
abstract ImmutableSet<RegistrarContact> updatedContacts();
static EmailInfo create(
Registrar registrar,
Registrar updatedRegistrar,
ImmutableSet<RegistrarContact> contacts,
ImmutableSet<RegistrarContact> updatedContacts) {
return new AutoValue_RegistrarSettingsAction_EmailInfo(
registrar, updatedRegistrar, contacts, updatedContacts);
}
}
private RegistrarResult read(String registrarId) {
return RegistrarResult.create("Success", loadRegistrarUnchecked(registrarId));
}
@@ -183,72 +231,69 @@ public class RegistrarSettingsAction implements Runnable, JsonActionRunner.JsonA
}
private RegistrarResult update(final Map<String, ?> args, String registrarId) {
tm().transact(
() -> {
// We load the registrar here rather than outside of the transaction - to make
// sure we have the latest version. This one is loaded inside the transaction, so it's
// guaranteed to not change before we update it.
Registrar registrar = loadRegistrarUnchecked(registrarId);
// Detach the registrar to avoid Hibernate object-updates, since we wish to email
// out the diffs between the existing and updated registrar objects
if (!tm().isOfy()) {
jpaTm().getEntityManager().detach(registrar);
}
// Verify that the registrar hasn't been changed.
// To do that - we find the latest update time (or null if the registrar has been
// deleted) and compare to the update time from the args. The update time in the args
// comes from the read that gave the UI the data - if it's out of date, then the UI
// had out of date data.
DateTime latest = registrar.getLastUpdateTime();
DateTime latestFromArgs =
RegistrarFormFields.LAST_UPDATE_TIME.extractUntyped(args).get();
if (!latestFromArgs.equals(latest)) {
logger.atWarning().log(
"Registrar changed since reading the data!"
+ " Last updated at %s, but args data last updated at %s.",
latest, latestFromArgs);
throw new IllegalStateException(
"Registrar has been changed by someone else. Please reload and retry.");
}
// Keep the current contacts so we can later check that no required contact was
// removed, email the changes to the contacts
ImmutableSet<RegistrarContact> contacts = registrar.getContacts();
Registrar updatedRegistrar = registrar;
// Do OWNER only updates to the registrar from the request.
updatedRegistrar = checkAndUpdateOwnerControlledFields(updatedRegistrar, args);
// Do ADMIN only updates to the registrar from the request.
updatedRegistrar = checkAndUpdateAdminControlledFields(updatedRegistrar, args);
// read the contacts from the request.
ImmutableSet<RegistrarContact> updatedContacts =
readContacts(registrar, contacts, args);
// Save the updated contacts
if (!updatedContacts.equals(contacts)) {
if (!registrarAccessor.hasRoleOnRegistrar(Role.OWNER, registrar.getRegistrarId())) {
throw new ForbiddenException("Only OWNERs can update the contacts");
}
checkContactRequirements(contacts, updatedContacts);
RegistrarContact.updateContacts(updatedRegistrar, updatedContacts);
updatedRegistrar =
updatedRegistrar.asBuilder().setContactsRequireSyncing(true).build();
}
// Save the updated registrar
if (!updatedRegistrar.equals(registrar)) {
tm().put(updatedRegistrar);
}
// Email the updates
sendExternalUpdatesIfNecessary(
registrar, contacts, updatedRegistrar, updatedContacts);
});
// Email the updates
sendExternalUpdatesIfNecessary(tm().transact(() -> saveUpdates(args, registrarId)));
// Reload the result outside of the transaction to get the most recent version
return RegistrarResult.create("Saved " + registrarId, loadRegistrarUnchecked(registrarId));
}
/** Saves the updates and returns the info needed for the update email. */
private EmailInfo saveUpdates(final Map<String, ?> args, String registrarId) {
// We load the registrar here rather than outside of the transaction - to make
// sure we have the latest version. This one is loaded inside the transaction, so it's
// guaranteed to not change before we update it.
Registrar registrar = loadRegistrarUnchecked(registrarId);
// Detach the registrar to avoid Hibernate object-updates, since we wish to email
// out the diffs between the existing and updated registrar objects
if (!tm().isOfy()) {
jpaTm().getEntityManager().detach(registrar);
}
// Verify that the registrar hasn't been changed. To do that, we find the latest update
// time (or null if the registrar has been deleted) and compare it to the update time from
// the args. The update time in the args comes from the read that gave the UI its data; if
// they differ, the UI was working from stale data.
DateTime latest = registrar.getLastUpdateTime();
DateTime latestFromArgs = RegistrarFormFields.LAST_UPDATE_TIME.extractUntyped(args).get();
if (!latestFromArgs.equals(latest)) {
logger.atWarning().log(
"Registrar changed since reading the data!"
+ " Last updated at %s, but args data last updated at %s.",
latest, latestFromArgs);
throw new IllegalStateException(
"Registrar has been changed by someone else. Please reload and retry.");
}
// Keep the current contacts so we can later check that no required contact was
// removed, and email the changes to the contacts
ImmutableSet<RegistrarContact> contacts = registrar.getContacts();
Registrar updatedRegistrar = registrar;
// Do OWNER only updates to the registrar from the request.
updatedRegistrar = checkAndUpdateOwnerControlledFields(updatedRegistrar, args);
// Do ADMIN only updates to the registrar from the request.
updatedRegistrar = checkAndUpdateAdminControlledFields(updatedRegistrar, args);
// Read the contacts from the request.
ImmutableSet<RegistrarContact> updatedContacts = readContacts(registrar, contacts, args);
// Save the updated contacts
if (!updatedContacts.equals(contacts)) {
if (!registrarAccessor.hasRoleOnRegistrar(Role.OWNER, registrar.getRegistrarId())) {
throw new ForbiddenException("Only OWNERs can update the contacts");
}
checkContactRequirements(contacts, updatedContacts);
RegistrarContact.updateContacts(updatedRegistrar, updatedContacts);
updatedRegistrar = updatedRegistrar.asBuilder().setContactsRequireSyncing(true).build();
}
// Save the updated registrar
if (!updatedRegistrar.equals(registrar)) {
tm().put(updatedRegistrar);
}
return EmailInfo.create(registrar, updatedRegistrar, contacts, updatedContacts);
}
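The stale-read guard at the top of saveUpdates is optimistic concurrency control keyed on lastUpdateTime: the client echoes back the timestamp it read, and the save is rejected if the row has changed since. A standalone sketch, with the entity reduced to a hypothetical record type:

```java
import java.time.Instant;
import java.util.Objects;

// Sketch of the optimistic-concurrency check used in saveUpdates above.
// RegistrarRow is an illustrative stand-in for the Registrar entity.
public class OptimisticCheck {
  record RegistrarRow(String id, Instant lastUpdateTime) {}

  static void checkNotStale(RegistrarRow current, Instant lastUpdateTimeFromArgs) {
    if (!Objects.equals(current.lastUpdateTime(), lastUpdateTimeFromArgs)) {
      throw new IllegalStateException(
          "Registrar has been changed by someone else. Please reload and retry.");
    }
  }

  public static void main(String[] args) {
    Instant t0 = Instant.parse("2022-02-22T00:00:00Z");
    RegistrarRow row = new RegistrarRow("TheRegistrar", t0);
    checkNotStale(row, t0); // matching timestamp: the update may proceed
    try {
      checkNotStale(row, t0.plusSeconds(1)); // client read a different version
      throw new AssertionError("expected IllegalStateException");
    } catch (IllegalStateException expected) {
      System.out.println("stale update rejected");
    }
  }
}
```

This is also why the registrar is detached from Hibernate before mutation: the check and the diff both need the loaded snapshot to stay untouched.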
private Map<String, Object> expandRegistrarWithContacts(
Iterable<RegistrarContact> contacts, Registrar registrar) {
ImmutableSet<Map<String, Object>> expandedContacts =
@@ -408,6 +453,13 @@ public class RegistrarSettingsAction implements Runnable, JsonActionRunner.JsonA
Map<?, ?> diffs =
DiffUtils.deepDiff(
originalRegistrar.toDiffableFieldMap(), updatedRegistrar.toDiffableFieldMap(), true);
// It's expected that the update timestamp will be changed, as it gets reset whenever we change
// nested collections. If it's the only change, just return the original registrar.
if (diffs.keySet().equals(ImmutableSet.of("lastUpdateTime"))) {
return originalRegistrar;
}
throw new ForbiddenException(
String.format("Unauthorized: only %s can change fields %s", allowedRole, diffs.keySet()));
}
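The rule applied above, that a diff containing only lastUpdateTime is not a real change, also drives the email/enqueue decision later in this class. A minimal sketch of that predicate, where the key set stands in for `DiffUtils.deepDiff(...).keySet()`:

```java
import java.util.Set;

// Sketch of the "lastUpdateTime-only diff is not a real change" rule used above.
// diffKeys stands in for DiffUtils.deepDiff(...).keySet().
public class DiffFilter {
  static boolean hasRealChanges(Set<String> diffKeys) {
    return !diffKeys.isEmpty() && !diffKeys.equals(Set.of("lastUpdateTime"));
  }

  public static void main(String[] args) {
    if (hasRealChanges(Set.of("lastUpdateTime"))) {
      throw new AssertionError("timestamp-only diff should be ignored");
    }
    if (!hasRealChanges(Set.of("emailAddress", "lastUpdateTime"))) {
      throw new AssertionError("substantive diff should be detected");
    }
    System.out.println("ok");
  }
}
```

The filtering is necessary because the update timestamp resets whenever a nested collection is touched, so it appears in nearly every diff even when nothing meaningful changed.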
@@ -575,26 +627,30 @@ public class RegistrarSettingsAction implements Runnable, JsonActionRunner.JsonA
* sends an email with a diff of the changes to the configured notification email address and all
* contact addresses and enqueues a task to re-sync the registrar sheet.
*/
private void sendExternalUpdatesIfNecessary(
Registrar existingRegistrar,
ImmutableSet<RegistrarContact> existingContacts,
Registrar updatedRegistrar,
ImmutableSet<RegistrarContact> updatedContacts) {
private void sendExternalUpdatesIfNecessary(EmailInfo emailInfo) {
ImmutableSet<RegistrarContact> existingContacts = emailInfo.contacts();
if (!sendEmailUtils.hasRecipients() && existingContacts.isEmpty()) {
return;
}
Registrar existingRegistrar = emailInfo.registrar();
Map<?, ?> diffs =
DiffUtils.deepDiff(
expandRegistrarWithContacts(existingContacts, existingRegistrar),
expandRegistrarWithContacts(updatedContacts, updatedRegistrar),
expandRegistrarWithContacts(emailInfo.updatedContacts(), emailInfo.updatedRegistrar()),
true);
@SuppressWarnings("unchecked")
Set<String> changedKeys = (Set<String>) diffs.keySet();
if (CollectionUtils.difference(changedKeys, "lastUpdateTime").isEmpty()) {
return;
}
enqueueRegistrarSheetSync(appEngineServiceUtils.getCurrentVersionHostname("backend"));
if (!isInTestDriver.get()) {
// Enqueue a sync registrar sheet task, but only if we are not running console tests and
// there is an update besides the lastUpdateTime.
cloudTasksUtils.enqueue(
SyncRegistrarsSheetAction.QUEUE,
cloudTasksUtils.createGetTask(
SyncRegistrarsSheetAction.PATH, Service.BACKEND.toString(), ImmutableMultimap.of()));
}
String environment = Ascii.toLowerCase(String.valueOf(RegistryEnvironment.get()));
sendEmailUtils.sendEmail(
String.format(


@@ -0,0 +1,21 @@
{
"name": "Validate Cloud SQL with Datastore being primary",
"description": "An Apache Beam batch pipeline that compares Cloud SQL with the primary Datastore.",
"parameters": [
{
"name": "registryEnvironment",
"label": "The Registry environment.",
"helpText": "The Registry environment.",
"is_optional": false,
"regexes": [
"^PRODUCTION|SANDBOX|CRASH|QA|ALPHA$"
]
},
{
"name": "comparisonStartTimestamp",
"label": "Only entities updated at or after this time are included for validation.",
"helpText": "The earliest entity update time allowed for inclusion in validation, in ISO8601 format.",
"is_optional": true
}
]
}
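For reference, the `registryEnvironment` regex above is applied as a full-string match by the template launcher; under ordinary find semantics an ungrouped alternation like `^A|B|C$` would anchor only the first and last branches. A hedged sketch of validating the parameter with an explicitly grouped pattern (the grouping is my addition for illustration, not taken from the template file):

```java
import java.util.regex.Pattern;

// Sketch of validating the template's registryEnvironment parameter. The grouped
// regex is an illustrative hardening of the metadata file's pattern; matches()
// already requires the whole string to match.
public class EnvParamCheck {
  static final Pattern ENV = Pattern.compile("^(PRODUCTION|SANDBOX|CRASH|QA|ALPHA)$");

  static boolean isValidEnvironment(String value) {
    return ENV.matcher(value).matches();
  }

  public static void main(String[] args) {
    if (!isValidEnvironment("SANDBOX")) throw new AssertionError();
    if (isValidEnvironment("SANDBOXX")) throw new AssertionError();
    if (isValidEnvironment("production")) throw new AssertionError(); // case-sensitive
    System.out.println("environment values validated");
  }
}
```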


@@ -48,7 +48,6 @@ import google.registry.model.common.Cursor;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.DomainHistory;
import google.registry.model.domain.Period;
import google.registry.model.ofy.Ofy;
import google.registry.model.reporting.DomainTransactionRecord;
import google.registry.model.reporting.DomainTransactionRecord.TransactionReportField;
import google.registry.model.reporting.HistoryEntry;
@@ -56,7 +55,6 @@ import google.registry.model.tld.Registry;
import google.registry.testing.DualDatabaseTest;
import google.registry.testing.FakeClock;
import google.registry.testing.FakeResponse;
import google.registry.testing.InjectExtension;
import google.registry.testing.ReplayExtension;
import google.registry.testing.TestOfyAndSql;
import google.registry.testing.TestOfyOnly;
@@ -78,11 +76,6 @@ public class ExpandRecurringBillingEventsActionTest
private DateTime currentTestTime = DateTime.parse("1999-01-05T00:00:00Z");
private final FakeClock clock = new FakeClock(currentTestTime);
@Order(Order.DEFAULT - 1)
@RegisterExtension
public final InjectExtension inject =
new InjectExtension().withStaticFieldOverride(Ofy.class, "clock", clock);
@Order(Order.DEFAULT - 2)
@RegisterExtension
public final ReplayExtension replayExtension = ReplayExtension.createWithDoubleReplay(clock);


@@ -195,7 +195,7 @@ public class DomainBaseUtilTest {
domainTransformedByUtil = domainTransformedByUtil.asBuilder().build();
assertAboutImmutableObjects()
.that(domainTransformedByUtil)
.isEqualExceptFields(domainTransformedByOfy, "revisions");
.isEqualExceptFields(domainTransformedByOfy, "revisions", "updateTimestamp");
}
@Test
@@ -218,7 +218,7 @@ public class DomainBaseUtilTest {
domainTransformedByUtil = domainTransformedByUtil.asBuilder().build();
assertAboutImmutableObjects()
.that(domainTransformedByUtil)
.isEqualExceptFields(domainWithoutFKeys, "revisions");
.isEqualExceptFields(domainWithoutFKeys, "revisions", "updateTimestamp");
}
@Test


@@ -33,7 +33,7 @@ class CommitLogFanoutActionTest {
private static final String ENDPOINT = "/the/servlet";
private static final String QUEUE = "the-queue";
private final CloudTasksHelper cloudTasksHelper = new CloudTasksHelper();
private final CloudTasksHelper cloudTasksHelper = new CloudTasksHelper(new FakeClock());
@RegisterExtension
final AppEngineExtension appEngineExtension =
@@ -58,7 +58,6 @@ class CommitLogFanoutActionTest {
action.endpoint = ENDPOINT;
action.queue = QUEUE;
action.jitterSeconds = Optional.empty();
action.clock = new FakeClock();
action.run();
List<TaskMatcher> matchers = new ArrayList<>();
for (int bucketId : CommitLogBucket.getBucketIds()) {


@@ -45,7 +45,7 @@ class TldFanoutActionTest {
private static final String ENDPOINT = "/the/servlet";
private static final String QUEUE = "the-queue";
private final FakeResponse response = new FakeResponse();
private final CloudTasksHelper cloudTasksHelper = new CloudTasksHelper();
private final CloudTasksHelper cloudTasksHelper = new CloudTasksHelper(new FakeClock());
@RegisterExtension
final AppEngineExtension appEngine =
@@ -61,7 +61,6 @@ class TldFanoutActionTest {
private void run(ImmutableListMultimap<String, String> params) {
TldFanoutAction action = new TldFanoutAction();
action.clock = new FakeClock();
action.params = params;
action.endpoint = ENDPOINT;
action.queue = QUEUE;


@@ -106,6 +106,7 @@ import google.registry.flows.domain.DomainFlowUtils.FeeDescriptionParseException
import google.registry.flows.domain.DomainFlowUtils.FeesMismatchException;
import google.registry.flows.domain.DomainFlowUtils.FeesRequiredDuringEarlyAccessProgramException;
import google.registry.flows.domain.DomainFlowUtils.FeesRequiredForPremiumNameException;
import google.registry.flows.domain.DomainFlowUtils.InvalidDsRecordException;
import google.registry.flows.domain.DomainFlowUtils.InvalidIdnDomainLabelException;
import google.registry.flows.domain.DomainFlowUtils.InvalidPunycodeException;
import google.registry.flows.domain.DomainFlowUtils.InvalidTcnIdChecksumException;
@@ -346,6 +347,12 @@ class DomainCreateFlowTest extends ResourceFlowTestCase<DomainCreateFlow, Domain
createBillingEvent));
assertDnsTasksEnqueued(getUniqueIdFromCommand());
assertEppResourceIndexEntityFor(domain);
replayExtension.expectUpdateFor(domain);
// Verify that all timestamps are correct after SQL -> DS replay.
// Added to confirm that timestamps get updated correctly.
replayExtension.enableDomainTimestampChecks();
}
private void assertNoLordn() throws Exception {
@@ -546,6 +553,7 @@ class DomainCreateFlowTest extends ResourceFlowTestCase<DomainCreateFlow, Domain
.hasValue(HistoryEntry.createVKey(Key.create(historyEntry)));
}
// DomainTransactionRecord is not propagated.
@TestOfyAndSql
void testSuccess_validAllocationToken_multiUse() throws Exception {
setEppInput(
@@ -573,6 +581,7 @@ class DomainCreateFlowTest extends ResourceFlowTestCase<DomainCreateFlow, Domain
ImmutableMap.of("DOMAIN", "otherexample.tld", "YEARS", "2"));
runFlowAssertResponse(
loadFile("domain_create_response.xml", ImmutableMap.of("DOMAIN", "otherexample.tld")));
replayExtension.expectUpdateFor(reloadResourceByForeignKey());
}
@TestOfyAndSql
@@ -825,7 +834,8 @@ class DomainCreateFlowTest extends ResourceFlowTestCase<DomainCreateFlow, Domain
@TestOfyAndSql
void testSuccess_existedButWasDeleted() throws Exception {
persistContactsAndHosts();
persistDeletedDomain(getUniqueIdFromCommand(), clock.nowUtc().minusDays(1));
replayExtension.expectUpdateFor(
persistDeletedDomain(getUniqueIdFromCommand(), clock.nowUtc().minusDays(1)));
clock.advanceOneMilli();
doSuccessfulTest();
}
@@ -1002,6 +1012,22 @@ class DomainCreateFlowTest extends ResourceFlowTestCase<DomainCreateFlow, Domain
assertAboutEppExceptions().that(thrown).marshalsToXml();
}
@TestOfyAndSql
void testFailure_secDnsInvalidDigestType() throws Exception {
setEppInput("domain_create_dsdata_bad_digest_types.xml");
persistContactsAndHosts();
EppException thrown = assertThrows(InvalidDsRecordException.class, this::runFlow);
assertAboutEppExceptions().that(thrown).marshalsToXml();
}
@TestOfyAndSql
void testFailure_secDnsInvalidAlgorithm() throws Exception {
setEppInput("domain_create_dsdata_bad_algorithms.xml");
persistContactsAndHosts();
EppException thrown = assertThrows(InvalidDsRecordException.class, this::runFlow);
assertAboutEppExceptions().that(thrown).marshalsToXml();
}
@TestOfyAndSql
void testFailure_wrongExtension() {
setEppInput("domain_create_wrong_extension.xml");


@@ -72,6 +72,7 @@ import google.registry.flows.domain.DomainFlowUtils.DuplicateContactForRoleExcep
import google.registry.flows.domain.DomainFlowUtils.EmptySecDnsUpdateException;
import google.registry.flows.domain.DomainFlowUtils.FeesMismatchException;
import google.registry.flows.domain.DomainFlowUtils.FeesRequiredForNonFreeOperationException;
import google.registry.flows.domain.DomainFlowUtils.InvalidDsRecordException;
import google.registry.flows.domain.DomainFlowUtils.LinkedResourceInPendingDeleteProhibitsOperationException;
import google.registry.flows.domain.DomainFlowUtils.LinkedResourcesDoNotExistException;
import google.registry.flows.domain.DomainFlowUtils.MaxSigLifeChangeNotSupportedException;
@@ -108,6 +109,7 @@ import google.registry.persistence.VKey;
import google.registry.testing.DatabaseHelper;
import google.registry.testing.DualDatabaseTest;
import google.registry.testing.ReplayExtension;
import google.registry.testing.ReplayExtension.NoDatabaseCompare;
import google.registry.testing.TestOfyAndSql;
import google.registry.testing.TestOfyOnly;
import java.util.Optional;
@@ -122,7 +124,7 @@ import org.junit.jupiter.api.extension.RegisterExtension;
class DomainUpdateFlowTest extends ResourceFlowTestCase<DomainUpdateFlow, DomainBase> {
private static final DelegationSignerData SOME_DSDATA =
DelegationSignerData.create(1, 2, 3, base16().decode("0123"));
DelegationSignerData.create(1, 2, 2, base16().decode("0123"));
private static final ImmutableMap<String, String> OTHER_DSDATA_TEMPLATE_MAP =
ImmutableMap.of(
"KEY_TAG", "12346",
@@ -201,6 +203,7 @@ class DomainUpdateFlowTest extends ResourceFlowTestCase<DomainUpdateFlow, Domain
.setDomain(domain)
.build());
clock.advanceOneMilli();
replayExtension.expectUpdateFor(domain);
return domain;
}
@@ -211,9 +214,11 @@ class DomainUpdateFlowTest extends ResourceFlowTestCase<DomainUpdateFlow, Domain
private void doSuccessfulTest(String expectedXmlFilename) throws Exception {
assertTransactionalFlow(true);
runFlowAssertResponse(loadFile(expectedXmlFilename));
DomainBase domain = reloadResourceByForeignKey();
replayExtension.expectUpdateFor(domain);
// Check that the domain was updated. These values came from the xml.
assertAboutDomains()
.that(reloadResourceByForeignKey())
.that(domain)
.hasStatusValue(StatusValue.CLIENT_HOLD)
.and()
.hasAuthInfoPwd("2BARfoo")
@@ -228,6 +233,10 @@ class DomainUpdateFlowTest extends ResourceFlowTestCase<DomainUpdateFlow, Domain
assertNoBillingEvents();
assertDnsTasksEnqueued("example.tld");
assertLastHistoryContainsResource(reloadResourceByForeignKey());
// Verify that all timestamps are correct after SQL -> DS replay.
// Added to confirm that timestamps get updated correctly.
replayExtension.enableDomainTimestampChecks();
}
@TestOfyAndSql
@@ -307,6 +316,8 @@ class DomainUpdateFlowTest extends ResourceFlowTestCase<DomainUpdateFlow, Domain
}
persistResource(
reloadResourceByForeignKey().asBuilder().setNameservers(nameservers.build()).build());
// Add a null update here so we don't compare.
replayExtension.expectUpdateFor(null);
clock.advanceOneMilli();
}
@@ -536,7 +547,7 @@ class DomainUpdateFlowTest extends ResourceFlowTestCase<DomainUpdateFlow, Domain
"domain_update_dsdata_add.xml",
ImmutableSet.of(SOME_DSDATA),
ImmutableSet.of(SOME_DSDATA),
ImmutableMap.of("KEY_TAG", "1", "ALG", "2", "DIGEST_TYPE", "3", "DIGEST", "0123"));
ImmutableMap.of("KEY_TAG", "1", "ALG", "2", "DIGEST_TYPE", "2", "DIGEST", "0123"));
}
@TestOfyAndSql
@@ -556,8 +567,8 @@ class DomainUpdateFlowTest extends ResourceFlowTestCase<DomainUpdateFlow, Domain
"domain_update_dsdata_add.xml",
ImmutableSet.of(SOME_DSDATA),
ImmutableSet.of(
SOME_DSDATA, DelegationSignerData.create(12346, 2, 3, base16().decode("0123"))),
ImmutableMap.of("KEY_TAG", "12346", "ALG", "2", "DIGEST_TYPE", "3", "DIGEST", "0123"));
SOME_DSDATA, DelegationSignerData.create(12346, 2, 2, base16().decode("0123"))),
ImmutableMap.of("KEY_TAG", "12346", "ALG", "2", "DIGEST_TYPE", "2", "DIGEST", "0123"));
}
@TestOfyAndSql
@@ -565,8 +576,8 @@ class DomainUpdateFlowTest extends ResourceFlowTestCase<DomainUpdateFlow, Domain
doSecDnsSuccessfulTest(
"domain_update_dsdata_add.xml",
ImmutableSet.of(SOME_DSDATA),
ImmutableSet.of(SOME_DSDATA, DelegationSignerData.create(1, 8, 3, base16().decode("0123"))),
ImmutableMap.of("KEY_TAG", "1", "ALG", "8", "DIGEST_TYPE", "3", "DIGEST", "0123"));
ImmutableSet.of(SOME_DSDATA, DelegationSignerData.create(1, 8, 2, base16().decode("0123"))),
ImmutableMap.of("KEY_TAG", "1", "ALG", "8", "DIGEST_TYPE", "2", "DIGEST", "0123"));
}
@TestOfyAndSql
@@ -583,15 +594,15 @@ class DomainUpdateFlowTest extends ResourceFlowTestCase<DomainUpdateFlow, Domain
doSecDnsSuccessfulTest(
"domain_update_dsdata_add.xml",
ImmutableSet.of(SOME_DSDATA),
ImmutableSet.of(SOME_DSDATA, DelegationSignerData.create(1, 2, 3, base16().decode("4567"))),
ImmutableMap.of("KEY_TAG", "1", "ALG", "2", "DIGEST_TYPE", "3", "DIGEST", "4567"));
ImmutableSet.of(SOME_DSDATA, DelegationSignerData.create(1, 2, 2, base16().decode("4567"))),
ImmutableMap.of("KEY_TAG", "1", "ALG", "2", "DIGEST_TYPE", "2", "DIGEST", "4567"));
}
@TestOfyAndSql
void testSuccess_secDnsAddToMaxRecords() throws Exception {
ImmutableSet.Builder<DelegationSignerData> builder = new ImmutableSet.Builder<>();
for (int i = 0; i < 7; ++i) {
builder.add(DelegationSignerData.create(i, 2, 3, new byte[] {0, 1, 2}));
builder.add(DelegationSignerData.create(i, 2, 2, new byte[] {0, 1, 2}));
}
ImmutableSet<DelegationSignerData> commonDsData = builder.build();
@@ -643,7 +654,7 @@ class DomainUpdateFlowTest extends ResourceFlowTestCase<DomainUpdateFlow, Domain
void testSuccess_secDnsAddRemoveToMaxRecords() throws Exception {
ImmutableSet.Builder<DelegationSignerData> builder = new ImmutableSet.Builder<>();
for (int i = 0; i < 7; ++i) {
builder.add(DelegationSignerData.create(i, 2, 3, new byte[] {0, 1, 2}));
builder.add(DelegationSignerData.create(i, 2, 2, new byte[] {0, 1, 2}));
}
ImmutableSet<DelegationSignerData> commonDsData = builder.build();
@@ -813,11 +824,69 @@ class DomainUpdateFlowTest extends ResourceFlowTestCase<DomainUpdateFlow, Domain
MaxSigLifeChangeNotSupportedException.class, "domain_update_maxsiglife.xml");
}
@TestOfyAndSql
void testFailure_secDnsInvalidDigestType() throws Exception {
setEppInput("domain_update_dsdata_add.xml", OTHER_DSDATA_TEMPLATE_MAP);
persistResource(
newDomainBase(getUniqueIdFromCommand())
.asBuilder()
.setDsData(ImmutableSet.of(DelegationSignerData.create(1, 2, 3, new byte[] {0, 1, 2})))
.build());
EppException thrown = assertThrows(InvalidDsRecordException.class, this::runFlow);
assertAboutEppExceptions().that(thrown).marshalsToXml();
}
@TestOfyAndSql
void testFailure_secDnsMultipleInvalidDigestTypes() throws Exception {
setEppInput("domain_update_dsdata_add.xml", OTHER_DSDATA_TEMPLATE_MAP);
persistResource(
newDomainBase(getUniqueIdFromCommand())
.asBuilder()
.setDsData(
ImmutableSet.of(
DelegationSignerData.create(1, 2, 3, new byte[] {0, 1, 2}),
DelegationSignerData.create(2, 2, 6, new byte[] {0, 1, 2})))
.build());
EppException thrown = assertThrows(InvalidDsRecordException.class, this::runFlow);
assertThat(thrown).hasMessageThat().contains("digestType=3");
assertThat(thrown).hasMessageThat().contains("digestType=6");
assertAboutEppExceptions().that(thrown).marshalsToXml();
}
@TestOfyAndSql
void testFailure_secDnsInvalidAlgorithm() throws Exception {
setEppInput("domain_update_dsdata_add.xml", OTHER_DSDATA_TEMPLATE_MAP);
persistResource(
newDomainBase(getUniqueIdFromCommand())
.asBuilder()
.setDsData(ImmutableSet.of(DelegationSignerData.create(1, 99, 2, new byte[] {0, 1, 2})))
.build());
EppException thrown = assertThrows(InvalidDsRecordException.class, this::runFlow);
assertAboutEppExceptions().that(thrown).marshalsToXml();
}
@TestOfyAndSql
void testFailure_secDnsMultipleInvalidAlgorithms() throws Exception {
setEppInput("domain_update_dsdata_add.xml", OTHER_DSDATA_TEMPLATE_MAP);
persistResource(
newDomainBase(getUniqueIdFromCommand())
.asBuilder()
.setDsData(
ImmutableSet.of(
DelegationSignerData.create(1, 998, 2, new byte[] {0, 1, 2}),
DelegationSignerData.create(2, 99, 2, new byte[] {0, 1, 2})))
.build());
EppException thrown = assertThrows(InvalidDsRecordException.class, this::runFlow);
assertThat(thrown).hasMessageThat().contains("algorithm=998");
assertThat(thrown).hasMessageThat().contains("algorithm=99");
assertAboutEppExceptions().that(thrown).marshalsToXml();
}
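The four failure tests above exercise validation of DS records against allow-lists of digest types and algorithms, with the exception message naming every offending value. A standalone sketch of that shape; the allowed sets here (digest types 2/SHA-256 and 4/SHA-384, a handful of common DNSSEC algorithm numbers) are illustrative assumptions, and the authoritative lists live in the flow code:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch of the DS-record validation exercised by the tests above. The allowed
// sets are illustrative; the real lists live in DomainFlowUtils.
public class DsRecordCheck {
  record DsData(int keyTag, int algorithm, int digestType) {}

  static final Set<Integer> ALLOWED_DIGEST_TYPES = Set.of(2, 4);
  static final Set<Integer> ALLOWED_ALGORITHMS = Set.of(8, 13, 14, 15, 16);

  /** Returns a message naming every invalid record, or an empty string if all are valid. */
  static String findInvalid(List<DsData> dsData) {
    return dsData.stream()
        .filter(
            ds ->
                !ALLOWED_DIGEST_TYPES.contains(ds.digestType())
                    || !ALLOWED_ALGORITHMS.contains(ds.algorithm()))
        .map(ds -> String.format("algorithm=%d digestType=%d", ds.algorithm(), ds.digestType()))
        .collect(Collectors.joining("; "));
  }

  public static void main(String[] args) {
    // Mirrors testFailure_secDnsMultipleInvalidAlgorithms: both bad values are reported.
    String msg = findInvalid(List.of(new DsData(1, 998, 2), new DsData(2, 99, 2)));
    if (!(msg.contains("algorithm=998") && msg.contains("algorithm=99"))) {
      throw new AssertionError(msg);
    }
    if (!findInvalid(List.of(new DsData(1, 8, 2))).isEmpty()) {
      throw new AssertionError("valid record flagged");
    }
    System.out.println("all invalid records reported");
  }
}
```

Collecting every invalid record before throwing, rather than failing on the first, is what lets the tests assert that the message contains both `digestType=3` and `digestType=6`.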
@TestOfyAndSql
void testFailure_secDnsTooManyDsRecords() throws Exception {
ImmutableSet.Builder<DelegationSignerData> builder = new ImmutableSet.Builder<>();
for (int i = 0; i < 8; ++i) {
builder.add(DelegationSignerData.create(i, 2, 3, new byte[] {0, 1, 2}));
builder.add(DelegationSignerData.create(i, 2, 2, new byte[] {0, 1, 2}));
}
setEppInput("domain_update_dsdata_add.xml", OTHER_DSDATA_TEMPLATE_MAP);
@@ -886,6 +955,7 @@ class DomainUpdateFlowTest extends ResourceFlowTestCase<DomainUpdateFlow, Domain
assertThat(thrown).hasMessageThat().contains("(sh8013)");
}
@NoDatabaseCompare
@TestOfyAndSql
void testFailure_addingDuplicateContact() throws Exception {
persistReferencedEntities();
@@ -1213,6 +1283,8 @@ class DomainUpdateFlowTest extends ResourceFlowTestCase<DomainUpdateFlow, Domain
assertAboutEppExceptions().that(thrown).marshalsToXml();
}
// Contacts mismatch.
@NoDatabaseCompare
@TestOfyAndSql
void testFailure_sameContactAddedAndRemoved() throws Exception {
setEppInput("domain_update_add_remove_same_contact.xml");


@@ -671,4 +671,56 @@ public class DomainBaseSqlTest {
.that(domain.getTransferData())
.isEqualExceptFields(thatDomain.getTransferData(), "serverApproveEntities");
}
@TestSqlOnly
void testUpdateTimeAfterNameserverUpdate() {
persistDomain();
DomainBase persisted = loadByKey(domain.createVKey());
DateTime originalUpdateTime = persisted.getUpdateTimestamp().getTimestamp();
fakeClock.advanceOneMilli();
DateTime transactionTime =
jpaTm()
.transact(
() -> {
HostResource host2 =
new HostResource.Builder()
.setRepoId("host2")
.setHostName("ns2.example.com")
.setCreationRegistrarId("registrar1")
.setPersistedCurrentSponsorRegistrarId("registrar2")
.build();
insertInDb(host2);
domain = persisted.asBuilder().addNameserver(host2.createVKey()).build();
updateInDb(domain);
return jpaTm().getTransactionTime();
});
domain = loadByKey(domain.createVKey());
assertThat(domain.getUpdateTimestamp().getTimestamp()).isEqualTo(transactionTime);
assertThat(domain.getUpdateTimestamp().getTimestamp()).isNotEqualTo(originalUpdateTime);
}
@TestSqlOnly
void testUpdateTimeAfterDsDataUpdate() {
persistDomain();
DomainBase persisted = loadByKey(domain.createVKey());
DateTime originalUpdateTime = persisted.getUpdateTimestamp().getTimestamp();
fakeClock.advanceOneMilli();
DateTime transactionTime =
jpaTm()
.transact(
() -> {
domain =
persisted
.asBuilder()
.setDsData(
ImmutableSet.of(
DelegationSignerData.create(1, 2, 3, new byte[] {0, 1, 2})))
.build();
updateInDb(domain);
return jpaTm().getTransactionTime();
});
domain = loadByKey(domain.createVKey());
assertThat(domain.getUpdateTimestamp().getTimestamp()).isEqualTo(transactionTime);
assertThat(domain.getUpdateTimestamp().getTimestamp()).isNotEqualTo(originalUpdateTime);
}
}
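The two tests above pin down the invariant behind this change: mutating a nested collection (nameservers, dsData) must bump the parent row's update timestamp to the transaction time, or the full-database replay comparison fails. A minimal sketch of that behavior with illustrative types (the real mechanism is the UpdateAutoTimestamp field updated on save):

```java
import java.time.Instant;
import java.util.HashSet;
import java.util.Set;

// Sketch of the invariant verified above: a nested-collection mutation bumps the
// parent's update timestamp. Domain here is an illustrative stand-in.
public class AutoTimestampSketch {
  static class Domain {
    private Instant updateTimestamp;
    private final Set<String> nameservers = new HashSet<>();

    Domain(Instant created) {
      this.updateTimestamp = created;
    }

    /** Mirrors UpdateAutoTimestamp: any save-worthy mutation resets the timestamp. */
    void addNameserver(String host, Instant transactionTime) {
      nameservers.add(host);
      updateTimestamp = transactionTime;
    }

    Instant getUpdateTimestamp() {
      return updateTimestamp;
    }
  }

  public static void main(String[] args) {
    Instant t0 = Instant.parse("1999-01-05T00:00:00Z");
    Domain domain = new Domain(t0);
    Instant transactionTime = t0.plusMillis(1); // fakeClock.advanceOneMilli()
    domain.addNameserver("ns2.example.com", transactionTime);
    if (!domain.getUpdateTimestamp().equals(transactionTime)) {
      throw new AssertionError("timestamp not bumped");
    }
    System.out.println("updateTimestamp follows transaction time");
  }
}
```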


@@ -15,6 +15,7 @@
package google.registry.model.replay;
import static com.google.common.truth.Truth.assertThat;
import static google.registry.model.replay.ReplicateToDatastoreAction.applyTransaction;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.ofyTm;
import static google.registry.testing.DatabaseHelper.insertInDb;
@@ -158,23 +159,23 @@ public class ReplicateToDatastoreActionTest {
// Write a transaction and run just the batch fetch.
insertInDb(foo);
List<TransactionEntity> txns1 = action.getTransactionBatch();
List<TransactionEntity> txns1 = action.getTransactionBatchAtSnapshot();
assertThat(txns1).hasSize(1);
// Write a second transaction and do another batch fetch.
insertInDb(bar);
List<TransactionEntity> txns2 = action.getTransactionBatch();
List<TransactionEntity> txns2 = action.getTransactionBatchAtSnapshot();
assertThat(txns2).hasSize(2);
// Apply the first batch.
action.applyTransaction(txns1.get(0));
applyTransaction(txns1.get(0));
// Remove the foo record so we can ensure that this transaction doesn't get double-played.
ofyTm().transact(() -> ofyTm().delete(foo.key()));
// Apply the second batch.
for (TransactionEntity txn : txns2) {
action.applyTransaction(txn);
applyTransaction(txn);
}
// Verify that the first transaction didn't get replayed but the second one did.
@@ -212,10 +213,9 @@ public class ReplicateToDatastoreActionTest {
// Force the last transaction id back to -1 so that we look for transaction 0.
ofyTm().transact(() -> ofyTm().insert(new LastSqlTransaction(-1)));
List<TransactionEntity> txns = action.getTransactionBatch();
List<TransactionEntity> txns = action.getTransactionBatchAtSnapshot();
assertThat(txns).hasSize(1);
assertThat(
assertThrows(IllegalStateException.class, () -> action.applyTransaction(txns.get(0))))
assertThat(assertThrows(IllegalStateException.class, () -> applyTransaction(txns.get(0))))
.hasMessageThat()
.isEqualTo("Missing transaction: last txn id = -1, next available txn = 1");
}


@@ -22,6 +22,7 @@ import static google.registry.testing.DatabaseHelper.insertInDb;
import com.google.common.collect.ImmutableSet;
import google.registry.model.ImmutableObject;
import google.registry.model.replay.NonReplicatedEntity;
import google.registry.persistence.transaction.JpaTestExtensions;
import google.registry.persistence.transaction.JpaTestExtensions.JpaUnitTestExtension;
import java.lang.reflect.Method;
@@ -168,7 +169,7 @@ class EntityCallbacksListenerTest {
}
@Entity(name = "TestEntity")
private static class TestEntity extends ParentEntity {
private static class TestEntity extends ParentEntity implements NonReplicatedEntity {
@Id String name = "id";
int nonTransientField = 0;

View File

@@ -49,7 +49,7 @@ public class JpaEntityCoverageExtension implements BeforeEachCallback, AfterEach
// TransactionEntity is trivial; its persistence is tested in TransactionTest.
"TransactionEntity");
private static final ImmutableSet<Class<?>> ALL_JPA_ENTITIES =
public static final ImmutableSet<Class<?>> ALL_JPA_ENTITIES =
PersistenceXmlUtility.getManagedClasses().stream()
.filter(e -> !IGNORE_ENTITIES.contains(e.getSimpleName()))
.filter(e -> e.isAnnotationPresent(Entity.class))


@@ -217,11 +217,14 @@ abstract class JpaTransactionManagerExtension implements BeforeEachCallback, Aft
JpaTransactionManagerImpl txnManager = new JpaTransactionManagerImpl(emf, clock);
cachedTm = TransactionManagerFactory.jpaTm();
TransactionManagerFactory.setJpaTm(Suppliers.ofInstance(txnManager));
TransactionManagerFactory.setReplicaJpaTm(
Suppliers.ofInstance(new ReplicaSimulatingJpaTransactionManager(txnManager)));
}
@Override
public void afterEach(ExtensionContext context) {
TransactionManagerFactory.setJpaTm(Suppliers.ofInstance(cachedTm));
TransactionManagerFactory.setReplicaJpaTm(Suppliers.ofInstance(cachedTm));
// Even though we didn't set this, reset it to make sure no other tests are affected
JpaTransactionManagerImpl.removeReplaySqlToDsOverrideForTest();
cachedTm = null;


@@ -16,6 +16,7 @@ package google.registry.persistence.transaction;
import static com.google.common.truth.Truth.assertThat;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.replicaJpaTm;
import static google.registry.testing.DatabaseHelper.insertInDb;
import static org.junit.jupiter.api.Assertions.assertThrows;
@@ -62,6 +63,17 @@ public class JpaTransactionManagerExtensionTest {
});
}
@Test
void testReplicaJpaTm() {
TestEntity testEntity = new TestEntity("foo", "bar");
assertThat(
assertThrows(
PersistenceException.class,
() -> replicaJpaTm().transact(() -> replicaJpaTm().put(testEntity))))
.hasMessageThat()
.isEqualTo("Error while committing the transaction");
}
@Test
void testExtraParameters() {
// This test verifies that 1) withEntityClass() has registered TestEntity and 2) The table


@@ -91,9 +91,15 @@ public class ReplicaSimulatingJpaTransactionManager implements JpaTransactionMan
@Override
public <T> T transact(Supplier<T> work) {
if (delegate.inTransaction()) {
return work.get();
}
return delegate.transact(
() -> {
delegate.getEntityManager().createQuery("SET TRANSACTION READ ONLY").executeUpdate();
delegate
.getEntityManager()
.createNativeQuery("SET TRANSACTION READ ONLY")
.executeUpdate();
return work.get();
});
}


@@ -30,6 +30,7 @@ import com.googlecode.objectify.annotation.Id;
import google.registry.model.ImmutableObject;
import google.registry.model.ofy.DatastoreTransactionManager;
import google.registry.model.ofy.Ofy;
import google.registry.model.replay.NonReplicatedEntity;
import google.registry.persistence.VKey;
import google.registry.persistence.transaction.TransactionManagerFactory.ReadOnlyModeException;
import google.registry.testing.AppEngineExtension;
@@ -448,7 +449,7 @@ public class TransactionManagerTest {
@Entity(name = "TxnMgrTestEntity")
@javax.persistence.Entity(name = "TestEntity")
private static class TestEntity extends TestEntityBase {
private static class TestEntity extends TestEntityBase implements NonReplicatedEntity {
private String data;


@@ -15,7 +15,6 @@
package google.registry.rdap;
import static com.google.common.truth.Truth.assertThat;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.rdap.RdapTestHelper.assertThat;
import static google.registry.rdap.RdapTestHelper.parseJsonObject;
import static google.registry.request.Action.Method.POST;
@@ -46,7 +45,6 @@ import google.registry.model.registrar.Registrar;
import google.registry.model.reporting.HistoryEntry;
import google.registry.model.tld.Registry;
import google.registry.persistence.VKey;
import google.registry.persistence.transaction.ReplicaSimulatingJpaTransactionManager;
import google.registry.rdap.RdapMetrics.EndpointType;
import google.registry.rdap.RdapMetrics.SearchType;
import google.registry.rdap.RdapMetrics.WildcardType;
@@ -376,7 +374,6 @@ class RdapDomainSearchActionTest extends RdapSearchActionTestCase<RdapDomainSear
action.nsIpParam = Optional.empty();
action.cursorTokenParam = Optional.empty();
action.requestPath = actionPath;
action.readOnlyJpaTm = jpaTm();
}
private JsonObject generateExpectedJsonForTwoDomainsNsReply() {
@@ -724,18 +721,6 @@ class RdapDomainSearchActionTest extends RdapSearchActionTestCase<RdapDomainSear
verifyMetrics(SearchType.BY_DOMAIN_NAME, Optional.of(1L));
}
@TestSqlOnly
void testDomainMatch_readOnlyReplica() {
login("evilregistrar");
rememberWildcardType("cat.lol");
action.readOnlyJpaTm = new ReplicaSimulatingJpaTransactionManager(jpaTm());
action.nameParam = Optional.of("cat.lol");
action.parameterMap = ImmutableListMultimap.of("name", "cat.lol");
action.run();
assertThat(response.getPayload()).contains("Yes Virginia <script>");
assertThat(response.getStatus()).isEqualTo(200);
}
@TestOfyAndSql
void testDomainMatch_foundWithUpperCase() {
login("evilregistrar");


@@ -16,6 +16,7 @@ package google.registry.rde;
import static com.google.common.truth.Truth.assertThat;
import static com.google.common.truth.Truth.assertWithMessage;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.testing.SystemInfo.hasCommand;
import static java.nio.charset.StandardCharsets.UTF_8;
import static org.junit.jupiter.api.Assumptions.assumeTrue;
@@ -27,6 +28,8 @@ import com.google.common.io.CharStreams;
import com.google.common.io.Files;
import google.registry.gcs.GcsUtils;
import google.registry.keyring.api.Keyring;
import google.registry.model.rde.RdeMode;
import google.registry.model.rde.RdeRevision;
import google.registry.testing.AppEngineExtension;
import google.registry.testing.BouncyCastleProviderExtension;
import google.registry.testing.FakeKeyringModule;
@@ -109,6 +112,10 @@ public class BrdaCopyActionTest {
action.receiverKey = receiverKey;
action.signingKey = signingKey;
action.stagingDecryptionKey = decryptKey;
tm().transact(
() -> {
RdeRevision.saveRevision("lol", DateTime.parse("2010-10-17TZ"), RdeMode.THIN, 0);
});
}
@ParameterizedTest


@@ -27,7 +27,6 @@ import static google.registry.testing.DatabaseHelper.persistResource;
import static google.registry.testing.DatabaseHelper.persistResourceWithCommitLog;
import static google.registry.testing.TaskQueueHelper.assertAtLeastOneTaskIsEnqueued;
import static google.registry.testing.TaskQueueHelper.assertNoTasksEnqueued;
import static google.registry.testing.TaskQueueHelper.assertTasksEnqueued;
import static google.registry.testing.TestDataHelper.loadFile;
import static google.registry.tldconfig.idn.IdnTableEnum.EXTENDED_LATIN;
import static java.nio.charset.StandardCharsets.UTF_8;
@@ -50,17 +49,15 @@ import google.registry.model.ofy.Ofy;
import google.registry.model.tld.Registry;
import google.registry.request.HttpException.BadRequestException;
import google.registry.request.RequestParameters;
import google.registry.testing.CloudTasksHelper;
import google.registry.testing.CloudTasksHelper.TaskMatcher;
import google.registry.testing.FakeClock;
import google.registry.testing.FakeKeyringModule;
import google.registry.testing.FakeLockHandler;
import google.registry.testing.FakeResponse;
import google.registry.testing.InjectExtension;
import google.registry.testing.TaskQueueHelper.TaskMatcher;
import google.registry.testing.mapreduce.MapreduceTestCase;
import google.registry.tldconfig.idn.IdnTableEnum;
import google.registry.util.Retrier;
import google.registry.util.SystemSleeper;
import google.registry.util.TaskQueueUtils;
import google.registry.xjc.XjcXmlTransformer;
import google.registry.xjc.rde.XjcRdeContentType;
import google.registry.xjc.rde.XjcRdeDeposit;
@@ -100,10 +97,19 @@ public class RdeStagingActionDatastoreTest extends MapreduceTestCase<RdeStagingA
@RegisterExtension public final InjectExtension inject = new InjectExtension();
private final FakeClock clock = new FakeClock();
/**
* Without autoIncrement mode, the fake clock won't advance between the Mapper and Reducer
* transactions when the action is invoked, resulting in a rolled-back reducer transaction (due to
* a TimestampInversionException if both transactions map to the same CommitLog bucket) and
* multiple RdeUpload/BrdaCopy tasks being enqueued (due to transaction retries, since Cloud Tasks
* enqueuing is not transactional with Datastore transactions).
*/
private final FakeClock clock = new FakeClock().setAutoIncrementByOneMilli();
private final FakeResponse response = new FakeResponse();
private final GcsUtils gcsUtils = new GcsUtils(LocalStorageHelper.getOptions());
private final List<? super XjcRdeContentType> alreadyExtracted = new ArrayList<>();
private final CloudTasksHelper cloudTasksHelper = new CloudTasksHelper();
private static PGPPublicKey encryptKey;
private static PGPPrivateKey decryptKey;
@@ -124,7 +130,7 @@ public class RdeStagingActionDatastoreTest extends MapreduceTestCase<RdeStagingA
action.mrRunner = makeDefaultRunner();
action.lenient = false;
action.reducerFactory = new RdeStagingReducer.Factory();
action.reducerFactory.taskQueueUtils = new TaskQueueUtils(new Retrier(new SystemSleeper(), 1));
action.reducerFactory.cloudTasksUtils = cloudTasksHelper.getTestCloudTasksUtils();
action.reducerFactory.lockHandler = new FakeLockHandler(true);
action.reducerFactory.bucket = "rde-bucket";
action.reducerFactory.lockTimeout = Duration.standardHours(1);
@@ -467,11 +473,13 @@ public class RdeStagingActionDatastoreTest extends MapreduceTestCase<RdeStagingA
clock.setTo(DateTime.parse("2000-01-04TZ")); // Tuesday
action.run();
executeTasksUntilEmpty("mapreduce", clock);
assertTasksEnqueued("rde-upload",
new TaskMatcher()
.url(RdeUploadAction.PATH)
.param(RequestParameters.PARAM_TLD, "lol"));
assertTasksEnqueued("brda",
// TODO(b/217773051): duplicate tasks are possible, though unlikely. Consider whether the calls
// below are appropriate, since they don't allow duplicates.
cloudTasksHelper.assertTasksEnqueued(
"rde-upload",
new TaskMatcher().url(RdeUploadAction.PATH).param(RequestParameters.PARAM_TLD, "lol"));
cloudTasksHelper.assertTasksEnqueued(
"brda",
new TaskMatcher()
.url(BrdaCopyAction.PATH)
.param(RequestParameters.PARAM_TLD, "lol")
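The autoIncrement clock behavior that the comment above relies on can be sketched as a clock that advances itself on every read. This is a minimal hypothetical stand-in, not the registry's actual FakeClock, illustrating why two successive transactions get distinct timestamps:

```java
import java.time.Instant;

// Minimal sketch of an auto-incrementing test clock, assuming semantics like the
// setAutoIncrementByOneMilli() mode referenced above; NOT the registry's FakeClock.
class AutoIncrementClock {
  private Instant now;

  AutoIncrementClock(Instant start) {
    this.now = start;
  }

  /** Returns the current time, then advances it by one millisecond. */
  Instant nowUtc() {
    Instant result = now;
    now = now.plusMillis(1);
    return result;
  }

  public static void main(String[] args) {
    AutoIncrementClock clock = new AutoIncrementClock(Instant.parse("2000-01-04T00:00:00Z"));
    Instant mapperTime = clock.nowUtc();
    Instant reducerTime = clock.nowUtc();
    // The two transactions observe distinct timestamps, avoiding a timestamp inversion.
    System.out.println(reducerTime.isAfter(mapperTime)); // prints true
  }
}
```

With a non-incrementing fake clock, both reads would return the same instant, which is what triggers the rollback-and-retry behavior the comment describes.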


@@ -20,8 +20,6 @@ import static google.registry.model.rde.RdeMode.FULL;
import static google.registry.model.rde.RdeMode.THIN;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.testing.DatabaseHelper.createTld;
import static google.registry.testing.TaskQueueHelper.assertNoTasksEnqueued;
import static google.registry.testing.TaskQueueHelper.assertTasksEnqueued;
import static google.registry.util.ResourceUtils.readResourceUtf8;
import static java.nio.charset.StandardCharsets.UTF_8;
import static org.junit.jupiter.api.Assertions.assertThrows;
@@ -41,13 +39,10 @@ import google.registry.model.rde.RdeRevision;
import google.registry.model.tld.Registry;
import google.registry.request.RequestParameters;
import google.registry.testing.AppEngineExtension;
import google.registry.testing.FakeClock;
import google.registry.testing.CloudTasksHelper;
import google.registry.testing.CloudTasksHelper.TaskMatcher;
import google.registry.testing.FakeKeyringModule;
import google.registry.testing.FakeLockHandler;
import google.registry.testing.FakeSleeper;
import google.registry.testing.TaskQueueHelper.TaskMatcher;
import google.registry.util.Retrier;
import google.registry.util.TaskQueueUtils;
import google.registry.xml.ValidationMode;
import java.io.IOException;
import java.util.Iterator;
@@ -74,6 +69,7 @@ class RdeStagingReducerTest {
private static final PGPPublicKey encryptionKey =
new FakeKeyringModule().get().getRdeStagingEncryptionKey();
private static final DateTime now = DateTime.parse("2000-01-01TZ");
private final CloudTasksHelper cloudTasksHelper = new CloudTasksHelper();
private Fragments brdaFragments =
new Fragments(
@@ -94,7 +90,7 @@ class RdeStagingReducerTest {
private RdeStagingReducer reducer =
new RdeStagingReducer(
new TaskQueueUtils(new Retrier(new FakeSleeper(new FakeClock()), 1)),
cloudTasksHelper.getTestCloudTasksUtils(),
new FakeLockHandler(true),
GCS_BUCKET,
Duration.ZERO,
@@ -133,7 +129,7 @@ class RdeStagingReducerTest {
assertThat(loadCursorTime(CursorType.BRDA))
.isEquivalentAccordingToCompareTo(now.plus(Duration.standardDays(1)));
assertThat(loadRevision(THIN)).isEqualTo(1);
assertTasksEnqueued(
cloudTasksHelper.assertTasksEnqueued(
"brda",
new TaskMatcher()
.url(BrdaCopyAction.PATH)
@@ -159,7 +155,7 @@ class RdeStagingReducerTest {
// No extra operations in manual mode.
assertThat(loadCursorTime(CursorType.BRDA)).isEquivalentAccordingToCompareTo(now);
assertThat(loadRevision(THIN)).isEqualTo(0);
assertNoTasksEnqueued("brda");
cloudTasksHelper.assertNoTasksEnqueued("brda");
}
@Test
@@ -179,7 +175,7 @@ class RdeStagingReducerTest {
assertThat(loadCursorTime(CursorType.RDE_STAGING))
.isEquivalentAccordingToCompareTo(now.plus(Duration.standardDays(1)));
assertThat(loadRevision(FULL)).isEqualTo(1);
assertTasksEnqueued(
cloudTasksHelper.assertTasksEnqueued(
"rde-upload",
new TaskMatcher().url(RdeUploadAction.PATH).param(RequestParameters.PARAM_TLD, "soy"));
}
@@ -200,7 +196,7 @@ class RdeStagingReducerTest {
// No extra operations in manual mode.
assertThat(loadCursorTime(CursorType.RDE_STAGING)).isEquivalentAccordingToCompareTo(now);
assertThat(loadRevision(FULL)).isEqualTo(0);
assertNoTasksEnqueued("rde-upload");
cloudTasksHelper.assertNoTasksEnqueued("rde-upload");
}
private static void compareLength(String outputFile, String lengthFilename) throws IOException {


@@ -15,23 +15,26 @@
package google.registry.reporting.billing;
import static com.google.common.truth.Truth.assertThat;
import static google.registry.testing.TaskQueueHelper.assertNoTasksEnqueued;
import static google.registry.testing.TaskQueueHelper.assertTasksEnqueued;
import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_OK;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.common.net.MediaType;
import google.registry.beam.BeamActionTestBase;
import google.registry.model.common.DatabaseMigrationStateSchedule.PrimaryDatabase;
import google.registry.reporting.ReportingModule;
import google.registry.testing.AppEngineExtension;
import google.registry.testing.CloudTasksHelper;
import google.registry.testing.CloudTasksHelper.TaskMatcher;
import google.registry.testing.DualDatabaseTest;
import google.registry.testing.FakeClock;
import google.registry.testing.TaskQueueHelper.TaskMatcher;
import google.registry.testing.TestOfyAndSql;
import google.registry.util.CloudTasksUtils;
import java.io.IOException;
import org.joda.time.Duration;
import org.joda.time.YearMonth;
import org.junit.jupiter.api.extension.RegisterExtension;
@@ -45,6 +48,8 @@ class GenerateInvoicesActionTest extends BeamActionTestBase {
private final BillingEmailUtils emailUtils = mock(BillingEmailUtils.class);
private FakeClock clock = new FakeClock();
private CloudTasksHelper cloudTasksHelper = new CloudTasksHelper();
private CloudTasksUtils cloudTasksUtils = cloudTasksHelper.getTestCloudTasksUtils();
private GenerateInvoicesAction action;
@TestOfyAndSql
@@ -60,6 +65,7 @@ class GenerateInvoicesActionTest extends BeamActionTestBase {
PrimaryDatabase.DATASTORE,
new YearMonth(2017, 10),
emailUtils,
cloudTasksUtils,
clock,
response,
dataflow);
@@ -68,13 +74,17 @@ class GenerateInvoicesActionTest extends BeamActionTestBase {
assertThat(response.getStatus()).isEqualTo(SC_OK);
assertThat(response.getPayload()).isEqualTo("Launched invoicing pipeline: jobid");
TaskMatcher matcher =
cloudTasksHelper.assertTasksEnqueued(
"beam-reporting",
new TaskMatcher()
.url("/_dr/task/publishInvoices")
.method("POST")
.method(HttpMethod.POST)
.param("jobId", "jobid")
.param("yearMonth", "2017-10");
assertTasksEnqueued("beam-reporting", matcher);
.param("yearMonth", "2017-10")
.scheduleTime(
clock
.nowUtc()
.plus(Duration.standardMinutes(ReportingModule.ENQUEUE_DELAY_MINUTES))));
}
@TestOfyAndSql
@@ -90,6 +100,7 @@ class GenerateInvoicesActionTest extends BeamActionTestBase {
PrimaryDatabase.DATASTORE,
new YearMonth(2017, 10),
emailUtils,
cloudTasksUtils,
clock,
response,
dataflow);
@@ -97,7 +108,7 @@ class GenerateInvoicesActionTest extends BeamActionTestBase {
assertThat(response.getContentType()).isEqualTo(MediaType.PLAIN_TEXT_UTF_8);
assertThat(response.getStatus()).isEqualTo(SC_OK);
assertThat(response.getPayload()).isEqualTo("Launched invoicing pipeline: jobid");
assertNoTasksEnqueued("beam-reporting");
cloudTasksHelper.assertNoTasksEnqueued("beam-reporting");
}
@TestOfyAndSql
@@ -114,6 +125,7 @@ class GenerateInvoicesActionTest extends BeamActionTestBase {
PrimaryDatabase.DATASTORE,
new YearMonth(2017, 10),
emailUtils,
cloudTasksUtils,
clock,
response,
dataflow);
@@ -121,6 +133,6 @@ class GenerateInvoicesActionTest extends BeamActionTestBase {
assertThat(response.getStatus()).isEqualTo(SC_INTERNAL_SERVER_ERROR);
assertThat(response.getPayload()).isEqualTo("Pipeline launch failed: Pipeline error");
verify(emailUtils).sendAlertEmail("Pipeline Launch failed due to Pipeline error");
assertNoTasksEnqueued("beam-reporting");
cloudTasksHelper.assertNoTasksEnqueued("beam-reporting");
}
}


@@ -15,29 +15,31 @@
package google.registry.reporting.icann;
import static com.google.common.truth.Truth.assertThat;
import static google.registry.testing.TaskQueueHelper.assertNoTasksEnqueued;
import static google.registry.testing.TaskQueueHelper.assertTasksEnqueued;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import google.registry.bigquery.BigqueryJobFailureException;
import google.registry.reporting.icann.IcannReportingModule.ReportType;
import google.registry.request.HttpException.BadRequestException;
import google.registry.testing.AppEngineExtension;
import google.registry.testing.CloudTasksHelper;
import google.registry.testing.CloudTasksHelper.TaskMatcher;
import google.registry.testing.FakeClock;
import google.registry.testing.FakeResponse;
import google.registry.testing.FakeSleeper;
import google.registry.testing.TaskQueueHelper.TaskMatcher;
import google.registry.util.EmailMessage;
import google.registry.util.Retrier;
import google.registry.util.SendEmailService;
import java.util.Optional;
import javax.mail.internet.InternetAddress;
import org.joda.time.DateTime;
import org.joda.time.Duration;
import org.joda.time.YearMonth;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
@@ -51,6 +53,8 @@ class IcannReportingStagingActionTest {
private YearMonth yearMonth = new YearMonth(2017, 6);
private String subdir = "default/dir";
private IcannReportingStagingAction action;
private FakeClock clock = new FakeClock(DateTime.parse("2021-01-02T11:00:00Z"));
private CloudTasksHelper cloudTasksHelper = new CloudTasksHelper(clock);
@RegisterExtension
final AppEngineExtension appEngine =
@@ -72,6 +76,7 @@ class IcannReportingStagingActionTest {
action.sender = new InternetAddress("sender@example.com");
action.recipient = new InternetAddress("recipient@example.com");
action.emailService = mock(SendEmailService.class);
action.cloudTasksUtils = cloudTasksHelper.getTestCloudTasksUtils();
when(stager.stageReports(yearMonth, subdir, ReportType.ACTIVITY))
.thenReturn(ImmutableList.of("a", "b"));
@@ -79,9 +84,13 @@ class IcannReportingStagingActionTest {
.thenReturn(ImmutableList.of("c", "d"));
}
private static void assertUploadTaskEnqueued() {
TaskMatcher matcher = new TaskMatcher().url("/_dr/task/icannReportingUpload").method("POST");
assertTasksEnqueued("retryable-cron-tasks", matcher);
private void assertUploadTaskEnqueued() {
cloudTasksHelper.assertTasksEnqueued(
"retryable-cron-tasks",
new TaskMatcher()
.url("/_dr/task/icannReportingUpload")
.method(HttpMethod.POST)
.scheduleTime(clock.nowUtc().plus(Duration.standardMinutes(2))));
}
@Test
@@ -157,7 +166,7 @@ class IcannReportingStagingActionTest {
new InternetAddress("recipient@example.com"),
new InternetAddress("sender@example.com")));
// Assert no upload task enqueued
assertNoTasksEnqueued("retryable-cron-tasks");
cloudTasksHelper.assertNoTasksEnqueued("retryable-cron-tasks");
}
@Test


@@ -15,20 +15,23 @@
package google.registry.reporting.spec11;
import static com.google.common.truth.Truth.assertThat;
import static google.registry.testing.TaskQueueHelper.assertNoTasksEnqueued;
import static google.registry.testing.TaskQueueHelper.assertTasksEnqueued;
import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static org.apache.http.HttpStatus.SC_OK;
import static org.mockito.Mockito.when;
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.common.net.MediaType;
import google.registry.beam.BeamActionTestBase;
import google.registry.model.common.DatabaseMigrationStateSchedule.PrimaryDatabase;
import google.registry.reporting.ReportingModule;
import google.registry.testing.AppEngineExtension;
import google.registry.testing.CloudTasksHelper;
import google.registry.testing.CloudTasksHelper.TaskMatcher;
import google.registry.testing.FakeClock;
import google.registry.testing.TaskQueueHelper.TaskMatcher;
import google.registry.util.CloudTasksUtils;
import java.io.IOException;
import org.joda.time.DateTime;
import org.joda.time.Duration;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;
@@ -40,6 +43,8 @@ class GenerateSpec11ReportActionTest extends BeamActionTestBase {
AppEngineExtension.builder().withDatastoreAndCloudSql().withTaskQueue().build();
private final FakeClock clock = new FakeClock(DateTime.parse("2018-06-11T12:23:56Z"));
private CloudTasksHelper cloudTasksHelper = new CloudTasksHelper(clock);
private CloudTasksUtils cloudTasksUtils = cloudTasksHelper.getTestCloudTasksUtils();
private GenerateSpec11ReportAction action;
@Test
@@ -56,13 +61,14 @@ class GenerateSpec11ReportActionTest extends BeamActionTestBase {
true,
clock,
response,
dataflow);
dataflow,
cloudTasksUtils);
when(launch.execute()).thenThrow(new IOException("Dataflow failure"));
action.run();
assertThat(response.getStatus()).isEqualTo(SC_INTERNAL_SERVER_ERROR);
assertThat(response.getContentType()).isEqualTo(MediaType.PLAIN_TEXT_UTF_8);
assertThat(response.getPayload()).contains("Dataflow failure");
assertNoTasksEnqueued("beam-reporting");
cloudTasksHelper.assertNoTasksEnqueued("beam-reporting");
}
@Test
@@ -79,18 +85,24 @@ class GenerateSpec11ReportActionTest extends BeamActionTestBase {
true,
clock,
response,
dataflow);
dataflow,
cloudTasksUtils);
action.run();
assertThat(response.getStatus()).isEqualTo(SC_OK);
assertThat(response.getContentType()).isEqualTo(MediaType.PLAIN_TEXT_UTF_8);
assertThat(response.getPayload()).isEqualTo("Launched Spec11 pipeline: jobid");
TaskMatcher matcher =
cloudTasksHelper.assertTasksEnqueued(
"beam-reporting",
new TaskMatcher()
.url("/_dr/task/publishSpec11")
.method("POST")
.method(HttpMethod.POST)
.param("jobId", "jobid")
.param("date", "2018-06-11");
assertTasksEnqueued("beam-reporting", matcher);
.param("date", "2018-06-11")
.scheduleTime(
clock
.nowUtc()
.plus(Duration.standardMinutes(ReportingModule.ENQUEUE_DELAY_MINUTES))));
}
@Test
@@ -107,11 +119,12 @@ class GenerateSpec11ReportActionTest extends BeamActionTestBase {
false,
clock,
response,
dataflow);
dataflow,
cloudTasksUtils);
action.run();
assertThat(response.getStatus()).isEqualTo(SC_OK);
assertThat(response.getContentType()).isEqualTo(MediaType.PLAIN_TEXT_UTF_8);
assertThat(response.getPayload()).isEqualTo("Launched Spec11 pipeline: jobid");
assertNoTasksEnqueued("beam-reporting");
cloudTasksHelper.assertNoTasksEnqueued("beam-reporting");
}
}


@@ -24,6 +24,7 @@ import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import com.google.common.net.HostAndPort;
import com.google.common.util.concurrent.SimpleTimeLimiter;
import google.registry.ui.server.registrar.RegistrarSettingsAction;
import google.registry.util.UrlChecker;
import java.net.MalformedURLException;
import java.net.URL;
@@ -96,6 +97,7 @@ public final class TestServer {
/** Starts the HTTP server in a new thread and returns once it's online. */
public void start() {
try {
RegistrarSettingsAction.setIsInTestDriverToTrue();
server.start();
} catch (Exception e) {
throwIfUnchecked(e);
@@ -128,14 +130,16 @@ public final class TestServer {
/** Stops the HTTP server. */
public void stop() {
try {
Void unusedReturnValue = SimpleTimeLimiter.create(newCachedThreadPool())
.callWithTimeout(
() -> {
server.stop();
return null;
},
SHUTDOWN_TIMEOUT_MS,
TimeUnit.MILLISECONDS);
Void unusedReturnValue =
SimpleTimeLimiter.create(newCachedThreadPool())
.callWithTimeout(
() -> {
server.stop();
RegistrarSettingsAction.setIsInTestDriverToFalse();
return null;
},
SHUTDOWN_TIMEOUT_MS,
TimeUnit.MILLISECONDS);
} catch (Exception e) {
throwIfUnchecked(e);
throw new RuntimeException(e);


@@ -39,6 +39,8 @@ import com.google.common.collect.Multimaps;
import com.google.common.net.HttpHeaders;
import com.google.common.net.MediaType;
import com.google.common.truth.Truth8;
import com.google.protobuf.Timestamp;
import com.google.protobuf.util.Timestamps;
import google.registry.model.ImmutableObject;
import google.registry.util.CloudTasksUtils;
import google.registry.util.Retrier;
@@ -59,6 +61,7 @@ import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;
import java.util.function.Predicate;
import javax.annotation.Nonnull;
import org.joda.time.DateTime;
/**
* Static utility functions for testing task queues.
@@ -91,13 +94,22 @@ public class CloudTasksHelper implements Serializable {
private static final String PROJECT_ID = "test-project";
private static final String LOCATION_ID = "test-location";
private final Retrier retrier = new Retrier(new FakeSleeper(new FakeClock()), 1);
private final int instanceId = nextInstanceId.getAndIncrement();
private final CloudTasksUtils cloudTasksUtils =
new CloudTasksUtils(retrier, PROJECT_ID, LOCATION_ID, new FakeCloudTasksClient());
private final CloudTasksUtils cloudTasksUtils;
public CloudTasksHelper(FakeClock clock) {
this.cloudTasksUtils =
new CloudTasksUtils(
new Retrier(new FakeSleeper(clock), 1),
clock,
PROJECT_ID,
LOCATION_ID,
new FakeCloudTasksClient());
testTasks.put(instanceId, Multimaps.synchronizedListMultimap(LinkedListMultimap.create()));
}
public CloudTasksHelper() {
testTasks.put(instanceId, Multimaps.synchronizedListMultimap(LinkedListMultimap.create()));
this(new FakeClock());
}
public CloudTasksUtils getTestCloudTasksUtils() {
@@ -195,6 +207,7 @@ public class CloudTasksHelper implements Serializable {
// tests.
HttpMethod method = HttpMethod.POST;
String url;
Timestamp scheduleTime;
Multimap<String, String> headers = ArrayListMultimap.create();
Multimap<String, String> params = ArrayListMultimap.create();
@@ -216,6 +229,7 @@ public class CloudTasksHelper implements Serializable {
Ascii.toLowerCase(task.getAppEngineHttpRequest().getAppEngineRouting().getService());
method = task.getAppEngineHttpRequest().getHttpMethod();
url = uri.getPath();
scheduleTime = task.getScheduleTime();
ImmutableMultimap.Builder<String, String> headerBuilder = new ImmutableMultimap.Builder<>();
task.getAppEngineHttpRequest()
.getHeadersMap()
@@ -229,7 +243,7 @@ public class CloudTasksHelper implements Serializable {
ImmutableMultimap.Builder<String, String> paramBuilder = new ImmutableMultimap.Builder<>();
// Note that UriParameters.parse() does not throw an IAE on a bad query string (e.g. one
// where parameters are not properly URL-encoded); it always does a best-effort parse.
if (method == HttpMethod.GET) {
if (method == HttpMethod.GET && uri.getQuery() != null) {
paramBuilder.putAll(UriParameters.parse(uri.getQuery()));
} else if (method == HttpMethod.POST && !task.getAppEngineHttpRequest().getBody().isEmpty()) {
assertThat(
@@ -251,6 +265,7 @@ public class CloudTasksHelper implements Serializable {
builder.put("url", url);
builder.put("headers", headers);
builder.put("params", params);
builder.put("scheduleTime", scheduleTime);
return Maps.filterValues(builder, not(in(asList(null, "", Collections.EMPTY_MAP))));
}
}
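The `uri.getQuery() != null` guard added above matters because `java.net.URI` returns null for a URL with no query string. A minimal sketch of that defensive parse (the `parseQuery` helper is hypothetical, not part of the codebase):

```java
import java.net.URI;
import java.util.LinkedHashMap;
import java.util.Map;

class QueryGuardDemo {
  // Hypothetical helper mirroring the null-query guard above: URI.getQuery()
  // returns null when the URL has no '?' part, so check before parsing.
  static Map<String, String> parseQuery(URI uri) {
    Map<String, String> params = new LinkedHashMap<>();
    String query = uri.getQuery();
    if (query == null) {
      return params; // e.g. a GET task URL enqueued without any parameters
    }
    for (String pair : query.split("&")) {
      String[] kv = pair.split("=", 2);
      params.put(kv[0], kv.length > 1 ? kv[1] : "");
    }
    return params;
  }

  public static void main(String[] args) {
    System.out.println(parseQuery(URI.create("/task"))); // prints {}
    System.out.println(parseQuery(URI.create("/task?tld=lol"))); // prints {tld=lol}
  }
}
```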
@@ -293,6 +308,15 @@ public class CloudTasksHelper implements Serializable {
return this;
}
public TaskMatcher scheduleTime(Timestamp scheduleTime) {
expected.scheduleTime = scheduleTime;
return this;
}
public TaskMatcher scheduleTime(DateTime scheduleTime) {
return scheduleTime(Timestamps.fromMillis(scheduleTime.getMillis()));
}
public TaskMatcher param(String key, String value) {
checkNotNull(value, "Test error: A param can never have a null value, so don't assert it");
expected.params.put(key, value);
@@ -316,6 +340,8 @@ public class CloudTasksHelper implements Serializable {
*
* <p>Match fails if any headers or params expected on the TaskMatcher are not found on the
* Task. Note that the inverse is not true (i.e. there may be extra headers on the Task).
*
* <p>If unset, the schedule time is Timestamp.getDefaultInstance() on the actual task and null
* on the matcher.
*/
@Override
public boolean test(@Nonnull Task task) {
@@ -324,6 +350,8 @@ public class CloudTasksHelper implements Serializable {
&& (expected.url == null || Objects.equals(expected.url, actual.url))
&& (expected.method == null || Objects.equals(expected.method, actual.method))
&& (expected.service == null || Objects.equals(expected.service, actual.service))
&& (expected.scheduleTime == null
|| Objects.equals(expected.scheduleTime, actual.scheduleTime))
&& containsEntries(actual.params, expected.params)
&& containsEntries(actual.headers, expected.headers);
}
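The test() predicate above treats every unset (null) expectation as a wildcard, so a TaskMatcher that never calls scheduleTime() still matches tasks with any schedule time. That pattern in isolation (with hypothetical names):

```java
import java.util.Objects;

class WildcardFieldDemo {
  // A null expectation matches any actual value, mirroring the checks above,
  // e.g. expected.scheduleTime == null || Objects.equals(expected..., actual...).
  static boolean fieldMatches(Object expected, Object actual) {
    return expected == null || Objects.equals(expected, actual);
  }

  public static void main(String[] args) {
    System.out.println(fieldMatches(null, "POST")); // prints true (unset = wildcard)
    System.out.println(fieldMatches("GET", "POST")); // prints false
  }
}
```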


@@ -14,18 +14,27 @@
package google.registry.testing;
import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static com.google.common.truth.Truth.assertThat;
import static com.google.common.truth.Truth.assertWithMessage;
import static google.registry.model.ImmutableObjectSubject.assertAboutImmutableObjects;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.ofyTm;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import static org.junit.Assert.fail;
import com.google.common.base.Ascii;
import com.google.common.base.Suppliers;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Streams;
import com.google.common.flogger.FluentLogger;
import com.googlecode.objectify.Key;
import google.registry.model.EntityClasses;
import google.registry.model.ImmutableObject;
import google.registry.model.domain.DomainBase;
import google.registry.model.ofy.CommitLogBucket;
import google.registry.model.ofy.ReplayQueue;
import google.registry.model.ofy.TransactionInfo;
@@ -33,6 +42,7 @@ import google.registry.model.replay.DatastoreEntity;
import google.registry.model.replay.ReplicateToDatastoreAction;
import google.registry.model.replay.SqlEntity;
import google.registry.persistence.VKey;
import google.registry.persistence.transaction.JpaEntityCoverageExtension;
import google.registry.persistence.transaction.JpaTransactionManagerImpl;
import google.registry.persistence.transaction.Transaction;
import google.registry.persistence.transaction.Transaction.Delete;
@@ -41,9 +51,15 @@ import google.registry.persistence.transaction.Transaction.Update;
import google.registry.persistence.transaction.TransactionEntity;
import google.registry.util.RequestStatusChecker;
import java.io.IOException;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Optional;
import javax.annotation.Nullable;
import org.junit.jupiter.api.TestTemplate;
import org.junit.jupiter.api.extension.AfterEachCallback;
import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.ExtensionContext;
@@ -56,28 +72,74 @@ import org.mockito.Mockito;
* that extension are also replayed. If AppEngineExtension is not used,
* JpaTransactionManagerExtension must be, and this extension should be ordered _after_
* JpaTransactionManagerExtension so that writes to SQL work.
*
* <p>If the "compare" flag is set in the constructor, this will also compare all touched objects in
* both databases after performing the replay.
*/
public class ReplayExtension implements BeforeEachCallback, AfterEachCallback {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static ImmutableSet<String> NON_REPLICATED_TYPES =
ImmutableSet.of(
"PremiumList",
"PremiumListRevision",
"PremiumListEntry",
"ReservedList",
"RdeRevision",
"ServerSecret",
"SignedMarkRevocationList",
"ClaimsListShard",
"TmchCrl",
"EppResourceIndex",
"ForeignKeyIndex",
"ForeignKeyHostIndex",
"ForeignKeyContactIndex",
"ForeignKeyDomainIndex");
// Entity classes to be ignored during the final database comparison. Note that this is just a
// mash-up of Datastore and SQL class names, used for filtering both sets. We could split them
// out, but there is plenty of overlap and no name collisions so it doesn't matter very much.
private static ImmutableSet<String> IGNORED_ENTITIES =
Streams.concat(
ImmutableSet.of(
// These entities *should* be comparable, but this isn't working yet so exclude
// them so we can tackle them independently.
"GracePeriod",
"GracePeriodHistory",
"HistoryEntry",
"DomainHistory",
"ContactHistory",
"HostHistory",
"DomainDsDataHistory",
"DelegationSignerData",
"DomainTransactionRecord",
// These entities are legitimately not comparable.
"ClaimsEntry",
"ClaimsList",
"CommitLogBucket",
"CommitLogManifest",
"CommitLogMutation",
"PremiumEntry",
"ReservedListEntry")
.stream(),
NON_REPLICATED_TYPES.stream())
.collect(toImmutableSet());
FakeClock clock;
boolean compare;
boolean replayed = false;
boolean inOfyContext;
InjectExtension injectExtension = new InjectExtension();
@Nullable ReplicateToDatastoreAction sqlToDsReplicator;
List<DomainBase> expectedUpdates = new ArrayList<>();
boolean enableDomainTimestampChecks;
boolean enableDatabaseCompare = true;
private ReplayExtension(
FakeClock clock, boolean compare, @Nullable ReplicateToDatastoreAction sqlToDsReplicator) {
private ReplayExtension(FakeClock clock, @Nullable ReplicateToDatastoreAction sqlToDsReplicator) {
this.clock = clock;
this.compare = compare;
this.sqlToDsReplicator = sqlToDsReplicator;
}
public static ReplayExtension createWithCompare(FakeClock clock) {
return new ReplayExtension(clock, true, null);
return new ReplayExtension(clock, null);
}
// This allows us to disable the replay tests from an environment variable in specific
@@ -100,7 +162,6 @@ public class ReplayExtension implements BeforeEachCallback, AfterEachCallback {
if (replayTestsEnabled()) {
return new ReplayExtension(
clock,
true,
new ReplicateToDatastoreAction(
clock, Mockito.mock(RequestStatusChecker.class), new FakeResponse()));
} else {
@@ -108,8 +169,37 @@ public class ReplayExtension implements BeforeEachCallback, AfterEachCallback {
}
}
/**
* Enable checking of domain timestamps during replay.
*
* <p>This was added to facilitate testing of a very specific bug wherein create/update
* auto-timestamps serialized to the SQL -> DS Transaction table had different values from those
* actually stored in SQL.
*
* <p>In order to use this, you also need to use expectUpdateFor() to record the expected state
* of a DomainBase object at a given point in time.
*/
public void enableDomainTimestampChecks() {
enableDomainTimestampChecks = true;
}
/**
* If we're doing domain timestamp checks, add the current state of a domain to check against.
*
* <p>A null argument is a placeholder to deal with b/217952766. Basically it allows us to ignore
* one particular state in the sequence (where the timestamp is not what we expect it to be).
*/
public void expectUpdateFor(@Nullable DomainBase domain) {
expectedUpdates.add(domain);
}
@Override
public void beforeEach(ExtensionContext context) {
Optional<Method> elem = context.getTestMethod();
if (elem.isPresent() && elem.get().isAnnotationPresent(NoDatabaseCompare.class)) {
enableDatabaseCompare = false;
}
// Use a single bucket to expose timestamp inversion problems. This typically happens when
// a test with this extension rolls back the fake clock in the setup method, creating inverted
// timestamps with the canned data preloaded by AppEngineExtension. The solution is to move
@@ -142,23 +232,6 @@ public class ReplayExtension implements BeforeEachCallback, AfterEachCallback {
}
}
private static ImmutableSet<String> NON_REPLICATED_TYPES =
ImmutableSet.of(
"PremiumList",
"PremiumListRevision",
"PremiumListEntry",
"ReservedList",
"RdeRevision",
"ServerSecret",
"SignedMarkRevocationList",
"ClaimsListShard",
"TmchCrl",
"EppResourceIndex",
"ForeignKeyIndex",
"ForeignKeyHostIndex",
"ForeignKeyContactIndex",
"ForeignKeyDomainIndex");
public void replay() {
if (!replayed) {
if (inOfyContext) {
@@ -183,34 +256,32 @@ public class ReplayExtension implements BeforeEachCallback, AfterEachCallback {
ImmutableMap<Key<?>, Object> changes = ReplayQueue.replay();
// Compare JPA to OFY, if requested.
if (compare) {
for (ImmutableMap.Entry<Key<?>, Object> entry : changes.entrySet()) {
// Don't verify non-replicated types.
if (NON_REPLICATED_TYPES.contains(entry.getKey().getKind())) {
continue;
}
// Since the object may have changed in datastore by the time we're doing the replay, we
// have to compare the current value in SQL (which we just mutated) against the value that
// we originally would have persisted (that being the object in the entry).
VKey<?> vkey = VKey.from(entry.getKey());
jpaTm()
.transact(
() -> {
Optional<?> jpaValue = jpaTm().loadByKeyIfPresent(vkey);
if (entry.getValue().equals(TransactionInfo.Delete.SENTINEL)) {
assertThat(jpaValue.isPresent()).isFalse();
} else {
ImmutableObject immutJpaObject = (ImmutableObject) jpaValue.get();
assertAboutImmutableObjects().that(immutJpaObject).hasCorrectHashValue();
assertAboutImmutableObjects()
.that(immutJpaObject)
.isEqualAcrossDatabases(
(ImmutableObject)
((DatastoreEntity) entry.getValue()).toSqlEntity().get());
}
});
for (ImmutableMap.Entry<Key<?>, Object> entry : changes.entrySet()) {
// Don't verify non-replicated types.
if (NON_REPLICATED_TYPES.contains(entry.getKey().getKind())) {
continue;
}
// Since the object may have changed in datastore by the time we're doing the replay, we
// have to compare the current value in SQL (which we just mutated) against the value that
// we originally would have persisted (that being the object in the entry).
VKey<?> vkey = VKey.from(entry.getKey());
jpaTm()
.transact(
() -> {
Optional<?> jpaValue = jpaTm().loadByKeyIfPresent(vkey);
if (entry.getValue().equals(TransactionInfo.Delete.SENTINEL)) {
assertThat(jpaValue.isPresent()).isFalse();
} else {
ImmutableObject immutJpaObject = (ImmutableObject) jpaValue.get();
assertAboutImmutableObjects().that(immutJpaObject).hasCorrectHashValue();
assertAboutImmutableObjects()
.that(immutJpaObject)
.isEqualAcrossDatabases(
(ImmutableObject)
((DatastoreEntity) entry.getValue()).toSqlEntity().get());
}
});
}
}
@@ -221,15 +292,18 @@ public class ReplayExtension implements BeforeEachCallback, AfterEachCallback {
List<TransactionEntity> transactionBatch;
do {
transactionBatch = sqlToDsReplicator.getTransactionBatch();
transactionBatch = sqlToDsReplicator.getTransactionBatchAtSnapshot();
for (TransactionEntity txn : transactionBatch) {
sqlToDsReplicator.applyTransaction(txn);
if (compare) {
ofyTm().transact(() -> compareSqlTransaction(txn));
}
ReplicateToDatastoreAction.applyTransaction(txn);
ofyTm().transact(() -> compareSqlTransaction(txn));
clock.advanceOneMilli();
}
} while (!transactionBatch.isEmpty());
// Now that everything has been replayed, compare the databases.
if (enableDatabaseCompare) {
compareDatabases();
}
}
/** Verifies that replaying the SQL transaction created the same entities in Datastore. */
@@ -253,6 +327,21 @@ public class ReplayExtension implements BeforeEachCallback, AfterEachCallback {
assertAboutImmutableObjects()
.that(fromDatastore)
.isEqualAcrossDatabases(fromTransactionEntity);
// Check DomainBase timestamps if appropriate.
if (enableDomainTimestampChecks && fromTransactionEntity instanceof DomainBase) {
DomainBase expectedDomain = expectedUpdates.remove(0);
// Just skip it if the expectedDomain is null.
if (expectedDomain == null) {
continue;
}
DomainBase domainEntity = (DomainBase) fromTransactionEntity;
assertThat(domainEntity.getCreationTime()).isEqualTo(expectedDomain.getCreationTime());
assertThat(domainEntity.getUpdateTimestamp())
.isEqualTo(expectedDomain.getUpdateTimestamp());
}
} else {
Delete delete = (Delete) mutation;
VKey<?> key = delete.getKey();
@@ -262,4 +351,84 @@ public class ReplayExtension implements BeforeEachCallback, AfterEachCallback {
}
}
}
/** Compares the final state of both databases after replay is complete. */
private void compareDatabases() {
boolean gotDiffs = false;
// Build a map containing all of the SQL entities indexed by their key.
HashMap<Object, Object> sqlEntities = new HashMap<>();
for (Class<?> cls : JpaEntityCoverageExtension.ALL_JPA_ENTITIES) {
if (IGNORED_ENTITIES.contains(cls.getSimpleName())) {
continue;
}
jpaTm()
.transact(
() -> jpaTm().loadAllOfStream(cls).forEach(e -> sqlEntities.put(getSqlKey(e), e)));
}
for (Class<? extends ImmutableObject> cls : EntityClasses.ALL_CLASSES) {
if (IGNORED_ENTITIES.contains(cls.getSimpleName())) {
continue;
}
for (ImmutableObject entity : auditedOfy().load().type(cls).list()) {
// Find the entity in SQL and verify that it's the same.
Key<?> ofyKey = Key.create(entity);
Object sqlKey = VKey.from(ofyKey).getSqlKey();
ImmutableObject sqlEntity = (ImmutableObject) sqlEntities.get(sqlKey);
Optional<SqlEntity> expectedSqlEntity = ((DatastoreEntity) entity).toSqlEntity();
if (expectedSqlEntity.isPresent()) {
// Check for null just so we get a better error message.
if (sqlEntity == null) {
logger.atSevere().log("Entity %s is in Datastore but not in SQL.", ofyKey);
gotDiffs = true;
} else {
try {
assertAboutImmutableObjects()
.that((ImmutableObject) expectedSqlEntity.get())
.isEqualAcrossDatabases(sqlEntity);
} catch (AssertionError e) {
// Show the message but swallow the stack trace (we'll get that from the fail() at
// the end of the comparison).
logger.atSevere().log("For entity %s: %s", ofyKey, e.getMessage());
gotDiffs = true;
}
}
} else {
logger.atInfo().log("Datastore entity has no sql representation for %s", ofyKey);
}
sqlEntities.remove(sqlKey);
}
}
// Report any objects in the SQL set that we didn't remove while iterating over the Datastore
// objects.
if (!sqlEntities.isEmpty()) {
for (Object item : sqlEntities.values()) {
logger.atSevere().log(
"Entity of %s found in SQL but not in datastore: %s", item.getClass().getName(), item);
}
gotDiffs = true;
}
if (gotDiffs) {
fail("There were differences between the final SQL and Datastore contents.");
}
}
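Stripped of the Nomulus entity machinery, compareDatabases() above is a symmetric-difference walk over two keyed snapshots: index one side by key, consume matches while diffing values, then report whatever is left on either side. A self-contained sketch of that bookkeeping, with entities simplified to strings (the class and method names here are illustrative, not part of the codebase):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Simplified model of the two-sided comparison performed by compareDatabases(). */
public class DbCompareSketch {

  /** Returns human-readable diffs between two keyed snapshots; an empty list means equivalent. */
  static List<String> diff(Map<String, String> sql, Map<String, String> datastore) {
    List<String> diffs = new ArrayList<>();
    // Copy the SQL side so we can remove entries as we match them.
    Map<String, String> remaining = new HashMap<>(sql);
    for (Map.Entry<String, String> entry : datastore.entrySet()) {
      String sqlValue = remaining.remove(entry.getKey());
      if (sqlValue == null) {
        diffs.add("Entity " + entry.getKey() + " is in Datastore but not in SQL.");
      } else if (!sqlValue.equals(entry.getValue())) {
        diffs.add("For entity " + entry.getKey() + ": values differ");
      }
    }
    // Anything not removed during the walk exists only on the SQL side.
    for (String key : remaining.keySet()) {
      diffs.add("Entity found in SQL but not in datastore: " + key);
    }
    return diffs;
  }
}
```

The one-pass removal is what makes the leftover check at the end cheap: after the Datastore iteration, `remaining` is exactly the SQL-only set.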
private static Object getSqlKey(Object entity) {
return jpaTm()
.getEntityManager()
.getEntityManagerFactory()
.getPersistenceUnitUtil()
.getIdentifier(entity);
}
/** Annotation to use for test methods where we don't want to do a database comparison yet. */
@Target({METHOD})
@Retention(RUNTIME)
@TestTemplate
public @interface NoDatabaseCompare {}
}


@@ -138,13 +138,26 @@ class NordnUploadActionTest {
void test_convertTasksToCsv() {
List<TaskHandle> tasks =
ImmutableList.of(
makeTaskHandle("task1", "example", "csvLine1", "lordn-sunrise"),
makeTaskHandle("task2", "example", "csvLine2", "lordn-sunrise"),
makeTaskHandle("task1", "example", "csvLine1", "lordn-sunrise"),
makeTaskHandle("task3", "example", "ending", "lordn-sunrise"));
assertThat(NordnUploadAction.convertTasksToCsv(tasks, clock.nowUtc(), "col1,col2"))
.isEqualTo("1,2010-05-01T10:11:12.000Z,3\ncol1,col2\ncsvLine1\ncsvLine2\nending\n");
}
@MockitoSettings(strictness = Strictness.LENIENT)
@Test
void test_convertTasksToCsv_dedupesDuplicates() {
List<TaskHandle> tasks =
ImmutableList.of(
makeTaskHandle("task2", "example", "csvLine2", "lordn-sunrise"),
makeTaskHandle("task1", "example", "csvLine1", "lordn-sunrise"),
makeTaskHandle("task3", "example", "ending", "lordn-sunrise"),
makeTaskHandle("task1", "example", "csvLine1", "lordn-sunrise"));
assertThat(NordnUploadAction.convertTasksToCsv(tasks, clock.nowUtc(), "col1,col2"))
.isEqualTo("1,2010-05-01T10:11:12.000Z,3\ncol1,col2\ncsvLine1\ncsvLine2\nending\n");
}
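The new test above pins down that convertTasksToCsv() both de-duplicates repeated task payloads and emits them in sorted order (note the input order csvLine2, csvLine1 versus the asserted output). A minimal sketch of that behavior using a sorted set; the helper name and signature are hypothetical stand-ins for the real method:

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

/** Sketch of the sorted, de-duplicated CSV assembly that the tests above exercise. */
public class CsvDedupSketch {

  // Hypothetical stand-in for NordnUploadAction.convertTasksToCsv: payload lines are
  // de-duplicated and sorted, and the header row carries the count of unique lines.
  static String toCsv(List<String> payloads, String timestamp, String columns) {
    Set<String> unique = new TreeSet<>(payloads); // drops duplicates, sorts lexically
    StringBuilder csv =
        new StringBuilder("1,").append(timestamp).append(',').append(unique.size()).append('\n');
    csv.append(columns).append('\n');
    unique.forEach(line -> csv.append(line).append('\n'));
    return csv.toString();
  }
}
```

Because the set is sorted, a duplicate submitted later in the queue cannot perturb the output, which is what lets both tests assert the identical CSV string.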
@MockitoSettings(strictness = Strictness.LENIENT)
@Test
void test_convertTasksToCsv_doesntFailOnEmptyTasks() {


@@ -50,7 +50,7 @@ class CreateDomainCommandTest extends EppToolCommandTestCase<CreateDomainCommand
"--admins=crr-admin",
"--techs=crr-tech",
"--password=2fooBAR",
"--ds_records=1 2 3 abcd,4 5 6 EF01",
"--ds_records=1 2 2 abcd,4 5 1 EF01",
"--ds_records=60485 5 2 D4B7D520E7BB5F0F67674A0CCEB1E3E0614B93C4F9E99B8383F6A1E4469DA50A",
"example.tld");
eppVerifier.verifySent("domain_create_complete.xml");
@@ -66,7 +66,7 @@ class CreateDomainCommandTest extends EppToolCommandTestCase<CreateDomainCommand
"--admins=crr-admin",
"--techs=crr-tech",
"--password=2fooBAR",
"--ds_records=1 2 3 abcd,4 5 6 EF01",
"--ds_records=1 2 2 abcd,4 5 1 EF01",
"--ds_records=60485 5 2 D4B7D520E7BB5F0F67674A0CCEB1E3E0614B93C4F9E99B8383F6A1E4469DA50A",
"example.tld");
eppVerifier.verifySent("domain_create_complete.xml");
@@ -314,6 +314,38 @@ class CreateDomainCommandTest extends EppToolCommandTestCase<CreateDomainCommand
assertThat(thrown).hasMessageThat().contains("--period");
}
@Test
void testFailure_invalidDigestType() {
IllegalArgumentException thrown =
assertThrows(
IllegalArgumentException.class,
() ->
runCommandForced(
"--client=NewRegistrar",
"--registrant=crr-admin",
"--admins=crr-admin",
"--techs=crr-tech",
"--ds_records=1 2 3 abcd",
"example.tld"));
assertThat(thrown).hasMessageThat().isEqualTo("DS record uses an unrecognized digest type: 3");
}
@Test
void testFailure_invalidAlgorithm() {
IllegalArgumentException thrown =
assertThrows(
IllegalArgumentException.class,
() ->
runCommandForced(
"--client=NewRegistrar",
"--registrant=crr-admin",
"--admins=crr-admin",
"--techs=crr-tech",
"--ds_records=1 999 4 abcd",
"example.tld"));
assertThat(thrown).hasMessageThat().isEqualTo("DS record uses an unrecognized algorithm: 999");
}
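The two failure tests above expect DS-record input to be rejected with a specific message when the digest type or algorithm number is unrecognized. A hedged sketch of such validation follows; the sets of accepted numbers are assumptions for illustration, not the command's actual tables (the success tests only establish that algorithm 2 and 5 and digest types 1 and 2 are accepted):

```java
import java.util.Set;

/** Hypothetical sketch of the DS-record field validation the failure tests assert on. */
public class DsRecordCheckSketch {

  // Assumed accepted values. Digest types here are SHA-1 (1), SHA-256 (2), and SHA-384 (4);
  // the algorithms are a sample of assigned DNSSEC numbers. The real tables may differ.
  private static final Set<Integer> DIGEST_TYPES = Set.of(1, 2, 4);
  private static final Set<Integer> ALGORITHMS = Set.of(2, 3, 5, 6, 7, 8, 10, 13, 14, 15, 16);

  static void validate(int algorithm, int digestType) {
    if (!ALGORITHMS.contains(algorithm)) {
      throw new IllegalArgumentException("DS record uses an unrecognized algorithm: " + algorithm);
    }
    if (!DIGEST_TYPES.contains(digestType)) {
      throw new IllegalArgumentException(
          "DS record uses an unrecognized digest type: " + digestType);
    }
  }
}
```

Under these assumptions, `1 2 3 abcd` (algorithm 2, digest type 3) fails on the digest type and `1 999 4 abcd` fails on the algorithm, matching the two asserted messages.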
@Test
void testFailure_dsRecordsNot4Parts() {
IllegalArgumentException thrown =


@@ -40,6 +40,8 @@ public class EncryptEscrowDepositCommandTest
EscrowDepositEncryptor res = new EscrowDepositEncryptor();
res.rdeReceiverKey = () -> new FakeKeyringModule().get().getRdeReceiverKey();
res.rdeSigningKey = () -> new FakeKeyringModule().get().getRdeSigningKey();
res.brdaReceiverKey = () -> new FakeKeyringModule().get().getBrdaReceiverKey();
res.brdaSigningKey = () -> new FakeKeyringModule().get().getBrdaSigningKey();
return res;
}
@@ -61,4 +63,34 @@ public class EncryptEscrowDepositCommandTest
"lol_2010-10-17_full_S1_R0.sig",
"lol.pub");
}
@Test
void testSuccess_brda() throws Exception {
Path depositFile = tmpDir.resolve("deposit.xml");
Files.write(depositXml.read(), depositFile.toFile());
runCommand(
"--mode=THIN", "--tld=lol", "--input=" + depositFile, "--outdir=" + tmpDir.toString());
assertThat(tmpDir.toFile().list())
.asList()
.containsExactly(
"deposit.xml",
"lol_2010-10-17_thin_S1_R0.ryde",
"lol_2010-10-17_thin_S1_R0.sig",
"lol.pub");
}
@Test
void testSuccess_revision() throws Exception {
Path depositFile = tmpDir.resolve("deposit.xml");
Files.write(depositXml.read(), depositFile.toFile());
runCommand(
"--revision=1", "--tld=lol", "--input=" + depositFile, "--outdir=" + tmpDir.toString());
assertThat(tmpDir.toFile().list())
.asList()
.containsExactly(
"deposit.xml",
"lol_2010-10-17_full_S1_R1.ryde",
"lol_2010-10-17_full_S1_R1.sig",
"lol.pub");
}
}


@@ -52,7 +52,7 @@ class LoadTestCommandTest extends CommandTestCase<LoadTestCommand> {
.put("hostInfos", 1)
.put("domainInfos", 1)
.put("contactInfos", 1)
.put("runSeconds", 4600)
.put("runSeconds", 9200)
.build();
verify(connection)
.sendPostRequest(


@@ -86,12 +86,12 @@ class UpdateDomainCommandTest extends EppToolCommandTestCase<UpdateDomainCommand
"--add_admins=crr-admin2",
"--add_techs=crr-tech2",
"--add_statuses=serverDeleteProhibited",
"--add_ds_records=1 2 3 abcd,4 5 6 EF01",
"--add_ds_records=1 2 2 abcd,4 5 1 EF01",
"--remove_nameservers=ns3.zdns.google,ns4.zdns.google",
"--remove_admins=crr-admin1",
"--remove_techs=crr-tech1",
"--remove_statuses=serverHold",
"--remove_ds_records=7 8 9 12ab,6 5 4 34CD",
"--remove_ds_records=7 8 1 12ab,6 5 4 34CD",
"--registrant=crr-admin",
"--password=2fooBAR",
"example.tld");
@@ -106,12 +106,12 @@ class UpdateDomainCommandTest extends EppToolCommandTestCase<UpdateDomainCommand
"--add_admins=crr-admin2",
"--add_techs=crr-tech2",
"--add_statuses=serverDeleteProhibited",
"--add_ds_records=1 2 3 abcd,4 5 6 EF01",
"--add_ds_records=1 2 2 abcd,4 5 1 EF01",
"--remove_nameservers=ns[3-4].zdns.google",
"--remove_admins=crr-admin1",
"--remove_techs=crr-tech1",
"--remove_statuses=serverHold",
"--remove_ds_records=7 8 9 12ab,6 5 4 34CD",
"--remove_ds_records=7 8 1 12ab,6 5 4 34CD",
"--registrant=crr-admin",
"--password=2fooBAR",
"example.tld");
@@ -128,12 +128,12 @@ class UpdateDomainCommandTest extends EppToolCommandTestCase<UpdateDomainCommand
"--add_admins=crr-admin2",
"--add_techs=crr-tech2",
"--add_statuses=serverDeleteProhibited",
"--add_ds_records=1 2 3 abcd,4 5 6 EF01",
"--add_ds_records=1 2 2 abcd,4 5 1 EF01",
"--remove_nameservers=ns[3-4].zdns.google",
"--remove_admins=crr-admin1",
"--remove_techs=crr-tech1",
"--remove_statuses=serverHold",
"--remove_ds_records=7 8 9 12ab,6 5 4 34CD",
"--remove_ds_records=7 8 1 12ab,6 5 4 34CD",
"--registrant=crr-admin",
"--password=2fooBAR",
"example.tld",
@@ -186,7 +186,7 @@ class UpdateDomainCommandTest extends EppToolCommandTestCase<UpdateDomainCommand
"--add_admins=crr-admin2",
"--add_techs=crr-tech2",
"--add_statuses=serverDeleteProhibited",
"--add_ds_records=1 2 3 abcd,4 5 6 EF01",
"--add_ds_records=1 2 2 abcd,4 5 1 EF01",
"example.tld");
eppVerifier.verifySent("domain_update_add.xml");
}
@@ -199,7 +199,7 @@ class UpdateDomainCommandTest extends EppToolCommandTestCase<UpdateDomainCommand
"--remove_admins=crr-admin1",
"--remove_techs=crr-tech1",
"--remove_statuses=serverHold",
"--remove_ds_records=7 8 9 12ab,6 5 4 34CD",
"--remove_ds_records=7 8 1 12ab,6 5 4 34CD",
"example.tld");
eppVerifier.verifySent("domain_update_remove.xml");
}
@@ -277,8 +277,7 @@ class UpdateDomainCommandTest extends EppToolCommandTestCase<UpdateDomainCommand
@TestOfyAndSql
void testSuccess_setDsRecords() throws Exception {
runCommandForced(
"--client=NewRegistrar", "--ds_records=1 2 3 abcd,4 5 6 EF01", "example.tld");
runCommandForced("--client=NewRegistrar", "--ds_records=1 2 2 abcd,4 5 1 EF01", "example.tld");
eppVerifier.verifySent("domain_update_set_ds_records.xml");
}
@@ -286,7 +285,7 @@ class UpdateDomainCommandTest extends EppToolCommandTestCase<UpdateDomainCommand
void testSuccess_setDsRecords_withUnneededClear() throws Exception {
runCommandForced(
"--client=NewRegistrar",
"--ds_records=1 2 3 abcd,4 5 6 EF01",
"--ds_records=1 2 2 abcd,4 5 1 EF01",
"--clear_ds_records",
"example.tld");
eppVerifier.verifySent("domain_update_set_ds_records.xml");
@@ -630,6 +629,28 @@ class UpdateDomainCommandTest extends EppToolCommandTestCase<UpdateDomainCommand
+ "you cannot use the add_statuses and remove_statuses flags.");
}
@TestOfyAndSql
void testFailure_invalidDsRecordAlgorithm() {
IllegalArgumentException thrown =
assertThrows(
IllegalArgumentException.class,
() ->
runCommandForced(
"--client=NewRegistrar", "--add_ds_records=1 299 2 abcd", "example.tld"));
assertThat(thrown).hasMessageThat().isEqualTo("DS record uses an unrecognized algorithm: 299");
}
@TestOfyAndSql
void testFailure_invalidDsRecordDigestType() {
IllegalArgumentException thrown =
assertThrows(
IllegalArgumentException.class,
() ->
runCommandForced(
"--client=NewRegistrar", "--add_ds_records=1 2 3 abcd", "example.tld"));
assertThat(thrown).hasMessageThat().isEqualTo("DS record uses an unrecognized digest type: 3");
}
@TestOfyAndSql
void testFailure_provideDsRecordsAndAddDsRecords() {
IllegalArgumentException thrown =
@@ -638,8 +659,8 @@ class UpdateDomainCommandTest extends EppToolCommandTestCase<UpdateDomainCommand
() ->
runCommandForced(
"--client=NewRegistrar",
"--add_ds_records=1 2 3 abcd",
"--ds_records=4 5 6 EF01",
"--add_ds_records=1 2 2 abcd",
"--ds_records=4 5 1 EF01",
"example.tld"));
assertThat(thrown)
.hasMessageThat()
@@ -656,8 +677,8 @@ class UpdateDomainCommandTest extends EppToolCommandTestCase<UpdateDomainCommand
() ->
runCommandForced(
"--client=NewRegistrar",
"--remove_ds_records=7 8 9 12ab",
"--ds_records=4 5 6 EF01",
"--remove_ds_records=7 8 1 12ab",
"--ds_records=4 5 1 EF01",
"example.tld"));
assertThat(thrown)
.hasMessageThat()
@@ -674,7 +695,7 @@ class UpdateDomainCommandTest extends EppToolCommandTestCase<UpdateDomainCommand
() ->
runCommandForced(
"--client=NewRegistrar",
"--add_ds_records=1 2 3 abcd",
"--add_ds_records=1 2 2 abcd",
"--clear_ds_records",
"example.tld"));
assertThat(thrown)
@@ -692,7 +713,7 @@ class UpdateDomainCommandTest extends EppToolCommandTestCase<UpdateDomainCommand
() ->
runCommandForced(
"--client=NewRegistrar",
"--remove_ds_records=7 8 9 12ab",
"--remove_ds_records=7 8 1 12ab",
"--clear_ds_records",
"example.tld"));
assertThat(thrown)


@@ -18,13 +18,12 @@ import static com.google.common.truth.Truth.assertThat;
import static google.registry.model.ImmutableObjectSubject.assertAboutImmutableObjects;
import static google.registry.testing.DatabaseHelper.loadRegistrar;
import static google.registry.testing.DatabaseHelper.persistResource;
import static google.registry.testing.TaskQueueHelper.assertNoTasksEnqueued;
import static google.registry.testing.TaskQueueHelper.assertTasksEnqueued;
import static google.registry.testing.TestDataHelper.loadFile;
import static org.mockito.ArgumentMatchers.anyInt;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
@@ -37,9 +36,9 @@ import google.registry.model.registrar.Registrar;
import google.registry.request.auth.AuthenticatedRegistrarAccessor;
import google.registry.request.auth.AuthenticatedRegistrarAccessor.Role;
import google.registry.testing.CertificateSamples;
import google.registry.testing.CloudTasksHelper.TaskMatcher;
import google.registry.testing.DualDatabaseTest;
import google.registry.testing.SystemPropertyExtension;
import google.registry.testing.TaskQueueHelper.TaskMatcher;
import google.registry.testing.TestOfyAndSql;
import google.registry.util.CidrAddressBlock;
import google.registry.util.EmailMessage;
@@ -56,6 +55,7 @@ import org.mockito.ArgumentCaptor;
@DualDatabaseTest
class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
@RegisterExtension
final SystemPropertyExtension systemPropertyExtension = new SystemPropertyExtension();
@@ -70,10 +70,12 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
ArgumentCaptor<EmailMessage> contentCaptor = ArgumentCaptor.forClass(EmailMessage.class);
verify(emailService).sendEmail(contentCaptor.capture());
assertThat(contentCaptor.getValue().body()).isEqualTo(expectedEmailBody);
assertTasksEnqueued("sheet", new TaskMatcher()
.url(SyncRegistrarsSheetAction.PATH)
.method("GET")
.header("Host", "backend.hostname"));
cloudTasksHelper.assertTasksEnqueued(
"sheet",
new TaskMatcher()
.url(SyncRegistrarsSheetAction.PATH)
.service("Backend")
.method(HttpMethod.GET));
assertMetric(CLIENT_ID, "update", "[OWNER]", "SUCCESS");
}
@@ -86,7 +88,7 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
"results", ImmutableList.of(),
"message",
"One email address (etphonehome@example.com) cannot be used for multiple contacts");
assertNoTasksEnqueued("sheet");
cloudTasksHelper.assertNoTasksEnqueued("sheet");
assertMetric(CLIENT_ID, "update", "[OWNER]", "ERROR: ContactRequirementException");
}
@@ -103,7 +105,7 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
"status", "ERROR",
"results", ImmutableList.of(),
"message", "TestUserId doesn't have access to registrar TheRegistrar");
assertNoTasksEnqueued("sheet");
cloudTasksHelper.assertNoTasksEnqueued("sheet");
assertMetric(CLIENT_ID, "read", "[]", "ERROR: ForbiddenException");
}
@@ -134,7 +136,7 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
"field", "lastUpdateTime",
"results", ImmutableList.of(),
"message", "This field is required.");
assertNoTasksEnqueued("sheet");
cloudTasksHelper.assertNoTasksEnqueued("sheet");
assertMetric(CLIENT_ID, "update", "[OWNER]", "ERROR: FormFieldException");
}
@@ -153,7 +155,7 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
"field", "emailAddress",
"results", ImmutableList.of(),
"message", "This field is required.");
assertNoTasksEnqueued("sheet");
cloudTasksHelper.assertNoTasksEnqueued("sheet");
assertMetric(CLIENT_ID, "update", "[OWNER]", "ERROR: FormFieldException");
}
@@ -171,7 +173,7 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
"status", "ERROR",
"results", ImmutableList.of(),
"message", "TestUserId doesn't have access to registrar TheRegistrar");
assertNoTasksEnqueued("sheet");
cloudTasksHelper.assertNoTasksEnqueued("sheet");
assertMetric(CLIENT_ID, "update", "[]", "ERROR: ForbiddenException");
}
@@ -190,7 +192,7 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
"field", "emailAddress",
"results", ImmutableList.of(),
"message", "Please enter a valid email address.");
assertNoTasksEnqueued("sheet");
cloudTasksHelper.assertNoTasksEnqueued("sheet");
assertMetric(CLIENT_ID, "update", "[OWNER]", "ERROR: FormFieldException");
}
@@ -209,7 +211,7 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
"field", "lastUpdateTime",
"results", ImmutableList.of(),
"message", "Not a valid ISO date-time string.");
assertNoTasksEnqueued("sheet");
cloudTasksHelper.assertNoTasksEnqueued("sheet");
assertMetric(CLIENT_ID, "update", "[OWNER]", "ERROR: FormFieldException");
}
@@ -228,7 +230,7 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
"field", "emailAddress",
"results", ImmutableList.of(),
"message", "Please only use ASCII-US characters.");
assertNoTasksEnqueued("sheet");
cloudTasksHelper.assertNoTasksEnqueued("sheet");
assertMetric(CLIENT_ID, "update", "[OWNER]", "ERROR: FormFieldException");
}
@@ -265,9 +267,16 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
Map<String, Object> response =
action.handleJsonRequest(
ImmutableMap.of(
"op", "update",
"id", CLIENT_ID,
"args", setter.apply(registrar.asBuilder(), newValue).build().toJsonMap()));
"op",
"update",
"id",
CLIENT_ID,
"args",
setter
.apply(registrar.asBuilder(), newValue)
.setLastUpdateTime(registrar.getLastUpdateTime())
.build()
.toJsonMap()));
Registrar updatedRegistrar = loadRegistrar(CLIENT_ID);
persistResource(registrar);
@@ -318,7 +327,11 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
"id",
CLIENT_ID,
"args",
setter.apply(registrar.asBuilder(), newValue).build().toJsonMap()));
setter
.apply(registrar.asBuilder(), newValue)
.setLastUpdateTime(registrar.getLastUpdateTime())
.build()
.toJsonMap()));
Registrar updatedRegistrar = loadRegistrar(CLIENT_ID);
persistResource(registrar);
@@ -405,7 +418,7 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
"Certificate validity period is too long; it must be less than or equal to 398"
+ " days.");
assertMetric(CLIENT_ID, "update", "[OWNER]", "ERROR: IllegalArgumentException");
assertNoTasksEnqueued("sheet");
cloudTasksHelper.assertNoTasksEnqueued("sheet");
}
@TestOfyAndSql
@@ -430,7 +443,7 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
"Certificate is expired.\nCertificate validity period is too long; it must be less"
+ " than or equal to 398 days.");
assertMetric(CLIENT_ID, "update", "[OWNER]", "ERROR: IllegalArgumentException");
assertNoTasksEnqueued("sheet");
cloudTasksHelper.assertNoTasksEnqueued("sheet");
}
@TestOfyAndSql
@@ -473,7 +486,7 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
assertThat(response).containsEntry("status", "SUCCESS");
assertMetric(CLIENT_ID, "update", "[OWNER]", "SUCCESS");
assertNoTasksEnqueued("sheet");
cloudTasksHelper.assertNoTasksEnqueued("sheet");
}
@TestOfyAndSql
@@ -498,7 +511,7 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
"Certificate validity period is too long; it must be less than or equal to 398"
+ " days.");
assertMetric(CLIENT_ID, "update", "[OWNER]", "ERROR: IllegalArgumentException");
assertNoTasksEnqueued("sheet");
cloudTasksHelper.assertNoTasksEnqueued("sheet");
}
@TestOfyAndSql
@@ -523,7 +536,7 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
"Certificate is expired.\nCertificate validity period is too long; it must be less"
+ " than or equal to 398 days.");
assertMetric(CLIENT_ID, "update", "[OWNER]", "ERROR: IllegalArgumentException");
assertNoTasksEnqueued("sheet");
cloudTasksHelper.assertNoTasksEnqueued("sheet");
}
@TestOfyAndSql
@@ -555,7 +568,7 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
"results", ImmutableList.of(),
"message", "Cannot add allowed TLDs if there is no WHOIS abuse contact set.");
assertMetric(CLIENT_ID, "update", "[ADMIN]", "ERROR: IllegalArgumentException");
assertNoTasksEnqueued("sheet");
cloudTasksHelper.assertNoTasksEnqueued("sheet");
}
@TestOfyAndSql
@@ -577,7 +590,7 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
"results", ImmutableList.of(),
"message", "TLDs do not exist: invalidtld");
assertMetric(CLIENT_ID, "update", "[ADMIN]", "ERROR: IllegalArgumentException");
assertNoTasksEnqueued("sheet");
cloudTasksHelper.assertNoTasksEnqueued("sheet");
}
@TestOfyAndSql
@@ -599,7 +612,7 @@ class RegistrarSettingsActionTest extends RegistrarSettingsActionTestCase {
"results", ImmutableList.of(),
"message", "Can't remove allowed TLDs using the console.");
assertMetric(CLIENT_ID, "update", "[ADMIN]", "ERROR: ForbiddenException");
assertNoTasksEnqueued("sheet");
cloudTasksHelper.assertNoTasksEnqueued("sheet");
}
@TestOfyAndSql


@@ -46,6 +46,7 @@ import google.registry.request.auth.AuthResult;
import google.registry.request.auth.AuthenticatedRegistrarAccessor;
import google.registry.request.auth.UserAuthInfo;
import google.registry.testing.AppEngineExtension;
import google.registry.testing.CloudTasksHelper;
import google.registry.testing.FakeClock;
import google.registry.testing.InjectExtension;
import google.registry.ui.server.SendEmailUtils;
@@ -97,6 +98,8 @@ public abstract class RegistrarSettingsActionTestCase {
RegistrarContact techContact;
CloudTasksHelper cloudTasksHelper = new CloudTasksHelper();
@BeforeEach
public void beforeEachRegistrarSettingsActionTestCase() throws Exception {
// Registrar "TheRegistrar" has access to TLD "currenttld" but not to "newtld".
@@ -132,6 +135,8 @@ public abstract class RegistrarSettingsActionTestCase {
2048,
ImmutableSet.of("secp256r1", "secp384r1"),
clock);
action.cloudTasksUtils = cloudTasksHelper.getTestCloudTasksUtils();
inject.setStaticField(Ofy.class, "clock", clock);
when(req.getMethod()).thenReturn("POST");
when(rsp.getWriter()).thenReturn(new PrintWriter(writer));


@@ -0,0 +1,77 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<epp xmlns="urn:ietf:params:xml:ns:epp-1.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<command>
<create>
<domain:create
xmlns:domain="urn:ietf:params:xml:ns:domain-1.0">
<domain:name>example.tld</domain:name>
<domain:period unit="y">2</domain:period>
<domain:ns>
<domain:hostObj>ns1.example.net</domain:hostObj>
<domain:hostObj>ns2.example.net</domain:hostObj>
</domain:ns>
<domain:registrant>jd1234</domain:registrant>
<domain:contact type="admin">sh8013</domain:contact>
<domain:contact type="tech">sh8013</domain:contact>
<domain:authInfo>
<domain:pw>2fooBAR</domain:pw>
</domain:authInfo>
</domain:create>
</create>
<extension>
<secDNS:create
xmlns:secDNS="urn:ietf:params:xml:ns:secDNS-1.1">
<secDNS:dsData>
<secDNS:keyTag>12345</secDNS:keyTag>
<secDNS:alg>99</secDNS:alg>
<secDNS:digestType>1</secDNS:digestType>
<secDNS:digest>49FD46E6C4B45C55D4AC</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>
<secDNS:keyTag>12346</secDNS:keyTag>
<secDNS:alg>3</secDNS:alg>
<secDNS:digestType>1</secDNS:digestType>
<secDNS:digest>49FD46E6C4B45C55D4AC</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>
<secDNS:keyTag>12347</secDNS:keyTag>
<secDNS:alg>3</secDNS:alg>
<secDNS:digestType>1</secDNS:digestType>
<secDNS:digest>49FD46E6C4B45C55D4AC</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>
<secDNS:keyTag>12348</secDNS:keyTag>
<secDNS:alg>3</secDNS:alg>
<secDNS:digestType>1</secDNS:digestType>
<secDNS:digest>49FD46E6C4B45C55D4AC</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>
<secDNS:keyTag>12349</secDNS:keyTag>
<secDNS:alg>98</secDNS:alg>
<secDNS:digestType>1</secDNS:digestType>
<secDNS:digest>49FD46E6C4B45C55D4AC</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>
<secDNS:keyTag>12350</secDNS:keyTag>
<secDNS:alg>3</secDNS:alg>
<secDNS:digestType>1</secDNS:digestType>
<secDNS:digest>49FD46E6C4B45C55D4AC</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>
<secDNS:keyTag>12351</secDNS:keyTag>
<secDNS:alg>3</secDNS:alg>
<secDNS:digestType>1</secDNS:digestType>
<secDNS:digest>49FD46E6C4B45C55D4AC</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>
<secDNS:keyTag>12352</secDNS:keyTag>
<secDNS:alg>3</secDNS:alg>
<secDNS:digestType>1</secDNS:digestType>
<secDNS:digest>49FD46E6C4B45C55D4AC</secDNS:digest>
</secDNS:dsData>
</secDNS:create>
</extension>
<clTRID>ABC-12345</clTRID>
</command>
</epp>


@@ -0,0 +1,77 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<epp xmlns="urn:ietf:params:xml:ns:epp-1.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<command>
<create>
<domain:create
xmlns:domain="urn:ietf:params:xml:ns:domain-1.0">
<domain:name>example.tld</domain:name>
<domain:period unit="y">2</domain:period>
<domain:ns>
<domain:hostObj>ns1.example.net</domain:hostObj>
<domain:hostObj>ns2.example.net</domain:hostObj>
</domain:ns>
<domain:registrant>jd1234</domain:registrant>
<domain:contact type="admin">sh8013</domain:contact>
<domain:contact type="tech">sh8013</domain:contact>
<domain:authInfo>
<domain:pw>2fooBAR</domain:pw>
</domain:authInfo>
</domain:create>
</create>
<extension>
<secDNS:create
xmlns:secDNS="urn:ietf:params:xml:ns:secDNS-1.1">
<secDNS:dsData>
<secDNS:keyTag>12345</secDNS:keyTag>
<secDNS:alg>3</secDNS:alg>
<secDNS:digestType>100</secDNS:digestType>
<secDNS:digest>49FD46E6C4B45C55D4AC</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>
<secDNS:keyTag>12346</secDNS:keyTag>
<secDNS:alg>3</secDNS:alg>
<secDNS:digestType>3</secDNS:digestType>
<secDNS:digest>49FD46E6C4B45C55D4AC</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>
<secDNS:keyTag>12347</secDNS:keyTag>
<secDNS:alg>3</secDNS:alg>
<secDNS:digestType>1</secDNS:digestType>
<secDNS:digest>49FD46E6C4B45C55D4AC</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>
<secDNS:keyTag>12348</secDNS:keyTag>
<secDNS:alg>3</secDNS:alg>
<secDNS:digestType>1</secDNS:digestType>
<secDNS:digest>49FD46E6C4B45C55D4AC</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>
<secDNS:keyTag>12349</secDNS:keyTag>
<secDNS:alg>3</secDNS:alg>
<secDNS:digestType>1</secDNS:digestType>
<secDNS:digest>49FD46E6C4B45C55D4AC</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>
<secDNS:keyTag>12350</secDNS:keyTag>
<secDNS:alg>3</secDNS:alg>
<secDNS:digestType>1</secDNS:digestType>
<secDNS:digest>49FD46E6C4B45C55D4AC</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>
<secDNS:keyTag>12351</secDNS:keyTag>
<secDNS:alg>3</secDNS:alg>
<secDNS:digestType>1</secDNS:digestType>
<secDNS:digest>49FD46E6C4B45C55D4AC</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>
<secDNS:keyTag>12352</secDNS:keyTag>
<secDNS:alg>3</secDNS:alg>
<secDNS:digestType>1</secDNS:digestType>
<secDNS:digest>49FD46E6C4B45C55D4AC</secDNS:digest>
</secDNS:dsData>
</secDNS:create>
</extension>
<clTRID>ABC-12345</clTRID>
</command>
</epp>
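The two EPP test payloads above differ only in which `secDNS:dsData` field carries an out-of-range value: the first uses unsupported algorithm numbers (99, 98), the second unsupported digest types. A toy sketch of that kind of DS-record field validation (the allowed sets below are assumptions drawn from common IANA registry entries, not Nomulus's actual policy):

```java
import java.util.Set;

public class DsDataValidator {
  // Illustrative allowed values; real policy comes from the IANA DNSSEC
  // algorithm and DS digest-type registries and server configuration.
  private static final Set<Integer> ALLOWED_ALGORITHMS =
      Set.of(3, 5, 7, 8, 10, 13, 14, 15, 16);
  private static final Set<Integer> ALLOWED_DIGEST_TYPES = Set.of(1, 2, 4);

  static boolean isValid(int keyTag, int alg, int digestType, String digest) {
    return keyTag >= 0
        && keyTag <= 65535 // keyTag is an unsigned 16-bit value
        && ALLOWED_ALGORITHMS.contains(alg)
        && ALLOWED_DIGEST_TYPES.contains(digestType)
        && digest.matches("[0-9A-Fa-f]+")
        && digest.length() % 2 == 0; // hex string must encode whole bytes
  }

  public static void main(String[] args) {
    // Mirrors the payloads above: alg 99 is rejected, alg 3 / digestType 1
    // is accepted, digestType 100 is rejected.
    System.out.println(isValid(12345, 99, 1, "49FD46E6C4B45C55D4AC"));
    System.out.println(isValid(12347, 3, 1, "49FD46E6C4B45C55D4AC"));
    System.out.println(isValid(12345, 3, 100, "49FD46E6C4B45C55D4AC"));
  }
}
```

Each test file flips exactly one field per `dsData` element, so a failing create can be attributed to a single validation rule.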


@@ -39,6 +39,7 @@ PATH CLASS
/_dr/task/resaveAllEppResources ResaveAllEppResourcesAction GET n INTERNAL,API APP ADMIN
/_dr/task/resaveEntity ResaveEntityAction POST n INTERNAL,API APP ADMIN
/_dr/task/sendExpiringCertificateNotificationEmail SendExpiringCertificateNotificationEmailAction GET n INTERNAL,API APP ADMIN
/_dr/task/syncDatastoreToSqlSnapshot SyncDatastoreToSqlSnapshotAction POST n INTERNAL,API APP ADMIN
/_dr/task/syncGroupMembers SyncGroupMembersAction POST n INTERNAL,API APP ADMIN
/_dr/task/syncRegistrarsSheet SyncRegistrarsSheetAction POST n INTERNAL,API APP ADMIN
/_dr/task/tmchCrl TmchCrlAction POST y INTERNAL,API APP ADMIN

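The routing table above maps each URL path to an action class along with its HTTP method, auth methods, and minimum user level. A toy sketch of that dispatch pattern (the `Route` record and `dispatch` method here are illustrative, not Nomulus's actual request framework, which derives routes from `@Action` annotations):

```java
import java.util.Map;

public class RouterDemo {
  // Illustrative route-table entry: action class name, allowed method,
  // and whether the caller must be an admin.
  record Route(String actionClass, String method, boolean adminOnly) {}

  static final Map<String, Route> ROUTES = Map.of(
      "/_dr/task/syncGroupMembers",
          new Route("SyncGroupMembersAction", "POST", true),
      "/_dr/task/syncRegistrarsSheet",
          new Route("SyncRegistrarsSheetAction", "POST", true));

  // Returns the handling class name, or an HTTP error code as a string.
  static String dispatch(String path, String method) {
    Route route = ROUTES.get(path);
    if (route == null) {
      return "404";
    }
    if (!route.method().equals(method)) {
      return "405";
    }
    return route.actionClass();
  }

  public static void main(String[] args) {
    System.out.println(dispatch("/_dr/task/syncGroupMembers", "POST"));
    System.out.println(dispatch("/_dr/task/syncGroupMembers", "GET"));
    System.out.println(dispatch("/no/such/path", "GET"));
  }
}
```

Keeping the table in a golden file, as the diff does, turns any accidental route change into a visible test failure.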

@@ -25,13 +25,13 @@
<secDNS:dsData>
<secDNS:keyTag>1</secDNS:keyTag>
<secDNS:alg>2</secDNS:alg>
-<secDNS:digestType>3</secDNS:digestType>
+<secDNS:digestType>2</secDNS:digestType>
<secDNS:digest>ABCD</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>
<secDNS:keyTag>4</secDNS:keyTag>
<secDNS:alg>5</secDNS:alg>
-<secDNS:digestType>6</secDNS:digestType>
+<secDNS:digestType>1</secDNS:digestType>
<secDNS:digest>EF01</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>


@@ -22,13 +22,13 @@
<secDNS:dsData>
<secDNS:keyTag>1</secDNS:keyTag>
<secDNS:alg>2</secDNS:alg>
-<secDNS:digestType>3</secDNS:digestType>
+<secDNS:digestType>2</secDNS:digestType>
<secDNS:digest>ABCD</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>
<secDNS:keyTag>4</secDNS:keyTag>
<secDNS:alg>5</secDNS:alg>
-<secDNS:digestType>6</secDNS:digestType>
+<secDNS:digestType>1</secDNS:digestType>
<secDNS:digest>EF01</secDNS:digest>
</secDNS:dsData>
</secDNS:add>


@@ -37,7 +37,7 @@
<secDNS:dsData>
<secDNS:keyTag>7</secDNS:keyTag>
<secDNS:alg>8</secDNS:alg>
-<secDNS:digestType>9</secDNS:digestType>
+<secDNS:digestType>1</secDNS:digestType>
<secDNS:digest>12AB</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>
@@ -51,13 +51,13 @@
<secDNS:dsData>
<secDNS:keyTag>1</secDNS:keyTag>
<secDNS:alg>2</secDNS:alg>
-<secDNS:digestType>3</secDNS:digestType>
+<secDNS:digestType>2</secDNS:digestType>
<secDNS:digest>ABCD</secDNS:digest>
</secDNS:dsData>
<secDNS:dsData>
<secDNS:keyTag>4</secDNS:keyTag>
<secDNS:alg>5</secDNS:alg>
-<secDNS:digestType>6</secDNS:digestType>
+<secDNS:digestType>1</secDNS:digestType>
<secDNS:digest>EF01</secDNS:digest>
</secDNS:dsData>
</secDNS:add>

Some files were not shown because too many files have changed in this diff.