mirror of https://github.com/google/nomulus synced 2026-01-30 17:42:20 +00:00

Compare commits


68 Commits

Author SHA1 Message Date
Weimin Yu
757803e985 Revise host.inet_addresses query to use gin index (#1550)
* Revise host.inet_addresses query to use gin index
2022-03-09 23:32:17 -05:00
Weimin Yu
4b1f4f96e3 Reorganize new schema changes (#1551)
* Reorganize new schema changes

Reorganized the new schema changes so that each Flyway script updates a
single table.

Each flyway script is executed in a single database transaction so that
the script can be rolled back in one shot. It acquires a shared lock on
all tables touched by the script. This is deadlock-prone because in a
busy database, there may be user queries that attempt to lock the same
set of tables, but in different order. By limiting each script to one
table, we avoid the problem.

We should add a presubmit check to enforce this rule.

All changes have been deployed to Sandbox out-of-band. When doing so,
we changed all CREATE INDEX statements to CREATE INDEX IF NOT EXISTS.

Future deployments should be able to proceed normally.
2022-03-09 20:47:24 -05:00
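The presubmit check proposed above could be a small scanner that extracts every table name referenced by a script's DDL and fails when more than one distinct table appears. A hedged sketch (hypothetical code, not from this repo; the regex covers only common DDL forms):

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Hypothetical presubmit sketch: fail if one Flyway script touches multiple tables. */
public class OneTablePerScriptCheck {
  // Matches the table name in common DDL forms; a real check would parse more carefully.
  private static final Pattern TABLE_REF =
      Pattern.compile(
          "(?i)(?:ALTER\\s+TABLE|CREATE\\s+TABLE(?:\\s+IF\\s+NOT\\s+EXISTS)?"
              + "|CREATE\\s+(?:UNIQUE\\s+)?INDEX(?:\\s+IF\\s+NOT\\s+EXISTS)?\\s+\\S+\\s+ON)"
              + "\\s+\"?(\\w+)\"?");

  public static void main(String[] args) throws Exception {
    String sql = Files.readString(Path.of(args[0]));
    Set<String> tables = new HashSet<>();
    Matcher matcher = TABLE_REF.matcher(sql);
    while (matcher.find()) {
      tables.add(matcher.group(1));
    }
    if (tables.size() > 1) {
      System.err.println("Flyway script touches more than one table: " + tables);
      System.exit(1);
    }
  }
}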
Ben McIlwain
bd49e8b238 Add anti-deadlock instructions to DB update README (#1552) 2022-03-09 18:50:15 -05:00
Weimin Yu
6249a8e118 Revise Host index on inet_addresses (#1549)
* Revise Host index on inet_addresses

The index on the 'inet_addresses' array column should be of gin or gist
type, both of which index individual array elements. We use gin for now
since host updates are infrequent, and gin has better accuracy.

Since flyway script V108__... has not been deployed, we edit the file
in place instead of adding a new script.

This will be followed up with a modified query that can take advantage
of the gin index. Until then we don't expect to see a performance
improvement.

The suspected bottleneck query in the WHOIS path is:

select * from "Host" where 'non-ip-string' = any(inet_address) and
deletion_time < now();

It needs to be revised into:

select * from "Host" where array['non-ip-string'] <@ inet_address and
deletion_time < now();

The combined change reduces the query time from 90ms to 30ms in Sandbox,
and from 150ms to 40ms in production.

It is unclear if this solves all problems with WHOIS latency.
2022-03-08 14:22:01 -05:00
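A hedged way to confirm the rewrite actually engages the gin index is to EXPLAIN the containment predicate against a local copy of the schema. The sketch below uses plain JDBC; the connection details are placeholders, and the plural column name follows the commit title (the queries quoted above use the singular), so treat it as an assumption:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/** Hypothetical sketch: show the query plan for the gin-friendly containment predicate. */
public class ExplainHostQuery {
  public static void main(String[] args) throws Exception {
    try (Connection conn =
            DriverManager.getConnection("jdbc:postgresql://localhost/nomulus", "user", "pass");
        Statement st = conn.createStatement();
        ResultSet rs =
            st.executeQuery(
                "EXPLAIN SELECT * FROM \"Host\" "
                    + "WHERE array['1.2.3.4'] <@ inet_addresses AND deletion_time < now()")) {
      while (rs.next()) {
        // A "Bitmap Index Scan" line indicates the gin index is used; the old
        // '...' = any(...) form typically falls back to a sequential scan.
        System.out.println(rs.getString(1));
      }
    }
  }
}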
Ben McIlwain
9b7bb12cd1 Add deletionTime/inetAddresses indexes to Host table to support WHOIS (#1548)
* Add deletionTime/inetAddresses indexes to Host table to support WHOIS

Weimin identified these as missing, and being the cause of slowdowns in
NameserverLookupByIpCommand that we're seeing in sandbox.

This is the first of two PRs, adding just the Flyway/schema changes. The
second PR adding the Java object model changes is #1547.
2022-03-07 16:07:11 -05:00
Michael Muller
71d13bab71 Improve Transaction gap processing (#1546)
Skip multiple gaps in one pass and write the correct transaction id to
datastore.
2022-03-07 13:26:18 -05:00
Ben McIlwain
8db28b7e61 Add domainRepoId indexes to billing events (#1545)
* Add domainRepoId indexes to billing events #1544

The query analyzer identified this as a missing index on the BillingEvent table,
and I added it for recurrences and cancellations as well, as it's likely to be a
problem for them too. "Give me all the billing events associated with a given
domain by its repo ID" seems like a pretty common use case for the DB (and does
appear to be used by our invoicing pipeline).

This is the first of two PRs that makes just the DB changes. The second PR
(#1544) will add the Java code changes, and will be committed after this one is
deployed.
2022-03-07 12:21:40 -05:00
Lai Jiang
5a7dc307c5 Update to the latest LTS Node version (#1543)
2022-03-07 11:49:18 -05:00
Ben McIlwain
67278af3cb Add 9 more indexes to SQL schema (#1541)
* Add 9 more indexes to SQL schema

These indexes were identified as missing by PostgreSQL's query analyzer in our
sandbox environment (where we get enough realistic EPP traffic to identify these
deficiencies).

This is the first of two PRs -- the second PR (#1540) will be merged only after
this one is live in production. Note that this is the PR that actually modifies
the database though, so once this one is deployed we will already have the
benefit of the new indexes.
2022-03-05 17:57:36 -05:00
sarahcaseybot
1eafc983ab Check signature length in DS records (#1538)
* Check signature length in DS records

* Small fixes

* Add unit tests

* Formatting fix
2022-03-04 15:18:14 -05:00
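The core of such a check is that a DS record's digest length is fixed by its digest type (per RFC 4034 and successors: SHA-1 is 20 bytes, SHA-256 is 32, SHA-384 is 48). A hedged sketch of the idea, not the actual flow code:

import com.google.common.collect.ImmutableMap;

/** Hypothetical sketch: reject DS records whose digest length contradicts the digest type. */
public class DsDigestLengthCheck {
  // Expected digest lengths in bytes, keyed by DNSSEC digest type number.
  private static final ImmutableMap<Integer, Integer> DIGEST_LENGTHS =
      ImmutableMap.of(1, 20, 2, 32, 4, 48); // SHA-1, SHA-256, SHA-384

  static void validate(int digestType, byte[] digest) {
    Integer expected = DIGEST_LENGTHS.get(digestType);
    if (expected == null) {
      throw new IllegalArgumentException("Unsupported digest type: " + digestType);
    }
    if (digest.length != expected) {
      throw new IllegalArgumentException(
          String.format(
              "Digest type %d requires %d bytes, got %d", digestType, expected, digest.length));
    }
  }
}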
gbrodman
f1bbdc5a0b Use built-in Java URL connections instead of UrlFetchService (#1535)
- Use the standard HttpsURLConnection to write/read data
- Rewrite RdeReporter, Nordn*Action, and Marksdb classes and related
  tests to conform to the new format
- Remove FakeURLFetchService and ForwardingUrlFetchService as they weren't used
- Refactor UrlFetchException to UrlConnectionException
- Refactor UrlFetchUtils to UrlConnectionUtils

I will need to test this on Alpha. Fortunately the connections that
don't require auth (e.g. TMDB downloading) should be testable.
2022-03-04 14:16:22 -05:00
Michael Muller
b146301495 Allow replicateToDatastore to skip gaps (#1539)
* Allow replicateToDatastore to skip gaps

As it turns out, gaps in the transaction id sequence number are expected
because rollbacks do not roll back sequence numbers.

To deal with this, stop checking for gaps.

This change is not adequate in and of itself, as it is possible for a gap to
be introduced if two transactions are committed out of order of their sequence
number.  We are currently discussing several strategies to mitigate this.

* Remove println, add a record verification
2022-03-04 09:04:13 -05:00
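The gap behavior is easy to reproduce, since Postgres sequences are non-transactional: a nextval() consumed inside a rolled-back transaction is never reissued. A minimal JDBC demo (database and sequence names are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/** Hypothetical demo: a rollback leaves a permanent gap in a sequence. */
public class SequenceGapDemo {
  public static void main(String[] args) throws Exception {
    try (Connection conn =
            DriverManager.getConnection("jdbc:postgresql://localhost/test", "user", "pass");
        Statement st = conn.createStatement()) {
      conn.setAutoCommit(false);
      st.execute("SELECT nextval('txn_id_seq')"); // consumes, say, id 41
      conn.rollback(); // the sequence does NOT rewind
      try (ResultSet rs = st.executeQuery("SELECT nextval('txn_id_seq')")) {
        rs.next();
        System.out.println(rs.getLong(1)); // prints 42; id 41 remains a gap forever
      }
      conn.commit();
    }
  }
}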
Weimin Yu
437a747eae Pass stack trace to validate_datastore user (#1537)
* Pass stack trace to validate_datastore user
2022-03-03 16:10:31 -05:00
Weimin Yu
a620b37c80 Fix hanging test (#1536)
* Fix hanging test

Tests using the TestServerExtension may hang forever if an underlying
component (e.g., testcontainer for psql) fails. This may be the cause
of some kokoro runs that timed out after three hours.
2022-03-03 14:43:32 -05:00
Rachel Guan
267cbeb95b Inject CloudTasksUtils to AsyncTaskEnqueuer (#1522)
* Inject CloudTasksUtils to AsyncTaskEnqueuer

* Rebase

* Remove QUEUE_ASYNC_DELETE from AsyncTaskEnqueuer

* Refactor create()

* Remove AppEngineServiceUtils dependency from AsyncTaskEnqueuer
2022-03-02 11:31:45 -05:00
Weimin Yu
b9fcabbc36 Save db discrepancies to GCS (#1527)
* Save db discrepancies to GCS
2022-02-28 16:52:09 -05:00
Weimin Yu
4f33de10f3 Add a tools command to launch SQL validation job (#1526)
* Add a tools command to launch SQL validation job

Stop using Pipeline.run().waitUntilFinish in
ValidateDatastorePipeline. Flex-template does not support a blocking
wait in the main thread.

This PR adds a new ValidateSqlCommand that launches the pipeline and
maintains the SQL snapshot while the pipeline is running.

This PR also adds more parameters to both ValidateSqlCommand and
ValidateDatastoreCommand:
- The -c option to supply an optional incremental comparison start time
- The -r option to supply an optional release tag that is not 'live',
  e.g., nomulus-DDDDYYMM-RC00

If the manual launch option (-m) is enabled, the commands will print the
gcloud command that can launch the pipeline.

Tested with sandbox, qa and the dev project.
2022-02-28 13:14:57 -05:00
Michael Muller
d6bb83f6d3 Nullify contact fields in setContactFields (#1533)
When setting contact fields from a set of DesignatedContacts, nullify the
existing fields so they don't stick around if they're not in the new set.
2022-02-28 10:57:35 -05:00
Michael Muller
a8d3d22c5a Don't reset the update time for TLD updates (#1532)
* Don't reset the update time for TLD updates

It turns out that the reason the Registrar update timestamp isn't updated
for some of the tests is that the record is re-saved unchanged. We can
avoid this problem by not trying to update the registrar to the same value.
So in this case, if the registrar already contains the TLD we're adding, don't
try to add it.
2022-02-25 13:09:36 -05:00
Rachel Guan
fac659b520 Inject CloudTasksUtils to DomainLockUtils (#1519)
* Move enqueueDomainRelock to DomainLockUtils

* Rebase and improve PR

* Inject CloudTasksUtils to DomainLockUtils
2022-02-25 11:38:38 -05:00
gbrodman
178702ded3 Fix DTR creation in one location and clean up replay comparison (#1529)
* Fix DTR creation in one location and clean up replay comparison
2022-02-23 11:07:10 -05:00
Lai Jiang
59bca1a9ed Disable sending cert expiration emails on sandbox (#1528) 2022-02-22 14:46:27 -05:00
Michael Muller
f8198fa590 Do full database comparison during replay tests (#1524)
* Fix entity delete replication, compare db @ replay

Replay tests currently only verify that the contents of a transaction
can be successfully replicated to the other database. They do not verify that
the contents of both databases are equivalent.  As a result, we miss any
changes omitted from the transaction (as was the case with entity deletions).

This change adds a final database comparison to ReplayExtension so we can
safely say that the databases are in the same state.

This comparison is introduced in part as a unit test for the one-line fix for
replication of an "entity delete" operation (where we delete using an entity
object instead of the object's key) which so far has only affected PollMessage
deletion.  The fix is also included in this commit within
JpaTransactionManagerImpl.

* Exclude tests and entities with failing comparisons

* Get all tests to pass and fix more timestamps

Fix most of the unit tests that were broken by this change.

- Fix timestamp updates after grace period changes in DomainContent and for
  TLD changes in Registrar.
- Reenable full database comparison for most DomainCreateFlowTest's.
- Make some test entities NonReplicated so they don't break when used with
  jpaTm().delete()
- Disable checking of a few more entity types that are failing comparisons.
- Add some formatting fixes.

* Remove unnecessary "NoDatabaseCompare"

It turns out that after other fixes/elisions we no longer need these for
any tests in DomainCreateFlowTest.

* Changes for review

* Remove old "compare" flag.

* Reformatted.
2022-02-22 10:49:57 -05:00
Lai Jiang
bbac81996b Make a few quality-of-life improvements in CloudTasksUtils (#1521)
* Make a few quality-of-life improvements in CloudTasksUtils

1. Update the method names. There are too many overloaded methods and it
   is hard to figure out which one does which without checking the
   javadoc.

2. Added a method in the task matcher to specify the delay time in
   DateTime, so the caller does not need to convert it to Timestamp.

3. Remove the explicit dependency on a clock when enqueuing a task with a
   delay; the clock is now injected directly into the util instance itself.
2022-02-18 20:21:56 -05:00
Ben McIlwain
52c759d1db Disable prober data deletion cron job in prod & sandbox (#1525)
* Disable prober data deletion cron job in prod & sandbox

These cron jobs are going to unnecessarily make the database migration more
complex, and we don't need them that badly. We'll re-enable them once we've written
the new version of this action that handles Cloud SQL correctly (the current
version only does Datastore anyway).
2022-02-17 08:46:40 -08:00
Weimin Yu
453af87615 Ignore prober data when comparing databases (#1523)
* Ignore prober data when comparing databases

Completely ignore prober data when comparing Datastore and SQL.

Prober data deletions are not propagated from Datastore to SQL. It is
difficult to distinguish soft-deletes from normal updates, therefore
difficult to avoid false positives when looking for differences.
2022-02-15 12:01:20 -05:00
Ben McIlwain
d0d7515c0a Make NordnUploadAction resilient to duplicate task queue tasks (#1516)
This is necessary because the Cloud Tasks API is not transactionally enrolled,
so it's possible that multiple tasks might end up being enqueued. We need to be
able to handle them.
2022-02-14 14:59:46 -05:00
Michael Muller
2c70127573 Fix update timestamps for DomainContent types (#1517)
* Fix update timestamps for DomainContent types

We expect update timestamps to be updated whenever a containing entity is
modified and persisted, but unfortunately Hibernate doesn't seem to do this --
instead it appears to regard such an entity as unchanged.

To work around this, we explicitly reset the update timestamp whenever a
nested collection is modified in the Builder.

Note that this change only solves the problem for DomainContent.  All other
entitities containing UpdateAutoTimestamp will need to be audited and
instrumented with a similar change.

* Fix a handful of tests broken by this change

* Reformatted.
2022-02-14 11:31:03 -05:00
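The workaround pattern, sketched below with approximate names (a paraphrase of the approach, not the literal DomainContent code): whenever a builder mutates a nested collection, it explicitly resets the UpdateAutoTimestamp so Hibernate sees a dirty field and writes a fresh value on persist.

// Inside a hypothetical DomainContent.Builder (names approximate):
public Builder addGracePeriod(GracePeriod gracePeriod) {
  getInstance().gracePeriods.add(gracePeriod);
  // Hibernate regards the entity as unchanged when only a nested collection is
  // mutated, so force the timestamp field dirty; a null-valued UpdateAutoTimestamp
  // is repopulated with the transaction time on save.
  getInstance().updateTimestamp = UpdateAutoTimestamp.create(null);
  return this;
}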
Rachel Guan
d3fc6063c9 Use CloudTasksUtils to enqueue in RegistrarSettingsAction (#1467)
* Use CloudTasksUtils to enqueue

* Add CloudTasksUtilsModule to FrontendComponent

* Fix Uri query issue

* Remove header and check service in matcher

* Use a ThreadLocal boolean in TestServer to determine enqueueing

* Extract enqueuing and email sending from tm().transact()
2022-02-10 11:16:28 -05:00
Weimin Yu
82802ec85c Compare datastore to sql action (#1507)
* Add action to DB comparison pipeline

Add a backend Action in the Nomulus server that launches the pipeline for
comparing datastore (secondary) with Cloud SQL (primary).

* Save progress

* Revert test changes

* Add pipeline launching
2022-02-10 10:43:36 -05:00
Rachel Guan
e53594a626 Fix protobuf-java-util dependency (#1518) 2022-02-09 14:11:09 -05:00
Rachel Guan
e6577e3f23 Use CloudTasksUtils to enqueue task in IcannReportingStagingAction (#1489)
* Use CloudTasksUtils to enqueue task

* Use schedule time helper and add schedule time comparison
2022-02-09 12:33:56 -05:00
Michael Muller
c9da36be9f Fix create/update timestamp replay problems (#1515)
* Fix create/update timestamp replay problems

When CreateAutoTimestamp and UpdateAutoTimestamp are inserted into a
Transaction, their values are not populated in the same way as when they are
stored in the course of an SQL commit.  This results in different timestamp
values between SQL and datastore during the SQL -> DS replay.

Fix this by providing these values from the JPA transaction time when we're
doing transaction serialization.

This change also removes the initialization of the Ofy clock in
ExpandRecurringBillingEventsActionTest.  It's not necessary as the
ReplayExtension already takes care of this and doing it after the
ReplayExtension as we were breaks a test now that the update timestamps are
correct.
2022-02-09 08:48:51 -05:00
Rachel Guan
2ccae00dae Remove ReportingUtils and use CloudTasksUtil to enqueue tasks in GenerateInvoicesAction and GenerateSpec11ReportAction (#1491)
* Remove ReportingUtils and use CloudTasksUtils to enqueue

* Use schedule time helper to enqueue and update schedule time comparison

* Fix comment, indentation in gradle file and improve time comparison
2022-02-08 17:48:47 -05:00
Rachel Guan
00c8b6a76d Change from TaskQueueUtils to CloudTasksUtils in LoadTestAction (#1468)
* Change from TaskQueueUtils to CloudTasksUtils in LoadTestAction

* Put X_CSRF_TOKEN in task headers

* Fix schedule time and gradle issue

* Remove TaskQueue constant dependency

* Double run seconds

* Add comment for X_CSRF_TOKEN
2022-02-08 17:44:24 -05:00
Lai Jiang
09dca28122 Make EscrowDepositEncryptor work with BRDA deposits (#1512)
Also make it possible to specify a revision number.

2022-02-07 12:40:00 -05:00
Weimin Yu
b412bdef9f Fix flaky RdeStagingActionDatastoreTest (#1514)
* Fix flaky RdeStagingActionDatastoreTest

Fixed the most common cause that makes one method flaky (a Clock and
timestamp problem). Added a TODO to rethink the test case.

Also added notes on tasks potentially enqueued multiple times.
2022-02-04 10:40:52 -05:00
Rachel Guan
62e5de8a3a Add support for delay of duration when scheduling a task (#1493)
* Add support for delay by duration when scheduling task

* Fix comments

* Add test for negative duration

* Change delay parameter type to duration
2022-02-03 22:25:39 -05:00
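Cloud Tasks has no native "relative delay" field, so supporting a Joda Duration amounts to adding the delay to the injected clock and storing the absolute ETA as a protobuf Timestamp. A hedged sketch (helper name hypothetical):

import com.google.cloud.tasks.v2.Task;
import com.google.protobuf.util.Timestamps;
import google.registry.util.Clock;
import org.joda.time.Duration;

/** Hypothetical helper: convert a relative delay into the task's absolute schedule time. */
public class TaskDelays {
  static Task withDelay(Task.Builder task, Clock clock, Duration delay) {
    // Mirrors the negative-duration test mentioned above.
    if (delay.isShorterThan(Duration.ZERO)) {
      throw new IllegalArgumentException("Negative delay: " + delay);
    }
    return task.setScheduleTime(
            Timestamps.fromMillis(clock.nowUtc().plus(delay).getMillis()))
        .build();
  }
}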
Lai Jiang
fa9b784c5c Correctly delete all stopped versions except for the most recent 3 (#1511)
The gcloud command sorts in unexpected ways when a custom format is used.
Here we instead rely on the Linux sort and head commands to sort the
versions list.

2022-02-03 16:04:58 -05:00
Weimin Yu
e2bd72a74e Add an index on Host.host_name column (#1510)
* Add an index on Host.host_name column

This field is queried during host creation and needs an index to speed
up the query.

Since Hibernate does not explicitly refer to indexes, we can change the
code and schema in one PR.
2022-02-03 15:57:15 -05:00
gbrodman
28d41488b1 Use the built-in replicaJpaTm() in RDAP (#1506)
* Use the built-in replicaJpaTm() in RDAP

This includes a test for the replica-simulating transaction manager and
removal of any replica-specific code in RDAP tests, because it's
unnecessary due to the existing tests.
2022-02-03 11:14:26 -05:00
Weimin Yu
1107b9f2e3 Count duplicates when comparing Databases (#1509)
* Count duplicates when comparing Databases

Cursors may have duplicates in Datastore if imported across projects.
Count them instead of throwing.
2022-02-03 10:59:03 -05:00
Lai Jiang
9624b483d4 Copy the latest revision of BRDA during upload (#1508)
The revision was hardcoded to 0, which caused problems when we needed to
re-run BRDA.

2022-02-02 21:54:42 -05:00
Rachel Guan
365937f22d Change from TaskQueueUtils to CloudTasksUtils in RdeStaging (#1411)
* Change from TaskQueueUtils to CloudTasksUtils in RdeStaging
2022-02-01 20:41:56 -05:00
sarahcaseybot
d5db6c16bc Add DS validation to match Cloud DNS (#1487)
* Add DS validation to match Cloud DNS

* Add checks to flows

* Add some flow tests

* Add tests for DomainCreateFlow

* Add tests for UpdateDomainCommand

* Fix docs test

* Small fixes

* Remove builder from tests
2022-02-01 15:25:00 -05:00
Lai Jiang
c1ad06afd1 Allow the beam parameter in RDE standard mode (#1505)
Standard mode will determine the watermarks based on the cursors and
kick off subsequent uploading steps. In order to run both the Beam and
the Mapreduce pipeline in parallel, we need to allow setting the beam
parameter when in standard mode. This change should have been part of
https://github.com/google/nomulus/pull/1500.

2022-01-31 14:20:23 -05:00
gbrodman
b24670f33a Use the replica jpaTm in FKI and EppResource cache methods (#1503)
The cached methods are only used in situations where we don't really
care about being 100% synchronously up to date (e.g. whois), and they're
not used frequently anyway, so it's safe to use the replica in these
locations.
2022-01-28 18:05:18 -05:00
Weimin Yu
1253fa479a Release ValidateSqlPipeline as container image (#1504)
* Release ValidateSqlPipeline as container image
2022-01-28 14:57:31 -05:00
Weimin Yu
5f0dd24906 Release ValidateDatastorePipeline (#1501)
* Release ValidateDatastorePipeline
2022-01-26 13:38:19 -05:00
Ben McIlwain
e25885e25f Remove obsolete scrap commands (#1502) 2022-01-25 15:23:00 -05:00
gbrodman
cbdf4704ba Add missing @Overrides (#1499)
Not sure how this snuck through
2022-01-24 16:58:38 -05:00
Weimin Yu
207c7e7ca8 Compare migration data with SQL as primary DB (#1497)
* Compare migration data with SQL as primary DB

Add a BEAM pipeline that compares the secondary Datastore against SQL.
This is a dumb pipeline to be launched by a driver (in a followup PR).
Manually tested pipeline in sandbox.

Also updated the ValidateSqlPipeline and the snapshot finder class so
that an appropriate Datastore export is found (one that ends before the
replay checkpoint value).
2022-01-24 11:20:48 -05:00
Lai Jiang
b3a0eb6bd8 Add a cron job to run the RDE Beam pipeline in parallel with MapReduce (#1500) 2022-01-21 23:36:13 -05:00
gbrodman
c602aa6e67 Use the read-only replica for JPA invoicing (#1494)
* Use the read-only replica for JPA invoicing
2022-01-20 20:50:10 +00:00
gbrodman
c6008b65a0 Use a read-only replica SQL instance in RdapDomainSearchAction (#1495)
We can use it more places later but this can serve as a template. We
should inject the connection to the read-only replica (only created
once) to the constructor of the action, then use that instead of the
regular transaction manager.

We add a transaction manager that simulates the read-only-replica
behavior for testing purposes as well.

In addition, we set the transaction isolation level to READ COMMITTED
for this transaction manager (this is fine since we're never writing to
it). Postgres requires this for replica SQL access (it fails if we try
to use SERIALIZABLE transactions). We didn't see this with the pipelines
before since those already had transaction isolation level overrides.
2022-01-20 15:39:07 -05:00
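A hedged sketch of what pinning a replica connection to READ COMMITTED looks like with standard JPA/Hibernate properties (the persistence-unit name is a placeholder; the actual wiring lives elsewhere in the persistence layer):

import java.sql.Connection;
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

/** Hypothetical sketch: a replica EntityManagerFactory at READ COMMITTED. */
public class ReplicaEmfFactory {
  static EntityManagerFactory create() {
    Map<String, String> props = new HashMap<>();
    // Postgres hot-standby replicas reject SERIALIZABLE transactions.
    props.put(
        "hibernate.connection.isolation",
        String.valueOf(Connection.TRANSACTION_READ_COMMITTED));
    return Persistence.createEntityManagerFactory("replica", props);
  }
}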
gbrodman
eded6813ab Add a bit of documentation about the replica config (#1488) 2022-01-13 15:44:04 -05:00
Rachel Guan
bbe5c058fe Add support for empty or null params for createTask() (#1448)
* Add support for null or empty params

* Add Null or empty check in CollectionUtils

* Remove content type header for empty params in POST request
2022-01-13 12:44:41 -05:00
Weimin Yu
4b0cf576f8 CommitLog handling code should call ofyTm (#1492)
* CommitLog handling code should call ofyTm

The tm() call will return the JPA transaction manager after the switch-over to
SQL, so these calls would lose their Datastore transaction semantics.

Both actions are to be invoked after the switchover in case we have to
switch back to Datastore as primary.
2022-01-13 12:33:19 -05:00
Michael Muller
045de3889b Allow database comparison when in read-only mode (#1490)
Note: this change was actually authored by @weiminyu; I'm checking it in for
expediency.
2022-01-13 09:32:49 -05:00
Weimin Yu
68fc4cd022 Only compare recent changes in Datastore and SQL (#1485)
* Only compare recent changes in Datastore and SQL

When comparing Datastore and SQL, ignore older History and EPP resource
objects. This cuts the run time in half compared with a full comparison.
The intention is to run a full comparison before the switch-over from
Datastore to SQL, and run this incremental comparison during the
downtime.

The incremental comparison takes about 25 minutes in production.
Performance can be improved further by filtering out older billing
events (OneTime and Cancellation). However, we don't think further
optimization is worth the effort (considering that Recurring events
cannot be filtered since they are mutable but without lastUpdateTime).

Verified in Sandbox and prod with and without time filter.
2022-01-11 14:17:32 -05:00
Lai Jiang
ebe55146c3 Add a command to compare two escrow deposits (#1476)
We already have ValidateEscrowDepositCommand to check a deposit for
internal reference consistency, i.e., making sure that all contacts and
hosts referenced by domains exist in the same deposit.
Therefore to compare whether two deposits are equal we only need to make
sure that they contain the same domains and registrars, assuming they
both pass the validation. We don't compare their contents directly
because the MapReduce deposit contains all contacts and domains whereas
the Beam deposit only contains referenced ones, making a direct
comparison impossible.

2022-01-11 11:47:58 -05:00
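Given that both deposits already pass reference validation, equality reduces to set equality over the identifiers they contain. A hedged sketch of that final step (parsing and identifier extraction assumed to happen elsewhere):

import com.google.common.collect.Sets;
import java.util.Set;

/** Hypothetical sketch: two validated deposits are equivalent if they cover the same objects. */
public class DepositSetComparison {
  static boolean equivalent(
      Set<String> mapReduceDomains,
      Set<String> beamDomains,
      Set<String> mapReduceRegistrars,
      Set<String> beamRegistrars) {
    Set<String> domainDiff = Sets.symmetricDifference(mapReduceDomains, beamDomains);
    Set<String> registrarDiff = Sets.symmetricDifference(mapReduceRegistrars, beamRegistrars);
    if (!domainDiff.isEmpty() || !registrarDiff.isEmpty()) {
      System.err.printf("Mismatched domains: %s; registrars: %s%n", domainDiff, registrarDiff);
      return false;
    }
    return true;
  }
}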
gbrodman
807ddf46b9 Add replicateToDatastore cron job to prod (#1459)
No issues with this in sandbox so we should add it in prod
2022-01-10 16:38:25 -05:00
gbrodman
ff8f86090d Speed up updating of premium lists (#1482)
* Speed up updating of premium lists

There are two parts to this:
1. Don't load the premium entries in the command prompt (this isn't
necessary and we didn't display that information anyway).
2. Set a proper batch size (rather than just 1) when saving all the
premium entries. This means that we generate only one INSERT statement
rather than N statements.
2022-01-10 16:33:35 -05:00
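Point 2 is standard JPA/Hibernate batching, sketched below under the assumption that the persistence unit sets something like hibernate.jdbc.batch_size=50 (entity type and batch size are illustrative):

import java.util.List;
import javax.persistence.EntityManager;

/** Hypothetical sketch: persist entries in batches instead of one INSERT per row. */
public class BatchedSaves {
  static <T> void saveInBatches(EntityManager em, List<T> entries) {
    final int batchSize = 50; // should match hibernate.jdbc.batch_size
    em.getTransaction().begin();
    for (int i = 0; i < entries.size(); i++) {
      em.persist(entries.get(i));
      if ((i + 1) % batchSize == 0) {
        em.flush(); // send the accumulated batched INSERT
        em.clear(); // keep the persistence context from growing unboundedly
      }
    }
    em.getTransaction().commit();
  }
}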
gbrodman
5822f53e14 Allow usage of a read-only Postgres replica (#1470)
* Allow usage of a read-only Postgres replica

This adds the Dagger provider code for both the regular and the BEAM
environments, which are similar but not quite the same.

In addition, this demonstrates usage of the replica DB in the
RdePipeline. I tested this on alpha with a modified version of the
RdePipeline that attempts to write some dummy values to the database and
it failed with the expected message that one cannot write to a replica.
2022-01-07 13:21:22 -05:00
Rachel Guan
d04b3299aa Replace all existing vkey strings with vkey.stringify() (#1430)
* Resolve ResaveEntityAction related conflicts

* Replace string with existing constants

* Remove resolved TODOs related to converting ofy key strings to the new vkey string format

* Add a TODO for clean up

* Fix missing annotation
2022-01-07 12:11:15 -05:00
Lai Jiang
ceade7f954 Use the service account credential to delete unused versions (#1484) 2022-01-07 11:06:19 -05:00
Rachel Guan
1fcf63facd Use CloudTasksUtils to enqueue in GenerateEscrowDepositCommand (#1465)
* Use CloudTasksUtils to enqueue in GenerateEscrowDepositCommand

* Add CloudTasksUtils to RegistryToolComponent

* Remove header param
2022-01-06 15:36:22 -05:00
sarahcaseybot
f87e7eb6e6 Label classes to be deleted after the database migration - Batch 2 (#1477)
* Add some more annotations

* Add some more classes
2022-01-04 12:26:18 -05:00
295 changed files with 6778 additions and 3714 deletions

View File

@@ -55,7 +55,7 @@ plugins {
node {
download = true
version = "14.15.5"
version = "16.14.0"
npmVersion = "6.14.11"
}

View File

@@ -41,4 +41,20 @@ public interface Sleeper {
* @see com.google.common.util.concurrent.Uninterruptibles#sleepUninterruptibly
*/
void sleepUninterruptibly(ReadableDuration duration);
/**
* Puts the current thread to interruptible sleep.
*
* <p>This is a convenience method for {@link #sleep} that properly converts an {@link
* InterruptedException} to a {@link RuntimeException}.
*/
default void sleepInterruptibly(ReadableDuration duration) {
try {
sleep(duration);
} catch (InterruptedException e) {
// Restore current thread's interrupted state.
Thread.currentThread().interrupt();
throw new RuntimeException("Interrupted.", e);
}
}
}

View File

@@ -319,6 +319,7 @@ dependencies {
testCompile deps['com.google.appengine:appengine-testing']
testCompile deps['com.google.guava:guava-testlib']
testCompile deps['com.google.monitoring-client:contrib']
testCompile deps['com.google.protobuf:protobuf-java-util']
testCompile deps['com.google.truth:truth']
testCompile deps['com.google.truth.extensions:truth-java8-extension']
testCompile deps['org.checkerframework:checker-qual']
@@ -676,9 +677,9 @@ Optional<List<String>> getToolArgsList() {
// To run the nomulus tools with these command line tokens:
// "--foo", "bar baz", "--qux=quz"
// gradle registryTool --args="--foo 'bar baz' --qux=quz"
// gradle core:registryTool --args="--foo 'bar baz' --qux=quz"
// or:
// gradle registryTool --PtoolArgs="--foo|bar baz|--qux=quz"
// gradle core:registryTool -PtoolArgs="--foo|bar baz|--qux=quz"
// Note that the delimiting pipe can be backslash escaped if it is part of a
// parameter.
ext.createToolTask = {
@@ -706,7 +707,7 @@ createToolTask(
'initSqlPipeline', 'google.registry.beam.initsql.InitSqlPipeline')
createToolTask(
'validateSqlPipeline', 'google.registry.beam.comparedb.ValidateSqlPipeline')
'validateDatabasePipeline', 'google.registry.beam.comparedb.ValidateDatabasePipeline')
createToolTask(
@@ -793,6 +794,11 @@ if (environment == 'alpha') {
mainClass: 'google.registry.beam.rde.RdePipeline',
metaData : 'google/registry/beam/rde_pipeline_metadata.json'
],
validateDatabase :
[
mainClass: 'google.registry.beam.comparedb.ValidateDatabasePipeline',
metaData: 'google/registry/beam/validate_database_pipeline_metadata.json'
],
]
project.tasks.create("stageBeamPipelines") {
doLast {

View File

@@ -21,6 +21,7 @@ import static google.registry.backup.ExportCommitLogDiffAction.UPPER_CHECKPOINT_
import static google.registry.backup.RestoreCommitLogsAction.BUCKET_OVERRIDE_PARAM;
import static google.registry.backup.RestoreCommitLogsAction.FROM_TIME_PARAM;
import static google.registry.backup.RestoreCommitLogsAction.TO_TIME_PARAM;
import static google.registry.backup.SyncDatastoreToSqlSnapshotAction.SQL_SNAPSHOT_ID_PARAM;
import static google.registry.request.RequestParameters.extractOptionalParameter;
import static google.registry.request.RequestParameters.extractRequiredDatetimeParameter;
import static google.registry.request.RequestParameters.extractRequiredParameter;
@@ -98,6 +99,12 @@ public final class BackupModule {
return extractRequiredDatetimeParameter(req, TO_TIME_PARAM);
}
@Provides
@Parameter(SQL_SNAPSHOT_ID_PARAM)
static String provideSqlSnapshotId(HttpServletRequest req) {
return extractRequiredParameter(req, SQL_SNAPSHOT_ID_PARAM);
}
@Provides
@Backups
static ListeningExecutorService provideListeningExecutorService() {

View File

@@ -17,7 +17,7 @@ package google.registry.backup;
import static google.registry.backup.ExportCommitLogDiffAction.LOWER_CHECKPOINT_TIME_PARAM;
import static google.registry.backup.ExportCommitLogDiffAction.UPPER_CHECKPOINT_TIME_PARAM;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.persistence.transaction.TransactionManagerFactory.ofyTm;
import static google.registry.util.DateTimeUtils.isBeforeOrAt;
import com.google.common.collect.ImmutableMultimap;
@@ -30,6 +30,7 @@ import google.registry.request.Action.Service;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import java.util.Optional;
import javax.inject.Inject;
import org.joda.time.DateTime;
@@ -64,32 +65,47 @@ public final class CommitLogCheckpointAction implements Runnable {
@Override
public void run() {
createCheckPointAndStartAsyncExport();
}
/**
* Creates a {@link CommitLogCheckpoint} and initiates an asynchronous export task.
*
* @return the {@code CommitLogCheckpoint} to be exported
*/
public Optional<CommitLogCheckpoint> createCheckPointAndStartAsyncExport() {
final CommitLogCheckpoint checkpoint = strategy.computeCheckpoint();
logger.atInfo().log(
"Generated candidate checkpoint for time: %s", checkpoint.getCheckpointTime());
tm().transact(
() -> {
DateTime lastWrittenTime = CommitLogCheckpointRoot.loadRoot().getLastWrittenTime();
if (isBeforeOrAt(checkpoint.getCheckpointTime(), lastWrittenTime)) {
logger.atInfo().log(
"Newer checkpoint already written at time: %s", lastWrittenTime);
return;
}
auditedOfy()
.saveIgnoringReadOnlyWithoutBackup()
.entities(
checkpoint, CommitLogCheckpointRoot.create(checkpoint.getCheckpointTime()));
// Enqueue a diff task between previous and current checkpoints.
cloudTasksUtils.enqueue(
QUEUE_NAME,
CloudTasksUtils.createPostTask(
ExportCommitLogDiffAction.PATH,
Service.BACKEND.toString(),
ImmutableMultimap.of(
LOWER_CHECKPOINT_TIME_PARAM,
lastWrittenTime.toString(),
UPPER_CHECKPOINT_TIME_PARAM,
checkpoint.getCheckpointTime().toString())));
});
boolean isCheckPointPersisted =
ofyTm()
.transact(
() -> {
DateTime lastWrittenTime =
CommitLogCheckpointRoot.loadRoot().getLastWrittenTime();
if (isBeforeOrAt(checkpoint.getCheckpointTime(), lastWrittenTime)) {
logger.atInfo().log(
"Newer checkpoint already written at time: %s", lastWrittenTime);
return false;
}
auditedOfy()
.saveIgnoringReadOnlyWithoutBackup()
.entities(
checkpoint,
CommitLogCheckpointRoot.create(checkpoint.getCheckpointTime()));
// Enqueue a diff task between previous and current checkpoints.
cloudTasksUtils.enqueue(
QUEUE_NAME,
cloudTasksUtils.createPostTask(
ExportCommitLogDiffAction.PATH,
Service.BACKEND.toString(),
ImmutableMultimap.of(
LOWER_CHECKPOINT_TIME_PARAM,
lastWrittenTime.toString(),
UPPER_CHECKPOINT_TIME_PARAM,
checkpoint.getCheckpointTime().toString())));
return true;
});
return isCheckPointPersisted ? Optional.of(checkpoint) : Optional.empty();
}
}

View File

@@ -18,7 +18,7 @@ import static com.google.common.base.Preconditions.checkNotNull;
import static com.google.common.base.Preconditions.checkState;
import static google.registry.mapreduce.MapreduceRunner.PARAM_DRY_RUN;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.persistence.transaction.TransactionManagerFactory.ofyTm;
import static java.lang.Boolean.FALSE;
import static java.lang.Boolean.TRUE;
@@ -288,7 +288,8 @@ public final class DeleteOldCommitLogsAction implements Runnable {
}
DeletionResult deletionResult =
tm().transactNew(
ofyTm()
.transactNew(
() -> {
CommitLogManifest manifest = auditedOfy().load().key(manifestKey).now();
// It is possible that the same manifestKey was run twice, if a shard had to be

View File

@@ -76,7 +76,10 @@ public class ReplayCommitLogsToSqlAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static final Duration LEASE_LENGTH = standardHours(1);
public static final String REPLAY_TO_SQL_LOCK_NAME =
ReplayCommitLogsToSqlAction.class.getSimpleName();
public static final Duration REPLAY_TO_SQL_LOCK_LEASE_LENGTH = standardHours(1);
// Stop / pause where we are if we've been replaying for more than five minutes to avoid GAE
// request timeouts
private static final Duration REPLAY_TIMEOUT_DURATION = Duration.standardMinutes(5);
@@ -115,7 +118,11 @@ public class ReplayCommitLogsToSqlAction implements Runnable {
}
Optional<Lock> lock =
Lock.acquireSql(
this.getClass().getSimpleName(), null, LEASE_LENGTH, requestStatusChecker, false);
REPLAY_TO_SQL_LOCK_NAME,
null,
REPLAY_TO_SQL_LOCK_LEASE_LENGTH,
requestStatusChecker,
false);
if (!lock.isPresent()) {
String message = "Can't acquire SQL commit log replay lock, aborting.";
logger.atSevere().log(message);

View File

@@ -0,0 +1,187 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.backup;
import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_OK;
import com.google.common.flogger.FluentLogger;
import google.registry.beam.comparedb.LatestDatastoreSnapshotFinder;
import google.registry.config.RegistryConfig.Config;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.ofy.CommitLogCheckpoint;
import google.registry.model.replay.ReplicateToDatastoreAction;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.Sleeper;
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.util.Optional;
import javax.inject.Inject;
import org.joda.time.DateTime;
import org.joda.time.Duration;
/**
* Synchronizes Datastore to a given SQL snapshot when SQL is the primary database.
*
* <p>The caller takes the responsibility for:
*
* <ul>
* <li>verifying the current migration stage
* <li>acquiring the {@link ReplicateToDatastoreAction#REPLICATE_TO_DATASTORE_LOCK_NAME
* replication lock}, and
* <li>while holding the lock, creating an SQL snapshot and invoking this action with the snapshot
* id
* </ul>
*
* The caller may release the replication lock upon receiving the response from this action. Please
* refer to {@link google.registry.tools.ValidateDatastoreCommand} for more information on usage.
*
* <p>This action plays SQL transactions up to the user-specified snapshot, creates a new CommitLog
* checkpoint, and exports all CommitLogs to GCS up to this checkpoint. The timestamp of this
* checkpoint can be used to recreate a Datastore snapshot that is equivalent to the given SQL
* snapshot. If this action succeeds, the checkpoint timestamp is included in the response (the
* format of which is defined by {@link #SUCCESS_RESPONSE_TEMPLATE}).
*/
@Action(
service = Service.BACKEND,
path = SyncDatastoreToSqlSnapshotAction.PATH,
method = Action.Method.POST,
auth = Auth.AUTH_INTERNAL_OR_ADMIN)
@DeleteAfterMigration
public class SyncDatastoreToSqlSnapshotAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
public static final String PATH = "/_dr/task/syncDatastoreToSqlSnapshot";
public static final String SUCCESS_RESPONSE_TEMPLATE =
"Datastore is up-to-date with provided SQL snapshot (%s). CommitLog timestamp is (%s).";
static final String SQL_SNAPSHOT_ID_PARAM = "sqlSnapshotId";
private static final int COMMITLOGS_PRESENCE_CHECK_ATTEMPTS = 10;
private static final Duration COMMITLOGS_PRESENCE_CHECK_DELAY = Duration.standardSeconds(6);
private final Response response;
private final Sleeper sleeper;
@Config("commitLogGcsBucket")
private final String gcsBucket;
private final GcsDiffFileLister gcsDiffFileLister;
private final LatestDatastoreSnapshotFinder datastoreSnapshotFinder;
private final CommitLogCheckpointAction commitLogCheckpointAction;
private final String sqlSnapshotId;
@Inject
SyncDatastoreToSqlSnapshotAction(
Response response,
Sleeper sleeper,
@Config("commitLogGcsBucket") String gcsBucket,
GcsDiffFileLister gcsDiffFileLister,
LatestDatastoreSnapshotFinder datastoreSnapshotFinder,
CommitLogCheckpointAction commitLogCheckpointAction,
@Parameter(SQL_SNAPSHOT_ID_PARAM) String sqlSnapshotId) {
this.response = response;
this.sleeper = sleeper;
this.gcsBucket = gcsBucket;
this.gcsDiffFileLister = gcsDiffFileLister;
this.datastoreSnapshotFinder = datastoreSnapshotFinder;
this.commitLogCheckpointAction = commitLogCheckpointAction;
this.sqlSnapshotId = sqlSnapshotId;
}
@Override
public void run() {
logger.atInfo().log("Datastore validation invoked. SqlSnapshotId is %s.", sqlSnapshotId);
try {
CommitLogCheckpoint checkpoint = ensureDatabasesComparable(sqlSnapshotId);
response.setStatus(SC_OK);
response.setPayload(
String.format(SUCCESS_RESPONSE_TEMPLATE, sqlSnapshotId, checkpoint.getCheckpointTime()));
return;
} catch (Throwable e) {
logger.atSevere().withCause(e).log("Failed to sync Datastore to SQL.");
response.setStatus(SC_INTERNAL_SERVER_ERROR);
response.setPayload(getStackTrace(e));
}
}
private static String getStackTrace(Throwable e) {
try {
ByteArrayOutputStream bis = new ByteArrayOutputStream();
PrintStream printStream = new PrintStream(bis);
e.printStackTrace(printStream);
printStream.close();
return bis.toString();
} catch (RuntimeException re) {
return re.getMessage();
}
}
private CommitLogCheckpoint ensureDatabasesComparable(String sqlSnapshotId) {
// Replicate SQL transaction to Datastore, up to when this snapshot is taken.
int playbacks = ReplicateToDatastoreAction.replayAllTransactions(Optional.of(sqlSnapshotId));
logger.atInfo().log("Played %s SQL transactions.", playbacks);
Optional<CommitLogCheckpoint> checkpoint = exportCommitLogs();
if (!checkpoint.isPresent()) {
throw new RuntimeException("Cannot create CommitLog checkpoint");
}
logger.atInfo().log(
"CommitLog checkpoint created at %s.", checkpoint.get().getCheckpointTime());
verifyCommitLogsPersisted(checkpoint.get());
return checkpoint.get();
}
private Optional<CommitLogCheckpoint> exportCommitLogs() {
// Trigger an async CommitLog export to GCS. Will check file availability later.
// Although we could add support for synchronous execution, it could disrupt the export
// cadence when the system is busy.
Optional<CommitLogCheckpoint> checkpoint =
commitLogCheckpointAction.createCheckPointAndStartAsyncExport();
// Failure to create a checkpoint is most likely caused by a race with cron-triggered
// checkpointing.
// Retry once.
if (!checkpoint.isPresent()) {
commitLogCheckpointAction.createCheckPointAndStartAsyncExport();
}
return checkpoint;
}
private void verifyCommitLogsPersisted(CommitLogCheckpoint checkpoint) {
DateTime exportStartTime =
datastoreSnapshotFinder
.getSnapshotInfo(checkpoint.getCheckpointTime().toInstant())
.exportInterval()
.getStart();
logger.atInfo().log("Found Datastore export at %s", exportStartTime);
for (int attempts = 0; attempts < COMMITLOGS_PRESENCE_CHECK_ATTEMPTS; attempts++) {
try {
gcsDiffFileLister.listDiffFiles(gcsBucket, exportStartTime, checkpoint.getCheckpointTime());
return;
} catch (IllegalStateException e) {
// Gap in commitlog files. Fall through to sleep and retry.
logger.atInfo().log("Commitlog files not yet found on GCS.");
}
sleeper.sleepInterruptibly(COMMITLOGS_PRESENCE_CHECK_DELAY);
}
throw new RuntimeException("Cannot find all commitlog files.");
}
}

View File

@@ -22,15 +22,17 @@ import com.google.appengine.api.taskqueue.TaskOptions;
import com.google.appengine.api.taskqueue.TaskOptions.Method;
import com.google.appengine.api.taskqueue.TransientFailureException;
import com.google.common.base.Joiner;
import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.ImmutableSortedSet;
import com.google.common.collect.Multimap;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryConfig.Config;
import google.registry.model.EppResource;
import google.registry.model.domain.RegistryLock;
import google.registry.model.eppcommon.Trid;
import google.registry.model.host.HostResource;
import google.registry.persistence.VKey;
import google.registry.util.AppEngineServiceUtils;
import google.registry.request.Action.Service;
import google.registry.util.CloudTasksUtils;
import google.registry.util.Retrier;
import javax.inject.Inject;
import javax.inject.Named;
@@ -59,25 +61,23 @@ public final class AsyncTaskEnqueuer {
private static final Duration MAX_ASYNC_ETA = Duration.standardDays(30);
private final Duration asyncDeleteDelay;
private final Queue asyncActionsPushQueue;
private final Queue asyncDeletePullQueue;
private final Queue asyncDnsRefreshPullQueue;
private final AppEngineServiceUtils appEngineServiceUtils;
private final Retrier retrier;
private CloudTasksUtils cloudTasksUtils;
@Inject
public AsyncTaskEnqueuer(
@Named(QUEUE_ASYNC_ACTIONS) Queue asyncActionsPushQueue,
@Named(QUEUE_ASYNC_DELETE) Queue asyncDeletePullQueue,
@Named(QUEUE_ASYNC_HOST_RENAME) Queue asyncDnsRefreshPullQueue,
@Config("asyncDeleteFlowMapreduceDelay") Duration asyncDeleteDelay,
AppEngineServiceUtils appEngineServiceUtils,
CloudTasksUtils cloudTasksUtils,
Retrier retrier) {
this.asyncActionsPushQueue = asyncActionsPushQueue;
this.asyncDeletePullQueue = asyncDeletePullQueue;
this.asyncDnsRefreshPullQueue = asyncDnsRefreshPullQueue;
this.asyncDeleteDelay = asyncDeleteDelay;
this.appEngineServiceUtils = appEngineServiceUtils;
this.cloudTasksUtils = cloudTasksUtils;
this.retrier = retrier;
}
@@ -103,19 +103,17 @@ public final class AsyncTaskEnqueuer {
entityKey, firstResave, MAX_ASYNC_ETA);
return;
}
logger.atInfo().log("Enqueuing async re-save of %s to run at %s.", entityKey, whenToResave);
String backendHostname = appEngineServiceUtils.getServiceHostname("backend");
TaskOptions task =
TaskOptions.Builder.withUrl(ResaveEntityAction.PATH)
.method(Method.POST)
.header("Host", backendHostname)
.countdownMillis(etaDuration.getMillis())
.param(PARAM_RESOURCE_KEY, entityKey.getOfyKey().getString())
.param(PARAM_REQUESTED_TIME, now.toString());
Multimap<String, String> params = ArrayListMultimap.create();
params.put(PARAM_RESOURCE_KEY, entityKey.stringify());
params.put(PARAM_REQUESTED_TIME, now.toString());
if (whenToResave.size() > 1) {
task.param(PARAM_RESAVE_TIMES, Joiner.on(',').join(whenToResave.tailSet(firstResave, false)));
params.put(PARAM_RESAVE_TIMES, Joiner.on(',').join(whenToResave.tailSet(firstResave, false)));
}
addTaskToQueueWithRetry(asyncActionsPushQueue, task);
logger.atInfo().log("Enqueuing async re-save of %s to run at %s.", entityKey, whenToResave);
cloudTasksUtils.enqueue(
QUEUE_ASYNC_ACTIONS,
cloudTasksUtils.createPostTaskWithDelay(
ResaveEntityAction.PATH, Service.BACKEND.toString(), params, etaDuration));
}
/** Enqueues a task to asynchronously delete a contact or host, by key. */
@@ -131,7 +129,7 @@ public final class AsyncTaskEnqueuer {
TaskOptions task =
TaskOptions.Builder.withMethod(Method.PULL)
.countdownMillis(asyncDeleteDelay.getMillis())
.param(PARAM_RESOURCE_KEY, resourceToDelete.createVKey().getOfyKey().getString())
.param(PARAM_RESOURCE_KEY, resourceToDelete.createVKey().stringify())
.param(PARAM_REQUESTING_CLIENT_ID, requestingRegistrarId)
.param(PARAM_SERVER_TRANSACTION_ID, trid.getServerTransactionId())
.param(PARAM_IS_SUPERUSER, Boolean.toString(isSuperuser))
@@ -148,36 +146,10 @@ public final class AsyncTaskEnqueuer {
addTaskToQueueWithRetry(
asyncDnsRefreshPullQueue,
TaskOptions.Builder.withMethod(Method.PULL)
.param(PARAM_HOST_KEY, hostKey.getOfyKey().getString())
.param(PARAM_HOST_KEY, hostKey.stringify())
.param(PARAM_REQUESTED_TIME, now.toString()));
}
/**
* Enqueues a task to asynchronously re-lock a registry-locked domain after it was unlocked.
*
* <p>Note: the relockDuration must be present on the lock object.
*/
public void enqueueDomainRelock(RegistryLock lock) {
checkArgument(
lock.getRelockDuration().isPresent(),
"Lock with ID %s not configured for relock",
lock.getRevisionId());
enqueueDomainRelock(lock.getRelockDuration().get(), lock.getRevisionId(), 0);
}
/** Enqueues a task to asynchronously re-lock a registry-locked domain after it was unlocked. */
void enqueueDomainRelock(Duration countdown, long lockRevisionId, int previousAttempts) {
String backendHostname = appEngineServiceUtils.getServiceHostname("backend");
addTaskToQueueWithRetry(
asyncActionsPushQueue,
TaskOptions.Builder.withUrl(RelockDomainAction.PATH)
.method(Method.POST)
.header("Host", backendHostname)
.param(RelockDomainAction.OLD_UNLOCK_REVISION_ID_PARAM, String.valueOf(lockRevisionId))
.param(RelockDomainAction.PREVIOUS_ATTEMPTS_PARAM, String.valueOf(previousAttempts))
.countdownMillis(countdown.getMillis()));
}
/**
* Adds a task to a queue with retrying, to avoid aborting the entire flow over a transient issue
* enqueuing a task.

View File

@@ -88,7 +88,6 @@ public class RelockDomainAction implements Runnable {
private final SendEmailService sendEmailService;
private final DomainLockUtils domainLockUtils;
private final Response response;
private final AsyncTaskEnqueuer asyncTaskEnqueuer;
@Inject
public RelockDomainAction(
@@ -99,8 +98,7 @@ public class RelockDomainAction implements Runnable {
@Config("supportEmail") String supportEmail,
SendEmailService sendEmailService,
DomainLockUtils domainLockUtils,
Response response,
AsyncTaskEnqueuer asyncTaskEnqueuer) {
Response response) {
this.oldUnlockRevisionId = oldUnlockRevisionId;
this.previousAttempts = previousAttempts;
this.alertRecipientAddress = alertRecipientAddress;
@@ -109,7 +107,6 @@ public class RelockDomainAction implements Runnable {
this.sendEmailService = sendEmailService;
this.domainLockUtils = domainLockUtils;
this.response = response;
this.asyncTaskEnqueuer = asyncTaskEnqueuer;
}
@Override
@@ -245,8 +242,7 @@ public class RelockDomainAction implements Runnable {
}
}
Duration timeBeforeRetry = previousAttempts < ATTEMPTS_BEFORE_SLOWDOWN ? TEN_MINUTES : ONE_HOUR;
asyncTaskEnqueuer.enqueueDomainRelock(
timeBeforeRetry, oldUnlockRevisionId, previousAttempts + 1);
domainLockUtils.enqueueDomainRelock(timeBeforeRetry, oldUnlockRevisionId, previousAttempts + 1);
}
private void sendSuccessEmail(RegistryLock oldLock) {

View File

@@ -23,6 +23,7 @@ import google.registry.config.RegistryConfig.ConfigModule;
import google.registry.persistence.PersistenceModule;
import google.registry.persistence.PersistenceModule.BeamBulkQueryJpaTm;
import google.registry.persistence.PersistenceModule.BeamJpaTm;
import google.registry.persistence.PersistenceModule.BeamReadOnlyReplicaJpaTm;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.persistence.transaction.JpaTransactionManager;
import google.registry.privileges.secretmanager.SecretManagerModule;
@@ -59,6 +60,13 @@ public interface RegistryPipelineComponent {
@BeamBulkQueryJpaTm
Lazy<JpaTransactionManager> getBulkQueryJpaTransactionManager();
/**
* A {@link JpaTransactionManager} that uses the Postgres read-only replica if configured (uses
* the standard DB otherwise).
*/
@BeamReadOnlyReplicaJpaTm
Lazy<JpaTransactionManager> getReadOnlyReplicaJpaTransactionManager();
@Component.Builder
interface Builder {

View File

@@ -56,6 +56,10 @@ public class RegistryPipelineWorkerInitializer implements JvmInitializer {
case BULK_QUERY:
transactionManagerLazy = registryPipelineComponent.getBulkQueryJpaTransactionManager();
break;
case READ_ONLY_REPLICA:
transactionManagerLazy =
registryPipelineComponent.getReadOnlyReplicaJpaTransactionManager();
break;
case REGULAR:
default:
transactionManagerLazy = registryPipelineComponent.getJpaTransactionManager();

View File

@@ -23,6 +23,7 @@ import com.google.common.collect.ImmutableSet;
import com.googlecode.objectify.Key;
import google.registry.backup.VersionedEntity;
import google.registry.beam.initsql.Transforms;
import google.registry.model.EppResource;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.billing.BillingEvent;
import google.registry.model.common.Cursor;
@@ -42,6 +43,7 @@ import google.registry.model.tld.Registry;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import javax.annotation.Nullable;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
@@ -93,7 +95,8 @@ public final class DatastoreSnapshots {
String commitLogDir,
DateTime commitLogFromTime,
DateTime commitLogToTime,
Set<Class<?>> kinds) {
Set<Class<?>> kinds,
Optional<DateTime> compareStartTime) {
PCollectionTuple snapshot =
pipeline.apply(
"Load Datastore snapshot.",
@@ -112,11 +115,11 @@ public final class DatastoreSnapshots {
perTypeSnapshots =
perTypeSnapshots.and(
createSqlEntityTupleTag((Class<? extends SqlEntity>) kind),
datastoreEntityToPojo(perKindSnapshot, kind.getSimpleName()));
datastoreEntityToPojo(perKindSnapshot, kind.getSimpleName(), compareStartTime));
continue;
}
Verify.verify(kind == HistoryEntry.class, "Unexpected Non-SqlEntity class: %s", kind);
PCollectionTuple historyEntriesByType = splitHistoryEntry(perKindSnapshot);
PCollectionTuple historyEntriesByType = splitHistoryEntry(perKindSnapshot, compareStartTime);
for (Map.Entry<TupleTag<?>, PCollection<?>> entry :
historyEntriesByType.getAll().entrySet()) {
perTypeSnapshots = perTypeSnapshots.and(entry.getKey().getId(), entry.getValue());
@@ -129,7 +132,9 @@ public final class DatastoreSnapshots {
* Splits a {@link PCollection} of {@link HistoryEntry HistoryEntries} into three collections of
* its child entities by type.
*/
static PCollectionTuple splitHistoryEntry(PCollection<VersionedEntity> historyEntries) {
static PCollectionTuple splitHistoryEntry(
PCollection<VersionedEntity> historyEntries, Optional<DateTime> compareStartTime) {
DateTime nullableStartTime = compareStartTime.orElse(null);
return historyEntries.apply(
"Split HistoryEntry by Resource Type",
ParDo.of(
@@ -138,6 +143,7 @@ public final class DatastoreSnapshots {
public void processElement(
@Element VersionedEntity historyEntry, MultiOutputReceiver out) {
Optional.ofNullable(Transforms.convertVersionedEntityToSqlEntity(historyEntry))
.filter(e -> isEntityIncludedForComparison(e, nullableStartTime))
.ifPresent(
sqlEntity ->
out.get(createSqlEntityTupleTag(sqlEntity.getClass()))
@@ -155,7 +161,8 @@ public final class DatastoreSnapshots {
* objects.
*/
static PCollection<SqlEntity> datastoreEntityToPojo(
PCollection<VersionedEntity> entities, String desc) {
PCollection<VersionedEntity> entities, String desc, Optional<DateTime> compareStartTime) {
DateTime nullableStartTime = compareStartTime.orElse(null);
return entities.apply(
"Datastore Entity to Pojo " + desc,
ParDo.of(
@@ -164,8 +171,23 @@ public final class DatastoreSnapshots {
public void processElement(
@Element VersionedEntity entity, OutputReceiver<SqlEntity> out) {
Optional.ofNullable(Transforms.convertVersionedEntityToSqlEntity(entity))
.filter(e -> isEntityIncludedForComparison(e, nullableStartTime))
.ifPresent(out::output);
}
}));
}
static boolean isEntityIncludedForComparison(
SqlEntity entity, @Nullable DateTime compareStartTime) {
if (compareStartTime == null) {
return true;
}
if (entity instanceof HistoryEntry) {
return compareStartTime.isBefore(((HistoryEntry) entity).getModificationTime());
}
if (entity instanceof EppResource) {
return compareStartTime.isBefore(((EppResource) entity).getUpdateTimestamp().getTimestamp());
}
return true;
}
}

View File

@@ -53,11 +53,11 @@ public class LatestDatastoreSnapshotFinder {
}
/**
* Finds information of the most recent Datastore snapshot, including the GCS folder of the
* exported data files and the start and stop times of the export. The folder of the CommitLogs is
* also included in the return.
* Finds information of the most recent Datastore snapshot that ends strictly before {@code
* exportEndTimeUpperBound}, including the GCS folder of the exported data files and the start and
* stop times of the export. The folder of the CommitLogs is also included in the return.
*/
public DatastoreSnapshotInfo getSnapshotInfo() {
public DatastoreSnapshotInfo getSnapshotInfo(Instant exportEndTimeUpperBound) {
String bucketName = RegistryConfig.getDatastoreBackupsBucket().substring("gs://".length());
/**
* Find the bucket-relative path to the overall metadata file of the last Datastore export.
@@ -65,7 +65,8 @@ public class LatestDatastoreSnapshotFinder {
* return value is like
* "2021-11-19T06:00:00_76493/2021-11-19T06:00:00_76493.overall_export_metadata".
*/
Optional<String> metaFilePathOptional = findMostRecentExportMetadataFile(bucketName, 2);
Optional<String> metaFilePathOptional =
findNewestExportMetadataFileBeforeTime(bucketName, exportEndTimeUpperBound, 5);
if (!metaFilePathOptional.isPresent()) {
throw new NoSuchElementException("No exports found over the past 2 days.");
}
@@ -85,8 +86,9 @@ public class LatestDatastoreSnapshotFinder {
}
/**
* Finds the bucket-relative path of the overall export metadata file, in the given bucket,
* searching back up to {@code lookBackDays} days, including today.
* Finds the latest Datastore export that ends strictly before {@code endTimeUpperBound} and
* returns the bucket-relative path of the overall export metadata file, in the given bucket. The
* search goes back for up to {@code lookBackDays} days in time, including today.
*
* <p>The overall export metadata file is the last file created during a Datastore export. All
* data has been exported by the creation time of this file. The name of this file, like that of
@@ -95,7 +97,8 @@ public class LatestDatastoreSnapshotFinder {
* <p>An example return value: {@code
* 2021-11-19T06:00:00_76493/2021-11-19T06:00:00_76493.overall_export_metadata}.
*/
private Optional<String> findMostRecentExportMetadataFile(String bucketName, int lookBackDays) {
private Optional<String> findNewestExportMetadataFileBeforeTime(
String bucketName, Instant endTimeUpperBound, int lookBackDays) {
DateTime today = clock.nowUtc();
for (int day = 0; day < lookBackDays; day++) {
String dateString = today.minusDays(day).toString("yyyy-MM-dd");
@@ -107,7 +110,11 @@ public class LatestDatastoreSnapshotFinder {
.sorted(Comparator.<String>naturalOrder().reversed())
.findFirst();
if (metaFilePath.isPresent()) {
return metaFilePath;
BlobInfo blobInfo = gcsUtils.getBlobInfo(BlobId.of(bucketName, metaFilePath.get()));
Instant exportEndTime = new Instant(blobInfo.getCreateTime());
if (exportEndTime.isBefore(endTimeUpperBound)) {
return metaFilePath;
}
}
} catch (IOException ioe) {
throw new RuntimeException(ioe);
@@ -118,12 +125,12 @@ public class LatestDatastoreSnapshotFinder {
/** Holds information about a Datastore snapshot. */
@AutoValue
abstract static class DatastoreSnapshotInfo {
abstract String exportDir();
public abstract static class DatastoreSnapshotInfo {
public abstract String exportDir();
abstract String commitLogDir();
public abstract String commitLogDir();
abstract Interval exportInterval();
public abstract Interval exportInterval();
static DatastoreSnapshotInfo create(
String exportDir, String commitLogDir, Interval exportOperationInterval) {

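As a hedged illustration of the day-by-day search in findNewestExportMetadataFileBeforeTime, the sketch below (a hypothetical demo class; the bucket prefix layout is an assumption) builds the date prefixes that would be listed, newest day first:
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
public class ExportPrefixDemo {
  public static void main(String[] args) {
    DateTime today = DateTime.now(DateTimeZone.UTC);
    int lookBackDays = 5; // matches the widened search window above
    for (int day = 0; day < lookBackDays; day++) {
      // Each candidate export folder is searched under a date prefix (assumed layout).
      String dateString = today.minusDays(day).toString("yyyy-MM-dd");
      System.out.println("would list gs://<backups-bucket>/" + dateString + "...");
    }
  }
}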

@@ -14,15 +14,22 @@
package google.registry.beam.comparedb;
import static com.google.common.base.Preconditions.checkState;
import static google.registry.beam.comparedb.ValidateSqlUtils.createSqlEntityTupleTag;
import static google.registry.beam.comparedb.ValidateSqlUtils.getMedianIdForHistoryTable;
import com.google.auto.value.AutoValue;
import com.google.common.base.Strings;
import com.google.common.base.Verify;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.ImmutableSetMultimap;
import com.google.common.collect.Streams;
import google.registry.beam.common.RegistryJpaIO;
import google.registry.beam.common.RegistryJpaIO.Read;
import google.registry.model.EppResource;
import google.registry.model.UpdateAutoTimestamp;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.billing.BillingEvent;
import google.registry.model.bulkquery.BulkQueryEntities;
@@ -50,8 +57,10 @@ import google.registry.model.replay.SqlEntity;
import google.registry.model.reporting.DomainTransactionRecord;
import google.registry.model.tld.Registry;
import google.registry.persistence.transaction.CriteriaQueryBuilder;
import google.registry.util.DateTimeUtils;
import java.io.Serializable;
import java.util.Optional;
import javax.persistence.Entity;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.Flatten;
@@ -65,6 +74,7 @@ import org.apache.beam.sdk.values.PCollectionList;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TypeDescriptor;
import org.apache.beam.sdk.values.TypeDescriptors;
import org.joda.time.DateTime;
/**
* Utilities for loading SQL snapshots.
@@ -113,28 +123,48 @@ public final class SqlSnapshots {
public static PCollectionTuple loadCloudSqlSnapshotByType(
Pipeline pipeline,
ImmutableSet<Class<? extends SqlEntity>> sqlEntityTypes,
Optional<String> snapshotId) {
Optional<String> snapshotId,
Optional<DateTime> compareStartTime) {
PCollectionTuple perTypeSnapshots = PCollectionTuple.empty(pipeline);
for (Class<? extends SqlEntity> clazz : sqlEntityTypes) {
if (clazz == DomainBase.class) {
perTypeSnapshots =
perTypeSnapshots.and(
createSqlEntityTupleTag(DomainBase.class),
loadAndAssembleDomainBase(pipeline, snapshotId));
loadAndAssembleDomainBase(pipeline, snapshotId, compareStartTime));
continue;
}
if (clazz == DomainHistory.class) {
perTypeSnapshots =
perTypeSnapshots.and(
createSqlEntityTupleTag(DomainHistory.class),
loadAndAssembleDomainHistory(pipeline, snapshotId));
loadAndAssembleDomainHistory(pipeline, snapshotId, compareStartTime));
continue;
}
if (clazz == ContactHistory.class) {
perTypeSnapshots =
perTypeSnapshots.and(
createSqlEntityTupleTag(ContactHistory.class),
loadContactHistory(pipeline, snapshotId));
loadContactHistory(pipeline, snapshotId, compareStartTime));
continue;
}
if (clazz == HostHistory.class) {
perTypeSnapshots =
perTypeSnapshots.and(
createSqlEntityTupleTag(HostHistory.class),
loadHostHistory(
pipeline, snapshotId, compareStartTime.orElse(DateTimeUtils.START_OF_TIME)));
continue;
}
if (EppResource.class.isAssignableFrom(clazz) && compareStartTime.isPresent()) {
perTypeSnapshots =
perTypeSnapshots.and(
createSqlEntityTupleTag(clazz),
pipeline.apply(
"SQL Load " + clazz.getSimpleName(),
buildEppResourceQueryWithTimeFilter(
clazz, SqlEntity.class, snapshotId, compareStartTime.get())
.withSnapshot(snapshotId.orElse(null))));
continue;
}
perTypeSnapshots =
@@ -155,20 +185,33 @@ public final class SqlSnapshots {
* @see BulkQueryEntities
*/
public static PCollection<SqlEntity> loadAndAssembleDomainBase(
Pipeline pipeline, Optional<String> snapshotId) {
Pipeline pipeline, Optional<String> snapshotId, Optional<DateTime> compareStartTime) {
PCollection<KV<String, Serializable>> baseObjects =
readAllAndAssignKey(pipeline, DomainBaseLite.class, DomainBaseLite::getRepoId, snapshotId);
readAllAndAssignKey(
pipeline,
DomainBaseLite.class,
DomainBaseLite::getRepoId,
snapshotId,
compareStartTime);
PCollection<KV<String, Serializable>> gracePeriods =
readAllAndAssignKey(pipeline, GracePeriod.class, GracePeriod::getDomainRepoId, snapshotId);
readAllAndAssignKey(
pipeline,
GracePeriod.class,
GracePeriod::getDomainRepoId,
snapshotId,
compareStartTime);
PCollection<KV<String, Serializable>> delegationSigners =
readAllAndAssignKey(
pipeline,
DelegationSignerData.class,
DelegationSignerData::getDomainRepoId,
snapshotId);
snapshotId,
compareStartTime);
PCollection<KV<String, Serializable>> domainHosts =
readAllAndAssignKey(pipeline, DomainHost.class, DomainHost::getDomainRepoId, snapshotId);
readAllAndAssignKey(
pipeline, DomainHost.class, DomainHost::getDomainRepoId, snapshotId, compareStartTime);
DateTime nullableCompareStartTime = compareStartTime.orElse(null);
return PCollectionList.of(
ImmutableList.of(baseObjects, gracePeriods, delegationSigners, domainHosts))
.apply("SQL Merge DomainBase parts", Flatten.pCollections())
@@ -184,6 +227,14 @@ public final class SqlSnapshots {
TypedClassifier partsByType = new TypedClassifier(kv.getValue());
ImmutableSet<DomainBaseLite> baseObjects =
partsByType.getAllOf(DomainBaseLite.class);
if (nullableCompareStartTime != null) {
Verify.verify(
baseObjects.size() <= 1,
"Found duplicate DomainBaseLite object per repoId: " + kv.getKey());
if (baseObjects.isEmpty()) {
return;
}
}
Verify.verify(
baseObjects.size() == 1,
"Expecting one DomainBaseLite object per repoId: " + kv.getKey());
@@ -205,16 +256,16 @@ public final class SqlSnapshots {
* <p>This method uses two queries to load data in parallel. This is a performance optimization
* specifically for the production database.
*/
static PCollection<SqlEntity> loadContactHistory(Pipeline pipeline, Optional<String> snapshotId) {
long medianId =
getMedianIdForHistoryTable("ContactHistory")
.orElseThrow(
() -> new IllegalStateException("Not a valid database: no ContactHistory."));
static PCollection<SqlEntity> loadContactHistory(
Pipeline pipeline, Optional<String> snapshotId, Optional<DateTime> compareStartTime) {
PartitionedQuery partitionedQuery =
buildPartitionedHistoryQuery(ContactHistory.class, compareStartTime);
PCollection<SqlEntity> part1 =
pipeline.apply(
"SQL Load ContactHistory first half",
RegistryJpaIO.read(
String.format("select c from ContactHistory c where id <= %s", medianId),
partitionedQuery.firstHalfQuery(),
partitionedQuery.parameters(),
false,
SqlEntity.class::cast)
.withSnapshot(snapshotId.orElse(null)));
@@ -222,7 +273,8 @@ public final class SqlSnapshots {
pipeline.apply(
"SQL Load ContactHistory second half",
RegistryJpaIO.read(
String.format("select c from ContactHistory c where id > %s", medianId),
partitionedQuery.secondHalfQuery(),
partitionedQuery.parameters(),
false,
SqlEntity.class::cast)
.withSnapshot(snapshotId.orElse(null)));
@@ -231,6 +283,19 @@ public final class SqlSnapshots {
.apply("Combine ContactHistory parts", Flatten.pCollections());
}
/** Loads all {@link HostHistory} entities modified strictly after {@code compareStartTime}. */
static PCollection<SqlEntity> loadHostHistory(
Pipeline pipeline, Optional<String> snapshotId, DateTime compareStartTime) {
return pipeline.apply(
"SQL Load HostHistory",
RegistryJpaIO.read(
"select c from HostHistory c where :compareStartTime < modificationTime",
ImmutableMap.of("compareStartTime", compareStartTime),
false,
SqlEntity.class::cast)
.withSnapshot(snapshotId.orElse(null)));
}
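Hypothetical call sites for loadHostHistory (a sketch using names from this file; the pipeline variable is assumed): passing DateTimeUtils.START_OF_TIME makes the strictly-after filter vacuously true, so every HostHistory row is loaded.
// Full load: the JPQL time filter matches all rows.
PCollection<SqlEntity> all =
    loadHostHistory(pipeline, Optional.empty(), DateTimeUtils.START_OF_TIME);
// Incremental load: only rows modified strictly after the given time.
PCollection<SqlEntity> recent =
    loadHostHistory(pipeline, Optional.empty(), DateTime.parse("2022-03-01T00:00:00Z"));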
/**
* Bulk-loads all parts of {@link DomainHistory} and assembles them in the pipeline.
*
@@ -240,16 +305,15 @@ public final class SqlSnapshots {
* @see BulkQueryEntities
*/
static PCollection<SqlEntity> loadAndAssembleDomainHistory(
Pipeline pipeline, Optional<String> snapshotId) {
long medianId =
getMedianIdForHistoryTable("DomainHistory")
.orElseThrow(
() -> new IllegalStateException("Not a valid database: no DomainHistory."));
Pipeline pipeline, Optional<String> snapshotId, Optional<DateTime> compareStartTime) {
PartitionedQuery partitionedQuery =
buildPartitionedHistoryQuery(DomainHistoryLite.class, compareStartTime);
PCollection<KV<String, Serializable>> baseObjectsPart1 =
queryAndAssignKey(
pipeline,
"first half",
String.format("select c from DomainHistory c where id <= %s", medianId),
partitionedQuery.firstHalfQuery(),
partitionedQuery.parameters(),
DomainHistoryLite.class,
compose(DomainHistoryLite::getDomainHistoryId, DomainHistoryId::toString),
snapshotId);
@@ -257,7 +321,8 @@ public final class SqlSnapshots {
queryAndAssignKey(
pipeline,
"second half",
String.format("select c from DomainHistory c where id > %s", medianId),
partitionedQuery.secondHalfQuery(),
partitionedQuery.parameters(),
DomainHistoryLite.class,
compose(DomainHistoryLite::getDomainHistoryId, DomainHistoryId::toString),
snapshotId);
@@ -266,26 +331,31 @@ public final class SqlSnapshots {
pipeline,
GracePeriodHistory.class,
compose(GracePeriodHistory::getDomainHistoryId, DomainHistoryId::toString),
snapshotId);
snapshotId,
compareStartTime);
PCollection<KV<String, Serializable>> delegationSigners =
readAllAndAssignKey(
pipeline,
DomainDsDataHistory.class,
compose(DomainDsDataHistory::getDomainHistoryId, DomainHistoryId::toString),
snapshotId);
snapshotId,
compareStartTime);
PCollection<KV<String, Serializable>> domainHosts =
readAllAndAssignKey(
pipeline,
DomainHistoryHost.class,
compose(DomainHistoryHost::getDomainHistoryId, DomainHistoryId::toString),
snapshotId);
snapshotId,
compareStartTime);
PCollection<KV<String, Serializable>> transactionRecords =
readAllAndAssignKey(
pipeline,
DomainTransactionRecord.class,
compose(DomainTransactionRecord::getDomainHistoryId, DomainHistoryId::toString),
snapshotId);
snapshotId,
compareStartTime);
DateTime nullableCompareStartTime = compareStartTime.orElse(null);
return PCollectionList.of(
ImmutableList.of(
baseObjectsPart1,
@@ -307,6 +377,15 @@ public final class SqlSnapshots {
TypedClassifier partsByType = new TypedClassifier(kv.getValue());
ImmutableSet<DomainHistoryLite> baseObjects =
partsByType.getAllOf(DomainHistoryLite.class);
if (nullableCompareStartTime != null) {
Verify.verify(
baseObjects.size() <= 1,
"Found duplicate DomainHistoryLite object per domainHistoryId: "
+ kv.getKey());
if (baseObjects.isEmpty()) {
return;
}
}
Verify.verify(
baseObjects.size() == 1,
"Expecting one DomainHistoryLite object per domainHistoryId: "
@@ -328,12 +407,19 @@ public final class SqlSnapshots {
Pipeline pipeline,
Class<R> type,
SerializableFunction<R, String> keyFunction,
Optional<String> snapshotId) {
Optional<String> snapshotId,
Optional<DateTime> compareStartTime) {
Read<R, R> queryObject;
if (compareStartTime.isPresent() && EppResource.class.isAssignableFrom(type)) {
queryObject =
buildEppResourceQueryWithTimeFilter(type, type, snapshotId, compareStartTime.get());
} else {
queryObject =
RegistryJpaIO.read(() -> CriteriaQueryBuilder.create(type).build())
.withSnapshot(snapshotId.orElse(null));
}
return pipeline
.apply(
"SQL Load " + type.getSimpleName(),
RegistryJpaIO.read(() -> CriteriaQueryBuilder.create(type).build())
.withSnapshot(snapshotId.orElse(null)))
.apply("SQL Load " + type.getSimpleName(), queryObject)
.apply(
"Assign Key to " + type.getSimpleName(),
MapElements.into(
@@ -346,13 +432,15 @@ public final class SqlSnapshots {
Pipeline pipeline,
String differentiator,
String jpqlQuery,
ImmutableMap<String, Object> queryParameters,
Class<R> type,
SerializableFunction<R, String> keyFunction,
Optional<String> snapshotId) {
return pipeline
.apply(
"SQL Load " + type.getSimpleName() + " " + diffrentiator,
RegistryJpaIO.read(jplQuery, false, type::cast).withSnapshot(snapshotId.orElse(null)))
RegistryJpaIO.read(jplQuery, queryParameters, false, type::cast)
.withSnapshot(snapshotId.orElse(null)))
.apply(
"Assign Key to " + type.getSimpleName() + " " + diffrentiator,
MapElements.into(
@@ -367,6 +455,71 @@ public final class SqlSnapshots {
return r -> f2.apply(f1.apply(r));
}
static <R, T> Read<R, T> buildEppResourceQueryWithTimeFilter(
Class<R> entityType,
Class<T> castOutputAsType,
Optional<String> snapshotId,
DateTime compareStartTime) {
String tableName = getJpaEntityName(entityType);
String jpql =
String.format("select c from %s c where :compareStartTime < updateTimestamp", tableName);
return RegistryJpaIO.read(
jpql,
ImmutableMap.of("compareStartTime", UpdateAutoTimestamp.create(compareStartTime)),
false,
(R x) -> castOutputAsType.cast(x))
.withSnapshot(snapshotId.orElse(null));
}
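For illustration, a sketch of what the builder above produces for a hypothetical EPP resource entity whose JPA name is Contact (the entity class and variables here are assumptions, not code from this change):
// Rendered JPQL: "select c from Contact c where :compareStartTime < updateTimestamp".
// The parameter is wrapped in an UpdateAutoTimestamp so it compares against the
// mapped updateTimestamp column type.
Read<ContactResource, SqlEntity> read =
    buildEppResourceQueryWithTimeFilter(
        ContactResource.class, SqlEntity.class, Optional.empty(), compareStartTime);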
static PartitionedQuery buildPartitionedHistoryQuery(
Class<?> entityType, Optional<DateTime> compareStartTime) {
String tableName = getJpaEntityName(entityType);
Verify.verify(
!Strings.isNullOrEmpty(tableName), "Invalid entity type %s", entityType.getSimpleName());
long medianId =
getMedianIdForHistoryTable(tableName)
.orElseThrow(() -> new IllegalStateException("Not a valid database: no " + tableName));
String firstHalfQuery = String.format("select c from %s c where id <= :historyId", tableName);
String secondHalfQuery = String.format("select c from %s c where id > :historyId", tableName);
if (compareStartTime.isPresent()) {
String timeFilter = " and :compareStartTime < modificationTime";
firstHalfQuery += timeFilter;
secondHalfQuery += timeFilter;
return PartitionedQuery.createPartitionedQuery(
firstHalfQuery,
secondHalfQuery,
ImmutableMap.of("historyId", medianId, "compareStartTime", compareStartTime.get()));
} else {
return PartitionedQuery.createPartitionedQuery(
firstHalfQuery, secondHalfQuery, ImmutableMap.of("historyId", medianId));
}
}
private static String getJpaEntityName(Class entityType) {
Entity entityAnnotation = (Entity) entityType.getAnnotation(Entity.class);
checkState(
entityAnnotation != null, "Unexpected non-entity type %s", entityType.getSimpleName());
return Strings.isNullOrEmpty(entityAnnotation.name())
? entityType.getSimpleName()
: entityAnnotation.name();
}
/** Contains two queries that partition the target table in two. */
@AutoValue
abstract static class PartitionedQuery {
abstract String firstHalfQuery();
abstract String secondHalfQuery();
abstract ImmutableMap<String, Object> parameters();
public static PartitionedQuery createPartitionedQuery(
String firstHalfQuery, String secondHalfQuery, ImmutableMap<String, Object> parameters) {
return new AutoValue_SqlSnapshots_PartitionedQuery(
firstHalfQuery, secondHalfQuery, parameters);
}
}
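A minimal usage sketch of PartitionedQuery with a made-up median id (illustrative only):
PartitionedQuery q =
    PartitionedQuery.createPartitionedQuery(
        "select c from DomainHistory c where id <= :historyId",
        "select c from DomainHistory c where id > :historyId",
        ImmutableMap.of("historyId", 5000000L));
// q.firstHalfQuery() and q.secondHalfQuery() are then executed in parallel,
// each bound with the same q.parameters().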
/** Container that receives mixed-typed data and groups them by {@link Class}. */
static class TypedClassifier {
private final ImmutableSetMultimap<Class<?>, Object> classifiedEntities;


@@ -0,0 +1,206 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.comparedb;
import static com.google.common.base.Verify.verify;
import static org.apache.beam.sdk.values.TypeDescriptors.strings;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Strings;
import com.google.common.flogger.FluentLogger;
import google.registry.beam.common.RegistryPipelineOptions;
import google.registry.beam.common.RegistryPipelineWorkerInitializer;
import google.registry.beam.comparedb.LatestDatastoreSnapshotFinder.DatastoreSnapshotInfo;
import google.registry.beam.comparedb.ValidateSqlUtils.CompareSqlEntity;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.DomainHistory;
import google.registry.model.replay.SqlEntity;
import google.registry.persistence.PersistenceModule.JpaTransactionManagerType;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.util.SystemClock;
import java.io.Serializable;
import java.util.Optional;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.coders.SerializableCoder;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Flatten;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.WithKeys;
import org.apache.beam.sdk.values.PCollectionList;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TupleTag;
import org.joda.time.DateTime;
import org.joda.time.Duration;
/**
* Validates the asynchronous data replication process between Datastore and Cloud SQL.
*
* <p>This pipeline is to be launched by {@link google.registry.tools.ValidateDatastoreCommand} or
* {@link google.registry.tools.ValidateSqlCommand}.
*/
@DeleteAfterMigration
public class ValidateDatabasePipeline {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
/** Specifies the extra CommitLogs to load before the start of a Datastore export. */
private static final Duration COMMITLOG_START_TIME_MARGIN = Duration.standardMinutes(10);
private final ValidateDatabasePipelineOptions options;
private final LatestDatastoreSnapshotFinder datastoreSnapshotFinder;
public ValidateDatabasePipeline(
ValidateDatabasePipelineOptions options,
LatestDatastoreSnapshotFinder datastoreSnapshotFinder) {
this.options = options;
this.datastoreSnapshotFinder = datastoreSnapshotFinder;
}
@VisibleForTesting
void run(Pipeline pipeline) {
DateTime latestCommitLogTime = DateTime.parse(options.getLatestCommitLogTimestamp());
DatastoreSnapshotInfo mostRecentExport =
datastoreSnapshotFinder.getSnapshotInfo(latestCommitLogTime.toInstant());
logger.atInfo().log(
"Comparing datastore export at %s and commitlog timestamp %s.",
mostRecentExport.exportDir(), latestCommitLogTime);
Optional<String> outputPath =
Optional.ofNullable(options.getDiffOutputGcsBucket())
.map(
bucket ->
String.format(
"gs://%s/validate_database/%s/diffs.txt",
bucket, new SystemClock().nowUtc()));
outputPath.ifPresent(path -> logger.atInfo().log("Discrepancies will be logged to %s", path));
setupPipeline(
pipeline,
Optional.ofNullable(options.getSqlSnapshotId()),
mostRecentExport,
latestCommitLogTime,
Optional.ofNullable(options.getComparisonStartTimestamp()).map(DateTime::parse),
outputPath);
pipeline.run();
}
static void setupPipeline(
Pipeline pipeline,
Optional<String> sqlSnapshotId,
DatastoreSnapshotInfo mostRecentExport,
DateTime latestCommitLogTime,
Optional<DateTime> compareStartTime,
Optional<String> diffOutputPath) {
pipeline
.getCoderRegistry()
.registerCoderForClass(SqlEntity.class, SerializableCoder.of(Serializable.class));
PCollectionTuple datastoreSnapshot =
DatastoreSnapshots.loadDatastoreSnapshotByKind(
pipeline,
mostRecentExport.exportDir(),
mostRecentExport.commitLogDir(),
mostRecentExport.exportInterval().getStart().minus(COMMITLOG_START_TIME_MARGIN),
// Increase by 1ms since we want to include CommitLogs at latestCommitLogTime,
// but this parameter is exclusive.
latestCommitLogTime.plusMillis(1),
DatastoreSnapshots.ALL_DATASTORE_KINDS,
compareStartTime);
PCollectionTuple cloudSqlSnapshot =
SqlSnapshots.loadCloudSqlSnapshotByType(
pipeline, SqlSnapshots.ALL_SQL_ENTITIES, sqlSnapshotId, compareStartTime);
verify(
datastoreSnapshot.getAll().keySet().equals(cloudSqlSnapshot.getAll().keySet()),
"Expecting the same set of types in both snapshots.");
PCollectionList<String> diffLogs = PCollectionList.empty(pipeline);
for (Class<? extends SqlEntity> clazz : SqlSnapshots.ALL_SQL_ENTITIES) {
TupleTag<SqlEntity> tag = ValidateSqlUtils.createSqlEntityTupleTag(clazz);
verify(
datastoreSnapshot.has(tag), "Missing %s in Datastore snapshot.", clazz.getSimpleName());
verify(cloudSqlSnapshot.has(tag), "Missing %s in Cloud SQL snapshot.", clazz.getSimpleName());
diffLogs =
diffLogs.and(
PCollectionList.of(datastoreSnapshot.get(tag))
.and(cloudSqlSnapshot.get(tag))
.apply(
"Combine from both snapshots: " + clazz.getSimpleName(),
Flatten.pCollections())
.apply(
"Assign primary key to merged " + clazz.getSimpleName(),
WithKeys.of(ValidateDatabasePipeline::getPrimaryKeyString)
.withKeyType(strings()))
.apply("Group by primary key " + clazz.getSimpleName(), GroupByKey.create())
.apply("Compare " + clazz.getSimpleName(), ParDo.of(new CompareSqlEntity())));
}
if (diffOutputPath.isPresent()) {
diffLogs
.apply("Gather diff logs", Flatten.pCollections())
.apply(
"Output diffs",
TextIO.write()
.to(diffOutputPath.get())
/*
 * Output to a single file for ease of use since diffs should be few. If this
 * assumption turns out to be false, the user should abort the pipeline and
 * investigate why.
 */
.withoutSharding()
.withDelimiter((Strings.repeat("-", 80) + "\n").toCharArray()));
}
}
private static String getPrimaryKeyString(SqlEntity sqlEntity) {
// SqlEntity.getPrimaryKeyString only works with entities registered with Hibernate.
// We are using the BulkQueryJpaTransactionManager, which does not recognize DomainBase and
// DomainHistory. See BulkQueryEntities.java for more information.
if (sqlEntity instanceof DomainBase) {
return "DomainBase_" + ((DomainBase) sqlEntity).getRepoId();
}
if (sqlEntity instanceof DomainHistory) {
return "DomainHistory_" + ((DomainHistory) sqlEntity).getDomainHistoryId().toString();
}
return sqlEntity.getPrimaryKeyString();
}
public static void main(String[] args) {
ValidateDatabasePipelineOptions options =
PipelineOptionsFactory.fromArgs(args)
.withValidation()
.as(ValidateDatabasePipelineOptions.class);
RegistryPipelineOptions.validateRegistryPipelineOptions(options);
// Defensively set important options.
options.setIsolationOverride(TransactionIsolationLevel.TRANSACTION_REPEATABLE_READ);
options.setJpaTransactionManagerType(JpaTransactionManagerType.BULK_QUERY);
// Set up JPA in the pipeline harness (the locally executed part of the main() method). Reuse
// code in RegistryPipelineWorkerInitializer, which otherwise only applies to pipeline worker VMs.
new RegistryPipelineWorkerInitializer().beforeProcessing(options);
LatestDatastoreSnapshotFinder datastoreSnapshotFinder =
DaggerLatestDatastoreSnapshotFinder_LatestDatastoreSnapshotFinderFinderComponent.create()
.datastoreSnapshotInfoFinder();
new ValidateDatabasePipeline(options, datastoreSnapshotFinder).run(Pipeline.create(options));
}
}


@@ -0,0 +1,55 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.comparedb;
import google.registry.beam.common.RegistryPipelineOptions;
import google.registry.model.annotations.DeleteAfterMigration;
import javax.annotation.Nullable;
import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.Validation;
/** BEAM pipeline options for {@link ValidateDatabasePipeline}. */
@DeleteAfterMigration
public interface ValidateDatabasePipelineOptions extends RegistryPipelineOptions {
@Description(
"The id of the SQL snapshot to be compared with Datastore. "
+ "If null, the current state of the SQL database is used.")
@Nullable
String getSqlSnapshotId();
void setSqlSnapshotId(String snapshotId);
@Description("The latest CommitLogs to load, in ISO8601 format.")
@Validation.Required
String getLatestCommitLogTimestamp();
void setLatestCommitLogTimestamp(String commitLogEndTimestamp);
@Description(
"For history entries and EPP resources, only those modified strictly after this time are "
+ "included in comparison. Value is in ISO8601 format. "
+ "Other entity types are not affected.")
@Nullable
String getComparisonStartTimestamp();
void setComparisonStartTimestamp(String comparisonStartTimestamp);
@Description("The GCS bucket where discrepancies found during comparison should be logged.")
@Nullable
String getDiffOutputGcsBucket();
void setDiffOutputGcsBucket(String gcsBucket);
}
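As a hedged illustration (the timestamps and bucket are made up): Beam derives flag names from the getter names above, so the options can be populated like this:
ValidateDatabasePipelineOptions options =
    PipelineOptionsFactory.fromArgs(
            "--latestCommitLogTimestamp=2022-03-01T00:00:00Z",
            "--comparisonStartTimestamp=2022-02-01T00:00:00Z",
            "--diffOutputGcsBucket=my-bucket")
        .withValidation()
        .as(ValidateDatabasePipelineOptions.class);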


@@ -1,240 +0,0 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.comparedb;
import static com.google.common.base.Verify.verify;
import static org.apache.beam.sdk.values.TypeDescriptors.strings;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Preconditions;
import com.google.common.base.Stopwatch;
import com.google.common.flogger.FluentLogger;
import google.registry.beam.common.DatabaseSnapshot;
import google.registry.beam.common.RegistryPipelineWorkerInitializer;
import google.registry.beam.comparedb.LatestDatastoreSnapshotFinder.DatastoreSnapshotInfo;
import google.registry.beam.comparedb.ValidateSqlUtils.CompareSqlEntity;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.DomainHistory;
import google.registry.model.replay.SqlEntity;
import google.registry.model.replay.SqlReplayCheckpoint;
import google.registry.model.server.Lock;
import google.registry.persistence.PersistenceModule.JpaTransactionManagerType;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.persistence.transaction.TransactionManagerFactory;
import google.registry.util.RequestStatusChecker;
import java.io.Serializable;
import java.util.Optional;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult.State;
import org.apache.beam.sdk.coders.SerializableCoder;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Flatten;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.WithKeys;
import org.apache.beam.sdk.values.PCollectionList;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TupleTag;
import org.joda.time.DateTime;
import org.joda.time.Duration;
/**
* Validates the asynchronous data replication process from Datastore (primary storage) to Cloud SQL
* (secondary storage).
*/
@DeleteAfterMigration
public class ValidateSqlPipeline {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
/** Specifies the extra CommitLogs to load before the start of a Database export. */
private static final Duration COMMITLOG_START_TIME_MARGIN = Duration.standardMinutes(10);
/**
* Name of the lock used by the commitlog replay process.
*
* <p>See {@link google.registry.backup.ReplayCommitLogsToSqlAction} for more information.
*/
private static final String COMMITLOG_REPLAY_LOCK_NAME = "ReplayCommitLogsToSqlAction";
private static final Duration REPLAY_LOCK_LEASE_LENGTH = Duration.standardHours(1);
private static final java.time.Duration REPLAY_LOCK_ACQUIRE_TIMEOUT =
java.time.Duration.ofMinutes(6);
private static final java.time.Duration REPLAY_LOCK_ACQUIRE_DELAY =
java.time.Duration.ofSeconds(30);
private final ValidateSqlPipelineOptions options;
private final DatastoreSnapshotInfo mostRecentExport;
public ValidateSqlPipeline(
ValidateSqlPipelineOptions options, DatastoreSnapshotInfo mostRecentExport) {
this.options = options;
this.mostRecentExport = mostRecentExport;
}
void run() {
run(Pipeline.create(options));
}
@VisibleForTesting
void run(Pipeline pipeline) {
// TODO(weiminyu): ensure migration stage is DATASTORE_PRIMARY or DATASTORE_PRIMARY_READ_ONLY
Optional<Lock> lock = acquireCommitLogReplayLock();
if (lock.isPresent()) {
logger.atInfo().log("Acquired CommitLog Replay lock.");
} else {
throw new RuntimeException("Failed to acquire CommitLog Replay lock.");
}
try {
DateTime latestCommitLogTime =
TransactionManagerFactory.jpaTm().transact(() -> SqlReplayCheckpoint.get());
Preconditions.checkState(
latestCommitLogTime.isAfter(mostRecentExport.exportInterval().getEnd()),
"Cannot recreate Datastore snapshot since target time is in the middle of an export.");
try (DatabaseSnapshot databaseSnapshot = DatabaseSnapshot.createSnapshot()) {
// Eagerly release the commitlog replay lock so that replay can resume.
lock.ifPresent(Lock::releaseSql);
lock = Optional.empty();
setupPipeline(pipeline, Optional.of(databaseSnapshot.getSnapshotId()), latestCommitLogTime);
State state = pipeline.run().waitUntilFinish();
if (!State.DONE.equals(state)) {
throw new IllegalStateException("Unexpected pipeline state: " + state);
}
}
} finally {
lock.ifPresent(Lock::releaseSql);
}
}
void setupPipeline(
Pipeline pipeline, Optional<String> sqlSnapshotId, DateTime latestCommitLogTime) {
pipeline
.getCoderRegistry()
.registerCoderForClass(SqlEntity.class, SerializableCoder.of(Serializable.class));
PCollectionTuple datastoreSnapshot =
DatastoreSnapshots.loadDatastoreSnapshotByKind(
pipeline,
mostRecentExport.exportDir(),
mostRecentExport.commitLogDir(),
mostRecentExport.exportInterval().getStart().minus(COMMITLOG_START_TIME_MARGIN),
// Increase by 1ms since we want to include commitLogs latestCommitLogTime but
// this parameter is exclusive.
latestCommitLogTime.plusMillis(1),
DatastoreSnapshots.ALL_DATASTORE_KINDS);
PCollectionTuple cloudSqlSnapshot =
SqlSnapshots.loadCloudSqlSnapshotByType(
pipeline, SqlSnapshots.ALL_SQL_ENTITIES, sqlSnapshotId);
verify(
datastoreSnapshot.getAll().keySet().equals(cloudSqlSnapshot.getAll().keySet()),
"Expecting the same set of types in both snapshots.");
for (Class<? extends SqlEntity> clazz : SqlSnapshots.ALL_SQL_ENTITIES) {
TupleTag<SqlEntity> tag = ValidateSqlUtils.createSqlEntityTupleTag(clazz);
verify(
datastoreSnapshot.has(tag), "Missing %s in Datastore snapshot.", clazz.getSimpleName());
verify(cloudSqlSnapshot.has(tag), "Missing %s in Cloud SQL snapshot.", clazz.getSimpleName());
PCollectionList.of(datastoreSnapshot.get(tag))
.and(cloudSqlSnapshot.get(tag))
.apply("Combine from both snapshots: " + clazz.getSimpleName(), Flatten.pCollections())
.apply(
"Assign primary key to merged " + clazz.getSimpleName(),
WithKeys.of(ValidateSqlPipeline::getPrimaryKeyString).withKeyType(strings()))
.apply("Group by primary key " + clazz.getSimpleName(), GroupByKey.create())
.apply("Compare " + clazz.getSimpleName(), ParDo.of(new CompareSqlEntity()));
}
}
private static String getPrimaryKeyString(SqlEntity sqlEntity) {
// SqlEntity.getPrimaryKeyString only works with entities registered with Hibernate.
// We are using the BulkQueryJpaTransactionManager, which does not recognize DomainBase and
// DomainHistory. See BulkQueryEntities.java for more information.
if (sqlEntity instanceof DomainBase) {
return "DomainBase_" + ((DomainBase) sqlEntity).getRepoId();
}
if (sqlEntity instanceof DomainHistory) {
return "DomainHistory_" + ((DomainHistory) sqlEntity).getDomainHistoryId().toString();
}
return sqlEntity.getPrimaryKeyString();
}
private static Optional<Lock> acquireCommitLogReplayLock() {
Stopwatch stopwatch = Stopwatch.createStarted();
while (stopwatch.elapsed().minus(REPLAY_LOCK_ACQUIRE_TIMEOUT).isNegative()) {
Optional<Lock> lock = tryAcquireCommitLogReplayLock();
if (lock.isPresent()) {
return lock;
}
logger.atInfo().log("Failed to acquired CommitLog Replay lock. Will retry...");
try {
Thread.sleep(REPLAY_LOCK_ACQUIRE_DELAY.toMillis());
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new RuntimeException("Interrupted.");
}
}
return Optional.empty();
}
private static Optional<Lock> tryAcquireCommitLogReplayLock() {
return Lock.acquireSql(
COMMITLOG_REPLAY_LOCK_NAME,
null,
REPLAY_LOCK_LEASE_LENGTH,
getLockingRequestStatusChecker(),
false);
}
/**
* Returns a fake implementation of {@link RequestStatusChecker} that is required for lock
* acquisition. The default implementation is AppEngine-specific and is unusable on GCE.
*/
private static RequestStatusChecker getLockingRequestStatusChecker() {
return new RequestStatusChecker() {
@Override
public String getLogId() {
return "ValidateSqlPipeline";
}
@Override
public boolean isRunning(String requestLogId) {
return true;
}
};
}
public static void main(String[] args) {
ValidateSqlPipelineOptions options =
PipelineOptionsFactory.fromArgs(args).withValidation().as(ValidateSqlPipelineOptions.class);
// Defensively set important options.
options.setIsolationOverride(TransactionIsolationLevel.TRANSACTION_REPEATABLE_READ);
options.setJpaTransactionManagerType(JpaTransactionManagerType.BULK_QUERY);
// Reuse Dataflow worker initialization code to set up JPA in the pipeline harness.
new RegistryPipelineWorkerInitializer().beforeProcessing(options);
DatastoreSnapshotInfo mostRecentExport =
DaggerLatestDatastoreSnapshotFinder_LatestDatastoreSnapshotFinderFinderComponent.create()
.datastoreSnapshotInfoFinder()
.getSnapshotInfo();
new ValidateSqlPipeline(options, mostRecentExport).run(Pipeline.create(options));
}
}


@@ -20,10 +20,8 @@ import static google.registry.persistence.transaction.TransactionManagerFactory.
import com.google.common.base.Preconditions;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.google.common.flogger.FluentLogger;
import google.registry.beam.initsql.Transforms;
import google.registry.config.RegistryEnvironment;
import google.registry.model.BackupGroupRoot;
import google.registry.model.EppResource;
import google.registry.model.ImmutableObject;
import google.registry.model.annotations.DeleteAfterMigration;
@@ -38,6 +36,7 @@ import google.registry.model.poll.PollMessage;
import google.registry.model.registrar.Registrar;
import google.registry.model.replay.SqlEntity;
import google.registry.model.reporting.HistoryEntry;
import google.registry.util.DiffUtils;
import java.lang.reflect.Field;
import java.math.BigInteger;
import java.util.HashMap;
@@ -52,10 +51,9 @@ import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TupleTag;
/** Helpers for use by {@link ValidateSqlPipeline}. */
/** Helpers for use by {@link ValidateDatabasePipeline}. */
@DeleteAfterMigration
final class ValidateSqlUtils {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private ValidateSqlUtils() {}
@@ -67,7 +65,7 @@ final class ValidateSqlUtils {
* Query template for finding the median value of the {@code history_revision_id} column in one of
* the History tables.
*
* <p>The {@link ValidateSqlPipeline} uses this query to parallelize the query to some of the
* <p>The {@link ValidateDatabasePipeline} uses this query to parallelize the query to some of the
* history tables. Although the {@code repo_id} column is the leading column in the primary keys
* of these tables, in practice and with production data, division by {@code history_revision_id}
* works slightly faster for unknown reasons.
@@ -99,13 +97,12 @@ final class ValidateSqlUtils {
return new TupleTag<SqlEntity>(actualType.getSimpleName()) {};
}
static class CompareSqlEntity extends DoFn<KV<String, Iterable<SqlEntity>>, Void> {
static class CompareSqlEntity extends DoFn<KV<String, Iterable<SqlEntity>>, String> {
private final HashMap<String, Counter> totalCounters = new HashMap<>();
private final HashMap<String, Counter> missingCounters = new HashMap<>();
private final HashMap<String, Counter> unequalCounters = new HashMap<>();
private final HashMap<String, Counter> badEntityCounters = new HashMap<>();
private volatile boolean logPrinted = false;
private final HashMap<String, Counter> duplicateEntityCounters = new HashMap<>();
private String getCounterKey(Class<?> clazz) {
return PollMessage.class.isAssignableFrom(clazz) ? "PollMessage" : clazz.getSimpleName();
@@ -120,80 +117,86 @@ final class ValidateSqlUtils {
counterKey, Metrics.counter("CompareDB", "Missing In One DB: " + counterKey));
unequalCounters.put(counterKey, Metrics.counter("CompareDB", "Not Equal:" + counterKey));
badEntityCounters.put(counterKey, Metrics.counter("CompareDB", "Bad Entities:" + counterKey));
duplicateEntityCounters.put(
counterKey, Metrics.counter("CompareDB", "Duplicate Entities:" + counterKey));
}
String duplicateEntityLog(String key, ImmutableList<SqlEntity> entities) {
return String.format("%s: %d entities.", key, entities.size());
}
String unmatchedEntityLog(String key, SqlEntity entry) {
// For a PollMessage only found in Datastore, the key alone is not enough to query for it.
return String.format("Missing in one DB:\n%s", entry instanceof PollMessage ? entry : key);
}
/**
* A rudimentary debugging helper that prints the first pair of unequal entities in each worker.
* This will be removed when we start exporting such entities to GCS.
*/
void logDiff(String key, Object entry0, Object entry1) {
if (logPrinted) {
return;
}
logPrinted = true;
String unEqualEntityLog(String key, SqlEntity entry0, SqlEntity entry1) {
Map<String, Object> fields0 = ((ImmutableObject) entry0).toDiffableFieldMap();
Map<String, Object> fields1 = ((ImmutableObject) entry1).toDiffableFieldMap();
StringBuilder sb = new StringBuilder();
fields0.forEach(
(field, value) -> {
if (fields1.containsKey(field)) {
if (!Objects.equals(value, fields1.get(field))) {
sb.append(field + " not match: " + value + " -> " + fields1.get(field) + "\n");
}
} else {
sb.append(field + "Not found in entity 2\n");
}
});
fields1.forEach(
(field, value) -> {
if (!fields0.containsKey(field)) {
sb.append(field + "Not found in entity 1\n");
}
});
logger.atWarning().log(key + " " + sb.toString());
return key + " " + DiffUtils.prettyPrintEntityDeepDiff(fields0, fields1);
}
String badEntitiesLog(String key, SqlEntity entry0, SqlEntity entry1) {
Map<String, Object> fields0 = ((ImmutableObject) entry0).toDiffableFieldMap();
Map<String, Object> fields1 = ((ImmutableObject) entry1).toDiffableFieldMap();
return String.format(
"Failed to parse one or both entities for key %s:\n%s\n",
key, DiffUtils.prettyPrintEntityDeepDiff(fields0, fields1));
}
@ProcessElement
public void processElement(@Element KV<String, Iterable<SqlEntity>> kv) {
public void processElement(
@Element KV<String, Iterable<SqlEntity>> kv, OutputReceiver<String> out) {
ImmutableList<SqlEntity> entities = ImmutableList.copyOf(kv.getValue());
verify(!entities.isEmpty(), "Can't happen: no value for key %s.", kv.getKey());
verify(entities.size() <= 2, "Unexpected duplicates for key %s", kv.getKey());
String counterKey = getCounterKey(entities.get(0).getClass());
ensureCounterExists(counterKey);
totalCounters.get(counterKey).inc();
if (entities.size() > 2) {
// Duplicates may happen with Cursors imported across projects: a Cursor's Datastore key (the
// id field) encodes the project name and is not fixed by the importing job.
duplicateEntityCounters.get(counterKey).inc();
out.output(duplicateEntityLog(kv.getKey(), entities) + "\n");
return;
}
if (entities.size() == 1) {
if (isSpecialCaseProberEntity(entities.get(0))) {
return;
}
missingCounters.get(counterKey).inc();
// Temporary debugging help. See logDiff() above.
if (!logPrinted) {
logPrinted = true;
logger.atWarning().log("Unexpected single entity: %s", kv.getKey());
}
out.output(unmatchedEntityLog(kv.getKey(), entities.get(0)) + "\n");
return;
}
SqlEntity entity0 = entities.get(0);
SqlEntity entity1 = entities.get(1);
if (isSpecialCaseProberEntity(entity0) && isSpecialCaseProberEntity(entity1)) {
// Ignore prober-related data: their deletions are not propagated from Datastore to SQL.
// When code reaches here, in most cases it involves one soft deleted entity in Datastore
// and an SQL entity with its pre-deletion status.
return;
}
SqlEntity entity0;
SqlEntity entity1;
try {
entity0 = normalizeEntity(entities.get(0));
entity1 = normalizeEntity(entities.get(1));
entity0 = normalizeEntity(entity0);
entity1 = normalizeEntity(entity1);
} catch (Exception e) {
// Temporary debugging help. See logDiff() above.
if (!logPrinted) {
logPrinted = true;
badEntityCounters.get(counterKey).inc();
}
badEntityCounters.get(counterKey).inc();
out.output(badEntitiesLog(kv.getKey(), entity0, entity1));
return;
}
if (!Objects.equals(entity0, entity1)) {
unequalCounters.get(counterKey).inc();
logDiff(kv.getKey(), entities.get(0), entities.get(1));
out.output(unEqualEntityLog(kv.getKey(), entities.get(0), entities.get(1)));
}
}
}
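A sketch of the per-key contract enforced by CompareSqlEntity above (entity0/entity1 stand for the two parsed entities; illustrative, not part of the diff):
// Each key groups the entities sharing a primary key across both databases:
//   1 entity   -> missing in one DB, reported via unmatchedEntityLog;
//   2 entities -> normalized, compared, and diffed field by field if unequal;
//   3+         -> counted and reported as duplicates.
Map<String, Object> datastoreFields = ((ImmutableObject) entity0).toDiffableFieldMap();
Map<String, Object> sqlFields = ((ImmutableObject) entity1).toDiffableFieldMap();
boolean unequal = !Objects.equals(datastoreFields, sqlFields);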
@@ -218,15 +221,6 @@ final class ValidateSqlUtils {
*/
static SqlEntity normalizeEppResource(SqlEntity eppResource) {
try {
if (isSpecialCaseProberEntity(eppResource)) {
// Clearing some timestamps. See isSpecialCaseProberEntity() for reasons.
Field lastUpdateTime = BackupGroupRoot.class.getDeclaredField("updateTimestamp");
lastUpdateTime.setAccessible(true);
lastUpdateTime.set(eppResource, null);
Field deletionTime = EppResource.class.getDeclaredField("deletionTime");
deletionTime.setAccessible(true);
deletionTime.set(eppResource, null);
}
Field authField =
eppResource instanceof DomainContent
? DomainContent.class.getDeclaredField("authInfo")


@@ -248,6 +248,9 @@ public class RdeIO {
// Now that we're done, output roll the cursor forward.
if (key.manual()) {
logger.atInfo().log("Manual operation; not advancing cursor or enqueuing upload task.");
// Temporary measure to run RDE in beam in parallel with the daily MapReduce based RDE runs.
} else if (tm().isOfy()) {
logger.atInfo().log("Ofy is primary TM; not advancing cursor or enqueuing upload task.");
} else {
outputReceiver.output(KV.of(key, revision));
}
@@ -294,10 +297,14 @@ public class RdeIO {
logger.atInfo().log(
"Rolled forward %s on %s cursor to %s.", key.cursor(), key.tld(), newPosition);
RdeRevision.saveRevision(key.tld(), key.watermark(), key.mode(), revision);
// Enqueueing a task is a side effect that is not undone if the transaction rolls
// back. So this may result in multiple copies of the same task being processed.
// This is fine because the RdeUploadAction is guarded by a lock and tracks progress
// by cursor. The BrdaCopyAction writes a file to GCS, which is an atomic action.
if (key.mode() == RdeMode.FULL) {
cloudTasksUtils.enqueue(
RDE_UPLOAD_QUEUE,
CloudTasksUtils.createPostTask(
cloudTasksUtils.createPostTask(
RdeUploadAction.PATH,
Service.BACKEND.getServiceId(),
ImmutableMultimap.of(
@@ -308,7 +315,7 @@ public class RdeIO {
} else {
cloudTasksUtils.enqueue(
BRDA_QUEUE,
CloudTasksUtils.createPostTask(
cloudTasksUtils.createPostTask(
BrdaCopyAction.PATH,
Service.BACKEND.getServiceId(),
ImmutableMultimap.of(


@@ -21,6 +21,7 @@ import dagger.Module;
import dagger.Provides;
import google.registry.config.CredentialModule.DefaultCredential;
import google.registry.config.RegistryConfig.Config;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import google.registry.util.CloudTasksUtils.GcpCloudTasksClient;
import google.registry.util.CloudTasksUtils.SerializableCloudTasksClient;
@@ -46,8 +47,9 @@ public abstract class CloudTasksUtilsModule {
@Config("projectId") String projectId,
@Config("locationId") String locationId,
SerializableCloudTasksClient client,
Retrier retrier) {
return new CloudTasksUtils(retrier, projectId, locationId, client);
Retrier retrier,
Clock clock) {
return new CloudTasksUtils(retrier, clock, projectId, locationId, client);
}
// Provides a supplier instead of using a Dagger @Provider because the latter is not serializable.


@@ -33,6 +33,7 @@ import com.google.common.collect.ImmutableSet;
import com.google.common.collect.ImmutableSortedMap;
import dagger.Module;
import dagger.Provides;
import google.registry.persistence.transaction.JpaTransactionManager;
import google.registry.util.TaskQueueUtils;
import google.registry.util.YamlUtils;
import java.lang.annotation.Documented;
@@ -392,19 +393,26 @@ public final class RegistryConfig {
@Provides
@Config("cloudSqlJdbcUrl")
public static String providesCloudSqlJdbcUrl(RegistryConfigSettings config) {
public static String provideCloudSqlJdbcUrl(RegistryConfigSettings config) {
return config.cloudSql.jdbcUrl;
}
@Provides
@Config("cloudSqlInstanceConnectionName")
public static String providesCloudSqlInstanceConnectionName(RegistryConfigSettings config) {
public static String provideCloudSqlInstanceConnectionName(RegistryConfigSettings config) {
return config.cloudSql.instanceConnectionName;
}
@Provides
@Config("cloudSqlReplicaInstanceConnectionName")
public static Optional<String> provideCloudSqlReplicaInstanceConnectionName(
RegistryConfigSettings config) {
return Optional.ofNullable(config.cloudSql.replicaInstanceConnectionName);
}
@Provides
@Config("cloudSqlDbInstanceName")
public static String providesCloudSqlDbInstance(RegistryConfigSettings config) {
public static String provideCloudSqlDbInstance(RegistryConfigSettings config) {
// Format of instanceConnectionName: project-id:region:instance-name
int lastColonIndex = config.cloudSql.instanceConnectionName.lastIndexOf(':');
return config.cloudSql.instanceConnectionName.substring(lastColonIndex + 1);
@@ -1524,6 +1532,31 @@ public final class RegistryConfig {
return CONFIG_SETTINGS.get().hibernate.hikariIdleTimeout;
}
/**
* Returns the JDBC batch size.
*
* <p>JDBC-specific: the driver's default batch size is 0, which means that every INSERT
* statement is sent to the database individually. Batching groups multiple inserts into a
* single INSERT statement, which can dramatically increase speed in situations with many
* inserts.
*
* <p>The Hibernate user guide
* (https://docs.jboss.org/hibernate/orm/5.6/userguide/html_single/Hibernate_User_Guide.html)
* recommends a value between 10 and 50.
*/
public static String getHibernateJdbcBatchSize() {
return CONFIG_SETTINGS.get().hibernate.jdbcBatchSize;
}
/**
* Returns the JDBC fetch size.
*
* <p>Postgresql-specific: driver default fetch size is 0, which disables streaming result sets.
* Here we set a small default geared toward Nomulus server transactions. Large queries can
* override the defaults using {@link JpaTransactionManager#setQueryFetchSize}.
*/
public static String getHibernateJdbcFetchSize() {
return CONFIG_SETTINGS.get().hibernate.jdbcFetchSize;
}
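These two getters map onto standard Hibernate property keys; a hedged wiring sketch (the property names are Hibernate's, but this is not the actual PersistenceModule code):
java.util.Properties props = new java.util.Properties();
props.put("hibernate.jdbc.batch_size", getHibernateJdbcBatchSize());
props.put("hibernate.jdbc.fetch_size", getHibernateJdbcFetchSize());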
/** Returns the roid suffix to be used for the roids of all contacts and hosts. */
public static String getContactAndHostRoidSuffix() {
return CONFIG_SETTINGS.get().registryPolicy.contactAndHostRoidSuffix;


@@ -120,6 +120,8 @@ public class RegistryConfigSettings {
public String hikariMinimumIdle;
public String hikariMaximumPoolSize;
public String hikariIdleTimeout;
public String jdbcBatchSize;
public String jdbcFetchSize;
}
/** Configuration for Cloud SQL. */
@@ -128,6 +130,7 @@ public class RegistryConfigSettings {
// TODO(05012021): remove username field after it is removed from all yaml files.
public String username;
public String instanceConnectionName;
public String replicaInstanceConnectionName;
}
/** Configuration for Apache Beam (Cloud Dataflow). */


@@ -221,6 +221,17 @@ hibernate:
hikariMinimumIdle: 1
hikariMaximumPoolSize: 10
hikariIdleTimeout: 300000
# The batch size is the number of insertions/updates in a single transaction
# that are grouped together into one INSERT/UPDATE statement.
# A larger batch size is useful when inserting or updating many entities in a
# single transaction. Hibernate docs
# (https://docs.jboss.org/hibernate/orm/5.6/userguide/html_single/Hibernate_User_Guide.html)
# recommend between 10 and 50.
jdbcBatchSize: 50
# The fetch size is the number of entities retrieved at a time from the
# database cursor. Here we set a small default geared toward Nomulus server
# transactions. Large queries can override the defaults on a per-query basis.
jdbcFetchSize: 20
cloudSql:
# jdbc url for the Cloud SQL database.
@@ -231,6 +242,10 @@ cloudSql:
jdbcUrl: jdbc:postgresql://localhost
# This name is used by Cloud SQL when connecting to the database.
instanceConnectionName: project-id:region:instance-id
# If non-null, we will use this instance for certain read-only actions or
# pipelines, e.g. RDE, in order to offload some work from the primary
# instance. Expect any write actions on this instance to fail.
replicaInstanceConnectionName: null
cloudDns:
# Set both properties to null in Production.


@@ -20,7 +20,6 @@ import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import java.util.Optional;
import javax.inject.Inject;
@@ -35,7 +34,6 @@ public final class CommitLogFanoutAction implements Runnable {
public static final String BUCKET_PARAM = "bucket";
@Inject Clock clock;
@Inject CloudTasksUtils cloudTasksUtils;
@Inject @Parameter("endpoint") String endpoint;
@@ -43,18 +41,15 @@ public final class CommitLogFanoutAction implements Runnable {
@Inject @Parameter("jitterSeconds") Optional<Integer> jitterSeconds;
@Inject CommitLogFanoutAction() {}
@Override
public void run() {
for (int bucketId : CommitLogBucket.getBucketIds()) {
cloudTasksUtils.enqueue(
queue,
CloudTasksUtils.createPostTask(
cloudTasksUtils.createPostTaskWithJitter(
endpoint,
Service.BACKEND.toString(),
ImmutableMultimap.of(BUCKET_PARAM, Integer.toString(bucketId)),
clock,
jitterSeconds));
}
}


@@ -45,7 +45,6 @@ import google.registry.request.ParameterMap;
import google.registry.request.RequestParameters;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import java.util.Optional;
import java.util.stream.Stream;
@@ -98,7 +97,6 @@ public final class TldFanoutAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
@Inject Clock clock;
@Inject CloudTasksUtils cloudTasksUtils;
@Inject Response response;
@Inject @Parameter(ENDPOINT_PARAM) String endpoint;
@@ -159,7 +157,7 @@ public final class TldFanoutAction implements Runnable {
params = ArrayListMultimap.create(params);
params.put(RequestParameters.PARAM_TLD, tld);
}
return CloudTasksUtils.createPostTask(
endpoint, Service.BACKEND.toString(), params, clock, jitterSeconds);
return cloudTasksUtils.createPostTaskWithJitter(
endpoint, Service.BACKEND.toString(), params, jitterSeconds);
}
}


@@ -422,6 +422,12 @@ have been in the database for a certain period of time. -->
<url-pattern>/_dr/task/createSyntheticHistoryEntries</url-pattern>
</servlet-mapping>
<!-- Action to sync Datastore to a snapshot of the primary SQL database. -->
<servlet-mapping>
<servlet-name>backend-servlet</servlet-name>
<url-pattern>/_dr/task/syncDatastoreToSqlSnapshot</url-pattern>
</servlet-mapping>
<!-- Security config -->
<security-constraint>
<web-resource-collection>


@@ -36,6 +36,19 @@
<target>backend</target>
</cron>
<cron>
<url>/_dr/task/rdeStaging?beam=true</url>
<description>
This job generates a full RDE escrow deposit as a single gigantic XML
document using the Beam pipeline regardless of the current TM
configuration and streams it to cloud storage. It does not trigger the
subsequent upload tasks and is meant to run in parallel with the main cron
job in order to compare the results from both runs.
</description>
<schedule>every 8 hours from 00:07 to 20:00</schedule>
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/cron/fanout?queue=rde-upload&endpoint=/_dr/task/rdeUpload&forEachRealTld]]></url>
<description>
@@ -240,16 +253,6 @@
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/cron/fanout?queue=retryable-cron-tasks&endpoint=/_dr/task/deleteProberData&runInEmpty]]></url>
<description>
This job clears out data from probers and runs once a week.
</description>
<schedule>every monday 14:00</schedule>
<timezone>UTC</timezone>
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/cron/fanout?queue=retryable-cron-tasks&endpoint=/_dr/task/exportReservedTerms&forEachRealTld]]></url>
<description>
@@ -349,6 +352,15 @@
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/cron/replicateToDatastore]]></url>
<description>
Replays recent transactions from SQL to the Datastore secondary backend.
</description>
<schedule>every 3 minutes</schedule>
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/task/wipeOutContactHistoryPii]]></url>
<description>


@@ -168,15 +168,6 @@
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/task/sendExpiringCertificateNotificationEmail]]></url>
<description>
This job runs an action that sends emails to partners if their certificates are expiring soon.
</description>
<schedule>every day 04:30</schedule>
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/cron/fanout?queue=export-snapshot&endpoint=/_dr/task/backupDatastore&runInEmpty]]></url>
<description>
@@ -191,16 +182,6 @@
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/cron/fanout?queue=retryable-cron-tasks&endpoint=/_dr/task/deleteProberData&runInEmpty]]></url>
<description>
This job clears out data from probers and runs once a week.
</description>
<schedule>every monday 14:00</schedule>
<timezone>UTC</timezone>
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_dr/cron/fanout?queue=retryable-cron-tasks&endpoint=/_dr/task/exportReservedTerms&forEachRealTld]]></url>
<description>


@@ -26,7 +26,6 @@ import com.google.common.base.Joiner;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSortedSet;
import com.google.common.collect.Iterables;
import com.google.common.collect.Streams;
import com.google.common.flogger.FluentLogger;
import com.google.common.net.MediaType;
import google.registry.config.RegistryConfig.Config;
@@ -143,7 +142,7 @@ public class ExportPremiumTermsAction implements Runnable {
PremiumListDao.getLatestRevision(premiumListName).isPresent(),
"Could not load premium list for " + tld);
SortedSet<String> premiumTerms =
Streams.stream(PremiumListDao.loadAllPremiumEntries(premiumListName))
PremiumListDao.loadAllPremiumEntries(premiumListName).stream()
.map(PremiumEntry::toString)
.collect(ImmutableSortedSet.toImmutableSortedSet(String::compareTo));


@@ -14,8 +14,6 @@
package google.registry.export.sheet;
import static com.google.appengine.api.taskqueue.QueueFactory.getQueue;
import static com.google.appengine.api.taskqueue.TaskOptions.Builder.withUrl;
import static com.google.common.net.MediaType.PLAIN_TEXT_UTF_8;
import static google.registry.request.Action.Method.POST;
import static javax.servlet.http.HttpServletResponse.SC_BAD_REQUEST;
@@ -23,7 +21,6 @@ import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_NO_CONTENT;
import static javax.servlet.http.HttpServletResponse.SC_OK;
import com.google.appengine.api.taskqueue.TaskOptions.Method;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryConfig.Config;
import google.registry.request.Action;
@@ -100,7 +97,7 @@ public class SyncRegistrarsSheetAction implements Runnable {
}
public static final String PATH = "/_dr/task/syncRegistrarsSheet";
private static final String QUEUE = "sheet";
public static final String QUEUE = "sheet";
private static final String LOCK_NAME = "Synchronize registrars sheet";
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
@@ -144,11 +141,4 @@ public class SyncRegistrarsSheetAction implements Runnable {
Result.LOCKED.send(response, null);
}
}
/**
* Enqueues a sync registrar sheet task targeting the App Engine service specified by hostname.
*/
public static void enqueueRegistrarSheetSync(String hostname) {
getQueue(QUEUE).add(withUrl(PATH).method(Method.GET).header("Host", hostname));
}
}

View File

@@ -169,6 +169,7 @@ import org.joda.time.Duration;
* @error {@link DomainFlowUtils.FeesMismatchException}
* @error {@link DomainFlowUtils.FeesRequiredDuringEarlyAccessProgramException}
* @error {@link DomainFlowUtils.FeesRequiredForPremiumNameException}
* @error {@link DomainFlowUtils.InvalidDsRecordException}
* @error {@link DomainFlowUtils.InvalidIdnDomainLabelException}
* @error {@link DomainFlowUtils.InvalidPunycodeException}
* @error {@link DomainFlowUtils.InvalidTcnIdChecksumException}

View File

@@ -129,6 +129,7 @@ import google.registry.model.tld.label.ReservedList;
import google.registry.model.tmch.ClaimsListDao;
import google.registry.persistence.VKey;
import google.registry.tldconfig.idn.IdnLabelValidator;
import google.registry.tools.DigestType;
import google.registry.util.Idn;
import java.math.BigDecimal;
import java.util.Collection;
@@ -144,6 +145,7 @@ import org.joda.money.CurrencyUnit;
import org.joda.money.Money;
import org.joda.time.DateTime;
import org.joda.time.Duration;
import org.xbill.DNS.DNSSEC.Algorithm;
/** Static utility functions for domain flows. */
public class DomainFlowUtils {
@@ -293,13 +295,59 @@ public class DomainFlowUtils {
/** Check that the DS data that will be set on a domain is valid. */
static void validateDsData(Set<DelegationSignerData> dsData) throws EppException {
if (dsData != null && dsData.size() > MAX_DS_RECORDS_PER_DOMAIN) {
throw new TooManyDsRecordsException(
String.format(
"A maximum of %s DS records are allowed per domain.", MAX_DS_RECORDS_PER_DOMAIN));
if (dsData != null) {
if (dsData.size() > MAX_DS_RECORDS_PER_DOMAIN) {
throw new TooManyDsRecordsException(
String.format(
"A maximum of %s DS records are allowed per domain.", MAX_DS_RECORDS_PER_DOMAIN));
}
ImmutableList<DelegationSignerData> invalidAlgorithms =
dsData.stream()
.filter(ds -> !validateAlgorithm(ds.getAlgorithm()))
.collect(toImmutableList());
if (!invalidAlgorithms.isEmpty()) {
throw new InvalidDsRecordException(
String.format(
"Domain contains DS record(s) with an invalid algorithm wire value: %s",
invalidAlgorithms));
}
ImmutableList<DelegationSignerData> invalidDigestTypes =
dsData.stream()
.filter(ds -> !DigestType.fromWireValue(ds.getDigestType()).isPresent())
.collect(toImmutableList());
if (!invalidDigestTypes.isEmpty()) {
throw new InvalidDsRecordException(
String.format(
"Domain contains DS record(s) with an invalid digest type: %s",
invalidDigestTypes));
}
ImmutableList<DelegationSignerData> digestsWithInvalidDigestLength =
dsData.stream()
.filter(
ds ->
DigestType.fromWireValue(ds.getDigestType()).isPresent()
&& (ds.getDigest().length
!= DigestType.fromWireValue(ds.getDigestType()).get().getBytes()))
.collect(toImmutableList());
if (!digestsWithInvalidDigestLength.isEmpty()) {
throw new InvalidDsRecordException(
String.format(
"Domain contains DS record(s) with an invalid digest length: %s",
digestsWithInvalidDigestLength));
}
}
}
public static boolean validateAlgorithm(int alg) {
if (alg > 255 || alg < 0) {
return false;
}
// Algorithms that are reserved or unassigned will just return a string representation of their
// integer wire value.
String algorithm = Algorithm.string(alg);
return !algorithm.equals(Integer.toString(alg));
}
/** We only allow specifying years in a period. */
static Period verifyUnitIsYears(Period period) throws EppException {
if (!checkNotNull(period).getUnit().equals(Period.Unit.YEARS)) {
@@ -1077,7 +1125,16 @@ public class DomainFlowUtils {
// Only cancel fields which are cancelable
if (cancelableFields.contains(record.getReportField())) {
int cancelledAmount = -1 * record.getReportAmount();
recordsBuilder.add(record.asBuilder().setReportAmount(cancelledAmount).build());
// NB: It's necessary to create a new DomainTransactionRecord from scratch so that we
// don't retain the ID of the previous record to cancel. If we keep the ID, Hibernate
// will remove that record from the DB entirely as the record will be re-parented on
// this DomainHistory being created now.
recordsBuilder.add(
DomainTransactionRecord.create(
record.getTld(),
record.getReportingTime(),
record.getReportField(),
cancelledAmount));
}
}
}
@@ -1217,6 +1274,13 @@ public class DomainFlowUtils {
}
}
/** Domain has an invalid DS record. */
static class InvalidDsRecordException extends ParameterValuePolicyErrorException {
public InvalidDsRecordException(String message) {
super(message);
}
}
/** Domain name is under tld which doesn't exist. */
static class TldDoesNotExistException extends ParameterValueRangeErrorException {
public TldDoesNotExistException(String tld) {

View File

@@ -114,6 +114,7 @@ import org.joda.time.DateTime;
* @error {@link DomainFlowUtils.EmptySecDnsUpdateException}
* @error {@link DomainFlowUtils.FeesMismatchException}
* @error {@link DomainFlowUtils.FeesRequiredForNonFreeOperationException}
* @error {@link DomainFlowUtils.InvalidDsRecordException}
* @error {@link DomainFlowUtils.LinkedResourcesDoNotExistException}
* @error {@link DomainFlowUtils.LinkedResourceInPendingDeleteProhibitsOperationException}
* @error {@link DomainFlowUtils.MaxSigLifeChangeNotSupportedException}

View File

@@ -14,8 +14,6 @@
package google.registry.loadtest;
import static com.google.appengine.api.taskqueue.QueueConstants.maxTasksPerAdd;
import static com.google.appengine.api.taskqueue.QueueFactory.getQueue;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static com.google.common.collect.Lists.partition;
@@ -24,16 +22,20 @@ import static google.registry.util.ResourceUtils.readResourceUtf8;
import static java.util.Arrays.asList;
import static org.joda.time.DateTimeZone.UTC;
import com.google.appengine.api.taskqueue.TaskOptions;
import com.google.cloud.tasks.v2.Task;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.collect.Iterators;
import com.google.common.flogger.FluentLogger;
import com.google.protobuf.Timestamp;
import google.registry.config.RegistryEnvironment;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.auth.Auth;
import google.registry.security.XsrfTokenManager;
import google.registry.util.TaskQueueUtils;
import google.registry.util.CloudTasksUtils;
import java.time.Instant;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
@@ -62,6 +64,7 @@ public class LoadTestAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static final int NUM_QUEUES = 10;
private static final int MAX_TASKS_PER_LOAD = 100;
private static final int ARBITRARY_VALID_HOST_LENGTH = 40;
private static final int MAX_CONTACT_LENGTH = 13;
private static final int MAX_DOMAIN_LABEL_LENGTH = 63;
@@ -146,7 +149,7 @@ public class LoadTestAction implements Runnable {
@Parameter("hostInfos")
int hostInfosPerSecond;
@Inject TaskQueueUtils taskQueueUtils;
@Inject CloudTasksUtils cloudTasksUtils;
private final String xmlContactCreateTmpl;
private final String xmlContactCreateFail;
@@ -208,7 +211,7 @@ public class LoadTestAction implements Runnable {
ImmutableList<String> contactNames = contactNamesBuilder.build();
ImmutableList<String> hostPrefixes = hostPrefixesBuilder.build();
ImmutableList.Builder<TaskOptions> tasks = new ImmutableList.Builder<>();
ImmutableList.Builder<Task> tasks = new ImmutableList.Builder<>();
for (int offsetSeconds = 0; offsetSeconds < runSeconds; offsetSeconds++) {
DateTime startSecond = initialStartSecond.plusSeconds(offsetSeconds);
// The first "failed" creates might actually succeed if the object doesn't already exist, but
@@ -254,7 +257,7 @@ public class LoadTestAction implements Runnable {
.collect(toImmutableList()),
startSecond));
}
ImmutableList<TaskOptions> taskOptions = tasks.build();
ImmutableList<Task> taskOptions = tasks.build();
enqueue(taskOptions);
logger.atInfo().log("Added %d total load test tasks.", taskOptions.size());
}
@@ -322,28 +325,51 @@ public class LoadTestAction implements Runnable {
return name.toString();
}
private List<TaskOptions> createTasks(List<String> xmls, DateTime start) {
ImmutableList.Builder<TaskOptions> tasks = new ImmutableList.Builder<>();
private List<Task> createTasks(List<String> xmls, DateTime start) {
ImmutableList.Builder<Task> tasks = new ImmutableList.Builder<>();
for (int i = 0; i < xmls.size(); i++) {
// Space tasks evenly across a second.
int offsetMillis = (int) (1000.0 / xmls.size() * i);
Instant scheduleTime =
Instant.ofEpochMilli(start.plusMillis((int) (1000.0 / xmls.size() * i)).getMillis());
tasks.add(
TaskOptions.Builder.withUrl("/_dr/epptool")
.etaMillis(start.getMillis() + offsetMillis)
.header(X_CSRF_TOKEN, xsrfToken)
.param("clientId", registrarId)
.param("superuser", Boolean.FALSE.toString())
.param("dryRun", Boolean.FALSE.toString())
.param("xml", xmls.get(i)));
Task.newBuilder()
.setAppEngineHttpRequest(
cloudTasksUtils
.createPostTask(
"/_dr/epptool",
Service.TOOLS.toString(),
ImmutableMultimap.of(
"clientId",
registrarId,
"superuser",
Boolean.FALSE.toString(),
"dryRun",
Boolean.FALSE.toString(),
"xml",
xmls.get(i)))
.toBuilder()
.getAppEngineHttpRequest()
.toBuilder()
// instead of adding the X_CSRF_TOKEN to params, this remains as part of
// headers because of the existing setup for authentication in {@link
// google.registry.request.auth.LegacyAuthenticationMechanism}
.putHeaders(X_CSRF_TOKEN, xsrfToken)
.build())
.setScheduleTime(
Timestamp.newBuilder()
.setSeconds(scheduleTime.getEpochSecond())
.setNanos(scheduleTime.getNano())
.build())
.build());
}
return tasks.build();
}
private void enqueue(List<TaskOptions> tasks) {
List<List<TaskOptions>> chunks = partition(tasks, maxTasksPerAdd());
private void enqueue(List<Task> tasks) {
List<List<Task>> chunks = partition(tasks, MAX_TASKS_PER_LOAD);
// Farm out tasks to multiple queues to work around queue qps quotas.
for (int i = 0; i < chunks.size(); i++) {
taskQueueUtils.enqueue(getQueue("load" + (i % NUM_QUEUES)), chunks.get(i));
cloudTasksUtils.enqueue("load" + (i % NUM_QUEUES), chunks.get(i));
}
}
}
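As a reference for the TaskOptions-to-Cloud-Tasks conversion above, a self-contained sketch (URI and values illustrative, not from this change) of how a relative etaMillis offset maps onto Cloud Tasks' absolute scheduleTime while spacing tasks evenly across one second:

import com.google.cloud.tasks.v2.AppEngineHttpRequest;
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.cloud.tasks.v2.Task;
import com.google.protobuf.Timestamp;
import java.time.Instant;

public class ScheduleTimeDemo {
  // Builds the i-th of `total` tasks for one second of load, scheduled at an
  // absolute time rather than a relative ETA.
  static Task taskAt(long startMillis, int i, int total) {
    Instant when = Instant.ofEpochMilli(startMillis + (long) (1000.0 / total * i));
    return Task.newBuilder()
        .setAppEngineHttpRequest(
            AppEngineHttpRequest.newBuilder()
                .setHttpMethod(HttpMethod.POST)
                .setRelativeUri("/_dr/epptool")
                .build())
        .setScheduleTime(
            Timestamp.newBuilder()
                .setSeconds(when.getEpochSecond())
                .setNanos(when.getNano())
                .build())
        .build();
  }

  public static void main(String[] args) {
    // The 4th of 10 tasks lands 300ms into the second.
    System.out.println(taskAt(System.currentTimeMillis(), 3, 10).getScheduleTime());
  }
}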

View File

@@ -60,4 +60,25 @@ public abstract class BackupGroupRoot extends ImmutableObject implements UnsafeS
protected void copyUpdateTimestamp(BackupGroupRoot other) {
this.updateTimestamp = PreconditionsUtils.checkArgumentNotNull(other, "other").updateTimestamp;
}
/**
* Resets the {@link #updateTimestamp} to force Hibernate to persist it.
*
* <p>This method is for use in setters in derived builders that do not result in the derived
* object being persisted.
*/
protected void resetUpdateTimestamp() {
this.updateTimestamp = UpdateAutoTimestamp.create(null);
}
/**
* Sets the {@link #updateTimestamp}.
*
* <p>This method is for use in the few places where we need to restore the update timestamp after
* mutating a collection in order to force the new timestamp to be persisted when it ordinarily
* wouldn't.
*/
protected void setUpdateTimestamp(UpdateAutoTimestamp timestamp) {
updateTimestamp = timestamp;
}
}
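To illustrate the intent (all names below are invented for the sketch): Hibernate only rewrites a row it considers dirty, so mutating an element collection alone can leave the owner's update timestamp column stale; clearing the timestamp marks the owning entity dirty and forces a fresh value on save.

// Hypothetical stand-in for an entity derived from BackupGroupRoot.
class FakeEntity {
  java.util.Set<String> tags = new java.util.HashSet<>();
  Long updateTimestampMillis = 0L;

  // Analogous to resetUpdateTimestamp(): a null value means "regenerate on
  // save", mirroring UpdateAutoTimestamp.create(null).
  void resetUpdateTimestamp() {
    updateTimestampMillis = null;
  }

  void setTags(java.util.Set<String> newTags) {
    tags = newTags;
    resetUpdateTimestamp(); // without this, only the collection table changes
  }
}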

View File

@@ -15,6 +15,7 @@
package google.registry.model;
import com.google.common.collect.ImmutableSet;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.billing.BillingEvent;
import google.registry.model.common.Cursor;
import google.registry.model.common.EntityGroupRoot;
@@ -45,6 +46,7 @@ import google.registry.model.server.ServerSecret;
import google.registry.model.tld.Registry;
/** Sets of classes of the Objectify-registered entities in use throughout the model. */
@DeleteAfterMigration
public final class EntityClasses {
/** Set of entity classes. */

View File

@@ -21,6 +21,7 @@ import static com.google.common.collect.Sets.union;
import static google.registry.config.RegistryConfig.getEppResourceCachingDuration;
import static google.registry.config.RegistryConfig.getEppResourceMaxCachedEntries;
import static google.registry.persistence.transaction.TransactionManagerFactory.ofyTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.replicaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.util.CollectionUtils.nullToEmpty;
import static google.registry.util.CollectionUtils.nullToEmptyImmutableCopy;
@@ -361,6 +362,16 @@ public abstract class EppResource extends BackupGroupRoot implements Buildable {
return thisCastToDerived();
}
/**
* Set the update timestamp.
*
* <p>This is provided at EppResource since BackupGroupRoot doesn't have a Builder.
*/
public B setUpdateTimestamp(UpdateAutoTimestamp updateTimestamp) {
getInstance().setUpdateTimestamp(updateTimestamp);
return thisCastToDerived();
}
/** Build the resource, nullifying empty strings and sets and setting defaults. */
@Override
public T build() {
@@ -380,13 +391,13 @@ public abstract class EppResource extends BackupGroupRoot implements Buildable {
@Override
public EppResource load(VKey<? extends EppResource> key) {
return tm().doTransactionless(() -> tm().loadByKey(key));
return replicaTm().doTransactionless(() -> replicaTm().loadByKey(key));
}
@Override
public Map<VKey<? extends EppResource>, EppResource> loadAll(
Iterable<? extends VKey<? extends EppResource>> keys) {
return tm().doTransactionless(() -> tm().loadByKeys(keys));
return replicaTm().doTransactionless(() -> replicaTm().loadByKeys(keys));
}
};

View File

@@ -20,6 +20,7 @@ import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.common.annotations.VisibleForTesting;
import google.registry.beam.common.RegistryPipelineWorkerInitializer;
import google.registry.config.RegistryEnvironment;
import google.registry.model.annotations.DeleteAfterMigration;
import java.util.concurrent.atomic.AtomicLong;
/**
@@ -28,6 +29,7 @@ import java.util.concurrent.atomic.AtomicLong;
* <p>In non-test, non-beam environments the Id is generated by Datastore, otherwise it's from an
* atomic long number that's incremented every time this method is called.
*/
@DeleteAfterMigration
public final class IdService {
/**

View File

@@ -19,12 +19,14 @@ import static com.google.common.base.Predicates.subtypeOf;
import static java.util.stream.Collectors.joining;
import com.google.common.collect.Ordering;
import google.registry.model.annotations.DeleteAfterMigration;
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.SortedSet;
import java.util.TreeSet;
/** Utility methods for getting the version of the model schema from the model code. */
@DeleteAfterMigration
public final class SchemaVersion {
/**

View File

@@ -74,6 +74,8 @@ public class BulkQueryEntities {
builder.setGracePeriods(gracePeriods);
builder.setDsData(delegationSignerData);
builder.setNameservers(nsHosts);
// Restore the original update timestamp (this gets cleared when we set nameservers or DS data).
builder.setUpdateTimestamp(domainBaseLite.getUpdateTimestamp());
return builder.build();
}
@@ -100,6 +102,9 @@ public class BulkQueryEntities {
dsDataHistories.stream()
.map(DelegationSignerData::create)
.collect(toImmutableSet()))
// Restore the original update timestamp (this gets cleared when we set nameservers or
// DS data).
.setUpdateTimestamp(domainHistoryLite.domainContent.getUpdateTimestamp())
.build();
builder.setDomain(newDomainContent);
}

View File

@@ -20,11 +20,13 @@ import com.googlecode.objectify.Key;
import com.googlecode.objectify.annotation.Id;
import com.googlecode.objectify.annotation.Parent;
import google.registry.model.ImmutableObject;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.annotations.InCrossTld;
import javax.persistence.MappedSuperclass;
import javax.persistence.Transient;
/** A singleton entity in Datastore. */
@DeleteAfterMigration
@MappedSuperclass
@InCrossTld
public abstract class CrossTldSingleton extends ImmutableObject {

View File

@@ -26,6 +26,7 @@ import com.google.common.collect.ImmutableMultimap;
import com.google.common.collect.ImmutableSortedMap;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryEnvironment;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.common.TimedTransitionProperty.TimedTransition;
import google.registry.model.replay.SqlOnlyEntity;
import java.time.Duration;
@@ -39,6 +40,7 @@ import org.joda.time.DateTime;
* <p>The entity is stored in SQL throughout the entire migration so as to have a single point of
* access.
*/
@DeleteAfterMigration
@Entity
public class DatabaseMigrationStateSchedule extends CrossTldSingleton implements SqlOnlyEntity {

View File

@@ -19,6 +19,7 @@ import com.googlecode.objectify.Key;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import google.registry.model.BackupGroupRoot;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.replay.DatastoreOnlyEntity;
import javax.annotation.Nullable;
@@ -37,6 +38,7 @@ import javax.annotation.Nullable;
* entity group for the single namespace where global data applicable for all TLDs lived.
*/
@Entity
@DeleteAfterMigration
public class EntityGroupRoot extends BackupGroupRoot implements DatastoreOnlyEntity {
@SuppressWarnings("unused")

View File

@@ -64,7 +64,7 @@ public class ContactHistory extends HistoryEntry implements SqlEntity, UnsafeSer
// Store ContactBase instead of ContactResource so we don't pick up its @Id
// Nullable for the sake of pre-Registry-3.0 history objects
@Nullable ContactBase contactBase;
@DoNotCompare @Nullable ContactBase contactBase;
@Id
@Access(AccessType.PROPERTY)

View File

@@ -779,6 +779,10 @@ public class DomainContent extends EppResource
*/
void setContactFields(Set<DesignatedContact> contacts, boolean includeRegistrant) {
// Set the individual contact fields.
billingContact = techContact = adminContact = null;
if (includeRegistrant) {
registrantContact = null;
}
for (DesignatedContact contact : contacts) {
switch (contact.getType()) {
case BILLING:
@@ -895,6 +899,7 @@ public class DomainContent extends EppResource
public B setDsData(ImmutableSet<DelegationSignerData> dsData) {
getInstance().dsData = dsData;
getInstance().resetUpdateTimestamp();
return thisCastToDerived();
}
@@ -918,11 +923,13 @@ public class DomainContent extends EppResource
public B setNameservers(VKey<HostResource> nameserver) {
getInstance().nsHosts = ImmutableSet.of(nameserver);
getInstance().resetUpdateTimestamp();
return thisCastToDerived();
}
public B setNameservers(ImmutableSet<VKey<HostResource>> nameservers) {
getInstance().nsHosts = forceEmptyToNull(nameservers);
getInstance().resetUpdateTimestamp();
return thisCastToDerived();
}
@@ -1032,17 +1039,20 @@ public class DomainContent extends EppResource
public B setGracePeriods(ImmutableSet<GracePeriod> gracePeriods) {
getInstance().gracePeriods = gracePeriods;
getInstance().resetUpdateTimestamp();
return thisCastToDerived();
}
public B addGracePeriod(GracePeriod gracePeriod) {
getInstance().gracePeriods = union(getInstance().getGracePeriods(), gracePeriod);
getInstance().resetUpdateTimestamp();
return thisCastToDerived();
}
public B removeGracePeriod(GracePeriod gracePeriod) {
getInstance().gracePeriods =
CollectionUtils.difference(getInstance().getGracePeriods(), gracePeriod);
getInstance().resetUpdateTimestamp();
return thisCastToDerived();
}

View File

@@ -86,7 +86,7 @@ public class DomainHistory extends HistoryEntry implements SqlEntity {
// Store DomainContent instead of DomainBase so we don't pick up its @Id
// Nullable for the sake of pre-Registry-3.0 history objects
@Nullable DomainContent domainContent;
@DoNotCompare @Nullable DomainContent domainContent;
@Id
@Access(AccessType.PROPERTY)
@@ -105,6 +105,7 @@ public class DomainHistory extends HistoryEntry implements SqlEntity {
// We could have reused domainContent.nsHosts here, but Hibernate throws a weird exception after
// we change to use a composite primary key.
// TODO(b/166776754): Investigate if we can reuse domainContent.nsHosts for storing host keys.
@DoNotCompare
@ElementCollection
@JoinTable(
name = "DomainHistoryHost",
@@ -118,6 +119,7 @@ public class DomainHistory extends HistoryEntry implements SqlEntity {
@Column(name = "host_repo_id")
Set<VKey<HostResource>> nsHosts;
@DoNotCompare
@OneToMany(
cascade = {CascadeType.ALL},
fetch = FetchType.EAGER,
@@ -137,6 +139,7 @@ public class DomainHistory extends HistoryEntry implements SqlEntity {
// HashSet rather than ImmutableSet so that Hibernate can fill them out lazily on request
Set<DomainDsDataHistory> dsDataHistories = new HashSet<>();
@DoNotCompare
@OneToMany(
cascade = {CascadeType.ALL},
fetch = FetchType.EAGER,

View File

@@ -66,7 +66,7 @@ public class HostHistory extends HistoryEntry implements SqlEntity, UnsafeSerial
// Store HostBase instead of HostResource so we don't pick up its @Id
// Nullable for the sake of pre-Registry-3.0 history objects
@Nullable HostBase hostBase;
@DoNotCompare @Nullable HostBase hostBase;
@Id
@Access(AccessType.PROPERTY)

View File

@@ -34,6 +34,20 @@ import javax.persistence.AccessType;
@ReportedOn
@Entity
@javax.persistence.Entity(name = "Host")
@javax.persistence.Table(
name = "Host",
/**
* A gin index defined on the inet_addresses field ({@link HostBase#inetAddresses}) cannot be
* declared here because JPA/Hibernate does not support index type specification. As a result,
* the hibernate-generated schema (which is for reference only) does not have this index.
*
* <p>There are Hibernate-specific solutions for adding this index to Hibernate's domain model.
* We could either declare the index in hibernate.cfg.xml or add it to the {@link
* org.hibernate.cfg.Configuration} instance for {@link SessionFactory} instantiation (which
* would prevent us from using JPA standard bootstrapping). For now, there is no obvious benefit
* to doing either.
*/
indexes = {@javax.persistence.Index(columnList = "hostName")})
@ExternalMessagingName("host")
@WithStringVKey
@Access(AccessType.FIELD) // otherwise it'll use the default if the repoId (property)
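Because the gin index exists only in the migration scripts, queries benefit from it only when they use an operator the index supports, such as array containment. A plain-JDBC sketch (column names taken from the entity mapping; the selected column is an assumption):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class HostByIpLookupDemo {
  // ARRAY[?] <@ inet_addresses can use a gin index on the array column,
  // unlike the equivalent `? = ANY(inet_addresses)` form.
  static void printHostsByAddress(Connection conn, String address) throws Exception {
    String sql =
        "SELECT host_name FROM \"Host\" WHERE ARRAY[CAST(? AS TEXT)] <@ inet_addresses";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
      ps.setString(1, address);
      try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
          System.out.println(rs.getString(1));
        }
      }
    }
  }
}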

View File

@@ -24,12 +24,15 @@ import com.googlecode.objectify.annotation.Index;
import com.googlecode.objectify.annotation.Parent;
import google.registry.model.BackupGroupRoot;
import google.registry.model.EppResource;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.annotations.ReportedOn;
import google.registry.model.replay.DatastoreOnlyEntity;
import google.registry.persistence.VKey;
/** An index that allows for quick enumeration of all EppResource entities (e.g. via map reduce). */
@ReportedOn
@Entity
@DeleteAfterMigration
public class EppResourceIndex extends BackupGroupRoot implements DatastoreOnlyEntity {
@Id String id;
@@ -64,8 +67,9 @@ public class EppResourceIndex extends BackupGroupRoot implements DatastoreOnlyEn
EppResourceIndex instance = instantiate(EppResourceIndex.class);
instance.reference = resourceKey;
instance.kind = resourceKey.getKind();
// TODO(b/207368050): figure out if this value has ever been used other than test cases
instance.id = resourceKey.getString(); // creates a web-safe key string
// creates a web-safe key string, this value is never used
// TODO(b/211785379): remove unused id
instance.id = VKey.from(resourceKey).stringify();
instance.bucket = bucket;
return instance;
}

View File

@@ -23,12 +23,14 @@ import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import google.registry.model.EppResource;
import google.registry.model.ImmutableObject;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.annotations.VirtualEntity;
import google.registry.model.replay.DatastoreOnlyEntity;
/** A virtual entity to represent buckets to which EppResourceIndex objects are randomly added. */
@Entity
@VirtualEntity
@DeleteAfterMigration
public class EppResourceIndexBucket extends ImmutableObject implements DatastoreOnlyEntity {
@SuppressWarnings("unused")

View File

@@ -21,6 +21,7 @@ import static google.registry.config.RegistryConfig.getEppResourceCachingDuratio
import static google.registry.config.RegistryConfig.getEppResourceMaxCachedEntries;
import static google.registry.model.ofy.ObjectifyService.auditedOfy;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.replicaJpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.util.CollectionUtils.entriesToImmutableMap;
import static google.registry.util.TypeUtils.instantiate;
@@ -43,6 +44,7 @@ import com.googlecode.objectify.annotation.Index;
import google.registry.config.RegistryConfig;
import google.registry.model.BackupGroupRoot;
import google.registry.model.EppResource;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.annotations.ReportedOn;
import google.registry.model.contact.ContactResource;
import google.registry.model.domain.DomainBase;
@@ -50,6 +52,7 @@ import google.registry.model.host.HostResource;
import google.registry.model.replay.DatastoreOnlyEntity;
import google.registry.persistence.VKey;
import google.registry.persistence.transaction.CriteriaQueryBuilder;
import google.registry.persistence.transaction.JpaTransactionManager;
import google.registry.util.NonFinalForTesting;
import java.util.Collection;
import java.util.Comparator;
@@ -65,6 +68,7 @@ import org.joda.time.Duration;
* the foreign key string. The instance is never deleted, but it is updated if a newer entity
* becomes the active entity.
*/
@DeleteAfterMigration
public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroupRoot {
/** The {@link ForeignKeyIndex} type for {@link ContactResource} entities. */
@@ -196,7 +200,7 @@ public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroup
*/
public static <E extends EppResource> ImmutableMap<String, ForeignKeyIndex<E>> load(
Class<E> clazz, Collection<String> foreignKeys, final DateTime now) {
return loadIndexesFromStore(clazz, foreignKeys, true).entrySet().stream()
return loadIndexesFromStore(clazz, foreignKeys, true, false).entrySet().stream()
.filter(e -> now.isBefore(e.getValue().getDeletionTime()))
.collect(entriesToImmutableMap());
}
@@ -215,7 +219,10 @@ public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroup
*/
private static <E extends EppResource>
ImmutableMap<String, ForeignKeyIndex<E>> loadIndexesFromStore(
Class<E> clazz, Collection<String> foreignKeys, boolean inTransaction) {
Class<E> clazz,
Collection<String> foreignKeys,
boolean inTransaction,
boolean useReplicaJpaTm) {
if (tm().isOfy()) {
Class<ForeignKeyIndex<E>> fkiClass = mapToFkiClass(clazz);
return ImmutableMap.copyOf(
@@ -224,17 +231,18 @@ public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroup
: tm().doTransactionless(() -> auditedOfy().load().type(fkiClass).ids(foreignKeys)));
} else {
String property = RESOURCE_CLASS_TO_FKI_PROPERTY.get(clazz);
JpaTransactionManager jpaTmToUse = useReplicaJpaTm ? replicaJpaTm() : jpaTm();
ImmutableList<ForeignKeyIndex<E>> indexes =
tm().transact(
() ->
jpaTm()
.criteriaQuery(
CriteriaQueryBuilder.create(clazz)
.whereFieldIsIn(property, foreignKeys)
.build())
.getResultStream()
.map(e -> ForeignKeyIndex.create(e, e.getDeletionTime()))
.collect(toImmutableList()));
jpaTmToUse.transact(
() ->
jpaTmToUse
.criteriaQuery(
CriteriaQueryBuilder.create(clazz)
.whereFieldIsIn(property, foreignKeys)
.build())
.getResultStream()
.map(e -> ForeignKeyIndex.create(e, e.getDeletionTime()))
.collect(toImmutableList()));
// We need to find and return the entities with the maximum deletionTime for each foreign key.
return Multimaps.index(indexes, ForeignKeyIndex::getForeignKey).asMap().entrySet().stream()
.map(
@@ -258,7 +266,8 @@ public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroup
loadIndexesFromStore(
RESOURCE_CLASS_TO_FKI_CLASS.inverse().get(key.getKind()),
ImmutableSet.of(foreignKey),
false)
false,
true)
.get(foreignKey));
}
@@ -274,7 +283,7 @@ public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroup
Streams.stream(keys).map(v -> v.getSqlKey().toString()).collect(toImmutableSet());
ImmutableSet<VKey<ForeignKeyIndex<?>>> typedKeys = ImmutableSet.copyOf(keys);
ImmutableMap<String, ? extends ForeignKeyIndex<? extends EppResource>> existingFkis =
loadIndexesFromStore(resourceClass, foreignKeys, false);
loadIndexesFromStore(resourceClass, foreignKeys, false, true);
// ofy omits keys that don't have values in Datastore, so re-add them in
// here with Optional.empty() values.
return Maps.asMap(
@@ -334,7 +343,7 @@ public abstract class ForeignKeyIndex<E extends EppResource> extends BackupGroup
// Safe to cast VKey<FKI<E>> to VKey<FKI<?>>
@SuppressWarnings("unchecked")
ImmutableList<VKey<ForeignKeyIndex<?>>> fkiVKeys =
Streams.stream(foreignKeys)
foreignKeys.stream()
.map(fk -> (VKey<ForeignKeyIndex<?>>) VKey.create(fkiClass, fk))
.collect(toImmutableList());
try {

View File

@@ -23,6 +23,7 @@ import com.googlecode.objectify.Key;
import com.googlecode.objectify.Result;
import com.googlecode.objectify.cmd.DeleteType;
import com.googlecode.objectify.cmd.Deleter;
import google.registry.model.annotations.DeleteAfterMigration;
import java.util.Arrays;
import java.util.stream.Stream;
@@ -30,6 +31,7 @@ import java.util.stream.Stream;
* A Deleter that forwards to {@code auditedOfy().delete()}, but can be augmented via subclassing to
* do custom processing on the keys to be deleted prior to their deletion.
*/
@DeleteAfterMigration
abstract class AugmentedDeleter implements Deleter {
private final Deleter delegate = ofy().delete();

View File

@@ -21,13 +21,15 @@ import com.google.common.collect.ImmutableList;
import com.googlecode.objectify.Key;
import com.googlecode.objectify.Result;
import com.googlecode.objectify.cmd.Saver;
import google.registry.model.annotations.DeleteAfterMigration;
import java.util.Arrays;
import java.util.Map;
/**
* A Saver that forwards to {@code ofy().save()}, but can be augmented via subclassing to
* do custom processing on the entities to be saved prior to their saving.
* A Saver that forwards to {@code ofy().save()}, but can be augmented via subclassing to do custom
* processing on the entities to be saved prior to their saving.
*/
@DeleteAfterMigration
abstract class AugmentedSaver implements Saver {
private final Saver delegate = ofy().save();

View File

@@ -31,6 +31,7 @@ import com.googlecode.objectify.annotation.Id;
import google.registry.config.RegistryConfig;
import google.registry.model.Buildable;
import google.registry.model.ImmutableObject;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.annotations.NotBackedUp;
import google.registry.model.annotations.NotBackedUp.Reason;
import google.registry.model.replay.DatastoreOnlyEntity;
@@ -51,6 +52,7 @@ import org.joda.time.DateTime;
*/
@Entity
@NotBackedUp(reason = Reason.COMMIT_LOGS)
@DeleteAfterMigration
public class CommitLogBucket extends ImmutableObject implements Buildable, DatastoreOnlyEntity {
/**

View File

@@ -25,6 +25,7 @@ import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import com.googlecode.objectify.annotation.Parent;
import google.registry.model.ImmutableObject;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.annotations.NotBackedUp;
import google.registry.model.annotations.NotBackedUp.Reason;
import google.registry.model.replay.DatastoreOnlyEntity;
@@ -45,6 +46,7 @@ import org.joda.time.DateTime;
*/
@Entity
@NotBackedUp(reason = Reason.COMMIT_LOGS)
@DeleteAfterMigration
public class CommitLogCheckpoint extends ImmutableObject implements DatastoreOnlyEntity {
/** Shared singleton parent entity for commit log checkpoints. */

View File

@@ -21,6 +21,7 @@ import com.googlecode.objectify.Key;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import google.registry.model.ImmutableObject;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.annotations.NotBackedUp;
import google.registry.model.annotations.NotBackedUp.Reason;
import google.registry.model.replay.DatastoreOnlyEntity;
@@ -29,6 +30,7 @@ import org.joda.time.DateTime;
/** Singleton parent entity for all commit log checkpoints. */
@Entity
@NotBackedUp(reason = Reason.COMMIT_LOGS)
@DeleteAfterMigration
public class CommitLogCheckpointRoot extends ImmutableObject implements DatastoreOnlyEntity {
public static final long SINGLETON_ID = 1; // There is always exactly one of these.

View File

@@ -23,6 +23,7 @@ import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import com.googlecode.objectify.annotation.Parent;
import google.registry.model.ImmutableObject;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.annotations.NotBackedUp;
import google.registry.model.annotations.NotBackedUp.Reason;
import google.registry.model.replay.DatastoreOnlyEntity;
@@ -39,6 +40,7 @@ import org.joda.time.DateTime;
*/
@Entity
@NotBackedUp(reason = Reason.COMMIT_LOGS)
@DeleteAfterMigration
public class CommitLogManifest extends ImmutableObject implements DatastoreOnlyEntity {
/** Commit log manifests are parented on a random bucket. */

View File

@@ -27,6 +27,7 @@ import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import com.googlecode.objectify.annotation.Parent;
import google.registry.model.ImmutableObject;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.annotations.NotBackedUp;
import google.registry.model.annotations.NotBackedUp.Reason;
import google.registry.model.replay.DatastoreOnlyEntity;
@@ -34,6 +35,7 @@ import google.registry.model.replay.DatastoreOnlyEntity;
/** Representation of a saved entity in a {@link CommitLogManifest} (not deletes). */
@Entity
@NotBackedUp(reason = Reason.COMMIT_LOGS)
@DeleteAfterMigration
public class CommitLogMutation extends ImmutableObject implements DatastoreOnlyEntity {
/** The manifest this belongs to. */

View File

@@ -30,6 +30,7 @@ import com.google.common.collect.ImmutableSet;
import com.googlecode.objectify.Key;
import google.registry.model.BackupGroupRoot;
import google.registry.model.ImmutableObject;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.util.Clock;
import java.util.HashSet;
import java.util.Map;
@@ -39,7 +40,8 @@ import java.util.function.Supplier;
import org.joda.time.DateTime;
/** Wrapper for {@link Supplier} that associates a time with each attempt. */
class CommitLoggedWork<R> implements Runnable {
@DeleteAfterMigration
public class CommitLoggedWork<R> implements Runnable {
private final Supplier<R> work;
private final Clock clock;

View File

@@ -33,6 +33,7 @@ import com.googlecode.objectify.Key;
import com.googlecode.objectify.Result;
import com.googlecode.objectify.cmd.Query;
import google.registry.model.ImmutableObject;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.annotations.InCrossTld;
import google.registry.model.contact.ContactHistory;
import google.registry.model.domain.DomainHistory;
@@ -55,6 +56,7 @@ import javax.persistence.NonUniqueResultException;
import org.joda.time.DateTime;
/** Datastore implementation of {@link TransactionManager}. */
@DeleteAfterMigration
public class DatastoreTransactionManager implements TransactionManager {
private Ofy injectedOfy;

View File

@@ -16,6 +16,7 @@ package google.registry.model.ofy;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableMap;
import google.registry.model.annotations.DeleteAfterMigration;
/**
* Contains the mapping from class names to SQL-replay-write priorities.
@@ -26,6 +27,7 @@ import com.google.common.collect.ImmutableMap;
* values represent an earlier write (and later delete). Higher-valued classes can have foreign keys
* on lower-valued classes, but not vice versa.
*/
@DeleteAfterMigration
public class EntityWritePriorities {
/**

View File

@@ -35,6 +35,7 @@ import google.registry.config.RegistryEnvironment;
import google.registry.model.Buildable;
import google.registry.model.EntityClasses;
import google.registry.model.ImmutableObject;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.translators.BloomFilterOfStringTranslatorFactory;
import google.registry.model.translators.CidrAddressBlockTranslatorFactory;
import google.registry.model.translators.CommitLogRevisionsTranslatorFactory;
@@ -52,6 +53,7 @@ import google.registry.model.translators.VKeyTranslatorFactory;
* objects. The class contains a static initializer to call factory().register(...) on all
* persistable objects in this package.
*/
@DeleteAfterMigration
public class ObjectifyService {
/** A singleton instance of our Ofy wrapper. */

View File

@@ -37,6 +37,7 @@ import com.googlecode.objectify.ObjectifyFactory;
import com.googlecode.objectify.cmd.Deleter;
import com.googlecode.objectify.cmd.Loader;
import com.googlecode.objectify.cmd.Saver;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.annotations.NotBackedUp;
import google.registry.model.annotations.VirtualEntity;
import google.registry.model.ofy.ReadOnlyWork.KillTransactionException;
@@ -59,6 +60,7 @@ import org.joda.time.Duration;
* simpler to wrap {@link Objectify} rather than extend it because this way we can remove some
* methods that we don't really want exposed and add some shortcuts.
*/
@DeleteAfterMigration
public class Ofy {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();

View File

@@ -14,6 +14,7 @@
package google.registry.model.ofy;
import google.registry.model.annotations.DeleteAfterMigration;
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
@@ -23,6 +24,7 @@ import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
/** A filter that statically registers types with Objectify. */
@DeleteAfterMigration
public class OfyFilter implements Filter {
@Override

View File

@@ -14,10 +14,12 @@
package google.registry.model.ofy;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.util.Clock;
import java.util.function.Supplier;
/** Wrapper for {@link Supplier} that disallows mutations and fails the transaction at the end. */
@DeleteAfterMigration
class ReadOnlyWork<R> extends CommitLoggedWork<R> {
ReadOnlyWork(Supplier<R> work, Clock clock) {

View File

@@ -22,6 +22,7 @@ import com.google.common.collect.ImmutableMap;
import com.googlecode.objectify.Key;
import google.registry.config.RegistryEnvironment;
import google.registry.model.UpdateAutoTimestamp;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.replay.DatastoreEntity;
import google.registry.model.replay.ReplaySpecializer;
import google.registry.persistence.VKey;
@@ -34,6 +35,7 @@ import java.util.concurrent.ConcurrentLinkedQueue;
*
* <p>This code is to be removed when the actual replay cron job is implemented.
*/
@DeleteAfterMigration
public class ReplayQueue {
static ConcurrentLinkedQueue<ImmutableMap<Key<?>, Object>> queue =

View File

@@ -28,6 +28,7 @@ import com.google.appengine.api.datastore.Query;
import com.google.appengine.api.datastore.Transaction;
import com.google.appengine.api.datastore.TransactionOptions;
import com.google.common.collect.ImmutableList;
import google.registry.model.annotations.DeleteAfterMigration;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
@@ -35,6 +36,7 @@ import java.util.Map;
import java.util.concurrent.Future;
/** A proxy for {@link AsyncDatastoreService} that exposes call counts. */
@DeleteAfterMigration
public class RequestCapturingAsyncDatastoreService implements AsyncDatastoreService {
private final AsyncDatastoreService delegate;

View File

@@ -18,8 +18,10 @@ import com.google.common.collect.ImmutableSet;
import com.googlecode.objectify.Key;
import com.googlecode.objectify.ObjectifyFactory;
import com.googlecode.objectify.impl.ObjectifyImpl;
import google.registry.model.annotations.DeleteAfterMigration;
/** Registry-specific Objectify subclass that exposes the keys used in the current session. */
@DeleteAfterMigration
public class SessionKeyExposingObjectify extends ObjectifyImpl<SessionKeyExposingObjectify> {
public SessionKeyExposingObjectify(ObjectifyFactory factory) {

View File

@@ -17,6 +17,7 @@ package google.registry.model.ofy;
import com.googlecode.objectify.Key;
import com.googlecode.objectify.Objectify;
import google.registry.model.BackupGroupRoot;
import google.registry.model.annotations.DeleteAfterMigration;
import java.util.Arrays;
import java.util.Map;
import org.joda.time.DateTime;
@@ -25,6 +26,7 @@ import org.joda.time.DateTime;
* Exception when trying to write to Datastore with a timestamp that is inconsistent with a partial
* ordering on transactions that touch the same entities.
*/
@DeleteAfterMigration
class TimestampInversionException extends RuntimeException {
static String getFileAndLine(StackTraceElement callsite) {

View File

@@ -26,10 +26,12 @@ import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.googlecode.objectify.Key;
import google.registry.model.annotations.DeleteAfterMigration;
import java.util.Map;
import org.joda.time.DateTime;
/** Metadata for an {@link Ofy} transaction that saves commit logs. */
@DeleteAfterMigration
public class TransactionInfo {
@VisibleForTesting

View File

@@ -46,6 +46,7 @@ import static google.registry.util.X509Utils.loadCertificate;
import static java.util.Comparator.comparing;
import static java.util.function.Predicate.isEqual;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Strings;
import com.google.common.base.Supplier;
import com.google.common.collect.ImmutableList;
@@ -991,6 +992,16 @@ public class Registrar extends ImmutableObject
return this;
}
/**
* This lets tests set the update timestamp in cases where setting fields resets the timestamp
* and breaks the verification that an object has not been updated since it was copied.
*/
@VisibleForTesting
public Builder setLastUpdateTime(DateTime timestamp) {
getInstance().lastUpdateTime = UpdateAutoTimestamp.create(timestamp);
return this;
}
/** Build the registrar, nullifying empty fields. */
@Override
public Registrar build() {

View File

@@ -14,9 +14,11 @@
package google.registry.model.replay;
import google.registry.model.annotations.DeleteAfterMigration;
import java.util.Optional;
/** An entity that has the same Java object representation in SQL and Datastore. */
@DeleteAfterMigration
public interface DatastoreAndSqlEntity extends DatastoreEntity, SqlEntity {
@Override

View File

@@ -14,6 +14,7 @@
package google.registry.model.replay;
import google.registry.model.annotations.DeleteAfterMigration;
import java.util.Optional;
/**
@@ -24,6 +25,7 @@ import java.util.Optional;
* transactions and data into the secondary SQL store during the first, Datastore-primary, phase of
* the migration.
*/
@DeleteAfterMigration
public interface DatastoreEntity {
Optional<SqlEntity> toSqlEntity();

View File

@@ -14,9 +14,11 @@
package google.registry.model.replay;
import google.registry.model.annotations.DeleteAfterMigration;
import java.util.Optional;
/** An entity that is only stored in Datastore, that should not be replayed to SQL. */
@DeleteAfterMigration
public interface DatastoreOnlyEntity extends DatastoreEntity {
@Override
default Optional<SqlEntity> toSqlEntity() {

View File

@@ -22,9 +22,11 @@ import com.googlecode.objectify.Key;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import google.registry.model.ImmutableObject;
import google.registry.model.annotations.DeleteAfterMigration;
/** Datastore entity to keep track of the last SQL transaction imported into the datastore. */
@Entity
@DeleteAfterMigration
public class LastSqlTransaction extends ImmutableObject implements DatastoreOnlyEntity {
/** The key for this singleton. */

View File

@@ -14,6 +14,7 @@
package google.registry.model.replay;
import google.registry.model.annotations.DeleteAfterMigration;
import java.util.Optional;
/**
@@ -21,6 +22,7 @@ import java.util.Optional;
*
* <p>We expect that this is a result of the entity being dually-written.
*/
@DeleteAfterMigration
public interface NonReplicatedEntity extends DatastoreEntity, SqlEntity {
@Override

View File

@@ -14,6 +14,7 @@
package google.registry.model.replay;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.persistence.VKey;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
@@ -25,6 +26,7 @@ import java.lang.reflect.Method;
* not directly present in the other database. This class allows us to do that by using reflection
* to invoke special class methods if they are present.
*/
@DeleteAfterMigration
public class ReplaySpecializer {
public static void beforeSqlDelete(VKey<?> key) {

View File

@@ -27,6 +27,7 @@ import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableList;
import com.google.common.flogger.FluentLogger;
import google.registry.model.UpdateAutoTimestamp;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.common.DatabaseMigrationStateSchedule;
import google.registry.model.common.DatabaseMigrationStateSchedule.MigrationState;
import google.registry.model.common.DatabaseMigrationStateSchedule.ReplayDirection;
@@ -53,10 +54,15 @@ import org.joda.time.Duration;
automaticallyPrintOk = true,
auth = Auth.AUTH_INTERNAL_OR_ADMIN)
@VisibleForTesting
@DeleteAfterMigration
public class ReplicateToDatastoreAction implements Runnable {
public static final String PATH = "/_dr/cron/replicateToDatastore";
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
/** Name of the lock that ensures sequential execution of replays. */
public static final String REPLICATE_TO_DATASTORE_LOCK_NAME =
ReplicateToDatastoreAction.class.getSimpleName();
/**
* Number of transactions to fetch from SQL. The rationale for 200 is that we're processing these
* every minute and our production instance currently does about 2 mutations per second, so this
@@ -64,7 +70,7 @@ public class ReplicateToDatastoreAction implements Runnable {
*/
public static final int BATCH_SIZE = 200;
private static final Duration LEASE_LENGTH = standardHours(1);
public static final Duration REPLICATE_TO_DATASTORE_LOCK_LEASE_LENGTH = standardHours(1);
private final Clock clock;
private final RequestStatusChecker requestStatusChecker;
@@ -79,21 +85,26 @@ public class ReplicateToDatastoreAction implements Runnable {
}
@VisibleForTesting
public List<TransactionEntity> getTransactionBatch() {
public List<TransactionEntity> getTransactionBatchAtSnapshot() {
return getTransactionBatchAtSnapshot(Optional.empty());
}
static List<TransactionEntity> getTransactionBatchAtSnapshot(Optional<String> snapshotId) {
// Get the next batch of transactions that we haven't replicated.
LastSqlTransaction lastSqlTxnBeforeBatch = ofyTm().transact(LastSqlTransaction::load);
try {
return jpaTm()
.transactWithoutBackup(
() ->
jpaTm()
.query(
"SELECT txn FROM TransactionEntity txn WHERE id >"
+ " :lastId ORDER BY id",
TransactionEntity.class)
.setParameter("lastId", lastSqlTxnBeforeBatch.getTransactionId())
.setMaxResults(BATCH_SIZE)
.getResultList());
() -> {
snapshotId.ifPresent(jpaTm()::setDatabaseSnapshot);
return jpaTm()
.query(
"SELECT txn FROM TransactionEntity txn WHERE id >" + " :lastId ORDER BY id",
TransactionEntity.class)
.setParameter("lastId", lastSqlTxnBeforeBatch.getTransactionId())
.setMaxResults(BATCH_SIZE)
.getResultList();
});
} catch (NoResultException e) {
return ImmutableList.of();
}
@@ -106,7 +117,7 @@ public class ReplicateToDatastoreAction implements Runnable {
* <p>Throws an exception if a fatal error occurred and the batch should be aborted
*/
@VisibleForTesting
public void applyTransaction(TransactionEntity txnEntity) {
public static void applyTransaction(TransactionEntity txnEntity) {
logger.atInfo().log("Applying a single transaction Cloud SQL -> Cloud Datastore.");
try (UpdateAutoTimestamp.DisableAutoUpdateResource disabler =
UpdateAutoTimestamp.disableAutoUpdate()) {
@@ -116,15 +127,16 @@ public class ReplicateToDatastoreAction implements Runnable {
// Reload the last transaction id, which could possibly have changed.
LastSqlTransaction lastSqlTxn = LastSqlTransaction.load();
long nextTxnId = lastSqlTxn.getTransactionId() + 1;
if (nextTxnId < txnEntity.getId()) {
// We're missing a transaction. This is bad. Transaction ids are supposed to
// increase monotonically, so we abort rather than applying anything out of
// order.
throw new IllegalStateException(
String.format(
"Missing transaction: last txn id = %s, next available txn = %s",
nextTxnId - 1, txnEntity.getId()));
} else if (nextTxnId > txnEntity.getId()) {
// Skip missing transactions. Missed transactions can happen normally. If a
// transaction gets rolled back, the sequence counter doesn't.
while (nextTxnId < txnEntity.getId()) {
logger.atWarning().log(
"Ignoring transaction %s, which does not exist.", nextTxnId);
++nextTxnId;
}
if (nextTxnId > txnEntity.getId()) {
// We've already replayed this transaction. This shouldn't happen, as GAE cron
// is supposed to avoid overruns and this action shouldn't be executed from any
// other context, but it's not harmful as we can just ignore the transaction. Log
@@ -172,7 +184,11 @@ public class ReplicateToDatastoreAction implements Runnable {
}
Optional<Lock> lock =
Lock.acquireSql(
this.getClass().getSimpleName(), null, LEASE_LENGTH, requestStatusChecker, false);
REPLICATE_TO_DATASTORE_LOCK_NAME,
null,
REPLICATE_TO_DATASTORE_LOCK_LEASE_LENGTH,
requestStatusChecker,
false);
if (!lock.isPresent()) {
String message = "Can't acquire ReplicateToDatastoreAction lock, aborting.";
logger.atSevere().log(message);
@@ -201,10 +217,14 @@ public class ReplicateToDatastoreAction implements Runnable {
}
private int replayAllTransactions() {
return replayAllTransactions(Optional.empty());
}
public static int replayAllTransactions(Optional<String> snapshotId) {
int numTransactionsReplayed = 0;
List<TransactionEntity> transactionBatch;
do {
transactionBatch = getTransactionBatch();
transactionBatch = getTransactionBatchAtSnapshot(snapshotId);
for (TransactionEntity transaction : transactionBatch) {
applyTransaction(transaction);
numTransactionsReplayed++;
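The id-gap handling above can be shown in isolation (a standalone sketch with transaction application stubbed out): ids only move forward, ids lost to rolled-back transactions are skipped with a warning, and anything at or below the last applied id is ignored as already replayed.

public class ReplayGapDemo {
  // Returns the last applied id after replaying a batch of incoming ids.
  static long replayBatch(long lastAppliedId, long[] incomingIds) {
    long nextTxnId = lastAppliedId + 1;
    for (long id : incomingIds) {
      while (nextTxnId < id) {
        // The sequence skipped this id (e.g. a rolled-back transaction).
        System.out.printf("Ignoring transaction %d, which does not exist.%n", nextTxnId);
        ++nextTxnId;
      }
      if (nextTxnId > id) {
        continue; // already replayed; harmless, so just skip it
      }
      System.out.printf("Applying transaction %d.%n", id); // stand-in for applyTransaction
      ++nextTxnId;
    }
    return nextTxnId - 1;
  }

  public static void main(String[] args) {
    // Gaps at ids 3 and 4 are skipped in one pass; 5 and 6 apply normally.
    System.out.println(replayBatch(2, new long[] {5, 6}));
  }
}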

View File

@@ -16,6 +16,7 @@ package google.registry.model.replay;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import google.registry.model.annotations.DeleteAfterMigration;
import java.util.Optional;
/**
@@ -25,6 +26,7 @@ import java.util.Optional;
* <p>This will be used when replaying SQL transactions into Datastore, during the second,
* SQL-primary, phase of the migration from Datastore to SQL.
*/
@DeleteAfterMigration
public interface SqlEntity {
Optional<DatastoreEntity> toDatastoreEntity();

View File

@@ -14,9 +14,11 @@
package google.registry.model.replay;
import google.registry.model.annotations.DeleteAfterMigration;
import java.util.Optional;
/** An entity that is only stored in SQL, that should not be replayed to Datastore. */
@DeleteAfterMigration
public interface SqlOnlyEntity extends SqlEntity {
@Override
default Optional<DatastoreEntity> toDatastoreEntity() {

View File

@@ -17,12 +17,14 @@ package google.registry.model.replay;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.util.DateTimeUtils.START_OF_TIME;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.common.CrossTldSingleton;
import javax.persistence.Column;
import javax.persistence.Entity;
import org.joda.time.DateTime;
@Entity
@DeleteAfterMigration
public class SqlReplayCheckpoint extends CrossTldSingleton implements SqlOnlyEntity {
@Column(nullable = false)

View File

@@ -324,7 +324,7 @@ public class HistoryEntry extends ImmutableObject
// Note: how we wish to treat this Hibernate setter depends on the current state of the object
// and what's passed in. The key principle is that we wish to maintain the link between parent
// and child objects, meaning that we should keep around whichever of the two sets (the
// parameter vs the class variable and clear/populate that as appropriate.
// parameter vs the class variable) and clear/populate that as appropriate.
//
// If the class variable is a PersistentSet and we overwrite it here, Hibernate will throw
// an exception "A collection with cascade=”all-delete-orphan” was no longer referenced by the
@@ -539,7 +539,7 @@ public class HistoryEntry extends ImmutableObject
public B setDomainTransactionRecords(
ImmutableSet<DomainTransactionRecord> domainTransactionRecords) {
getInstance().domainTransactionRecords = domainTransactionRecords;
getInstance().setDomainTransactionRecords(domainTransactionRecords);
return thisCastToDerived();
}
}
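A compact sketch (entity and field names invented) of the rule the comment above describes: when Hibernate has handed you a managed collection, clear and repopulate that same instance instead of replacing the field, or flushing triggers the all-delete-orphan error.

class ParentEntity {
  java.util.Set<String> children;

  // Reuse the managed (possibly PersistentSet) instance when present, rather
  // than replacing the field reference Hibernate is tracking.
  void setChildren(java.util.Set<String> newChildren) {
    if (children == null) {
      children = newChildren;
    } else {
      children.clear();
      children.addAll(newChildren);
    }
  }
}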

View File

@@ -21,7 +21,6 @@ import static com.google.common.hash.Funnels.stringFunnel;
import com.google.common.base.Splitter;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Streams;
import com.google.common.hash.BloomFilter;
import google.registry.model.Buildable;
import google.registry.model.ImmutableObject;
@@ -86,9 +85,8 @@ public final class PremiumList extends BaseDomainLabelList<BigDecimal, PremiumEn
*/
public synchronized ImmutableMap<String, BigDecimal> getLabelsToPrices() {
if (labelsToPrices == null) {
Iterable<PremiumEntry> entries = PremiumListDao.loadAllPremiumEntries(name);
labelsToPrices =
Streams.stream(entries)
PremiumListDao.loadAllPremiumEntries(name).stream()
.collect(
toImmutableMap(
PremiumEntry::getDomainLabel,

View File

@@ -28,8 +28,8 @@ import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.CacheLoader.InvalidCacheLoadException;
import com.google.common.cache.LoadingCache;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Streams;
import google.registry.model.tld.label.PremiumList.PremiumEntry;
import google.registry.util.NonFinalForTesting;
import java.math.BigDecimal;
@@ -56,8 +56,7 @@ public class PremiumListDao {
* <p>This is cached for a shorter duration because we need to periodically reload this entity to
* check if a new revision has been published, and if so, then use that.
*
* <p>We also cache the absence of premium lists with a given name to avoid unnecessary pointless
* lookups. Note that this cache is only applicable to PremiumList objects stored in SQL.
* <p>We also cache the absence of premium lists with a given name to avoid pointless lookups.
*/
@NonFinalForTesting
static LoadingCache<String, Optional<PremiumList>> premiumListCache =
@@ -170,11 +169,10 @@ public class PremiumListDao {
if (!isNullOrEmpty(premiumList.getLabelsToPrices())) {
ImmutableSet.Builder<PremiumEntry> entries = new ImmutableSet.Builder<>();
premiumList.getLabelsToPrices().entrySet().stream()
premiumList
.getLabelsToPrices()
.forEach(
entry ->
entries.add(
PremiumEntry.create(revisionId, entry.getValue(), entry.getKey())));
(key, value) -> entries.add(PremiumEntry.create(revisionId, value, key)));
jpaTm().insertAll(entries.build());
}
});
@@ -217,7 +215,7 @@ public class PremiumListDao {
*
* <p>This is an expensive operation and should only be used when the entire list is required.
*/
public static Iterable<PremiumEntry> loadPremiumEntries(PremiumList premiumList) {
public static List<PremiumEntry> loadPremiumEntries(PremiumList premiumList) {
return jpaTm()
.transact(
() ->
@@ -254,15 +252,14 @@ public class PremiumListDao {
*
* <p>This is an expensive operation and should only be used when the entire list is required.
*/
public static Iterable<PremiumEntry> loadAllPremiumEntries(String premiumListName) {
public static ImmutableList<PremiumEntry> loadAllPremiumEntries(String premiumListName) {
PremiumList premiumList =
getLatestRevision(premiumListName)
.orElseThrow(
() ->
new IllegalArgumentException(
String.format("No premium list with name %s.", premiumListName)));
Iterable<PremiumEntry> entries = loadPremiumEntries(premiumList);
return Streams.stream(entries)
return loadPremiumEntries(premiumList).stream()
.map(
premiumEntry ->
new PremiumEntry.Builder()

@@ -23,6 +23,7 @@ import static google.registry.util.DateTimeUtils.START_OF_TIME;
import com.google.common.collect.ImmutableSortedMap;
import com.google.common.collect.Ordering;
import com.googlecode.objectify.Key;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.ofy.CommitLogManifest;
import google.registry.persistence.transaction.Transaction;
import org.joda.time.DateTime;
@@ -31,11 +32,12 @@ import org.joda.time.DateTime;
* Objectify translator for {@code ImmutableSortedMap<DateTime, Key<CommitLogManifest>>} fields.
*
* <p>This translator is responsible for doing three things:
*
* <ol>
* <li>Translating the data into two lists of {@code Date} and {@code Key} objects, in a manner
* similar to {@code @Mapify}.
* <li>Inserting a key to the transaction's {@link CommitLogManifest} on save.
* <li>Truncating the map to include only the last key per day for the last 30 days.
* </ol>
*
* <p>This allows you to have a field on your model object that tracks historical revisions of
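
The third responsibility, keeping only the last key per day over the trailing 30 days, could look roughly like the following hypothetical helper (Joda-Time types as in the surrounding code, plus java.util.TreeMap and org.joda.time.LocalDate; this is not the factory's actual implementation):

static ImmutableSortedMap<DateTime, Key<CommitLogManifest>> truncateRevisions(
    ImmutableSortedMap<DateTime, Key<CommitLogManifest>> revisions, DateTime now) {
  // Iterating in ascending order, later entries for the same calendar day
  // overwrite earlier ones, leaving only the last key per day.
  TreeMap<LocalDate, Map.Entry<DateTime, Key<CommitLogManifest>>> lastPerDay = new TreeMap<>();
  for (Map.Entry<DateTime, Key<CommitLogManifest>> entry :
      revisions.tailMap(now.minusDays(30)).entrySet()) {
    lastPerDay.put(entry.getKey().toLocalDate(), entry);
  }
  ImmutableSortedMap.Builder<DateTime, Key<CommitLogManifest>> builder =
      ImmutableSortedMap.naturalOrder();
  lastPerDay.values().forEach(e -> builder.put(e.getKey(), e.getValue()));
  return builder.build();
}
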
@@ -46,6 +48,7 @@ import org.joda.time.DateTime;
*
* @see google.registry.model.EppResource
*/
@DeleteAfterMigration
public final class CommitLogRevisionsTranslatorFactory
extends ImmutableSortedMapTranslatorFactory<DateTime, Key<CommitLogManifest>> {

@@ -14,12 +14,10 @@
package google.registry.model.translators;
import static com.google.common.base.MoreObjects.firstNonNull;
import static google.registry.persistence.transaction.TransactionManagerFactory.ofyTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static org.joda.time.DateTimeZone.UTC;
import google.registry.model.CreateAutoTimestamp;
import google.registry.persistence.transaction.Transaction;
import java.util.Date;
import org.joda.time.DateTime;
@@ -47,13 +45,13 @@ public class CreateAutoTimestampTranslatorFactory
/** Save a timestamp, setting it to the current time if it did not have a previous value. */
@Override
public Date saveValue(CreateAutoTimestamp pojoValue) {
// Don't do this if we're in the course of transaction serialization.
if (Transaction.inSerializationMode()) {
return pojoValue.getTimestamp() == null ? null : pojoValue.getTimestamp().toDate();
}
return firstNonNull(pojoValue.getTimestamp(), ofyTm().getTransactionTime()).toDate();
// Note that we use the current transaction manager -- this code runs under JPA when we
// serialize the entity from a Transaction object, and in that case we need the JPA
// transaction manager's transaction time.
return (pojoValue.getTimestamp() == null
? tm().getTransactionTime()
: pojoValue.getTimestamp())
.toDate();
}
};
}

@@ -14,6 +14,8 @@
package google.registry.model.translators;
import static com.google.common.base.Preconditions.checkState;
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;
import static google.registry.persistence.transaction.TransactionManagerFactory.ofyTm;
import static org.joda.time.DateTimeZone.UTC;
@@ -48,9 +50,16 @@ public class UpdateAutoTimestampTranslatorFactory
@Override
public Date saveValue(UpdateAutoTimestamp pojoValue) {
// Don't do this if we're in the course of transaction serialization.
// If we're in the course of Transaction serialization, we have to use the transaction time
// from the JPA transaction manager here, since that is what will ultimately be saved
// during the commit.
// Note that this branch doesn't respect "auto update disabled": that state exists
// specifically to address replay, so we add a runtime check for it instead.
if (Transaction.inSerializationMode()) {
return pojoValue.getTimestamp() == null ? null : pojoValue.getTimestamp().toDate();
checkState(
UpdateAutoTimestamp.autoUpdateEnabled(),
"Auto-update disabled during transaction serialization.");
return jpaTm().getTransactionTime().toDate();
}
return UpdateAutoTimestamp.autoUpdateEnabled()

@@ -43,7 +43,7 @@ import google.registry.rde.JSchModule;
import google.registry.request.Modules.DatastoreServiceModule;
import google.registry.request.Modules.Jackson2Module;
import google.registry.request.Modules.NetHttpTransportModule;
import google.registry.request.Modules.URLFetchServiceModule;
import google.registry.request.Modules.UrlConnectionServiceModule;
import google.registry.request.Modules.UrlFetchTransportModule;
import google.registry.request.Modules.UserServiceModule;
import google.registry.request.auth.AuthModule;
@@ -80,7 +80,7 @@ import javax.inject.Singleton;
ServerTridProviderModule.class,
SheetsServiceModule.class,
StackdriverModule.class,
URLFetchServiceModule.class,
UrlConnectionServiceModule.class,
UrlFetchTransportModule.class,
UserServiceModule.class,
VoidDnsWriterModule.class,

@@ -21,6 +21,7 @@ import google.registry.backup.CommitLogCheckpointAction;
import google.registry.backup.DeleteOldCommitLogsAction;
import google.registry.backup.ExportCommitLogDiffAction;
import google.registry.backup.ReplayCommitLogsToSqlAction;
import google.registry.backup.SyncDatastoreToSqlSnapshotAction;
import google.registry.batch.BatchModule;
import google.registry.batch.DeleteContactsAndHostsAction;
import google.registry.batch.DeleteExpiredDomainsAction;
@@ -199,6 +200,8 @@ interface BackendRequestComponent {
SendExpiringCertificateNotificationEmailAction sendExpiringCertificateNotificationEmailAction();
SyncDatastoreToSqlSnapshotAction syncDatastoreToSqlSnapshotAction();
SyncGroupMembersAction syncGroupMembersAction();
SyncRegistrarsSheetAction syncRegistrarsSheetAction();

@@ -17,6 +17,7 @@ package google.registry.module.frontend;
import com.google.monitoring.metrics.MetricReporter;
import dagger.Component;
import dagger.Lazy;
import google.registry.config.CloudTasksUtilsModule;
import google.registry.config.CredentialModule;
import google.registry.config.RegistryConfig.ConfigModule;
import google.registry.flows.ServerTridProviderModule;
@@ -33,7 +34,6 @@ import google.registry.monitoring.whitebox.StackdriverModule;
import google.registry.privileges.secretmanager.SecretManagerModule;
import google.registry.request.Modules.Jackson2Module;
import google.registry.request.Modules.NetHttpTransportModule;
import google.registry.request.Modules.UrlFetchTransportModule;
import google.registry.request.Modules.UserServiceModule;
import google.registry.request.auth.AuthModule;
import google.registry.ui.ConsoleDebug.ConsoleConfigModule;
@@ -45,10 +45,12 @@ import javax.inject.Singleton;
@Component(
modules = {
AuthModule.class,
CloudTasksUtilsModule.class,
ConfigModule.class,
ConsoleConfigModule.class,
CredentialModule.class,
CustomLogicFactoryModule.class,
CloudTasksUtilsModule.class,
DirectoryModule.class,
DummyKeyringModule.class,
FrontendRequestComponentModule.class,
@@ -62,7 +64,6 @@ import javax.inject.Singleton;
SecretManagerModule.class,
ServerTridProviderModule.class,
StackdriverModule.class,
UrlFetchTransportModule.class,
UserServiceModule.class,
UtilsModule.class
})

@@ -30,10 +30,10 @@ import google.registry.keyring.api.KeyModule;
import google.registry.keyring.kms.KmsModule;
import google.registry.module.pubapi.PubApiRequestComponent.PubApiRequestComponentModule;
import google.registry.monitoring.whitebox.StackdriverModule;
import google.registry.persistence.PersistenceModule;
import google.registry.privileges.secretmanager.SecretManagerModule;
import google.registry.request.Modules.Jackson2Module;
import google.registry.request.Modules.NetHttpTransportModule;
import google.registry.request.Modules.UrlFetchTransportModule;
import google.registry.request.Modules.UserServiceModule;
import google.registry.request.auth.AuthModule;
import google.registry.util.UtilsModule;
@@ -56,11 +56,11 @@ import javax.inject.Singleton;
KeyringModule.class,
KmsModule.class,
NetHttpTransportModule.class,
PersistenceModule.class,
PubApiRequestComponentModule.class,
SecretManagerModule.class,
ServerTridProviderModule.class,
StackdriverModule.class,
UrlFetchTransportModule.class,
UserServiceModule.class,
UtilsModule.class
})

@@ -17,6 +17,7 @@ package google.registry.module.tools;
import com.google.monitoring.metrics.MetricReporter;
import dagger.Component;
import dagger.Lazy;
import google.registry.config.CloudTasksUtilsModule;
import google.registry.config.CredentialModule;
import google.registry.config.RegistryConfig.ConfigModule;
import google.registry.export.DriveModule;
@@ -35,7 +36,6 @@ import google.registry.privileges.secretmanager.SecretManagerModule;
import google.registry.request.Modules.DatastoreServiceModule;
import google.registry.request.Modules.Jackson2Module;
import google.registry.request.Modules.NetHttpTransportModule;
import google.registry.request.Modules.UrlFetchTransportModule;
import google.registry.request.Modules.UserServiceModule;
import google.registry.request.auth.AuthModule;
import google.registry.util.UtilsModule;
@@ -49,6 +49,7 @@ import javax.inject.Singleton;
ConfigModule.class,
CredentialModule.class,
CustomLogicFactoryModule.class,
CloudTasksUtilsModule.class,
DatastoreServiceModule.class,
DirectoryModule.class,
DummyKeyringModule.class,
@@ -64,7 +65,6 @@ import javax.inject.Singleton;
ServerTridProviderModule.class,
StackdriverModule.class,
ToolsRequestComponentModule.class,
UrlFetchTransportModule.class,
UserServiceModule.class,
UtilsModule.class
})

@@ -19,6 +19,7 @@ import google.registry.config.CredentialModule;
import google.registry.config.RegistryConfig.ConfigModule;
import google.registry.keyring.kms.KmsModule;
import google.registry.persistence.PersistenceModule.AppEngineJpaTm;
import google.registry.persistence.PersistenceModule.ReadOnlyReplicaJpaTm;
import google.registry.persistence.transaction.JpaTransactionManager;
import google.registry.privileges.secretmanager.SecretManagerModule;
import google.registry.util.UtilsModule;
@@ -40,4 +41,7 @@ public interface PersistenceComponent {
@AppEngineJpaTm
JpaTransactionManager appEngineJpaTransactionManager();
@ReadOnlyReplicaJpaTm
JpaTransactionManager readOnlyReplicaJpaTransactionManager();
}

@@ -20,6 +20,8 @@ import static google.registry.config.RegistryConfig.getHibernateHikariConnection
import static google.registry.config.RegistryConfig.getHibernateHikariIdleTimeout;
import static google.registry.config.RegistryConfig.getHibernateHikariMaximumPoolSize;
import static google.registry.config.RegistryConfig.getHibernateHikariMinimumIdle;
import static google.registry.config.RegistryConfig.getHibernateJdbcBatchSize;
import static google.registry.config.RegistryConfig.getHibernateJdbcFetchSize;
import static google.registry.config.RegistryConfig.getHibernateLogSqlQueries;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
@@ -76,15 +78,9 @@ public abstract class PersistenceModule {
public static final String HIKARI_DS_CLOUD_SQL_INSTANCE =
"hibernate.hikari.dataSource.cloudSqlInstance";
/**
* Postgresql-specific: driver default fetch size is 0, which disables streaming result sets. Here
* we set a small default geared toward Nomulus server transactions. Large queries can override
* the defaults using {@link JpaTransactionManager#setQueryFetchSize}.
*/
public static final String JDBC_BATCH_SIZE = "hibernate.jdbc.batch_size";
public static final String JDBC_FETCH_SIZE = "hibernate.jdbc.fetch_size";
private static final int DEFAULT_SERVER_FETCH_SIZE = 20;
@VisibleForTesting
@Provides
@DefaultHibernateConfigs
@@ -111,7 +107,8 @@ public abstract class PersistenceModule {
properties.put(HIKARI_MAXIMUM_POOL_SIZE, getHibernateHikariMaximumPoolSize());
properties.put(HIKARI_IDLE_TIMEOUT, getHibernateHikariIdleTimeout());
properties.put(Environment.DIALECT, NomulusPostgreSQLDialect.class.getName());
properties.put(JDBC_FETCH_SIZE, Integer.toString(DEFAULT_SERVER_FETCH_SIZE));
properties.put(JDBC_BATCH_SIZE, getHibernateJdbcBatchSize());
properties.put(JDBC_FETCH_SIZE, getHibernateJdbcFetchSize());
return properties.build();
}
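
Both properties are standard Hibernate settings; as an illustrative direct use (the numeric values here are made up, not the configured defaults):

ImmutableMap<String, String> jdbcTuning =
    ImmutableMap.of(
        "hibernate.jdbc.fetch_size", "20",  // rows fetched per driver round trip
        "hibernate.jdbc.batch_size", "50"); // writes grouped into one JDBC batch
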
@@ -122,8 +119,11 @@ public abstract class PersistenceModule {
@Config("cloudSqlJdbcUrl") String jdbcUrl,
@Config("cloudSqlInstanceConnectionName") String instanceConnectionName,
@DefaultHibernateConfigs ImmutableMap<String, String> defaultConfigs) {
return createPartialSqlConfigs(
jdbcUrl, instanceConnectionName, defaultConfigs, Optional.empty());
HashMap<String, String> overrides = Maps.newHashMap(defaultConfigs);
overrides.put(Environment.URL, jdbcUrl);
overrides.put(HIKARI_DS_SOCKET_FACTORY, "com.google.cloud.sql.postgres.SocketFactory");
overrides.put(HIKARI_DS_CLOUD_SQL_INSTANCE, instanceConnectionName);
return ImmutableMap.copyOf(overrides);
}
/**
@@ -184,22 +184,6 @@ public abstract class PersistenceModule {
return ImmutableMap.copyOf(overrides);
}
@VisibleForTesting
static ImmutableMap<String, String> createPartialSqlConfigs(
String jdbcUrl,
String instanceConnectionName,
ImmutableMap<String, String> defaultConfigs,
Optional<Provider<TransactionIsolationLevel>> isolationOverride) {
HashMap<String, String> overrides = Maps.newHashMap(defaultConfigs);
overrides.put(Environment.URL, jdbcUrl);
overrides.put(HIKARI_DS_SOCKET_FACTORY, "com.google.cloud.sql.postgres.SocketFactory");
overrides.put(HIKARI_DS_CLOUD_SQL_INSTANCE, instanceConnectionName);
isolationOverride
.map(Provider::get)
.ifPresent(override -> overrides.put(Environment.ISOLATION, override.name()));
return ImmutableMap.copyOf(overrides);
}
/**
* Provides a {@link Supplier} of single-use JDBC {@link Connection connections} that can manage
* the database DDL schema.
@@ -280,6 +264,40 @@ public abstract class PersistenceModule {
return new JpaTransactionManagerImpl(create(overrides), clock);
}
@Provides
@Singleton
@ReadOnlyReplicaJpaTm
static JpaTransactionManager provideReadOnlyReplicaJpaTm(
SqlCredentialStore credentialStore,
@PartialCloudSqlConfigs ImmutableMap<String, String> cloudSqlConfigs,
@Config("cloudSqlReplicaInstanceConnectionName")
Optional<String> replicaInstanceConnectionName,
Clock clock) {
HashMap<String, String> overrides = Maps.newHashMap(cloudSqlConfigs);
setSqlCredential(credentialStore, new RobotUser(RobotId.NOMULUS), overrides);
replicaInstanceConnectionName.ifPresent(
name -> overrides.put(HIKARI_DS_CLOUD_SQL_INSTANCE, name));
overrides.put(
Environment.ISOLATION, TransactionIsolationLevel.TRANSACTION_READ_COMMITTED.name());
return new JpaTransactionManagerImpl(create(overrides), clock);
}
@Provides
@Singleton
@BeamReadOnlyReplicaJpaTm
static JpaTransactionManager provideBeamReadOnlyReplicaJpaTm(
@BeamPipelineCloudSqlConfigs ImmutableMap<String, String> beamCloudSqlConfigs,
@Config("cloudSqlReplicaInstanceConnectionName")
Optional<String> replicaInstanceConnectionName,
Clock clock) {
HashMap<String, String> overrides = Maps.newHashMap(beamCloudSqlConfigs);
replicaInstanceConnectionName.ifPresent(
name -> overrides.put(HIKARI_DS_CLOUD_SQL_INSTANCE, name));
overrides.put(
Environment.ISOLATION, TransactionIsolationLevel.TRANSACTION_READ_COMMITTED.name());
return new JpaTransactionManagerImpl(create(overrides), clock);
}
/** Constructs the {@link EntityManagerFactory} instance. */
@VisibleForTesting
static EntityManagerFactory create(
@@ -357,7 +375,12 @@ public abstract class PersistenceModule {
* The {@link JpaTransactionManager} optimized for bulk loading multi-level JPA entities. Please
* see {@link google.registry.model.bulkquery.BulkQueryEntities} for more information.
*/
BULK_QUERY
BULK_QUERY,
/**
* The {@link JpaTransactionManager} that uses the read-only Postgres replica if configured, or
* the standard DB if not.
*/
READ_ONLY_REPLICA
}
/** Dagger qualifier for JDBC {@link Connection} with schema management privilege. */
@@ -383,6 +406,22 @@ public abstract class PersistenceModule {
@Documented
public @interface BeamBulkQueryJpaTm {}
/**
* Dagger qualifier for {@link JpaTransactionManager} used inside BEAM pipelines that uses the
* read-only Postgres replica if one is configured (otherwise it uses the standard DB).
*/
@Qualifier
@Documented
public @interface BeamReadOnlyReplicaJpaTm {}
/**
* Dagger qualifier for {@link JpaTransactionManager} that uses the read-only Postgres replica if
* one is configured (otherwise it uses the standard DB).
*/
@Qualifier
@Documented
public @interface ReadOnlyReplicaJpaTm {}
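
A sketch of a hypothetical consumer of the new qualifier, assuming the transact() and query() methods used elsewhere in this codebase:

@Inject @ReadOnlyReplicaJpaTm JpaTransactionManager replicaTm;

long countHosts() {
  // Runs against the replica when one is configured, otherwise the primary.
  return replicaTm.transact(
      () -> replicaTm.query("SELECT count(h) FROM Host h", Long.class).getSingleResult());
}
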
/** Dagger qualifier for {@link JpaTransactionManager} used for Nomulus tool. */
@Qualifier
@Documented

@@ -144,7 +144,7 @@ public class VKey<T> extends ImmutableObject implements Serializable {
*/
public static <T> VKey<T> create(String keyString) {
if (!keyString.startsWith(CLASS_TYPE + KV_SEPARATOR)) {
// to handle the existing ofy key string
// handle the existing ofy key string
return fromWebsafeKey(keyString);
} else {
ImmutableMap<String, String> kvs =
@@ -307,6 +307,7 @@ public class VKey<T> extends ImmutableObject implements Serializable {
if (maybeGetSqlKey().isPresent()) {
key += DELIMITER + SQL_LOOKUP_KEY + KV_SEPARATOR + SerializeUtils.stringify(getSqlKey());
}
// The getString() method returns a Base64-encoded websafe string of the ofy key.
if (maybeGetOfyKey().isPresent()) {
key += DELIMITER + OFY_LOOKUP_KEY + KV_SEPARATOR + getOfyKey().getString();
}
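
Putting the pieces together, a stringified VKey plausibly takes the following shape; the literal token values ("kind", "sql", "ofy", ":" and "@") are assumptions based on the constants referenced above:

// Assumed shape of VKey#stringify() output:
//   kind:DomainBase@sql:EXAMPLE-REPOID@ofy:agR0ZXN0cg8LEgpEb21haW5CYXNlGAEM
// VKey.create(keyString) falls back to fromWebsafeKey() for strings without the
// leading class-type prefix, i.e. for legacy ofy websafe key strings.
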

@@ -15,6 +15,7 @@
package google.registry.persistence.converter;
import com.google.common.collect.Maps;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.model.common.DatabaseMigrationStateSchedule;
import google.registry.model.common.DatabaseMigrationStateSchedule.MigrationState;
import google.registry.model.common.DatabaseMigrationStateSchedule.MigrationStateTransition;
@@ -23,6 +24,7 @@ import javax.persistence.Converter;
import org.joda.time.DateTime;
/** JPA converter for {@link DatabaseMigrationStateSchedule} transitions. */
@DeleteAfterMigration
@Converter(autoApply = true)
public class DatabaseMigrationScheduleTransitionConverter
extends TimedTransitionPropertyConverterBase<MigrationState, MigrationStateTransition> {

@@ -18,7 +18,6 @@ import static google.registry.persistence.transaction.TransactionManagerFactory.
import com.google.common.collect.ImmutableList;
import java.util.Collection;
import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Expression;
@@ -42,12 +41,14 @@ public class CriteriaQueryBuilder<T> {
private final CriteriaQuery<T> query;
private final Root<?> root;
private final JpaTransactionManager jpaTm;
private final ImmutableList.Builder<Predicate> predicates = new ImmutableList.Builder<>();
private final ImmutableList.Builder<Order> orders = new ImmutableList.Builder<>();
private CriteriaQueryBuilder(CriteriaQuery<T> query, Root<?> root) {
private CriteriaQueryBuilder(CriteriaQuery<T> query, Root<?> root, JpaTransactionManager jpaTm) {
this.query = query;
this.root = root;
this.jpaTm = jpaTm;
}
/** Adds a WHERE clause to the query, given the specified operation, field, and value. */
@@ -75,18 +76,18 @@ public class CriteriaQueryBuilder<T> {
*/
public <V> CriteriaQueryBuilder<T> whereFieldContains(String fieldName, Object value) {
return where(
jpaTm().getEntityManager().getCriteriaBuilder().isMember(value, root.get(fieldName)));
jpaTm.getEntityManager().getCriteriaBuilder().isMember(value, root.get(fieldName)));
}
/** Orders the result by the given field ascending. */
public CriteriaQueryBuilder<T> orderByAsc(String fieldName) {
orders.add(jpaTm().getEntityManager().getCriteriaBuilder().asc(root.get(fieldName)));
orders.add(jpaTm.getEntityManager().getCriteriaBuilder().asc(root.get(fieldName)));
return this;
}
/** Orders the result by the given field descending. */
public CriteriaQueryBuilder<T> orderByDesc(String fieldName) {
orders.add(jpaTm().getEntityManager().getCriteriaBuilder().desc(root.get(fieldName)));
orders.add(jpaTm.getEntityManager().getCriteriaBuilder().desc(root.get(fieldName)));
return this;
}
@@ -103,23 +104,24 @@ public class CriteriaQueryBuilder<T> {
/** Creates a query builder that will SELECT from the given class. */
public static <T> CriteriaQueryBuilder<T> create(Class<T> clazz) {
return create(jpaTm().getEntityManager(), clazz);
return create(jpaTm(), clazz);
}
/** Creates a query builder for the given entity manager. */
public static <T> CriteriaQueryBuilder<T> create(EntityManager em, Class<T> clazz) {
CriteriaQuery<T> query = em.getCriteriaBuilder().createQuery(clazz);
public static <T> CriteriaQueryBuilder<T> create(JpaTransactionManager jpaTm, Class<T> clazz) {
CriteriaQuery<T> query = jpaTm.getEntityManager().getCriteriaBuilder().createQuery(clazz);
Root<T> root = query.from(clazz);
query = query.select(root);
return new CriteriaQueryBuilder<>(query, root);
return new CriteriaQueryBuilder<>(query, root, jpaTm);
}
/** Creates a "count" query for the table for the class. */
public static <T> CriteriaQueryBuilder<Long> createCount(EntityManager em, Class<T> clazz) {
CriteriaBuilder builder = em.getCriteriaBuilder();
public static <T> CriteriaQueryBuilder<Long> createCount(
JpaTransactionManager jpaTm, Class<T> clazz) {
CriteriaBuilder builder = jpaTm.getEntityManager().getCriteriaBuilder();
CriteriaQuery<Long> query = builder.createQuery(Long.class);
Root<T> root = query.from(clazz);
query = query.select(builder.count(root));
return new CriteriaQueryBuilder<>(query, root);
return new CriteriaQueryBuilder<>(query, root, jpaTm);
}
}
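
After this change a caller chooses the transaction manager explicitly; a sketch (entity and field names hypothetical):

CriteriaQueryBuilder<Registrar> builder =
    CriteriaQueryBuilder.create(jpaTm(), Registrar.class).orderByAsc("registrarName");
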

@@ -30,6 +30,7 @@ import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Streams;
import com.google.common.flogger.FluentLogger;
import com.googlecode.objectify.Key;
import google.registry.model.ImmutableObject;
import google.registry.model.common.DatabaseMigrationStateSchedule;
import google.registry.model.common.DatabaseMigrationStateSchedule.ReplayDirection;
@@ -141,14 +142,15 @@ public class JpaTransactionManagerImpl implements JpaTransactionManager {
// Postgresql-specific: 'set transaction' command must be called inside a transaction
assertInTransaction();
EntityManager entityManager = getEntityManager();
ReadOnlyCheckingEntityManager entityManager =
(ReadOnlyCheckingEntityManager) getEntityManager();
// Isolation is hardcoded to REPEATABLE READ, as specified by the parent's Javadoc.
entityManager
.createNativeQuery("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ")
.executeUpdate();
.executeUpdateIgnoringReadOnly();
entityManager
.createNativeQuery(String.format("SET TRANSACTION SNAPSHOT '%s'", snapshotId))
.executeUpdate();
.executeUpdateIgnoringReadOnly();
return this;
}
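
For context, the snapshot id consumed here must come from another session that exported it; in raw PostgreSQL the pairing looks like this (the snapshot id value is made up):

// Exporting session (inside an open transaction on the same database):
//   SELECT pg_export_snapshot();   -- returns e.g. '00000003-0000001B-1'
// Importing session (this method), first thing inside its own transaction:
//   SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
//   SET TRANSACTION SNAPSHOT '00000003-0000001B-1';
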
@@ -603,6 +605,13 @@ public class JpaTransactionManagerImpl implements JpaTransactionManager {
managedEntity = getEntityManager().merge(entity);
}
getEntityManager().remove(managedEntity);
// We check shouldReplicate() in TransactionInfo.addDelete(), but we have to check it here as
// well prior to attempting to create a datastore key because a non-replicated entity may not
// have one.
if (shouldReplicate(entity.getClass())) {
transactionInfo.get().addDelete(VKey.from(Key.create(entity)));
}
return managedEntity;
}
@@ -826,6 +835,12 @@ public class JpaTransactionManagerImpl implements JpaTransactionManager {
replaySqlToDatastoreOverrideForTest.set(Optional.empty());
}
/** Returns true if the entity class should be replicated from SQL to datastore. */
private static boolean shouldReplicate(Class<?> entityClass) {
return !NonReplicatedEntity.class.isAssignableFrom(entityClass)
&& !SqlOnlyEntity.class.isAssignableFrom(entityClass);
}
private static class TransactionInfo {
ReadOnlyCheckingEntityManager entityManager;
boolean inTransaction = false;
@@ -882,12 +897,6 @@ public class JpaTransactionManagerImpl implements JpaTransactionManager {
}
}
/** Returns true if the entity class should be replicated from SQL to datastore. */
private boolean shouldReplicate(Class<?> entityClass) {
return !NonReplicatedEntity.class.isAssignableFrom(entityClass)
&& !SqlOnlyEntity.class.isAssignableFrom(entityClass);
}
private void recordTransaction() {
if (contentsBuilder != null) {
Transaction persistedTxn = contentsBuilder.build();
@@ -1130,7 +1139,7 @@ public class JpaTransactionManagerImpl implements JpaTransactionManager {
private TypedQuery<T> buildQuery() {
CriteriaQueryBuilder<T> queryBuilder =
CriteriaQueryBuilder.create(getEntityManager(), entityClass);
CriteriaQueryBuilder.create(JpaTransactionManagerImpl.this, entityClass);
return addCriteria(queryBuilder);
}
@@ -1177,7 +1186,7 @@ public class JpaTransactionManagerImpl implements JpaTransactionManager {
@Override
public long count() {
CriteriaQueryBuilder<Long> queryBuilder =
CriteriaQueryBuilder.createCount(getEntityManager(), entityClass);
CriteriaQueryBuilder.createCount(JpaTransactionManagerImpl.this, entityClass);
return addCriteria(queryBuilder).getSingleResult();
}

@@ -16,6 +16,7 @@ package google.registry.persistence.transaction;
import static google.registry.persistence.transaction.TransactionManagerFactory.assertNotReadOnlyMode;
import google.registry.model.annotations.DeleteAfterMigration;
import java.util.List;
import java.util.Map;
import javax.persistence.EntityGraph;
@@ -34,6 +35,7 @@ import javax.persistence.criteria.CriteriaUpdate;
import javax.persistence.metamodel.Metamodel;
/** An {@link EntityManager} that throws exceptions on write actions if in read-only mode. */
@DeleteAfterMigration
public class ReadOnlyCheckingEntityManager implements EntityManager {
private final EntityManager delegate;
@@ -204,7 +206,7 @@ public class ReadOnlyCheckingEntityManager implements EntityManager {
}
@Override
public Query createNativeQuery(String sqlString) {
public ReadOnlyCheckingQuery createNativeQuery(String sqlString) {
return new ReadOnlyCheckingQuery(delegate.createNativeQuery(sqlString));
}
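
The checking pattern itself is plain delegation plus a guard; a minimal sketch for one write method (the real class wraps every EntityManager operation the same way):

@Override
public void persist(Object entity) {
  assertNotReadOnlyMode(); // statically imported above; throws if in read-only mode
  delegate.persist(entity);
}
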

Some files were not shown because too many files have changed in this diff.