mirror of https://github.com/google/nomulus synced 2026-02-02 19:12:27 +00:00

Compare commits


79 Commits

Author SHA1 Message Date
Pavlo Tkach
db1b92638f Create console settings contact endpoints (#2033) 2023-05-31 16:34:57 -04:00
Lai Jiang
74baae397a Find the most recent prefix for RdeReportAction (#2043)
When RdeReportAction is invoked without a prefix parameter (as in the
case when it is kicked off by cron jobs for potential catch-ups), we
need to use the same heuristic that is employed in RdeUploadAction to
find the most recent prefix for the given watermark; otherwise the job
will not find any deposits to upload.

Also renamed RdeUtil to RdeUtils, to be consistent with our naming
conventions.
2023-05-25 14:57:03 -04:00
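
A minimal sketch of the heuristic described in the commit above, under the assumption that deposit prefixes embed the watermark and that later revisions sort after earlier ones; the real logic lives in RdeUtils and the names here are illustrative.

import java.util.Comparator;
import java.util.List;
import java.util.Optional;

final class MostRecentPrefixSketch {
  /** Picks the most recent prefix matching the watermark tag, or empty if none was deposited. */
  static Optional<String> findMostRecentPrefix(List<String> existingPrefixes, String watermarkTag) {
    return existingPrefixes.stream()
        .filter(prefix -> prefix.contains(watermarkTag))
        // Assumption: lexicographic order tracks recency for prefixes sharing a watermark.
        .max(Comparator.naturalOrder());
  }
}
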
sarahcaseybot
fddecea18e Rename Registries to Tlds (#2042)
* Rename Registries to Tlds

* Change Tlds to TLDs in comments
2023-05-24 17:08:09 -04:00
Pavlo Tkach
36a60bdf8b Add swagger API documentation (#2035) 2023-05-24 16:10:50 -04:00
dependabot[bot]
58ed53314c Bump socket.io-parser from 4.2.1 to 4.2.3 in /console-webapp (#2040)
Bumps [socket.io-parser](https://github.com/socketio/socket.io-parser) from 4.2.1 to 4.2.3.
- [Release notes](https://github.com/socketio/socket.io-parser/releases)
- [Changelog](https://github.com/socketio/socket.io-parser/blob/main/CHANGELOG.md)
- [Commits](https://github.com/socketio/socket.io-parser/compare/4.2.1...4.2.3)

---
updated-dependencies:
- dependency-name: socket.io-parser
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-24 07:23:15 -04:00
Lai Jiang
5eaf99e02a Show HTTP response code when PUT fails (#2038) 2023-05-23 17:01:56 -04:00
Pavlo Tkach
9a5f094d1d Remove unused queue.xml file left after Cloud Tasks Queue migration (#2039) 2023-05-23 13:59:21 -04:00
Lai Jiang
6cbc2fa5ef Wrap tm().loadByKey() in a transaction when caching is not enabled. (#2030)
We have caching enabled so we never exercised this line.
2023-05-19 14:21:48 -04:00
Lai Jiang
6883093735 Drop DatabaseMigrationStateSchedule table (#2002) 2023-05-18 13:44:24 -04:00
Lai Jiang
a6078bc4f4 Refactor OIDC-based auth mechanism (#2025)
IAP and regular OIDC auth mechanisms are unified under a base class that
produces either APP or USER level AuthResult based on the principal email
found in the OIDC token.

Also moved some enum classes to better organize code structure.
2023-05-16 16:43:11 -04:00
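
A simplified, hypothetical sketch of the unified mechanism described in the commit above; the real base class, token verification, and AuthResult/AuthLevel types in Nomulus differ.

import java.util.Set;

final class OidcAuthSketch {
  enum AuthLevel { NONE, APP, USER }

  static final class AuthResult {
    final AuthLevel level;
    final String email;
    AuthResult(AuthLevel level, String email) {
      this.level = level;
      this.email = email;
    }
  }

  abstract static class Base {
    private final Set<String> serviceAccountEmails;

    Base(Set<String> serviceAccountEmails) {
      this.serviceAccountEmails = serviceAccountEmails;
    }

    /** IAP and regular OIDC subclasses each extract the verified principal email from the token. */
    abstract String principalEmail(String oidcToken);

    AuthResult authenticate(String oidcToken) {
      String email = principalEmail(oidcToken);
      if (email == null) {
        return new AuthResult(AuthLevel.NONE, null);
      }
      // Known service accounts authenticate at APP level; any other principal is a USER.
      return serviceAccountEmails.contains(email)
          ? new AuthResult(AuthLevel.APP, email)
          : new AuthResult(AuthLevel.USER, email);
    }
  }
}
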
gbrodman
6b75cf8496 Add view/edit basic registrar details permissions (#2036)
This encompasses most of the basic information that is viewable in the
existing console: essentially, just viewing the base info of the Registrar
object.
2023-05-16 15:32:25 -04:00
Lai Jiang
219e9d3afb Update install.md (#2029) 2023-05-16 10:07:20 -04:00
sarahcaseybot
acdbc65c51 Change Registry object reference to Tld in configuration.md (#2021) 2023-05-12 12:32:02 -04:00
Weimin Yu
d510531f65 Remove the deprecated DefaultCredential (#2032)
Use the ApplicationDefaultCredential annotation instead.

The new annotation has been verified in sandbox and production using the
'executeCannedScript' endpoint. The verification code is removed in this
PR too.
2023-05-11 13:46:36 -04:00
Lai Jiang
0d4dd57fe7 Fix a typo (#2031) 2023-05-11 13:26:07 -04:00
Pavlo Tkach
2667a0e977 Expand nomulus get_domain command to load up deleted domain data too (#2018) 2023-05-10 16:05:03 -04:00
gbrodman
1aef31efff Allow usage of standard HTTP requests in CloudTasksUtils (#2013)
This adds a possible configuration point "defaultServiceAccount" (which
in GAE will be the standard GAE service account). If this is configured,
CloudTasksUtils can create tasks with standard HTTP requests with an
OIDC token corresponding to that service account, as opposed to using
the AppEngine-specific request methods.

This also works with IAP, in that if IAP is on and we specify the IAP
client ID in the config, CloudTasksUtils will use the IAP client ID as
the token audience and the request will successfully be passed through
the IAP layer.

Tested in QA.
2023-05-09 16:02:12 -04:00
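
A minimal sketch of the audience selection described in the commit above, using the Cloud Tasks v2 builders that also appear in the CloudTasksUtils diff further down; the full URL and parameter handling of the real class is omitted, and the helper name is illustrative.

import com.google.cloud.tasks.v2.HttpMethod;
import com.google.cloud.tasks.v2.HttpRequest;
import com.google.cloud.tasks.v2.OidcToken;
import com.google.cloud.tasks.v2.Task;
import javax.annotation.Nullable;

final class OidcTaskSketch {
  /** Builds a standard (non-AppEngine) HTTP task whose OIDC token audience depends on IAP. */
  static Task buildTask(
      String url, String defaultServiceAccount, @Nullable String iapClientId, String projectId) {
    OidcToken.Builder oidcToken =
        OidcToken.newBuilder().setServiceAccountEmail(defaultServiceAccount);
    // With IAP on, the IAP client ID must be the token audience so the request passes the IAP
    // layer; otherwise the project ID is used.
    oidcToken.setAudience(iapClientId != null ? iapClientId : projectId);
    HttpRequest request =
        HttpRequest.newBuilder()
            .setHttpMethod(HttpMethod.POST)
            .setUrl(url)
            .setOidcToken(oidcToken.build())
            .build();
    return Task.newBuilder().setHttpRequest(request).build();
  }
}
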
Lai Jiang
4d19245c29 Change usage grouping key in the invoice CSV (#2024)
This column is used by the billing team to create invoices. Registrars
have asked that a single invoice be created for a given registrar,
instead of one per registrar-tld pair. This should have no other effect
on the billing pipeline as the invoice grouping key has a description
field that also contains the TLD, so the granularity as a whole does not
change.
2023-05-09 11:25:11 -04:00
Lai Jiang
4b34307a6e Delete DatabaseMigrationStateSchedule (#2001)
We have been using it as a poor man's timed flag that triggers a system
behavior change after a certain time. We have no foreseeable future use
for it now that the DNS pull queue-related code is deleted. If a need for
such a flag arises in the future, we are better off implementing a proper
flag system than hijacking this class anyway.
2023-05-08 14:36:28 -04:00
Pavlo Tkach
55243e7cf6 Adds cloud scheduler and tasks deployer (#1999) 2023-05-04 15:57:32 -04:00
Lai Jiang
e14764b4c8 Remove DNS pull queue (#2000)
This is the last dependency on the GAE pull queue, so we can delete the
pull queue config from queue.xml as well.
2023-05-04 13:21:53 -04:00
dependabot[bot]
68810f7a30 Bump engine.io and socket.io in /console-webapp (#2022)
Bumps [engine.io](https://github.com/socketio/engine.io) and [socket.io](https://github.com/socketio/socket.io). These dependencies needed to be updated together.

Updates `engine.io` from 6.2.1 to 6.4.2
- [Release notes](https://github.com/socketio/engine.io/releases)
- [Changelog](https://github.com/socketio/engine.io/blob/main/CHANGELOG.md)
- [Commits](https://github.com/socketio/engine.io/compare/6.2.1...6.4.2)

Updates `socket.io` from 4.5.2 to 4.6.1
- [Release notes](https://github.com/socketio/socket.io/releases)
- [Changelog](https://github.com/socketio/socket.io/blob/main/CHANGELOG.md)
- [Commits](https://github.com/socketio/socket.io/compare/4.5.2...4.6.1)

---
updated-dependencies:
- dependency-name: engine.io
  dependency-type: indirect
- dependency-name: socket.io
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-04 12:50:19 -04:00
Ben McIlwain
14d245b1e3 Remove duplicate info from create/update reserved list command output (#2020)
It was repeating the domain label twice for every reserved list entry. It used
to look like this:

baddies=baddies,FULLY_BLOCKED
2023-05-03 17:31:23 -04:00
Weimin Yu
61ab29ae9e Prober ssl cert update automation (#2019)
Defines a Cloud Build script and Docker image that automatically update
probers' SSL certs.
2023-05-03 15:57:50 -04:00
Weimin Yu
6742e5bf23 Remove CloudSql wipeout cron job in crash (#2017)
No more production data in crash. This allows us to repopulate crash
with test data.
2023-05-02 14:44:09 -04:00
Weimin Yu
c7f69eba1d Prepare switch of credential annotation (#2014)
* Prepare switch of credential annotation

Prepare the switch from DefaultCredential to ApplicationCredential.

In nomulus tools, start using the new annotation. This is tested by
successfully using the nomulus curl command, which actually needs a
valid credential to work.

For the remaining use cases of the old annotation in the Nomulus server, add
some code that relies on the new credential. Once this code is tested in
sandbox and production, we will switch the annotations.
2023-05-01 11:23:19 -04:00
gbrodman
578988d5ea Don't allow a list of the empty string in List<String> fields (#2011)
If the user passes, e.g., `--allowed_nameservers=` (or contact IDs), that
shouldn't mean a list consisting solely of the empty string.
Using this parameter / converter allows us to ensure that lists of
strings look reasonable.
2023-04-28 17:59:17 -04:00
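
A minimal sketch of the intended behavior described in the commit above, using a hypothetical helper rather than the actual parameter/converter classes in the tool.

import com.google.common.base.Splitter;
import com.google.common.collect.ImmutableList;

final class ListFlagSketch {
  /** Parses a comma-separated flag value; an empty value yields an empty list, never [""]. */
  static ImmutableList<String> parse(String flagValue) {
    if (flagValue == null || flagValue.isEmpty()) {
      return ImmutableList.of();
    }
    return ImmutableList.copyOf(
        Splitter.on(',').trimResults().omitEmptyStrings().split(flagValue));
  }
}

With this, `--allowed_nameservers=` parses to an empty list instead of a one-element list containing the empty string.
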
sarahcaseybot
c17b8285f9 Don't apply non-premium default tokens to premium names (#2007)
* Don't apply non-premium default tokens to premium names

* Add test for renew

* Remove premium check from try/catch block

* Add check in validateToken

* Update docs

* Add validateForPremiums

* Better method name

* Shorten error message to fit as reason

* Add missing extension catch

* Remove extra javadoc

* Fix merge conflicts and change error message

* Update flow docs
2023-04-28 17:56:15 -04:00
gbrodman
ff8a08f40e Fix typo in pipeline name (#2016) 2023-04-28 17:05:24 -04:00
gbrodman
a341058282 Refactor / rename Billing object classes (#1993)
This includes renaming the billing classes to match the SQL table names,
as well as splitting them out into their own separate top-level classes.
The rest of the changes are mostly renaming variables, comments, etc.

We now use `BillingBase` as the name of the common billing superclass,
because one-time events are called BillingEvents
2023-04-28 14:27:37 -04:00
Weimin Yu
16758879f0 Allow rotation when updating registrar cert (#2012)
* Allow rotation when updating registrar cert

When updating a registrar's primary cert, add a flag to activate
rotation of the previous primary cert to failover.

This functionality is part of the prober SSL cert renewal automation.
2023-04-27 14:42:11 -04:00
Lai Jiang
2021247ab4 Update README on how to manually push schema (#2009) 2023-04-26 16:32:15 -04:00
Lai Jiang
4fc7038690 Make a few minor changes to make the linter happy (#2010)
2023-04-26 15:49:32 -04:00
Weimin Yu
9272e7fd14 Add a test of failover certificate (#2008)
Verifies that client can log in with correct failover certificate.
2023-04-26 15:47:47 -04:00
sarahcaseybot
e1afe00758 Require token transition schedules for default tokens (#2005) 2023-04-21 17:38:10 -04:00
sarahcaseybot
203c20c040 Use a TLD's configured TTLs if they are present (#1992)
* Use tld's configured TTLs if they are present

* Change to optional

* Use optionals better
2023-04-21 13:47:10 -04:00
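
A minimal sketch of the fallback described in the commit above; the accessor shape is hypothetical (the actual TTL fields on the Tld entity appear in the schema change further down the list).

import java.util.Optional;
import org.joda.time.Duration;

final class TtlFallbackSketch {
  /** Uses the TLD's configured TTL when present, otherwise the global default. */
  static Duration effectiveTtl(Optional<Duration> configuredTtl, Duration globalDefault) {
    return configuredTtl.orElse(globalDefault);
  }
}
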
Lai Jiang
bd0cea0d87 Remove AppEngineServiceUtils (#2003)
The only method that is called from this class is setNumInstances. However, we
don't currently use `nomulus set_num_instances` anywhere. If we need to change
the number of instances, it is done either by updating appengine-web.xml, which
is deployed by Spinnaker, or manually as a break-glass fix via gcloud or on
Pantheon.
2023-04-21 10:11:12 -04:00
sarahcaseybot
23fb69a682 Fix parameter description for type in GenerateAllocationTokensCommand (#1998) 2023-04-19 17:32:09 -04:00
Lai Jiang
597f63a603 Fix URL parameter to the DNS refresh fanout job (#1997)
2023-04-19 15:32:41 -04:00
Lai Jiang
5ec73f3809 Refactor contact history PII wipeout logic into a Beam pipeline (#1994)
Because we need to check whether a contact history is the most recent one for
its underlying contact resource, the query-wipeout-repeat loop no longer works
well due to the added overhead of the query.

Instead, we refactor the logic into a Beam pipeline where the query only
needs to be performed once and history entries eligible for wipe-out are
handled individually in their own transforms. Because history entries
are otherwise immutable, we can run the pipeline at the relatively relaxed
repeatable-read isolation level. We also do not worry about batching for
performance, as we do not anticipate this operation putting a lot of
strain on the particular table.
2023-04-19 13:04:45 -04:00
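
A minimal sketch of the eligibility check implied by the commit above, with hypothetical parameters; the real pipeline works on ContactHistory rows and their modification times.

import org.joda.time.DateTime;

final class WipeOutEligibilitySketch {
  /**
   * PII in a history entry may be wiped when the entry is older than the cutoff and is not the
   * most recent entry for its contact (the most recent entry is retained).
   */
  static boolean eligibleForWipeOut(
      DateTime entryModificationTime, DateTime latestModificationTimeForContact, DateTime cutoff) {
    return entryModificationTime.isBefore(cutoff)
        && entryModificationTime.isBefore(latestModificationTimeForContact);
  }
}
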
Ben McIlwain
b474e50e87 Update IDN tables with latest approved by ICANN (#1995)
This also adds README files to explain the two different IDN table locations
(which have different purposes). See http://b/278565478 for more information.
2023-04-18 17:23:12 -04:00
sarahcaseybot
6f3d062c32 Change Registry class name to Tld (#1991)
* Change Registry class name to Tld

* Fix merge conflict

* Some capitalization fixes
2023-04-18 12:26:14 -04:00
gbrodman
371d83b4cc Add a command to update Recurrence objects' behavior (#1987)
We want to basically be able to change the renewal behavior, either
setting the behavior type (e.g. NONPREMIUM) or the specified renewal
price.
2023-04-17 11:36:12 -04:00
Lai Jiang
e1f29a8103 Add routing for ReadDnsRefreshRequestsAction (#1990)
It looks like we forgot the crucial part of actually adding the necessary
routing for the new action...

Also fixes a linter warning.
2023-04-12 15:17:21 -04:00
Pavlo Tkach
055a52f67e Trim cloud scheduler config url value before submitting (#1988) 2023-04-10 19:05:32 -04:00
sarahcaseybot
d17678959c Add tool commands to modify TTLs on a TLD (#1985)
* Add tool commands to modify TTLs on a TLD

* Small changes

* Add an example to the parameter description
2023-04-10 14:43:56 -04:00
Lai Jiang
79ba1b94c4 Add SQL-based DNS refresh processing mechanism (#1971) 2023-04-07 17:31:28 -04:00
gbrodman
33a771b13e Add Java code for storing and using IDN tables per-TLD (#1977)
This includes changes to make sure that we use the proper per-TLD IDN
tables as well as setting/updating/removing them via the Create/Update
TLD commands.
2023-04-06 17:33:23 -04:00
gbrodman
bd65c6eee6 Allow a credit of 0 when deleting a domain during a grace period (#1984)
There can be situations (anchor tenants, test tokens, other ways of
getting a domain to cost $0) where we may want to delete a domain during
the add grace period but the credit applied is 0. We should not fail on
those cases.

See b/277115241 for an example.
2023-04-06 15:58:53 -04:00
Ben McIlwain
20c673840e Add a new Unconfusable Latin table (#1981)
This new table has just been approved by ICANN. It is the same as our existing
Extended Latin table, except with the removal of some lesser-used characters
with diacritic marks that are confusable variants.

The filenames for the IDN tables are made explicit to improve code readability.

And this reverses the removal of G with stroke from the existing Extended Latin
table (see PR #1938), so that that table continues to accurately reflect the
state of our previously launched TLDs.

This is the full list of removed characters:

U+00E1                         # LATIN SMALL LETTER A WITH ACUTE
U+0101                         # LATIN SMALL LETTER A WITH MACRON
U+01CE                         # LATIN SMALL LETTER A WITH CARON
U+010B                         # LATIN SMALL LETTER C WITH DOT ABOVE
U+01E7                         # LATIN SMALL LETTER G WITH CARON
U+0123                         # LATIN SMALL LETTER G WITH CEDILLA
U+01E5                         # LATIN SMALL LETTER G WITH STROKE
U+0131                         # LATIN SMALL LETTER DOTLESS I
U+00ED                         # LATIN SMALL LETTER I WITH ACUTE
U+00EF                         # LATIN SMALL LETTER I WITH DIAERESIS
U+01D0                         # LATIN SMALL LETTER I WITH CARON
U+0144                         # LATIN SMALL LETTER N WITH ACUTE
U+014B                         # LATIN SMALL LETTER ENG
U+00F3                         # LATIN SMALL LETTER O WITH ACUTE
U+014D                         # LATIN SMALL LETTER O WITH MACRON
U+01D2                         # LATIN SMALL LETTER O WITH CARON
U+0157                         # LATIN SMALL LETTER R WITH CEDILLA
U+0163                         # LATIN SMALL LETTER T WITH CEDILLA
U+00FA                         # LATIN SMALL LETTER U WITH ACUTE
U+00FC                         # LATIN SMALL LETTER U WITH DIAERESIS
U+01D4                         # LATIN SMALL LETTER U WITH CARON
U+1E83                         # LATIN SMALL LETTER W WITH ACUTE
U+1E81                         # LATIN SMALL LETTER W WITH GRAVE
U+1E85                         # LATIN SMALL LETTER W WITH DIAERESIS
U+1EF3                         # LATIN SMALL LETTER Y WITH GRAVE
U+017C                         # LATIN SMALL LETTER Z WITH DOT ABOVE
2023-04-06 15:49:36 -04:00
Lai Jiang
11c60b8c8f Temporarily disable contact history wipeout (#1982)
Moves the next run to the first Monday of December, which should give us
plenty of time to fix the issue with it wiping out PII in the most recent
contact history.

2023-04-06 13:41:51 -04:00
Lai Jiang
e330fd1c66 Remove cron.xml from sandbox (#1979)
It was somehow missed in #1965.
2023-04-06 11:30:07 -04:00
Pavlo Tkach
57c17042b6 Transaction manager to not retry inner transactions (#1974) 2023-04-05 16:46:36 -04:00
sarahcaseybot
8623fce119 Check for default tokens in the renew flow (#1978)
* Check for default tokens in the renew flow

* Remove extra check

* Add allowed action
2023-04-05 12:25:09 -04:00
Lai Jiang
7243575433 Remove unused GAE dependencies from NordnUploadAction (#1980) 2023-04-04 16:53:35 -04:00
sarahcaseybot
8eab43d371 Check allowedEppActions when validating tokens (#1972)
* Check allowedEppActions when validating tokens

* Reflect failed tokens in the fee check

* Add tests for domainCheckFlow

* Add hyphens to fee class name

* Add clarifying comment to catch block

* Add specific exception types
2023-04-04 14:29:50 -04:00
sarahcaseybot
34d329c158 Add tool changes to modify allowedEppActions on allocation tokens (#1970)
* Add tool changes to modify allowedEppActions on allocation tokens

* Change enum value error message

* Remove unnecessary variable

* Prevent UNKNOWN command

* Check command name instead of string
2023-03-31 14:37:19 -04:00
Pavlo Tkach
425ecdcd87 Add disable_runner_v2 to pipeline options (#1976) 2023-03-30 17:10:37 -04:00
gbrodman
77ee124374 Add SQL change for per-TLD IDN tables (#1975) 2023-03-28 17:03:22 -04:00
Lai Jiang
b9742adc0b Delete cron.xml (#1965)
We've successfully migrated to using Cloud Scheduler.
2023-03-23 14:29:06 -04:00
sarahcaseybot
d4cd25c4ae Add pricing logic for allocation tokens in domain renew (#1961)
* Add pricing logic for allocation tokens in domain renew

* Add clarifying comment

* Several fixes

* Add test for renewalPriceBehavior not changing
2023-03-23 14:00:36 -04:00
sarahcaseybot
8b7e938ed6 Add TTL configs to Registry object (#1968)
* Add TTL configs to Registry object

* Change A and AAAA records TTL field name
2023-03-22 13:56:11 -04:00
Pavlo Tkach
c216c874b4 Remove app engine deps from Lock flyway change (#1911) 2023-03-20 12:25:12 -04:00
Pavlo Tkach
0ab9471c8d Make cloud scheduler deployment part of gradle deploy (alpha, qa and crash only) (#1969) 2023-03-20 11:10:00 -04:00
sarahcaseybot
d482754f66 Implement default tokens for the fee extension in domain check flow (#1950)
* Implement default tokens for the fee extension in domain check

* Add test for expired token

* Add test for alloc token and default token

* Fix formatting

* Always check for default tokens

* Change transaction time to passed in DateTime
2023-03-17 15:41:17 -04:00
sarahcaseybot
fe086b43f5 Add TTL columns to the Tld table (#1964)
* Add TTL columns to Tld table

* Change A and AAAA records column name
2023-03-17 11:54:14 -04:00
Lai Jiang
95f1bca3fb Remove Nordn pull queue code (#1966)
The SQL-based flow has been verified to work in production.
2023-03-16 17:37:48 -04:00
sarahcaseybot
178a2323d9 Add allowedEppActions to AllocationToken Java classes (#1958)
* Add allowedEppActions field to AllocationToken Java class and converter

* Add getter and setter
2023-03-16 15:45:34 -04:00
Lai Jiang
a44aa1378f Create a DnsRefreshRequest entity backed by the corresponding table (#1941)
Also adds a DnsUtils class to deal with adding, polling, and removing
DNS refresh requests (only adding is implemented for now). The class
also takes care of choosing which mechanism to use (pull queue vs. SQL)
based on the current time and the database migration schedule map.
2023-03-16 13:02:20 -04:00
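
A minimal sketch of the time-based selection described in the commit above, assuming the migration schedule is available as a time-keyed map; the real DnsUtils consults the database migration schedule, and the names here are illustrative.

import java.util.NavigableMap;
import org.joda.time.DateTime;

final class DnsMechanismSketch {
  enum Mechanism { PULL_QUEUE, SQL }

  /**
   * Returns the DNS-refresh mechanism in effect at {@code now}: the latest stage starting at or
   * before that time (assumes the schedule has an entry at or before {@code now}).
   */
  static Mechanism mechanismAt(DateTime now, NavigableMap<DateTime, Mechanism> schedule) {
    return schedule.floorEntry(now).getValue();
  }
}
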
Pavlo Tkach
d0f625f70e angular version update 15.1.0 -> 15.2.2 (#1967) 2023-03-16 11:56:38 -04:00
gbrodman
fb59874234 Allow for multiple service accounts in authentication (#1963)
When submitting tasks to Cloud Tasks, we will use the built-in OIDC
authentication, which runs under the default service account (not the
Cloud Scheduler service account). We want either service account to work
for app-level auth.
2023-03-15 10:20:58 -04:00
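
A minimal sketch of the check implied by the commit above, with hypothetical names; the point is simply that more than one service account email is accepted for app-level auth.

import com.google.common.collect.ImmutableSet;

final class AppLevelAuthSketch {
  /** App-level auth succeeds if the token's verified email is any of the allowed service accounts. */
  static boolean isAllowedServiceAccount(
      String tokenEmail, ImmutableSet<String> allowedServiceAccounts) {
    return tokenEmail != null && allowedServiceAccounts.contains(tokenEmail);
  }
}
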
gbrodman
b6083e227f Move CloudTasksUtils to core/ project (#1956)
This does nothing for now, but in the future this will allow us to refer
to the RegistryConfig and/or Service objects from the core project. This
will be necessary when changing CloudTasksUtils to not use the AppEngine
built-in connection (it will need to use a standard HTTP request
instead).
2023-03-14 15:15:05 -04:00
Lai Jiang
5805b6859e Rename process_time column in DnsRefreshRequest (#1962)
Make it explicit that this is the last process time, not a scheduled
future process time.
2023-03-14 14:03:12 -04:00
Pavlo Tkach
3108e8a871 Use builder image as a base for schema-deployer and schema-verifier (#1955) 2023-03-13 15:37:02 -04:00
Pavlo Tkach
ec142caf9c Expand ID Token Auth verifier to catch all exceptions (#1960) 2023-03-13 12:12:47 -04:00
Pavlo Tkach
e60ad58098 Restore resaveAllEppResourcesPipeline as a cloud task (#1953) 2023-03-13 10:44:25 -04:00
sarahcaseybot
83e9e7fb5c Add allowedEppActions field to AllocationToken (#1957) 2023-03-10 14:14:47 -05:00
Pavlo Tkach
438c523fcb Remove app engine deps from Lock (#1910) 2023-03-09 10:47:48 -05:00
Lai Jiang
025a2faff2 Drop the indexes and columns for dns_refresh_request_time (#1949) 2023-03-09 10:29:31 -05:00
471 changed files with 25533 additions and 23135 deletions

.gitignore vendored

@@ -103,7 +103,7 @@ nomulus.iws
.gradle/
**/build
cloudbuild-caches/
node_modules/**
**/node_modules/**
/repos/
# Compiled JS/CSS code


@@ -103,6 +103,7 @@ explodeWar.doLast {
file("${it.explodedAppDirectory}/WEB-INF/lib/tools.jar").setWritable(true)
}
appengineDeployAll.finalizedBy ':deployCloudSchedulerAndQueue'
rootProject.deploy.dependsOn appengineDeployAll
rootProject.stage.dependsOn appengineStage
tasks['war'].dependsOn ':console-webapp:buildConsoleWebappProd'


@@ -558,6 +558,27 @@ task coreDev {
javadocDependentTasks.each { tasks.javadoc.dependsOn(it) }
// Runs the script, which deploys cloud scheduler and tasks based on the config
task deployCloudSchedulerAndQueue {
doLast {
def env = environment
if (!prodOrSandboxEnv) {
exec {
commandLine 'go', 'run',
"${rootDir}/release/builder/deployCloudSchedulerAndQueue.go",
"${rootDir}/core/src/main/java/google/registry/env/${env}/default/WEB-INF/cloud-scheduler-tasks.xml",
"domain-registry-${env}"
}
exec {
commandLine 'go', 'run',
"${rootDir}/release/builder/deployCloudSchedulerAndQueue.go",
"${rootDir}/core/src/main/java/google/registry/env/common/default/WEB-INF/cloud-tasks-queue.xml",
"domain-registry-${env}"
}
}
}
}
// disable javadoc in subprojects, these will break because they don't have
// the correct classpath (see above).
gradle.taskGraph.whenReady { graph ->


@@ -25,7 +25,7 @@ import textwrap
import re
# We should never analyze any generated files
UNIVERSALLY_SKIPPED_PATTERNS = {"/build/", "cloudbuild-caches", "/out/", ".git/", ".gradle/", "/dist/", "karma.conf.js", "polyfills.ts", "test.ts"}
UNIVERSALLY_SKIPPED_PATTERNS = {"/build/", "cloudbuild-caches", "/out/", ".git/", ".gradle/", "/dist/", "karma.conf.js", "polyfills.ts", "test.ts", "/docs/console-endpoints/"}
# We can't rely on CI to have the Enum package installed so we do this instead.
FORBIDDEN = 1
REQUIRED = 2

File diff suppressed because it is too large


@@ -13,24 +13,24 @@
},
"private": true,
"dependencies": {
"@angular/animations": "^15.1.0",
"@angular/cdk": "^15.0.4",
"@angular/common": "^15.1.0",
"@angular/compiler": "^15.1.0",
"@angular/core": "^15.1.0",
"@angular/forms": "^15.1.0",
"@angular/material": "^15.0.4",
"@angular/platform-browser": "^15.1.0",
"@angular/platform-browser-dynamic": "^15.1.0",
"@angular/router": "^15.1.0",
"@angular/animations": "^15.2.2",
"@angular/cdk": "^15.2.2",
"@angular/common": "^15.2.2",
"@angular/compiler": "^15.2.2",
"@angular/core": "^15.2.2",
"@angular/forms": "^15.2.2",
"@angular/material": "^15.2.2",
"@angular/platform-browser": "^15.2.2",
"@angular/platform-browser-dynamic": "^15.2.2",
"@angular/router": "^15.2.2",
"rxjs": "~7.5.0",
"tslib": "^2.3.0",
"zone.js": "~0.11.4"
},
"devDependencies": {
"@angular-devkit/build-angular": "^15.1.0",
"@angular/cli": "~15.1.0",
"@angular/compiler-cli": "^15.1.0",
"@angular-devkit/build-angular": "^15.2.4",
"@angular/cli": "~15.2.4",
"@angular/compiler-cli": "^15.2.2",
"@types/jasmine": "~4.0.0",
"@types/node": "^18.11.18",
"concurrently": "^7.6.0",


@@ -693,10 +693,6 @@ createToolTask(
'google.registry.tools.DevTool',
sourceSets.nonprod)
createToolTask(
'createSyntheticDomainHistories',
'google.registry.tools.javascrap.CreateSyntheticDomainHistoriesPipeline')
project.tasks.create('generateSqlSchema', JavaExec) {
classpath = sourceSets.nonprod.runtimeClasspath
main = 'google.registry.tools.DevTool'
@@ -753,8 +749,8 @@ if (environment == 'alpha') {
],
expandBilling :
[
mainClass: 'google.registry.beam.billing.ExpandRecurringBillingEventsPipeline',
metaData : 'google/registry/beam/expand_recurring_billing_events_pipeline_metadata.json'
mainClass: 'google.registry.beam.billing.ExpandBillingRecurrencesPipeline',
metaData : 'google/registry/beam/expand_billing_recurrences_pipeline_metadata.json'
],
rde :
[
@@ -766,6 +762,11 @@ if (environment == 'alpha') {
mainClass: 'google.registry.beam.resave.ResaveAllEppResourcesPipeline',
metaData: 'google/registry/beam/resave_all_epp_resources_pipeline_metadata.json'
],
wipeOutContactHistoryPii:
[
mainClass: 'google.registry.beam.wipeout.WipeOutContactHistoryPiiPipeline',
metaData: 'google/registry/beam/wipe_out_contact_history_pii_pipeline_metadata.json'
],
]
project.tasks.create("stageBeamPipelines") {
doLast {
@@ -1028,6 +1029,7 @@ test {
// TODO(weiminyu): Remove dependency on sqlIntegrationTest
}.dependsOn(fragileTest, outcastTest, standardTest, registryToolIntegrationTest, sqlIntegrationTest)
// When we override tests, we also break the cleanTest command.
cleanTest.dependsOn(cleanFragileTest, cleanOutcastTest, cleanStandardTest,
cleanRegistryToolIntegrationTest, cleanSqlIntegrationTest)


@@ -25,7 +25,6 @@ import com.google.common.flogger.FluentLogger;
import google.registry.model.EppResource;
import google.registry.persistence.VKey;
import google.registry.request.Action.Service;
import google.registry.util.CloudTasksUtils;
import javax.inject.Inject;
import org.joda.time.DateTime;
import org.joda.time.Duration;
@@ -86,6 +85,6 @@ public final class AsyncTaskEnqueuer {
cloudTasksUtils.enqueue(
QUEUE_ASYNC_ACTIONS,
cloudTasksUtils.createPostTaskWithDelay(
ResaveEntityAction.PATH, Service.BACKEND.toString(), params, etaDuration));
ResaveEntityAction.PATH, Service.BACKEND, params, etaDuration));
}
}


@@ -17,7 +17,6 @@ package google.registry.batch;
import static google.registry.batch.AsyncTaskEnqueuer.PARAM_REQUESTED_TIME;
import static google.registry.batch.AsyncTaskEnqueuer.PARAM_RESAVE_TIMES;
import static google.registry.batch.AsyncTaskEnqueuer.PARAM_RESOURCE_KEY;
import static google.registry.batch.CannedScriptExecutionAction.SCRIPT_PARAM;
import static google.registry.request.RequestParameters.extractBooleanParameter;
import static google.registry.request.RequestParameters.extractIntParameter;
import static google.registry.request.RequestParameters.extractLongParameter;
@@ -105,22 +104,27 @@ public class BatchModule {
}
@Provides
@Parameter(ExpandRecurringBillingEventsAction.PARAM_START_TIME)
@Parameter(ExpandBillingRecurrencesAction.PARAM_START_TIME)
static Optional<DateTime> provideStartTime(HttpServletRequest req) {
return extractOptionalDatetimeParameter(
req, ExpandRecurringBillingEventsAction.PARAM_START_TIME);
return extractOptionalDatetimeParameter(req, ExpandBillingRecurrencesAction.PARAM_START_TIME);
}
@Provides
@Parameter(ExpandRecurringBillingEventsAction.PARAM_END_TIME)
@Parameter(ExpandBillingRecurrencesAction.PARAM_END_TIME)
static Optional<DateTime> provideEndTime(HttpServletRequest req) {
return extractOptionalDatetimeParameter(req, ExpandRecurringBillingEventsAction.PARAM_END_TIME);
return extractOptionalDatetimeParameter(req, ExpandBillingRecurrencesAction.PARAM_END_TIME);
}
@Provides
@Parameter(ExpandRecurringBillingEventsAction.PARAM_ADVANCE_CURSOR)
@Parameter(WipeOutContactHistoryPiiAction.PARAM_CUTOFF_TIME)
static Optional<DateTime> provideCutoffTime(HttpServletRequest req) {
return extractOptionalDatetimeParameter(req, WipeOutContactHistoryPiiAction.PARAM_CUTOFF_TIME);
}
@Provides
@Parameter(ExpandBillingRecurrencesAction.PARAM_ADVANCE_CURSOR)
static boolean provideAdvanceCursor(HttpServletRequest req) {
return extractBooleanParameter(req, ExpandRecurringBillingEventsAction.PARAM_ADVANCE_CURSOR);
return extractBooleanParameter(req, ExpandBillingRecurrencesAction.PARAM_ADVANCE_CURSOR);
}
@Provides
@@ -134,11 +138,4 @@ public class BatchModule {
static boolean provideIsDryRun(HttpServletRequest req) {
return extractBooleanParameter(req, PARAM_DRY_RUN);
}
// TODO(b/234424397): remove method after credential changes are rolled out.
@Provides
@Parameter(SCRIPT_PARAM)
static String provideScriptName(HttpServletRequest req) {
return extractRequiredParameter(req, SCRIPT_PARAM);
}
}


@@ -16,26 +16,23 @@ package google.registry.batch;
import static google.registry.request.Action.Method.POST;
import com.google.common.collect.ImmutableMap;
import com.google.common.flogger.FluentLogger;
import google.registry.batch.cannedscript.GroupsApiChecker;
import google.registry.request.Action;
import google.registry.request.Parameter;
import google.registry.request.auth.Auth;
import javax.inject.Inject;
/**
* Action that executes a canned script specified by the caller.
*
* <p>This class is introduced to help the safe rollout of credential changes. The delegated
* credentials in particular, benefit from this: they require manual configuration of the peer
* system in each environment, and may wait hours or even days after deployment until triggered by
* user activities.
* <p>This class provides a hook for invoking hard-coded methods. The main use case is to verify in
* Sandbox and Production environments new features that depend on environment-specific
* configurations. For example, the {@code DelegatedCredential}, which requires correct GWorkspace
* configuration, has been tested this way. Since it is a hassle to add or remove endpoints, we keep
* this class all the time.
*
* <p>This action can be invoked using the Nomulus CLI command: {@code nomulus -e ${env} curl
* --service BACKEND -X POST -u '/_dr/task/executeCannedScript?script=${script_name}'}
* --service BACKEND -X POST -u '/_dr/task/executeCannedScript}'}
*/
// TODO(b/234424397): remove class after credential changes are rolled out.
@Action(
service = Action.Service.BACKEND,
path = "/_dr/task/executeCannedScript",
@@ -45,29 +42,18 @@ import javax.inject.Inject;
public class CannedScriptExecutionAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
static final String SCRIPT_PARAM = "script";
static final ImmutableMap<String, Runnable> SCRIPTS =
ImmutableMap.of("runGroupsApiChecks", GroupsApiChecker::runGroupsApiChecks);
private final String scriptName;
@Inject
CannedScriptExecutionAction(@Parameter(SCRIPT_PARAM) String scriptName) {
logger.atInfo().log("Received request to run script %s", scriptName);
this.scriptName = scriptName;
CannedScriptExecutionAction() {
logger.atInfo().log("Received request to run scripts.");
}
@Override
public void run() {
if (!SCRIPTS.containsKey(scriptName)) {
throw new IllegalArgumentException("Script not found:" + scriptName);
}
try {
SCRIPTS.get(scriptName).run();
logger.atInfo().log("Finished running %s.", scriptName);
// Invoke canned scripts here.
logger.atInfo().log("Finished running scripts.");
} catch (Throwable t) {
logger.atWarning().withCause(t).log("Error executing %s", scriptName);
logger.atWarning().withCause(t).log("Error executing scripts.");
throw new RuntimeException("Execution failed.");
}
}


@@ -12,10 +12,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.util;
package google.registry.batch;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.tools.ServiceConnection.getServer;
import static java.util.concurrent.TimeUnit.SECONDS;
import com.google.api.gax.rpc.ApiException;
@@ -23,6 +24,8 @@ import com.google.cloud.tasks.v2.AppEngineHttpRequest;
import com.google.cloud.tasks.v2.AppEngineRouting;
import com.google.cloud.tasks.v2.CloudTasksClient;
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.cloud.tasks.v2.HttpRequest;
import com.google.cloud.tasks.v2.OidcToken;
import com.google.cloud.tasks.v2.QueueName;
import com.google.cloud.tasks.v2.Task;
import com.google.common.base.Joiner;
@@ -36,12 +39,21 @@ import com.google.common.net.MediaType;
import com.google.common.net.UrlEscapers;
import com.google.protobuf.ByteString;
import com.google.protobuf.util.Timestamps;
import google.registry.config.RegistryConfig.Config;
import google.registry.request.Action.Service;
import google.registry.util.Clock;
import google.registry.util.CollectionUtils;
import google.registry.util.Retrier;
import java.io.Serializable;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Optional;
import java.util.Random;
import java.util.function.BiConsumer;
import java.util.function.Consumer;
import java.util.function.Supplier;
import javax.annotation.Nullable;
import javax.inject.Inject;
import org.joda.time.Duration;
/** Utilities for dealing with Cloud Tasks. */
@@ -55,18 +67,26 @@ public class CloudTasksUtils implements Serializable {
private final Clock clock;
private final String projectId;
private final String locationId;
// defaultServiceAccount and iapClientId are nullable because Optional isn't serializable
@Nullable private final String defaultServiceAccount;
@Nullable private final String iapClientId;
private final SerializableCloudTasksClient client;
@Inject
public CloudTasksUtils(
Retrier retrier,
Clock clock,
String projectId,
String locationId,
@Config("projectId") String projectId,
@Config("locationId") String locationId,
@Config("defaultServiceAccount") Optional<String> defaultServiceAccount,
@Config("iapClientId") Optional<String> iapClientId,
SerializableCloudTasksClient client) {
this.retrier = retrier;
this.clock = clock;
this.projectId = projectId;
this.locationId = locationId;
this.defaultServiceAccount = defaultServiceAccount.orElse(null);
this.iapClientId = iapClientId.orElse(null);
this.client = client;
}
@@ -91,6 +111,74 @@ public class CloudTasksUtils implements Serializable {
return enqueue(queue, Arrays.asList(tasks));
}
/**
* Converts a (possible) set of params into an HTTP request via the appropriate method.
*
* <p>For GET requests we add them on to the URL, and for POST requests we add them in the body of
* the request.
*
* <p>The parameters {@code putHeadersFunction} and {@code setBodyFunction} are used so that this
* method can be called with either an AppEngine HTTP request or a standard non-AppEngine HTTP
* request. The two objects do not have the same methods, but both have ways of setting headers /
* body.
*
* @return the resulting path (unchanged for POST requests, with params added for GET requests)
*/
private String processRequestParameters(
String path,
HttpMethod method,
Multimap<String, String> params,
BiConsumer<String, String> putHeadersFunction,
Consumer<ByteString> setBodyFunction) {
if (CollectionUtils.isNullOrEmpty(params)) {
return path;
}
Escaper escaper = UrlEscapers.urlPathSegmentEscaper();
String encodedParams =
Joiner.on("&")
.join(
params.entries().stream()
.map(
entry ->
String.format(
"%s=%s",
escaper.escape(entry.getKey()), escaper.escape(entry.getValue())))
.collect(toImmutableList()));
if (method.equals(HttpMethod.GET)) {
return String.format("%s?%s", path, encodedParams);
}
putHeadersFunction.accept(HttpHeaders.CONTENT_TYPE, MediaType.FORM_DATA.toString());
setBodyFunction.accept(ByteString.copyFrom(encodedParams, StandardCharsets.UTF_8));
return path;
}
/**
* Creates a {@link Task} that does not use AppEngine for submission.
*
* <p>This uses the standard Cloud Tasks auth format to create and send an OIDC ID token set to
* the default service account. That account must have permission to submit tasks to Cloud Tasks.
*/
private Task createNonAppEngineTask(
String path, HttpMethod method, Service service, Multimap<String, String> params) {
HttpRequest.Builder requestBuilder = HttpRequest.newBuilder().setHttpMethod(method);
path =
processRequestParameters(
path, method, params, requestBuilder::putHeaders, requestBuilder::setBody);
OidcToken.Builder oidcTokenBuilder =
OidcToken.newBuilder().setServiceAccountEmail(defaultServiceAccount);
// If the service is using IAP, add that as the audience for the token so the request can be
// appropriately authed. Otherwise, use the project name.
if (iapClientId != null) {
oidcTokenBuilder.setAudience(iapClientId);
} else {
oidcTokenBuilder.setAudience(projectId);
}
requestBuilder.setOidcToken(oidcTokenBuilder.build());
String totalPath = String.format("%s%s", getServer(service), path);
requestBuilder.setUrl(totalPath);
return Task.newBuilder().setHttpRequest(requestBuilder.build()).build();
}
/**
* Create a {@link Task} to be enqueued.
*
@@ -108,7 +196,7 @@ public class CloudTasksUtils implements Serializable {
* the worker service</a>
*/
private Task createTask(
String path, HttpMethod method, String service, Multimap<String, String> params) {
String path, HttpMethod method, Service service, Multimap<String, String> params) {
checkArgument(
path != null && !path.isEmpty() && path.charAt(0) == '/',
"The path must start with a '/'.");
@@ -116,33 +204,21 @@ public class CloudTasksUtils implements Serializable {
method.equals(HttpMethod.GET) || method.equals(HttpMethod.POST),
"HTTP method %s is used. Only GET and POST are allowed.",
method);
AppEngineHttpRequest.Builder requestBuilder =
AppEngineHttpRequest.newBuilder()
.setHttpMethod(method)
.setAppEngineRouting(AppEngineRouting.newBuilder().setService(service).build());
if (!CollectionUtils.isNullOrEmpty(params)) {
Escaper escaper = UrlEscapers.urlPathSegmentEscaper();
String encodedParams =
Joiner.on("&")
.join(
params.entries().stream()
.map(
entry ->
String.format(
"%s=%s",
escaper.escape(entry.getKey()), escaper.escape(entry.getValue())))
.collect(toImmutableList()));
if (method == HttpMethod.GET) {
path = String.format("%s?%s", path, encodedParams);
} else {
requestBuilder
.putHeaders(HttpHeaders.CONTENT_TYPE, MediaType.FORM_DATA.toString())
.setBody(ByteString.copyFrom(encodedParams, StandardCharsets.UTF_8));
}
// If the default service account is configured, send a standard non-AppEngine HTTP request
if (defaultServiceAccount != null) {
return createNonAppEngineTask(path, method, service, params);
} else {
AppEngineHttpRequest.Builder requestBuilder =
AppEngineHttpRequest.newBuilder()
.setHttpMethod(method)
.setAppEngineRouting(
AppEngineRouting.newBuilder().setService(service.toString()).build());
path =
processRequestParameters(
path, method, params, requestBuilder::putHeaders, requestBuilder::setBody);
requestBuilder.setRelativeUri(path);
return Task.newBuilder().setAppEngineHttpRequest(requestBuilder.build()).build();
}
requestBuilder.setRelativeUri(path);
return Task.newBuilder().setAppEngineHttpRequest(requestBuilder.build()).build();
}
/**
@@ -165,7 +241,7 @@ public class CloudTasksUtils implements Serializable {
private Task createTaskWithJitter(
String path,
HttpMethod method,
String service,
Service service,
Multimap<String, String> params,
Optional<Integer> jitterSeconds) {
if (!jitterSeconds.isPresent() || jitterSeconds.get() <= 0) {
@@ -199,7 +275,7 @@ public class CloudTasksUtils implements Serializable {
private Task createTaskWithDelay(
String path,
HttpMethod method,
String service,
Service service,
Multimap<String, String> params,
Duration delay) {
if (delay.isEqual(Duration.ZERO)) {
@@ -211,11 +287,11 @@ public class CloudTasksUtils implements Serializable {
.build();
}
public Task createPostTask(String path, String service, Multimap<String, String> params) {
public Task createPostTask(String path, Service service, Multimap<String, String> params) {
return createTask(path, HttpMethod.POST, service, params);
}
public Task createGetTask(String path, String service, Multimap<String, String> params) {
public Task createGetTask(String path, Service service, Multimap<String, String> params) {
return createTask(path, HttpMethod.GET, service, params);
}
@@ -224,7 +300,7 @@ public class CloudTasksUtils implements Serializable {
*/
public Task createPostTaskWithJitter(
String path,
String service,
Service service,
Multimap<String, String> params,
Optional<Integer> jitterSeconds) {
return createTaskWithJitter(path, HttpMethod.POST, service, params, jitterSeconds);
@@ -235,7 +311,7 @@ public class CloudTasksUtils implements Serializable {
*/
public Task createGetTaskWithJitter(
String path,
String service,
Service service,
Multimap<String, String> params,
Optional<Integer> jitterSeconds) {
return createTaskWithJitter(path, HttpMethod.GET, service, params, jitterSeconds);
@@ -243,13 +319,13 @@ public class CloudTasksUtils implements Serializable {
/** Create a {@link Task} via HTTP.POST that will be delayed for {@code delay}. */
public Task createPostTaskWithDelay(
String path, String service, Multimap<String, String> params, Duration delay) {
String path, Service service, Multimap<String, String> params, Duration delay) {
return createTaskWithDelay(path, HttpMethod.POST, service, params, delay);
}
/** Create a {@link Task} via HTTP.GET that will be delayed for {@code delay}. */
public Task createGetTaskWithDelay(
String path, String service, Multimap<String, String> params, Duration delay) {
String path, Service service, Multimap<String, String> params, Duration delay) {
return createTaskWithDelay(path, HttpMethod.GET, service, params, delay);
}


@@ -19,8 +19,9 @@ import static com.google.common.base.Preconditions.checkState;
import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static google.registry.batch.BatchModule.PARAM_DRY_RUN;
import static google.registry.config.RegistryEnvironment.PRODUCTION;
import static google.registry.dns.DnsUtils.requestDomainDnsRefresh;
import static google.registry.model.reporting.HistoryEntry.Type.DOMAIN_DELETE;
import static google.registry.model.tld.Registries.getTldsOfType;
import static google.registry.model.tld.Tlds.getTldsOfType;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.request.Action.Method.POST;
import static google.registry.request.RequestParameters.PARAM_TLDS;
@@ -32,12 +33,11 @@ import com.google.common.collect.Sets;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryConfig.Config;
import google.registry.config.RegistryEnvironment;
import google.registry.dns.DnsQueue;
import google.registry.model.CreateAutoTimestamp;
import google.registry.model.EppResourceUtils;
import google.registry.model.domain.Domain;
import google.registry.model.domain.DomainHistory;
import google.registry.model.tld.Registry.TldType;
import google.registry.model.tld.Tld.TldType;
import google.registry.request.Action;
import google.registry.request.Parameter;
import google.registry.request.auth.Auth;
@@ -98,8 +98,6 @@ public class DeleteProberDataAction implements Runnable {
/** Number of domains to retrieve and delete per SQL transaction. */
private static final int BATCH_SIZE = 1000;
@Inject DnsQueue dnsQueue;
@Inject
@Parameter(PARAM_DRY_RUN)
boolean isDryRun;
@@ -222,7 +220,7 @@ public class DeleteProberDataAction implements Runnable {
}
}
private void hardDeleteDomainsAndHosts(
private static void hardDeleteDomainsAndHosts(
ImmutableList<String> domainRepoIds, ImmutableList<String> hostNames) {
tm().query("DELETE FROM Host WHERE hostName IN :hostNames")
.setParameter("hostNames", hostNames)
@@ -264,6 +262,6 @@ public class DeleteProberDataAction implements Runnable {
// messages, or auto-renews because those will all be hard-deleted the next time the job runs
// anyway.
tm().putAll(ImmutableList.of(deletedDomain, historyEntry));
dnsQueue.addDomainRefreshTask(deletedDomain.getDomainName());
requestDomainDnsRefresh(deletedDomain.getDomainName());
}
}


@@ -29,11 +29,11 @@ import com.google.api.services.dataflow.model.LaunchFlexTemplateRequest;
import com.google.api.services.dataflow.model.LaunchFlexTemplateResponse;
import com.google.common.collect.ImmutableMap;
import com.google.common.flogger.FluentLogger;
import google.registry.beam.billing.ExpandRecurringBillingEventsPipeline;
import google.registry.beam.billing.ExpandBillingRecurrencesPipeline;
import google.registry.config.RegistryConfig.Config;
import google.registry.config.RegistryEnvironment;
import google.registry.model.billing.BillingEvent.OneTime;
import google.registry.model.billing.BillingEvent.Recurring;
import google.registry.model.billing.BillingEvent;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.common.Cursor;
import google.registry.request.Action;
import google.registry.request.Parameter;
@@ -46,20 +46,20 @@ import javax.inject.Inject;
import org.joda.time.DateTime;
/**
* An action that kicks off a {@link ExpandRecurringBillingEventsPipeline} dataflow job to expand
* {@link Recurring} billing events into synthetic {@link OneTime} events.
* An action that kicks off a {@link ExpandBillingRecurrencesPipeline} dataflow job to expand {@link
* BillingRecurrence} billing events into synthetic {@link BillingEvent} events.
*/
@Action(
service = Action.Service.BACKEND,
path = "/_dr/task/expandRecurringBillingEvents",
path = "/_dr/task/expandBillingRecurrences",
auth = Auth.AUTH_INTERNAL_OR_ADMIN)
public class ExpandRecurringBillingEventsAction implements Runnable {
public class ExpandBillingRecurrencesAction implements Runnable {
public static final String PARAM_START_TIME = "startTime";
public static final String PARAM_END_TIME = "endTime";
public static final String PARAM_ADVANCE_CURSOR = "advanceCursor";
private static final String PIPELINE_NAME = "expand_recurring_billing_events_pipeline";
private static final String PIPELINE_NAME = "expand_billing_recurrences_pipeline";
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
@Inject Clock clock;
@@ -97,7 +97,7 @@ public class ExpandRecurringBillingEventsAction implements Runnable {
@Inject Response response;
@Inject
ExpandRecurringBillingEventsAction() {}
ExpandBillingRecurrencesAction() {}
@Override
public void run() {
@@ -133,7 +133,7 @@ public class ExpandRecurringBillingEventsAction implements Runnable {
.put("advanceCursor", Boolean.toString(advanceCursor))
.build());
logger.atInfo().log(
"Launching recurring billing event expansion pipeline for event time range [%s, %s)%s.",
"Launching billing recurrence expansion pipeline for event time range [%s, %s)%s.",
startTime,
endTime,
isDryRun ? " in dry run mode" : advanceCursor ? "" : " without advancing the cursor");
@@ -152,7 +152,7 @@ public class ExpandRecurringBillingEventsAction implements Runnable {
response.setStatus(SC_OK);
response.setPayload(
String.format(
"Launched recurring billing event expansion pipeline: %s",
"Launched billing recurrence expansion pipeline: %s",
launchResponse.getJob().getId()));
} catch (IOException e) {
logger.atWarning().withCause(e).log("Pipeline Launch failed");


@@ -14,31 +14,39 @@
package google.registry.batch;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static org.apache.http.HttpStatus.SC_INTERNAL_SERVER_ERROR;
import static org.apache.http.HttpStatus.SC_OK;
import static google.registry.batch.BatchModule.PARAM_DRY_RUN;
import static google.registry.beam.BeamUtils.createJobName;
import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_OK;
import com.google.common.annotations.VisibleForTesting;
import com.google.api.services.dataflow.Dataflow;
import com.google.api.services.dataflow.model.LaunchFlexTemplateParameter;
import com.google.api.services.dataflow.model.LaunchFlexTemplateRequest;
import com.google.api.services.dataflow.model.LaunchFlexTemplateResponse;
import com.google.common.collect.ImmutableMap;
import com.google.common.flogger.FluentLogger;
import com.google.common.net.MediaType;
import google.registry.beam.wipeout.WipeOutContactHistoryPiiPipeline;
import google.registry.config.RegistryConfig.Config;
import google.registry.config.RegistryEnvironment;
import google.registry.model.contact.ContactHistory;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;
import java.io.IOException;
import java.util.Optional;
import javax.inject.Inject;
import org.joda.time.DateTime;
/**
* An action that wipes out Personal Identifiable Information (PII) fields of {@link ContactHistory}
* entities.
* An action that launches {@link WipeOutContactHistoryPiiPipeline} to wipe out Personal
* Identifiable Information (PII) fields of {@link ContactHistory} entities.
*
* <p>ContactHistory entities should be retained in the database for only certain amount of time.
* This periodic wipe out action only applies to SQL.
* <p>{@link ContactHistory} entities should be retained in the database for only certain amount of
* time.
*/
@Action(
service = Service.BACKEND,
@@ -47,90 +55,89 @@ import org.joda.time.DateTime;
public class WipeOutContactHistoryPiiAction implements Runnable {
public static final String PATH = "/_dr/task/wipeOutContactHistoryPii";
public static final String PARAM_CUTOFF_TIME = "wipeoutTime";
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static final String PIPELINE_NAME = "wipe_out_contact_history_pii_pipeline";
private final Clock clock;
private final Response response;
private final boolean isDryRun;
private final Optional<DateTime> maybeCutoffTime;
private final int minMonthsBeforeWipeOut;
private final int wipeOutQueryBatchSize;
private final String stagingBucketUrl;
private final String projectId;
private final String jobRegion;
private final Dataflow dataflow;
private final Response response;
@Inject
public WipeOutContactHistoryPiiAction(
Clock clock,
@Parameter(PARAM_DRY_RUN) boolean isDryRun,
@Parameter(PARAM_CUTOFF_TIME) Optional<DateTime> maybeCutoffTime,
@Config("minMonthsBeforeWipeOut") int minMonthsBeforeWipeOut,
@Config("wipeOutQueryBatchSize") int wipeOutQueryBatchSize,
@Config("beamStagingBucketUrl") String stagingBucketUrl,
@Config("projectId") String projectId,
@Config("defaultJobRegion") String jobRegion,
Dataflow dataflow,
Response response) {
this.clock = clock;
this.response = response;
this.isDryRun = isDryRun;
this.maybeCutoffTime = maybeCutoffTime;
this.minMonthsBeforeWipeOut = minMonthsBeforeWipeOut;
this.wipeOutQueryBatchSize = wipeOutQueryBatchSize;
this.stagingBucketUrl = stagingBucketUrl;
this.projectId = projectId;
this.jobRegion = jobRegion;
this.dataflow = dataflow;
this.response = response;
}
@Override
public void run() {
response.setContentType(MediaType.PLAIN_TEXT_UTF_8);
DateTime cutoffTime =
maybeCutoffTime.orElse(clock.nowUtc().minusMonths(minMonthsBeforeWipeOut));
LaunchFlexTemplateParameter launchParameter =
new LaunchFlexTemplateParameter()
.setJobName(
createJobName(
String.format(
"contact-history-pii-wipeout-%s",
cutoffTime.toString("yyyy-MM-dd't'HH-mm-ss'z'")),
clock))
.setContainerSpecGcsPath(
String.format("%s/%s_metadata.json", stagingBucketUrl, PIPELINE_NAME))
.setParameters(
ImmutableMap.of(
"registryEnvironment",
RegistryEnvironment.get().name(),
"cutoffTime",
cutoffTime.toString("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"),
"isDryRun",
Boolean.toString(isDryRun)));
logger.atInfo().log(
"Launching Beam pipeline to wipe out all PII of contact history entities prior to %s%s.",
cutoffTime, " in dry run mode");
try {
int totalNumOfWipedEntities = 0;
DateTime wipeOutTime = clock.nowUtc().minusMonths(minMonthsBeforeWipeOut);
logger.atInfo().log(
"About to wipe out all PII of contact history entities prior to %s.", wipeOutTime);
int numOfWipedEntities = 0;
do {
numOfWipedEntities =
tm().transact(
() ->
wipeOutContactHistoryData(
getNextContactHistoryEntitiesWithPiiBatch(wipeOutTime)));
totalNumOfWipedEntities += numOfWipedEntities;
} while (numOfWipedEntities > 0);
String msg =
String.format(
"Done. Wiped out PII of %d ContactHistory entities in total.",
totalNumOfWipedEntities);
logger.atInfo().log(msg);
response.setPayload(msg);
LaunchFlexTemplateResponse launchResponse =
dataflow
.projects()
.locations()
.flexTemplates()
.launch(
projectId,
jobRegion,
new LaunchFlexTemplateRequest().setLaunchParameter(launchParameter))
.execute();
logger.atInfo().log("Got response: %s", launchResponse.getJob().toPrettyString());
response.setStatus(SC_OK);
} catch (Exception e) {
logger.atSevere().withCause(e).log(
"Exception thrown during the process of wiping out contact history PII.");
response.setStatus(SC_INTERNAL_SERVER_ERROR);
response.setPayload(
String.format(
"Exception thrown during the process of wiping out contact history PII with cause"
+ ": %s",
e));
"Launched contact history PII wipeout pipeline: %s",
launchResponse.getJob().getId()));
} catch (IOException e) {
logger.atWarning().withCause(e).log("Pipeline Launch failed");
response.setStatus(SC_INTERNAL_SERVER_ERROR);
response.setPayload(String.format("Pipeline launch failed: %s", e.getMessage()));
}
}
/**
* Returns a stream of up to {@link #wipeOutQueryBatchSize} {@link ContactHistory} entities
* containing PII that are prior to @param wipeOutTime.
*/
@VisibleForTesting
Stream<ContactHistory> getNextContactHistoryEntitiesWithPiiBatch(DateTime wipeOutTime) {
// email is one of the required fields in EPP, meaning it's initially not null.
// Therefore, checking if it's null is one way to avoid processing contact history entities
// that have been processed previously. Refer to RFC 5733 for more information.
return tm().query(
"FROM ContactHistory WHERE modificationTime < :wipeOutTime " + "AND email IS NOT NULL",
ContactHistory.class)
.setParameter("wipeOutTime", wipeOutTime)
.setMaxResults(wipeOutQueryBatchSize)
.getResultStream();
}
/** Wipes out the PII of each of the {@link ContactHistory} entities in the stream. */
@VisibleForTesting
int wipeOutContactHistoryData(Stream<ContactHistory> contactHistoryEntities) {
AtomicInteger numOfEntities = new AtomicInteger(0);
contactHistoryEntities.forEach(
contactHistoryEntity -> {
tm().update(contactHistoryEntity.asBuilder().wipeOutPii().build());
numOfEntities.incrementAndGet();
});
logger.atInfo().log(
"Wiped out all PII fields of %d ContactHistory entities.", numOfEntities.get());
return numOfEntities.get();
}
}


@@ -1,122 +0,0 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.batch.cannedscript;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.util.RegistrarUtils.normalizeRegistrarId;
import com.google.api.services.admin.directory.Directory;
import com.google.api.services.groupssettings.Groupssettings;
import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;
import com.google.common.base.Throwables;
import com.google.common.collect.Streams;
import com.google.common.flogger.FluentLogger;
import dagger.Component;
import dagger.Module;
import dagger.Provides;
import google.registry.config.CredentialModule;
import google.registry.config.CredentialModule.AdcDelegatedCredential;
import google.registry.config.RegistryConfig.Config;
import google.registry.config.RegistryConfig.ConfigModule;
import google.registry.groups.DirectoryGroupsConnection;
import google.registry.model.registrar.Registrar;
import google.registry.model.registrar.RegistrarPoc;
import google.registry.util.GoogleCredentialsBundle;
import google.registry.util.UtilsModule;
import java.util.List;
import java.util.Set;
import javax.inject.Singleton;
/**
* Verifies that the credential with the {@link AdcDelegatedCredential} annotation can be used to
* access the Google Workspace Groups API.
*/
// TODO(b/234424397): remove class after credential changes are rolled out.
public class GroupsApiChecker {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static final Supplier<GroupsConnectionComponent> COMPONENT_SUPPLIER =
Suppliers.memoize(DaggerGroupsApiChecker_GroupsConnectionComponent::create);
public static void runGroupsApiChecks() {
GroupsConnectionComponent component = COMPONENT_SUPPLIER.get();
DirectoryGroupsConnection groupsConnection = component.groupsConnection();
List<Registrar> registrars =
Streams.stream(Registrar.loadAllCached())
.filter(registrar -> registrar.isLive() && registrar.getType() == Registrar.Type.REAL)
.collect(toImmutableList());
for (Registrar registrar : registrars) {
for (final RegistrarPoc.Type type : RegistrarPoc.Type.values()) {
String groupKey =
String.format(
"%s-%s-contacts@%s",
normalizeRegistrarId(registrar.getRegistrarId()),
type.getDisplayName(),
component.gSuiteDomainName());
try {
Set<String> currentMembers = groupsConnection.getMembersOfGroup(groupKey);
logger.atInfo().log("Found %s members for %s.", currentMembers.size(), groupKey);
} catch (Exception e) {
Throwables.throwIfUnchecked(e);
throw new RuntimeException(e);
}
}
}
}
@Singleton
@Component(
modules = {
ConfigModule.class,
CredentialModule.class,
GroupsApiModule.class,
UtilsModule.class
})
interface GroupsConnectionComponent {
DirectoryGroupsConnection groupsConnection();
@Config("gSuiteDomainName")
String gSuiteDomainName();
}
@Module
static class GroupsApiModule {
@Provides
static Directory provideDirectory(
@AdcDelegatedCredential GoogleCredentialsBundle credentialsBundle,
@Config("projectId") String projectId) {
return new Directory.Builder(
credentialsBundle.getHttpTransport(),
credentialsBundle.getJsonFactory(),
credentialsBundle.getHttpRequestInitializer())
.setApplicationName(projectId)
.build();
}
@Provides
static Groupssettings provideGroupsSettings(
@AdcDelegatedCredential GoogleCredentialsBundle credentialsBundle,
@Config("projectId") String projectId) {
return new Groupssettings.Builder(
credentialsBundle.getHttpTransport(),
credentialsBundle.getJsonFactory(),
credentialsBundle.getHttpRequestInitializer())
.setApplicationName(projectId)
.build();
}
}
}


@@ -64,7 +64,7 @@ public abstract class BillingEvent implements Serializable {
"amount",
"flags");
/** Returns the unique ID for the {@code OneTime} associated with this event. */
/** Returns the unique ID for the {@code BillingEvent} associated with this event. */
abstract long id();
/** Returns the UTC DateTime this event becomes billable. */
@@ -189,7 +189,7 @@ public abstract class BillingEvent implements Serializable {
.minusDays(1)
.toString(),
billingId(),
String.format("%s - %s", registrarId(), tld()),
registrarId(),
String.format("%s | TLD: %s | TERM: %d-year", action(), tld(), years()),
amount(),
currency(),
@@ -233,7 +233,7 @@ public abstract class BillingEvent implements Serializable {
/** Returns the billing account id, which is the {@code BillingEvent.billingId}. */
abstract String productAccountKey();
/** Returns the invoice grouping key, which is in the format "registrarId - tld". */
/** Returns the invoice grouping key, which is the registrar ID. */
abstract String usageGroupingKey();
/** Returns a description of the item, formatted as "action | TLD: tld | TERM: n-year." */
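To make the grouping-key change above concrete, a purely illustrative pair of values (the registrar ID and TLD are made up, not taken from the source):

// Illustrative only.
String usageGroupingKey = "exampleRegistrar";                  // was "exampleRegistrar - exampletld"
String description = "RENEW | TLD: exampletld | TERM: 2-year"; // format unchanged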


@@ -36,23 +36,25 @@ import google.registry.config.RegistryConfig.ConfigModule;
import google.registry.flows.custom.CustomLogicFactoryModule;
import google.registry.flows.custom.CustomLogicModule;
import google.registry.flows.domain.DomainPricingLogic;
import google.registry.flows.domain.DomainPricingLogic.AllocationTokenInvalidForPremiumNameException;
import google.registry.model.ImmutableObject;
import google.registry.model.billing.BillingEvent.Cancellation;
import google.registry.model.billing.BillingEvent.Flag;
import google.registry.model.billing.BillingEvent.OneTime;
import google.registry.model.billing.BillingEvent.Recurring;
import google.registry.model.billing.BillingBase.Flag;
import google.registry.model.billing.BillingCancellation;
import google.registry.model.billing.BillingEvent;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.common.Cursor;
import google.registry.model.domain.Domain;
import google.registry.model.domain.DomainHistory;
import google.registry.model.domain.Period;
import google.registry.model.reporting.DomainTransactionRecord;
import google.registry.model.reporting.DomainTransactionRecord.TransactionReportField;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.util.Clock;
import google.registry.util.SystemClock;
import java.io.Serializable;
import java.math.BigInteger;
import java.util.Optional;
import java.util.Set;
import javax.inject.Singleton;
import org.apache.beam.sdk.Pipeline;
@@ -75,48 +77,49 @@ import org.apache.beam.sdk.values.PDone;
import org.joda.time.DateTime;
/**
* Definition of a Dataflow Flex pipeline template, which expands {@link Recurring} to {@link
* OneTime} when an autorenew occurs within the given time frame.
* Definition of a Dataflow Flex pipeline template, which expands {@link BillingRecurrence} to
* {@link BillingEvent} when an autorenew occurs within the given time frame.
*
* <p>This pipeline works in three stages:
*
* <ul>
* <li>Gather the {@link Recurring}s that are in scope for expansion. The exact condition of
* {@link Recurring}s to include can be found in {@link #getRecurringsInScope(Pipeline)}.
* <li>Expand the {@link Recurring}s to {@link OneTime} (and corresponding {@link DomainHistory})
* that fall within the [{@link #startTime}, {@link #endTime}) window, excluding those that
* are already present (to make this pipeline idempotent when running with the same parameters
* multiple times, either in parallel or in sequence). The {@link Recurring} is also updated
* with the information on when it was last expanded, so it would not be in scope for
* expansion until at least a year later.
* <li>Gather the {@link BillingRecurrence}s that are in scope for expansion. The exact condition
* of {@link BillingRecurrence}s to include can be found in {@link
* #getRecurrencesInScope(Pipeline)}.
* <li>Expand the {@link BillingRecurrence}s to {@link BillingEvent} (and corresponding {@link
* DomainHistory}) that fall within the [{@link #startTime}, {@link #endTime}) window,
* excluding those that are already present (to make this pipeline idempotent when running
* with the same parameters multiple times, either in parallel or in sequence). The {@link
* BillingRecurrence} is also updated with the information on when it was last expanded, so it
* would not be in scope for expansion until at least a year later.
* <li>If the cursor for billing events should be advanced, advance it to {@link #endTime} after
* all of the expansions in the previous step are done, only when it is currently at {@link
* #startTime}.
* </ul>
*
* <p>Note that the creation of new {@link OneTime} and {@link DomainHistory} is done speculatively
* as soon as its event time is in scope for expansion (i.e. within the window of operation). If a
* domain is subsequently cancelled during the autorenew grace period, a {@link Cancellation} would
* have been created to cancel the {@link OneTime} out. Similarly, a {@link DomainHistory} for the
* delete will be created which negates the effect of the speculatively created {@link
* DomainHistory}, specifically for the transaction records. Both the {@link OneTime} and {@link
* DomainHistory} will only be used (and cancelled out) when the billing time becomes effective,
* which is after the grace period, when the cancellations would have been written, if need be. This
* is no different from what we do with manual renewals or normal creates, where entities are always
* created for the action regardless of whether their effects will be negated later due to
* subsequent actions within respective grace periods.
* <p>Note that the creation of new {@link BillingEvent} and {@link DomainHistory} is done
* speculatively as soon as its event time is in scope for expansion (i.e. within the window of
* operation). If a domain is subsequently cancelled during the autorenew grace period, a {@link
* BillingCancellation} would have been created to cancel the {@link BillingEvent} out. Similarly, a
* {@link DomainHistory} for the delete will be created which negates the effect of the
* speculatively created {@link DomainHistory}, specifically for the transaction records. Both the
* {@link BillingEvent} and {@link DomainHistory} will only be used (and cancelled out) when the
* billing time becomes effective, which is after the grace period, when the cancellations would
* have been written, if need be. This is no different from what we do with manual renewals or
* normal creates, where entities are always created for the action regardless of whether their
* effects will be negated later due to subsequent actions within respective grace periods.
*
* <p>To stage this template locally, run {@code ./nom_build :core:sBP --environment=alpha \
* --pipeline=expandBilling}.
*
* <p>Then, you can run the staged template via the API client library, gCloud or a raw REST call.
*
* @see Cancellation#forGracePeriod
* @see BillingCancellation#forGracePeriod
* @see google.registry.flows.domain.DomainFlowUtils#createCancelingRecords
* @see <a href="https://cloud.google.com/dataflow/docs/guides/templates/using-flex-templates">Using
* Flex Templates</a>
*/
public class ExpandRecurringBillingEventsPipeline implements Serializable {
public class ExpandBillingRecurrencesPipeline implements Serializable {
private static final long serialVersionUID = -5827984301386630194L;
@@ -126,7 +129,7 @@ public class ExpandRecurringBillingEventsPipeline implements Serializable {
static {
PipelineComponent pipelineComponent =
DaggerExpandRecurringBillingEventsPipeline_PipelineComponent.create();
DaggerExpandBillingRecurrencesPipeline_PipelineComponent.create();
domainPricingLogic = pipelineComponent.domainPricingLogic();
batchSize = pipelineComponent.batchSize();
}
@@ -137,8 +140,8 @@ public class ExpandRecurringBillingEventsPipeline implements Serializable {
private final DateTime endTime;
private final boolean isDryRun;
private final boolean advanceCursor;
private final Counter recurringsInScopeCounter =
Metrics.counter("ExpandBilling", "Recurrings in scope for expansion");
private final Counter recurrencesInScopeCounter =
Metrics.counter("ExpandBilling", "Recurrences in scope for expansion");
// Note that this counter is only accurate when running in dry run mode. Because SQL persistence
// is a side effect and not idempotent, a transaction to save OneTimes could be successful but the
// transform that contains it could still be retried, rolling back the counter increment. The
@@ -148,8 +151,7 @@ public class ExpandRecurringBillingEventsPipeline implements Serializable {
private final Counter oneTimesToExpandCounter =
Metrics.counter("ExpandBilling", "OneTimes that would be expanded");
ExpandRecurringBillingEventsPipeline(
ExpandRecurringBillingEventsPipelineOptions options, Clock clock) {
ExpandBillingRecurrencesPipeline(ExpandBillingRecurrencesPipelineOptions options, Clock clock) {
startTime = DateTime.parse(options.getStartTime());
endTime = DateTime.parse(options.getEndTime());
checkArgument(
@@ -168,16 +170,16 @@ public class ExpandRecurringBillingEventsPipeline implements Serializable {
}
void setupPipeline(Pipeline pipeline) {
PCollection<KV<Integer, Long>> recurringIds = getRecurringsInScope(pipeline);
PCollection<Void> expanded = expandRecurrings(recurringIds);
PCollection<KV<Integer, Long>> recurrenceIds = getRecurrencesInScope(pipeline);
PCollection<Void> expanded = expandRecurrences(recurrenceIds);
if (!isDryRun && advanceCursor) {
advanceCursor(expanded);
}
}
PCollection<KV<Integer, Long>> getRecurringsInScope(Pipeline pipeline) {
PCollection<KV<Integer, Long>> getRecurrencesInScope(Pipeline pipeline) {
return pipeline.apply(
"Read all Recurrings in scope",
"Read all Recurrences in scope",
// Use a native query because JPQL does not support timestamp arithmetic.
RegistryJpaIO.read(
"SELECT billing_recurrence_id "
@@ -201,7 +203,7 @@ public class ExpandRecurringBillingEventsPipeline implements Serializable {
endTime.minusYears(1)),
true,
(BigInteger id) -> {
recurringsInScopeCounter.inc();
recurrencesInScopeCounter.inc();
// Note that because all elements are mapped to the same dummy key, the next
// batching transform will effectively be serial. This however does not matter for
// our use case because the elements were obtained from a SQL read query, which
@@ -220,13 +222,13 @@ public class ExpandRecurringBillingEventsPipeline implements Serializable {
.withCoder(KvCoder.of(VarIntCoder.of(), VarLongCoder.of())));
}
private PCollection<Void> expandRecurrings(PCollection<KV<Integer, Long>> recurringIds) {
return recurringIds
private PCollection<Void> expandRecurrences(PCollection<KV<Integer, Long>> recurrenceIds) {
return recurrenceIds
.apply(
"Group into batches",
GroupIntoBatches.<Integer, Long>ofSize(batchSize).withShardedKey())
.apply(
"Expand and save Recurrings into OneTimes and corresponding DomainHistories",
"Expand and save Recurrences into OneTimes and corresponding DomainHistories",
MapElements.into(voids())
.via(
element -> {
@@ -235,7 +237,7 @@ public class ExpandRecurringBillingEventsPipeline implements Serializable {
() -> {
ImmutableSet.Builder<ImmutableObject> results =
new ImmutableSet.Builder<>();
ids.forEach(id -> expandOneRecurring(id, results));
ids.forEach(id -> expandOneRecurrence(id, results));
if (!isDryRun) {
tm().putAll(results.build());
}
@@ -244,14 +246,16 @@ public class ExpandRecurringBillingEventsPipeline implements Serializable {
}));
}
private void expandOneRecurring(Long recurringId, ImmutableSet.Builder<ImmutableObject> results) {
Recurring recurring = tm().loadByKey(Recurring.createVKey(recurringId));
private void expandOneRecurrence(
Long recurrenceId, ImmutableSet.Builder<ImmutableObject> results) {
BillingRecurrence billingRecurrence =
tm().loadByKey(BillingRecurrence.createVKey(recurrenceId));
// Determine the complete set of EventTimes this recurring event should expand to within
// Determine the complete set of EventTimes this recurrence event should expand to within
// [max(recurrenceLastExpansion + 1 yr, startTime), min(recurrenceEndTime, endTime)).
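// For example (hypothetical dates): a recurrence last expanded on 2022-01-15 with an open-ended
// recurrence end time, processed with startTime 2023-02-01 and endTime 2023-03-01, expands over
// [2023-02-01, 2023-03-01), since max(2022-01-15 + 1 yr, 2023-02-01) is 2023-02-01 and
// min(END_OF_TIME, 2023-03-01) is 2023-03-01.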
//
// This range should always be legal for recurrings that are returned from the query. However,
// it is possible that the recurring has changed between when the read transformation occurred
// This range should always be legal for recurrences that are returned from the query. However,
// it is possible that the recurrence has changed between when the read transformation occurred
// and now. This could be caused by some out-of-process mutations (such as a domain deletion
// closing out a previously open-ended recurrence), or more subtly, Beam could execute the same
// work multiple times due to transient communication issues between workers and the scheduler.
@@ -262,24 +266,26 @@ public class ExpandRecurringBillingEventsPipeline implements Serializable {
// to work on the same batch. The second worker would see a new recurrence_last_expansion that
// causes the range to be illegal.
//
// The best way to handle any unexpected behavior is to simply drop the recurring from
// The best way to handle any unexpected behavior is to simply drop the recurrence from
// expansion; if its new state still calls for an expansion, it would be picked up the next time
// the pipeline runs.
ImmutableSet<DateTime> eventTimes;
try {
eventTimes =
ImmutableSet.copyOf(
recurring
billingRecurrence
.getRecurrenceTimeOfYear()
.getInstancesInRange(
Range.closedOpen(
latestOf(recurring.getRecurrenceLastExpansion().plusYears(1), startTime),
earliestOf(recurring.getRecurrenceEndTime(), endTime))));
latestOf(
billingRecurrence.getRecurrenceLastExpansion().plusYears(1),
startTime),
earliestOf(billingRecurrence.getRecurrenceEndTime(), endTime))));
} catch (IllegalArgumentException e) {
return;
}
Domain domain = tm().loadByKey(Domain.createVKey(recurring.getDomainRepoId()));
Registry tld = Registry.get(domain.getTld());
Domain domain = tm().loadByKey(Domain.createVKey(billingRecurrence.getDomainRepoId()));
Tld tld = Tld.get(domain.getTld());
// Find the times for which the OneTime billing events are already created, making this expansion
// idempotent. There is no need to match to the domain repo ID as the cancellation matching
@@ -290,7 +296,7 @@ public class ExpandRecurringBillingEventsPipeline implements Serializable {
"SELECT eventTime FROM BillingEvent WHERE cancellationMatchingBillingEvent ="
+ " :key",
DateTime.class)
.setParameter("key", recurring.createVKey())
.setParameter("key", billingRecurrence.createVKey())
.getResultList());
Set<DateTime> eventTimesToExpand = difference(eventTimes, existingEventTimes);
@@ -299,7 +305,7 @@ public class ExpandRecurringBillingEventsPipeline implements Serializable {
return;
}
DateTime recurrenceLastExpansionTime = recurring.getRecurrenceLastExpansion();
DateTime recurrenceLastExpansionTime = billingRecurrence.getRecurrenceLastExpansion();
// Create new OneTime and DomainHistory for EventTimes that need to be expanded.
for (DateTime eventTime : eventTimesToExpand) {
@@ -331,7 +337,7 @@ public class ExpandRecurringBillingEventsPipeline implements Serializable {
DomainHistory historyEntry =
new DomainHistory.Builder()
.setBySuperuser(false)
.setRegistrarId(recurring.getRegistrarId())
.setRegistrarId(billingRecurrence.getRegistrarId())
.setModificationTime(tm().getTransactionTime())
.setDomain(domain)
.setPeriod(Period.create(1, YEARS))
@@ -383,28 +389,43 @@ public class ExpandRecurringBillingEventsPipeline implements Serializable {
// It is OK to always create a OneTime, even though the domain might be deleted or transferred
// later during autorenew grace period, as a cancellation will always be written out in those
// instances.
OneTime oneTime =
new OneTime.Builder()
.setBillingTime(billingTime)
.setRegistrarId(recurring.getRegistrarId())
// Determine the cost for a one-year renewal.
.setCost(
domainPricingLogic
.getRenewPrice(tld, recurring.getTargetId(), eventTime, 1, recurring)
.getRenewCost())
.setEventTime(eventTime)
.setFlags(union(recurring.getFlags(), Flag.SYNTHETIC))
.setDomainHistory(historyEntry)
.setPeriodYears(1)
.setReason(recurring.getReason())
.setSyntheticCreationTime(endTime)
.setCancellationMatchingBillingEvent(recurring)
.setTargetId(recurring.getTargetId())
.build();
results.add(oneTime);
BillingEvent billingEvent = null;
try {
billingEvent =
new BillingEvent.Builder()
.setBillingTime(billingTime)
.setRegistrarId(billingRecurrence.getRegistrarId())
// Determine the cost for a one-year renewal.
.setCost(
domainPricingLogic
.getRenewPrice(
tld,
billingRecurrence.getTargetId(),
eventTime,
1,
billingRecurrence,
Optional.empty())
.getRenewCost())
.setEventTime(eventTime)
.setFlags(union(billingRecurrence.getFlags(), Flag.SYNTHETIC))
.setDomainHistory(historyEntry)
.setPeriodYears(1)
.setReason(billingRecurrence.getReason())
.setSyntheticCreationTime(endTime)
.setCancellationMatchingBillingEvent(billingRecurrence)
.setTargetId(billingRecurrence.getTargetId())
.build();
} catch (AllocationTokenInvalidForPremiumNameException e) {
// This should not be reached since we are not using an allocation token
return;
}
results.add(billingEvent);
}
results.add(
recurring.asBuilder().setRecurrenceLastExpansion(recurrenceLastExpansionTime).build());
billingRecurrence
.asBuilder()
.setRecurrenceLastExpansion(recurrenceLastExpansionTime)
.build());
}
private PDone advanceCursor(PCollection<Void> persisted) {
@@ -447,11 +468,11 @@ public class ExpandRecurringBillingEventsPipeline implements Serializable {
}
public static void main(String[] args) {
PipelineOptionsFactory.register(ExpandRecurringBillingEventsPipelineOptions.class);
ExpandRecurringBillingEventsPipelineOptions options =
PipelineOptionsFactory.register(ExpandBillingRecurrencesPipelineOptions.class);
ExpandBillingRecurrencesPipelineOptions options =
PipelineOptionsFactory.fromArgs(args)
.withValidation()
.as(ExpandRecurringBillingEventsPipelineOptions.class);
.as(ExpandBillingRecurrencesPipelineOptions.class);
// Hardcode the transaction isolation level to serializable because we do not want concurrent
// runs of the pipeline for the same window to create duplicate OneTimes. This ensures that the
// set of existing OneTimes does not change by the time new OneTimes are inserted within a
// transaction.
@@ -459,7 +480,7 @@ public class ExpandRecurringBillingEventsPipeline implements Serializable {
// Per PostgreSQL, serializable isolation level does not introduce any blocking beyond that
// present in repeatable read other than some overhead related to monitoring possible
// serializable anomalies. Therefore, in most cases, since each worker of the same job works on
// a different set of recurrings, it is not possible for their execution order to affect
// a different set of recurrences, it is not possible for their execution order to affect
// serialization outcome, and the performance penalty should be minimum when using serializable
// compared to using repeatable read.
//
@@ -471,7 +492,7 @@ public class ExpandRecurringBillingEventsPipeline implements Serializable {
// See: https://www.postgresql.org/docs/current/transaction-iso.html
options.setIsolationOverride(TransactionIsolationLevel.TRANSACTION_SERIALIZABLE);
Pipeline pipeline = Pipeline.create(options);
new ExpandRecurringBillingEventsPipeline(options, new SystemClock()).run(pipeline);
new ExpandBillingRecurrencesPipeline(options, new SystemClock()).run(pipeline);
}
@Singleton
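As the class Javadoc above notes, the staged template can be run via the API client library; below is a hedged sketch of such a launch using the Dataflow client, mirroring the LaunchFlexTemplateRequest call shown earlier in this compare. The project, region, GCS path, job name, and parameter names and values are assumptions, not taken from the source.

// Hypothetical helper, not part of the source tree.
static void launchExpandBillingTemplate(Dataflow dataflow) throws IOException {
  LaunchFlexTemplateParameter parameter =
      new LaunchFlexTemplateParameter()
          .setJobName("expand-billing-recurrences")
          .setContainerSpecGcsPath("gs://example-bucket/beam/expandBilling_metadata.json")
          .setParameters(
              ImmutableMap.of(
                  "registryEnvironment", "ALPHA",
                  "startTime", "2023-05-01T00:00:00.000Z",
                  "endTime", "2023-05-02T00:00:00.000Z",
                  "isDryRun", "false",
                  "advanceCursor", "true"));
  LaunchFlexTemplateResponse launchResponse =
      dataflow
          .projects()
          .locations()
          .flexTemplates()
          .launch(
              "example-project",
              "us-central1",
              new LaunchFlexTemplateRequest().setLaunchParameter(parameter))
          .execute();
  // The returned job ID can be used to monitor the run.
  System.out.println("Launched Dataflow job: " + launchResponse.getJob().getId());
}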


@@ -18,7 +18,7 @@ import google.registry.beam.common.RegistryPipelineOptions;
import org.apache.beam.sdk.options.Default;
import org.apache.beam.sdk.options.Description;
public interface ExpandRecurringBillingEventsPipelineOptions extends RegistryPipelineOptions {
public interface ExpandBillingRecurrencesPipelineOptions extends RegistryPipelineOptions {
@Description(
"The inclusive lower bound of on the range of event times that will be expanded, in ISO 8601"
+ " format")


@@ -22,8 +22,8 @@ import google.registry.beam.billing.BillingEvent.InvoiceGroupingKey;
import google.registry.beam.billing.BillingEvent.InvoiceGroupingKey.InvoiceGroupingKeyCoder;
import google.registry.beam.common.RegistryJpaIO;
import google.registry.beam.common.RegistryJpaIO.Read;
import google.registry.model.billing.BillingEvent.Flag;
import google.registry.model.billing.BillingEvent.OneTime;
import google.registry.model.billing.BillingBase.Flag;
import google.registry.model.billing.BillingEvent;
import google.registry.model.registrar.Registrar;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.reporting.billing.BillingModule;
@@ -86,29 +86,30 @@ public class InvoicingPipeline implements Serializable {
void setupPipeline(Pipeline pipeline) {
options.setIsolationOverride(TransactionIsolationLevel.TRANSACTION_READ_COMMITTED);
PCollection<BillingEvent> billingEvents = readFromCloudSql(options, pipeline);
PCollection<google.registry.beam.billing.BillingEvent> billingEvents =
readFromCloudSql(options, pipeline);
saveInvoiceCsv(billingEvents, options);
saveDetailedCsv(billingEvents, options);
}
static PCollection<BillingEvent> readFromCloudSql(
static PCollection<google.registry.beam.billing.BillingEvent> readFromCloudSql(
InvoicingPipelineOptions options, Pipeline pipeline) {
Read<Object[], BillingEvent> read =
RegistryJpaIO.<Object[], BillingEvent>read(
Read<Object[], google.registry.beam.billing.BillingEvent> read =
RegistryJpaIO.<Object[], google.registry.beam.billing.BillingEvent>read(
makeCloudSqlQuery(options.getYearMonth()), false, row -> parseRow(row).orElse(null))
.withCoder(SerializableCoder.of(BillingEvent.class));
.withCoder(SerializableCoder.of(google.registry.beam.billing.BillingEvent.class));
PCollection<BillingEvent> billingEventsWithNulls =
PCollection<google.registry.beam.billing.BillingEvent> billingEventsWithNulls =
pipeline.apply("Read BillingEvents from Cloud SQL", read);
// Remove null billing events
return billingEventsWithNulls.apply(Filter.by(Objects::nonNull));
}
private static Optional<BillingEvent> parseRow(Object[] row) {
OneTime oneTime = (OneTime) row[0];
private static Optional<google.registry.beam.billing.BillingEvent> parseRow(Object[] row) {
BillingEvent billingEvent = (BillingEvent) row[0];
Registrar registrar = (Registrar) row[1];
CurrencyUnit currency = oneTime.getCost().getCurrencyUnit();
CurrencyUnit currency = billingEvent.getCost().getCurrencyUnit();
if (!registrar.getBillingAccountMap().containsKey(currency)) {
logger.atSevere().log(
"Registrar %s does not have a product account key for the currency unit: %s",
@@ -117,37 +118,40 @@ public class InvoicingPipeline implements Serializable {
}
return Optional.of(
BillingEvent.create(
oneTime.getId(),
oneTime.getBillingTime(),
oneTime.getEventTime(),
google.registry.beam.billing.BillingEvent.create(
billingEvent.getId(),
billingEvent.getBillingTime(),
billingEvent.getEventTime(),
registrar.getRegistrarId(),
registrar.getBillingAccountMap().get(currency),
registrar.getPoNumber().orElse(""),
DomainNameUtils.getTldFromDomainName(oneTime.getTargetId()),
oneTime.getReason().toString(),
oneTime.getTargetId(),
oneTime.getDomainRepoId(),
Optional.ofNullable(oneTime.getPeriodYears()).orElse(0),
oneTime.getCost().getCurrencyUnit().toString(),
oneTime.getCost().getAmount().doubleValue(),
DomainNameUtils.getTldFromDomainName(billingEvent.getTargetId()),
billingEvent.getReason().toString(),
billingEvent.getTargetId(),
billingEvent.getDomainRepoId(),
Optional.ofNullable(billingEvent.getPeriodYears()).orElse(0),
billingEvent.getCost().getCurrencyUnit().toString(),
billingEvent.getCost().getAmount().doubleValue(),
String.join(
" ", oneTime.getFlags().stream().map(Flag::toString).collect(toImmutableSet()))));
" ",
billingEvent.getFlags().stream().map(Flag::toString).collect(toImmutableSet()))));
}
/** Transform that converts a {@code BillingEvent} into an invoice CSV row. */
private static class GenerateInvoiceRows
extends PTransform<PCollection<BillingEvent>, PCollection<String>> {
extends PTransform<
PCollection<google.registry.beam.billing.BillingEvent>, PCollection<String>> {
private static final long serialVersionUID = -8090619008258393728L;
@Override
public PCollection<String> expand(PCollection<BillingEvent> input) {
public PCollection<String> expand(
PCollection<google.registry.beam.billing.BillingEvent> input) {
return input
.apply(
"Map to invoicing key",
MapElements.into(TypeDescriptor.of(InvoiceGroupingKey.class))
.via(BillingEvent::getInvoiceGroupingKey))
.via(google.registry.beam.billing.BillingEvent::getInvoiceGroupingKey))
.apply(
"Filter out free events", Filter.by((InvoiceGroupingKey key) -> key.unitPrice() != 0))
.setCoder(new InvoiceGroupingKeyCoder())
@@ -161,7 +165,8 @@ public class InvoicingPipeline implements Serializable {
/** Saves the billing events to a single overall invoice CSV file. */
static void saveInvoiceCsv(
PCollection<BillingEvent> billingEvents, InvoicingPipelineOptions options) {
PCollection<google.registry.beam.billing.BillingEvent> billingEvents,
InvoicingPipelineOptions options) {
billingEvents
.apply("Generate overall invoice rows", new GenerateInvoiceRows())
.apply(
@@ -182,16 +187,17 @@ public class InvoicingPipeline implements Serializable {
/** Saves the billing events to detailed report CSV files keyed by registrar-tld pairs. */
static void saveDetailedCsv(
PCollection<BillingEvent> billingEvents, InvoicingPipelineOptions options) {
PCollection<google.registry.beam.billing.BillingEvent> billingEvents,
InvoicingPipelineOptions options) {
String yearMonth = options.getYearMonth();
billingEvents.apply(
"Write detailed report for each registrar-tld pair",
FileIO.<String, BillingEvent>writeDynamic()
FileIO.<String, google.registry.beam.billing.BillingEvent>writeDynamic()
.to(
String.format(
"%s/%s/%s",
options.getBillingBucketUrl(), BillingModule.INVOICES_DIRECTORY, yearMonth))
.by(BillingEvent::getDetailedReportGroupingKey)
.by(google.registry.beam.billing.BillingEvent::getDetailedReportGroupingKey)
.withNumShards(1)
.withDestinationCoder(StringUtf8Coder.of())
.withNaming(
@@ -200,8 +206,8 @@ public class InvoicingPipeline implements Serializable {
String.format(
"%s_%s_%s.csv", BillingModule.DETAIL_REPORT_PREFIX, yearMonth, key))
.via(
Contextful.fn(BillingEvent::toCsv),
TextIO.sink().withHeader(BillingEvent.getHeader())));
Contextful.fn(google.registry.beam.billing.BillingEvent::toCsv),
TextIO.sink().withHeader(google.registry.beam.billing.BillingEvent.getHeader())));
}
/** Create the Cloud SQL query for a given yearMonth at runtime. */


@@ -26,13 +26,14 @@ import com.google.auto.value.AutoValue;
import com.google.cloud.storage.BlobId;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.flogger.FluentLogger;
import google.registry.batch.CloudTasksUtils;
import google.registry.gcs.GcsUtils;
import google.registry.keyring.api.PgpHelper;
import google.registry.model.common.Cursor;
import google.registry.model.rde.RdeMode;
import google.registry.model.rde.RdeNamingUtils;
import google.registry.model.rde.RdeRevision;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.rde.BrdaCopyAction;
import google.registry.rde.DepositFragment;
import google.registry.rde.Ghostryde;
@@ -42,11 +43,10 @@ import google.registry.rde.RdeMarshaller;
import google.registry.rde.RdeModule;
import google.registry.rde.RdeResourceType;
import google.registry.rde.RdeUploadAction;
import google.registry.rde.RdeUtil;
import google.registry.rde.RdeUtils;
import google.registry.request.Action.Service;
import google.registry.request.RequestParameters;
import google.registry.tldconfig.idn.IdnTableEnum;
import google.registry.util.CloudTasksUtils;
import google.registry.xjc.rdeheader.XjcRdeHeader;
import google.registry.xjc.rdeheader.XjcRdeHeaderElement;
import google.registry.xml.ValidationMode;
@@ -166,7 +166,7 @@ public class RdeIO {
final int revision =
Optional.ofNullable(key.revision())
.orElseGet(() -> RdeRevision.getNextRevision(tld, watermark, mode));
String id = RdeUtil.timestampToId(watermark);
String id = RdeUtils.timestampToId(watermark);
String prefix =
options.getJobName()
+ '/'
@@ -272,12 +272,12 @@ public class RdeIO {
tm().transact(
() -> {
PendingDeposit key = input.getKey();
Registry registry = Registry.get(key.tld());
Tld tld = Tld.get(key.tld());
Optional<Cursor> cursor =
tm().transact(
() ->
tm().loadByKeyIfPresent(
Cursor.createScopedVKey(key.cursor(), registry)));
Cursor.createScopedVKey(key.cursor(), tld)));
DateTime position = getCursorTimeOrStartOfTime(cursor);
checkState(key.interval() != null, "Interval must be present");
DateTime newPosition = key.watermark().plus(key.interval());
@@ -290,7 +290,7 @@ public class RdeIO {
"Partial ordering of RDE deposits broken: %s %s",
position,
key);
tm().put(Cursor.createScoped(key.cursor(), newPosition, registry));
tm().put(Cursor.createScoped(key.cursor(), newPosition, tld));
logger.atInfo().log(
"Rolled forward %s on %s cursor to %s.", key.cursor(), key.tld(), newPosition);
RdeRevision.saveRevision(key.tld(), key.watermark(), key.mode(), input.getValue());
@@ -306,7 +306,7 @@ public class RdeIO {
RDE_UPLOAD_QUEUE,
cloudTasksUtils.createPostTaskWithDelay(
RdeUploadAction.PATH,
Service.BACKEND.getServiceId(),
Service.BACKEND,
ImmutableMultimap.of(
RequestParameters.PARAM_TLD,
key.tld(),
@@ -318,7 +318,7 @@ public class RdeIO {
BRDA_QUEUE,
cloudTasksUtils.createPostTaskWithDelay(
BrdaCopyAction.PATH,
Service.BACKEND.getServiceId(),
Service.BACKEND,
ImmutableMultimap.of(
RequestParameters.PARAM_TLD,
key.tld(),


@@ -38,6 +38,7 @@ import com.google.common.flogger.FluentLogger;
import com.google.common.io.BaseEncoding;
import dagger.BindsInstance;
import dagger.Component;
import google.registry.batch.CloudTasksUtils;
import google.registry.beam.common.RegistryJpaIO;
import google.registry.beam.common.RegistryPipelineOptions;
import google.registry.config.CloudTasksUtilsModule;
@@ -62,7 +63,6 @@ import google.registry.rde.DepositFragment;
import google.registry.rde.PendingDeposit;
import google.registry.rde.PendingDeposit.PendingDepositCoder;
import google.registry.rde.RdeMarshaller;
import google.registry.util.CloudTasksUtils;
import google.registry.util.UtilsModule;
import google.registry.xml.ValidationMode;
import java.io.ByteArrayInputStream;


@@ -0,0 +1,166 @@
// Copyright 2023 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.wipeout;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static org.apache.beam.sdk.values.TypeDescriptors.voids;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Streams;
import google.registry.beam.common.RegistryJpaIO;
import google.registry.model.contact.ContactHistory;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.persistence.VKey;
import java.io.Serializable;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.coders.KvCoder;
import org.apache.beam.sdk.coders.StringUtf8Coder;
import org.apache.beam.sdk.coders.VarLongCoder;
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.join.CoGroupByKey;
import org.apache.beam.sdk.transforms.join.KeyedPCollectionTuple;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TupleTag;
import org.joda.time.DateTime;
/**
* Definition of a Dataflow Flex pipeline template, which finds {@link ContactHistory} entries
* that are older than a given age (excluding the most recent one, even if it falls within the
* range) and wipes out the PII information in them.
*
* <p>To stage this template locally, run {@code ./nom_build :core:sBP --environment=alpha \
* --pipeline=wipeOutContactHistoryPii}.
*
* <p>Then, you can run the staged template via the API client library, gCloud or a raw REST call.
*/
public class WipeOutContactHistoryPiiPipeline implements Serializable {
private static final long serialVersionUID = -4111052675715913820L;
private static final TupleTag<Long> REVISIONS_TO_WIPE = new TupleTag<>();
private static final TupleTag<Long> MOST_RECENT_REVISION = new TupleTag<>();
private final DateTime cutoffTime;
private final boolean dryRun;
private final Counter contactsInScope =
Metrics.counter("WipeOutContactHistoryPii", "contacts in scope");
private final Counter historiesToWipe =
Metrics.counter("WipeOutContactHistoryPii", "contact histories to wipe PII from");
private final Counter historiesWiped =
Metrics.counter("WipeOutContactHistoryPii", "contact histories actually updated");
WipeOutContactHistoryPiiPipeline(WipeOutContactHistoryPiiPipelineOptions options) {
dryRun = options.getIsDryRun();
cutoffTime = DateTime.parse(options.getCutoffTime());
}
void setup(Pipeline pipeline) {
KeyedPCollectionTuple.of(REVISIONS_TO_WIPE, getHistoryEntriesToWipe(pipeline))
.and(MOST_RECENT_REVISION, getMostRecentHistoryEntries(pipeline))
.apply("Group by contact", CoGroupByKey.create())
.apply(
"Wipe out PII",
MapElements.into(voids())
.via(
kv -> {
String repoId = kv.getKey();
long mostRecentRevision = kv.getValue().getOnly(MOST_RECENT_REVISION);
ImmutableList<Long> revisionsToWipe =
Streams.stream(kv.getValue().getAll(REVISIONS_TO_WIPE))
.filter(e -> e != mostRecentRevision)
.collect(toImmutableList());
if (revisionsToWipe.isEmpty()) {
return null;
}
contactsInScope.inc();
tm().transact(
() -> {
for (long revisionId : revisionsToWipe) {
historiesToWipe.inc();
ContactHistory history =
tm().loadByKey(
VKey.create(
ContactHistory.class,
new HistoryEntryId(repoId, revisionId)));
// In the unlikely case where multiple pipelines run at the
// same time, or where the runner decides to rerun a particular
// transform, we might have a history entry that has already been
// wiped at this point. There's no need to wipe it again.
if (!dryRun
&& history.getContactBase().isPresent()
&& history.getContactBase().get().getEmailAddress() != null) {
historiesWiped.inc();
tm().update(history.asBuilder().wipeOutPii().build());
}
}
});
return null;
}));
}
PCollection<KV<String, Long>> getHistoryEntriesToWipe(Pipeline pipeline) {
return pipeline.apply(
"Find contact histories to wipee",
// Email is one of the required fields in EPP, meaning it's initially not null when it
// is set by EPP flows (even though it is nullable in the SQL schema). Therefore,
// checking if it's null is one way to avoid processing contact history entities that
// have been processed previously. Refer to RFC 5733 for more information.
RegistryJpaIO.read(
"SELECT repoId, revisionId FROM ContactHistory WHERE email IS NOT NULL AND"
+ " modificationTime < :cutoffTime",
ImmutableMap.of("cutoffTime", cutoffTime),
Object[].class,
row -> KV.of((String) row[0], (long) row[1]))
.withCoder(KvCoder.of(StringUtf8Coder.of(), VarLongCoder.of())));
}
PCollection<KV<String, Long>> getMostRecentHistoryEntries(Pipeline pipeline) {
return pipeline.apply(
"Find the most recent historiy entry for each contact",
RegistryJpaIO.read(
"SELECT repoId, revisionId FROM ContactHistory"
+ " WHERE (repoId, modificationTime) IN"
+ " (SELECT repoId, MAX(modificationTime) FROM ContactHistory GROUP BY repoId)",
ImmutableMap.of(),
Object[].class,
row -> KV.of((String) row[0], (long) row[1]))
.withCoder(KvCoder.of(StringUtf8Coder.of(), VarLongCoder.of())));
}
PipelineResult run(Pipeline pipeline) {
setup(pipeline);
return pipeline.run();
}
public static void main(String[] args) {
PipelineOptionsFactory.register(WipeOutContactHistoryPiiPipelineOptions.class);
WipeOutContactHistoryPiiPipelineOptions options =
PipelineOptionsFactory.fromArgs(args)
.withValidation()
.as(WipeOutContactHistoryPiiPipelineOptions.class);
// Repeatable read should be more than enough since we are dealing with old history entries that
// are otherwise immutable.
options.setIsolationOverride(TransactionIsolationLevel.TRANSACTION_REPEATABLE_READ);
Pipeline pipeline = Pipeline.create(options);
new WipeOutContactHistoryPiiPipeline(options).run(pipeline);
}
}


@@ -0,0 +1,37 @@
// Copyright 2023 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.wipeout;
import google.registry.beam.common.RegistryPipelineOptions;
import org.apache.beam.sdk.options.Default;
import org.apache.beam.sdk.options.Description;
public interface WipeOutContactHistoryPiiPipelineOptions extends RegistryPipelineOptions {
@Description(
"A contact history entry with a history modification time before this time will have its PII"
+ " wiped, unless it is the most entry for the contact.")
String getCutoffTime();
void setCutoffTime(String value);
@Description(
"If true, the wiped out billing events will not be saved but the pipeline metrics counter"
+ " will still be updated.")
@Default.Boolean(false)
boolean getIsDryRun();
void setIsDryRun(boolean value);
}
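A rough illustration of how these options feed the pipeline's main() shown above; the argument values and the DirectRunner choice are assumptions for a local dry run, not taken from the source.

// Hypothetical local invocation; values are illustrative only.
String[] args = {
  "--cutoffTime=2021-11-01T00:00:00.000Z",
  "--isDryRun=true",
  "--runner=DirectRunner"
};
WipeOutContactHistoryPiiPipeline.main(args);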


@@ -20,7 +20,7 @@ import com.google.common.collect.ImmutableList;
import dagger.Module;
import dagger.Provides;
import dagger.multibindings.Multibinds;
import google.registry.config.CredentialModule.DefaultCredential;
import google.registry.config.CredentialModule.ApplicationDefaultCredential;
import google.registry.config.RegistryConfig.Config;
import google.registry.util.GoogleCredentialsBundle;
import java.util.Map;
@@ -34,7 +34,7 @@ public abstract class BigqueryModule {
@Provides
static Bigquery provideBigquery(
@DefaultCredential GoogleCredentialsBundle credentialsBundle,
@ApplicationDefaultCredential GoogleCredentialsBundle credentialsBundle,
@Config("projectId") String projectId) {
return new Bigquery.Builder(
credentialsBundle.getHttpTransport(),


@@ -19,18 +19,15 @@ import com.google.cloud.tasks.v2.CloudTasksClient;
import com.google.cloud.tasks.v2.CloudTasksSettings;
import dagger.Module;
import dagger.Provides;
import google.registry.config.CredentialModule.DefaultCredential;
import google.registry.batch.CloudTasksUtils;
import google.registry.batch.CloudTasksUtils.GcpCloudTasksClient;
import google.registry.batch.CloudTasksUtils.SerializableCloudTasksClient;
import google.registry.config.CredentialModule.ApplicationDefaultCredential;
import google.registry.config.RegistryConfig.Config;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import google.registry.util.CloudTasksUtils.GcpCloudTasksClient;
import google.registry.util.CloudTasksUtils.SerializableCloudTasksClient;
import google.registry.util.GoogleCredentialsBundle;
import google.registry.util.Retrier;
import java.io.IOException;
import java.io.Serializable;
import java.util.function.Supplier;
import javax.inject.Singleton;
/**
* A {@link Module} that provides {@link CloudTasksUtils}.
@@ -41,21 +38,10 @@ import javax.inject.Singleton;
@Module
public abstract class CloudTasksUtilsModule {
@Singleton
@Provides
public static CloudTasksUtils provideCloudTasksUtils(
@Config("projectId") String projectId,
@Config("locationId") String locationId,
SerializableCloudTasksClient client,
Retrier retrier,
Clock clock) {
return new CloudTasksUtils(retrier, clock, projectId, locationId, client);
}
// Provides a supplier instead of using a Dagger @Provider because the latter is not serializable.
@Provides
public static Supplier<CloudTasksClient> provideCloudTasksClientSupplier(
@DefaultCredential GoogleCredentialsBundle credentials) {
@ApplicationDefaultCredential GoogleCredentialsBundle credentials) {
return (Supplier<CloudTasksClient> & Serializable)
() -> {
CloudTasksClient client;


@@ -66,38 +66,6 @@ public abstract class CredentialModule {
return GoogleCredentialsBundle.create(credential);
}
/**
* Provides the default {@link GoogleCredentialsBundle} from the Google Cloud runtime.
*
* <p>The credential returned depends on the runtime environment:
*
* <ul>
* <li>On AppEngine, returns the service account credential for
* PROJECT_ID@appspot.gserviceaccount.com
* <li>On Compute Engine, returns the service account credential for
* PROJECT_NUMBER-compute@developer.gserviceaccount.com
* <li>On end user host, this returns the credential downloaded by gcloud. Please refer to <a
* href="https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login">Cloud
* SDK documentation</a> for details.
* </ul>
*/
@DefaultCredential
@Provides
@Singleton
public static GoogleCredentialsBundle provideDefaultCredential(
@Config("defaultCredentialOauthScopes") ImmutableList<String> requiredScopes) {
GoogleCredentials credential;
try {
credential = GoogleCredentials.getApplicationDefault();
} catch (IOException e) {
throw new RuntimeException(e);
}
if (credential.createScopedRequired()) {
credential = credential.createScoped(requiredScopes);
}
return GoogleCredentialsBundle.create(credential);
}
/**
* Provides a {@link GoogleCredentialsBundle} for accessing Google Workspace APIs, such as Drive
* and Sheets.
@@ -162,13 +130,6 @@ public abstract class CredentialModule {
@Retention(RetentionPolicy.RUNTIME)
public @interface ApplicationDefaultCredential {}
/** Dagger qualifier for the Application Default Credential. */
@Qualifier
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Deprecated // Switching to @ApplicationDefaultCredential
public @interface DefaultCredential {}
/** Dagger qualifier for the credential for Google Workspace APIs. */
@Qualifier
@Documented


@@ -32,6 +32,8 @@ import com.google.common.collect.ImmutableSet;
import com.google.common.collect.ImmutableSortedMap;
import dagger.Module;
import dagger.Provides;
import google.registry.dns.ReadDnsRefreshRequestsAction;
import google.registry.model.common.DnsRefreshRequest;
import google.registry.persistence.transaction.JpaTransactionManager;
import google.registry.util.YamlUtils;
import java.lang.annotation.Documented;
@@ -61,7 +63,7 @@ import org.joda.time.Duration;
* <p>This class does not represent the total configuration of the Nomulus service. It's <b>only
* meant for settings that need to be configured <i>once</i></b>. Settings which may be subject to
* change in the future, should instead be retrieved from the database. The {@link
* google.registry.model.tld.Registry Registry} class is one such example of this.
* google.registry.model.tld.Tld Tld} class is one such example of this.
*
* <p>Note: Only settings that are actually configurable belong in this file. It's not a catch-all
* for anything widely used throughout the code base.
@@ -106,12 +108,6 @@ public final class RegistryConfig {
return config.gcpProject.projectId;
}
@Provides
@Config("cloudSchedulerServiceAccountEmail")
public static String provideCloudSchedulerServiceAccountEmail(RegistryConfigSettings config) {
return config.gcpProject.cloudSchedulerServiceAccountEmail;
}
@Provides
@Config("projectIdNumber")
public static long provideProjectIdNumber(RegistryConfigSettings config) {
@@ -124,6 +120,18 @@ public final class RegistryConfig {
return config.gcpProject.locationId;
}
@Provides
@Config("serviceAccountEmails")
public static ImmutableList<String> provideServiceAccountEmails(RegistryConfigSettings config) {
return ImmutableList.copyOf(config.gcpProject.serviceAccountEmails);
}
@Provides
@Config("defaultServiceAccount")
public static Optional<String> provideDefaultServiceAccount(RegistryConfigSettings config) {
return Optional.ofNullable(config.gcpProject.defaultServiceAccount);
}
/**
* The filename of the logo to be displayed in the header of the registrar console.
*
@@ -294,9 +302,9 @@ public final class RegistryConfig {
/**
* The maximum number of domain and host updates to batch together to send to
* PublishDnsUpdatesAction, to avoid exceeding AppEngine's limits.
* PublishDnsUpdatesAction, to avoid exceeding HTTP request timeout limits.
*
* @see google.registry.dns.ReadDnsQueueAction
* @see google.registry.dns.ReadDnsRefreshRequestsAction
*/
@Provides
@Config("dnsTldUpdateBatchSize")
@@ -327,23 +335,30 @@ public final class RegistryConfig {
}
/**
* The requested maximum duration for ReadDnsQueueAction.
* The requested maximum duration for {@link ReadDnsRefreshRequestsAction}.
*
* <p>ReadDnsQueueAction reads update tasks from the dns-pull queue. It will continue reading
* tasks until either the queue is empty, or this duration has passed.
* <p>{@link ReadDnsRefreshRequestsAction} reads refresh requests from {@link DnsRefreshRequest}.
* It will continue reading requests until either no requests exist that match the condition,
* or this duration has passed.
*
* <p>This time is the maximum duration between the first and last attempt to lease tasks from
* the dns-pull queue. The actual running time might be slightly longer, as we process the
* tasks.
* <p>This time is the maximum duration between the first and last attempt to read requests from
* {@link DnsRefreshRequest}. The actual running time might be slightly longer, as we process
* the requests.
*
* <p>This value should be less than the cron-job repeat rate for ReadDnsQueueAction, to make
* sure we don't have multiple ReadDnsActions leasing tasks simultaneously.
* <p>The requests that are read will not be read again by any action until after this period
* has passed, so concurrent runs (or runs that are very close to each other) of {@link
* ReadDnsRefreshRequestsAction} will not keep reading the same requests with the earliest
* request time.
*
* @see google.registry.dns.ReadDnsQueueAction
* <p>Still, this value should ideally be less than the cloud scheduler job repeat rate for
* {@link ReadDnsRefreshRequestsAction}, to not waste resources on multiple actions running at
* the same time.
*
* @see google.registry.dns.ReadDnsRefreshRequestsAction
*/
@Provides
@Config("readDnsQueueActionRuntime")
public static Duration provideReadDnsQueueRuntime() {
@Config("readDnsRefreshRequestsActionRuntime")
public static Duration provideReadDnsRefreshRequestsRuntime() {
return Duration.standardSeconds(45);
}
@@ -930,7 +945,7 @@ public final class RegistryConfig {
* <p>Note that this uses {@code @Named} instead of {@code @Config} so that it can be used from
* the low-level util package, which cannot have a dependency on the config package.
*
* @see google.registry.util.CloudTasksUtils
* @see google.registry.batch.CloudTasksUtils
*/
@Provides
@Named("transientFailureRetries")
@@ -1316,12 +1331,6 @@ public final class RegistryConfig {
return config.contactHistory.minMonthsBeforeWipeOut;
}
@Provides
@Config("wipeOutQueryBatchSize")
public static int provideWipeOutQueryBatchSize(RegistryConfigSettings config) {
return config.contactHistory.wipeOutQueryBatchSize;
}
@Provides
@Config("jdbcBatchSize")
public static int provideHibernateJdbcBatchSize(RegistryConfigSettings config) {


@@ -54,7 +54,8 @@ public class RegistryConfigSettings {
public String backendServiceUrl;
public String toolsServiceUrl;
public String pubapiServiceUrl;
public String cloudSchedulerServiceAccountEmail;
public List<String> serviceAccountEmails;
public String defaultServiceAccount;
}
/** Configuration options for OAuth settings for authenticating users. */
@@ -243,7 +244,6 @@ public class RegistryConfigSettings {
/** Configuration for contact history. */
public static class ContactHistory {
public int minMonthsBeforeWipeOut;
public int wipeOutQueryBatchSize;
}
/** Configuration for dns update. */


@@ -22,8 +22,14 @@ gcpProject:
backendServiceUrl: https://localhost
toolsServiceUrl: https://localhost
pubapiServiceUrl: https://localhost
# Service account used by Cloud Scheduler to send authenticated requests.
cloudSchedulerServiceAccountEmail: cloud-scheduler-email@email.com
# Service accounts eligible for authorization (e.g. default service account,
# account used by Cloud Scheduler) to send authenticated requests.
serviceAccountEmails:
- default-service-account-email@email.com
- cloud-scheduler-email@email.com
# The default service account with which the service is running. For example,
# on GAE this would be {project-id}@appspot.gserviceaccount.com
defaultServiceAccount: null
gSuite:
# Publicly accessible domain name of the running G Suite instance.
@@ -471,8 +477,6 @@ registryTool:
contactHistory:
# The number of months that a ContactHistory entity should be stored in the database.
minMonthsBeforeWipeOut: 18
# The batch size for querying ContactHistory table in the database.
wipeOutQueryBatchSize: 500
# Configuration options relevant to the DNS update functionality.
dnsUpdate:


@@ -27,9 +27,9 @@ import static google.registry.cron.CronModule.FOR_EACH_TEST_TLD_PARAM;
import static google.registry.cron.CronModule.JITTER_SECONDS_PARAM;
import static google.registry.cron.CronModule.QUEUE_PARAM;
import static google.registry.cron.CronModule.RUN_IN_EMPTY_PARAM;
import static google.registry.model.tld.Registries.getTldsOfType;
import static google.registry.model.tld.Registry.TldType.REAL;
import static google.registry.model.tld.Registry.TldType.TEST;
import static google.registry.model.tld.Tld.TldType.REAL;
import static google.registry.model.tld.Tld.TldType.TEST;
import static google.registry.model.tld.Tlds.getTldsOfType;
import com.google.cloud.tasks.v2.Task;
import com.google.common.collect.ArrayListMultimap;
@@ -38,6 +38,7 @@ import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Multimap;
import com.google.common.collect.Streams;
import com.google.common.flogger.FluentLogger;
import google.registry.batch.CloudTasksUtils;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
@@ -45,7 +46,6 @@ import google.registry.request.ParameterMap;
import google.registry.request.RequestParameters;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.CloudTasksUtils;
import java.util.Optional;
import java.util.stream.Stream;
import javax.inject.Inject;
@@ -140,13 +140,25 @@ public final class TldFanoutAction implements Runnable {
for (String tld : tlds) {
Task task = createTask(tld, flowThruParams);
Task createdTask = cloudTasksUtils.enqueue(queue, task);
outputPayload.append(
String.format(
"- Task: '%s', tld: '%s', endpoint: '%s'\n",
createdTask.getName(), tld, createdTask.getAppEngineHttpRequest().getRelativeUri()));
logger.atInfo().log(
"Task: '%s', tld: '%s', endpoint: '%s'.",
createdTask.getName(), tld, createdTask.getAppEngineHttpRequest().getRelativeUri());
if (createdTask.hasAppEngineHttpRequest()) {
outputPayload.append(
String.format(
"- Task: '%s', tld: '%s', endpoint: '%s'\n",
createdTask.getName(),
tld,
createdTask.getAppEngineHttpRequest().getRelativeUri()));
logger.atInfo().log(
"Task: '%s', tld: '%s', endpoint: '%s'.",
createdTask.getName(), tld, createdTask.getAppEngineHttpRequest().getRelativeUri());
} else {
outputPayload.append(
String.format(
"- Task: '%s', tld: '%s', endpoint: '%s'\n",
createdTask.getName(), tld, createdTask.getHttpRequest().getUrl()));
logger.atInfo().log(
"Task: '%s', tld: '%s', endpoint: '%s'.",
createdTask.getName(), tld, createdTask.getHttpRequest().getUrl());
}
}
response.setContentType(PLAIN_TEXT_UTF_8);
response.setPayload(outputPayload.toString());
@@ -158,6 +170,6 @@ public final class TldFanoutAction implements Runnable {
params.put(RequestParameters.PARAM_TLD, tld);
}
return cloudTasksUtils.createPostTaskWithJitter(
endpoint, Service.BACKEND.toString(), params, jitterSeconds);
endpoint, Service.BACKEND, params, jitterSeconds);
}
}


@@ -1,38 +0,0 @@
// Copyright 2017 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.dns;
/** Static class for DNS-related constants. */
public class DnsConstants {
private DnsConstants() {}
/** The name of the DNS pull queue. */
public static final String DNS_PULL_QUEUE_NAME = "dns-pull"; // See queue.xml.
/** The name of the DNS publish push queue. */
public static final String DNS_PUBLISH_PUSH_QUEUE_NAME = "dns-publish"; // See queue.xml.
/** The parameter to use for storing the target type ("domain" or "host" or "zone"). */
public static final String DNS_TARGET_TYPE_PARAM = "Target-Type";
/** The parameter to use for storing the target name (domain or host name) with the task. */
public static final String DNS_TARGET_NAME_PARAM = "Target-Name";
/** The parameter to use for storing the creation time with the task. */
public static final String DNS_TARGET_CREATE_TIME_PARAM = "Create-Time";
/** The possible values of the {@code DNS_TARGET_TYPE_PARAM} parameter. */
public enum TargetType { DOMAIN, HOST, ZONE }
}

View File

@@ -14,27 +14,25 @@
package google.registry.dns;
import static google.registry.dns.DnsConstants.DNS_PUBLISH_PUSH_QUEUE_NAME;
import static google.registry.dns.DnsConstants.DNS_PULL_QUEUE_NAME;
import static google.registry.dns.RefreshDnsOnHostRenameAction.PARAM_HOST_KEY;
import static google.registry.request.RequestParameters.extractEnumParameter;
import static google.registry.request.RequestParameters.extractIntParameter;
import static google.registry.request.RequestParameters.extractOptionalIntParameter;
import static google.registry.request.RequestParameters.extractOptionalParameter;
import static google.registry.request.RequestParameters.extractRequiredParameter;
import static google.registry.request.RequestParameters.extractSetOfParameters;
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.common.hash.HashFunction;
import com.google.common.hash.Hashing;
import dagger.Binds;
import dagger.Module;
import dagger.Provides;
import google.registry.dns.DnsConstants.TargetType;
import google.registry.dns.DnsUtils.TargetType;
import google.registry.dns.writer.DnsWriterZone;
import google.registry.request.Parameter;
import google.registry.request.RequestParameters;
import java.util.Optional;
import java.util.Set;
import javax.inject.Named;
import javax.servlet.http.HttpServletRequest;
import org.joda.time.DateTime;
@@ -48,7 +46,10 @@ public abstract class DnsModule {
public static final String PARAM_DOMAINS = "domains";
public static final String PARAM_HOSTS = "hosts";
public static final String PARAM_PUBLISH_TASK_ENQUEUED = "enqueued";
public static final String PARAM_REFRESH_REQUEST_CREATED = "itemsCreated";
public static final String PARAM_REFRESH_REQUEST_TIME = "requestTime";
// This parameter cannot be named "jitterSeconds", because a parameter with that name would be
// consumed by TldFanoutAction itself and never passed down to the fanned-out actions.
public static final String PARAM_DNS_JITTER_SECONDS = "dnsJitterSeconds";
@Binds
@DnsWriterZone
@@ -65,28 +66,19 @@ public abstract class DnsModule {
return Hashing.murmur3_32_fixed();
}
@Provides
@Named(DNS_PULL_QUEUE_NAME)
static Queue provideDnsPullQueue() {
return QueueFactory.getQueue(DNS_PULL_QUEUE_NAME);
}
@Provides
@Named(DNS_PUBLISH_PUSH_QUEUE_NAME)
static Queue provideDnsUpdatePushQueue() {
return QueueFactory.getQueue(DNS_PUBLISH_PUSH_QUEUE_NAME);
}
@Provides
@Parameter(PARAM_PUBLISH_TASK_ENQUEUED)
static DateTime provideCreateTime(HttpServletRequest req) {
return DateTime.parse(extractRequiredParameter(req, PARAM_PUBLISH_TASK_ENQUEUED));
}
// TODO: Retire the old header after DNS pull queue migration.
@Provides
@Parameter(PARAM_REFRESH_REQUEST_CREATED)
@Parameter(PARAM_REFRESH_REQUEST_TIME)
static DateTime provideItemsCreateTime(HttpServletRequest req) {
return DateTime.parse(extractRequiredParameter(req, PARAM_REFRESH_REQUEST_CREATED));
return DateTime.parse(
    extractOptionalParameter(req, "itemsCreated")
        // Fall back lazily so that legacy tasks carrying only "itemsCreated" do not trip the
        // required-parameter check for the new parameter.
        .orElseGet(() -> extractRequiredParameter(req, PARAM_REFRESH_REQUEST_TIME)));
}
@Provides
@@ -125,6 +117,12 @@ public abstract class DnsModule {
return extractRequiredParameter(req, PARAM_HOST_KEY);
}
@Provides
@Parameter(PARAM_DNS_JITTER_SECONDS)
static Optional<Integer> provideJitterSeconds(HttpServletRequest req) {
return extractOptionalIntParameter(req, PARAM_DNS_JITTER_SECONDS);
}
@Provides
@Parameter("domainOrHostName")
static String provideName(HttpServletRequest req) {

View File

@@ -1,170 +0,0 @@
// Copyright 2017 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.dns;
import static com.google.appengine.api.taskqueue.QueueFactory.getQueue;
import static com.google.common.base.Preconditions.checkArgument;
import static google.registry.dns.DnsConstants.DNS_PULL_QUEUE_NAME;
import static google.registry.dns.DnsConstants.DNS_TARGET_CREATE_TIME_PARAM;
import static google.registry.dns.DnsConstants.DNS_TARGET_NAME_PARAM;
import static google.registry.dns.DnsConstants.DNS_TARGET_TYPE_PARAM;
import static google.registry.model.tld.Registries.assertTldExists;
import static google.registry.request.RequestParameters.PARAM_TLD;
import static google.registry.util.DomainNameUtils.getTldFromDomainName;
import static java.util.concurrent.TimeUnit.MILLISECONDS;
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueConstants;
import com.google.appengine.api.taskqueue.TaskHandle;
import com.google.appengine.api.taskqueue.TaskOptions;
import com.google.appengine.api.taskqueue.TaskOptions.Method;
import com.google.appengine.api.taskqueue.TransientFailureException;
import com.google.apphosting.api.DeadlineExceededException;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableList;
import com.google.common.flogger.FluentLogger;
import com.google.common.net.InternetDomainName;
import com.google.common.util.concurrent.RateLimiter;
import google.registry.dns.DnsConstants.TargetType;
import google.registry.model.tld.Registries;
import google.registry.util.Clock;
import google.registry.util.NonFinalForTesting;
import java.util.List;
import java.util.Optional;
import java.util.logging.Level;
import javax.inject.Inject;
import javax.inject.Named;
import org.joda.time.Duration;
/**
* Methods for manipulating the queue used for DNS write tasks.
*
* <p>This includes a {@link RateLimiter} to limit the {@link Queue#leaseTasks} call rate to 9 QPS,
* to stay under the 10 QPS limit for this function.
*
* <p>Note that overlapping calls to {@link ReadDnsQueueAction} (the only place where
* {@link DnsQueue#leaseTasks} is used) will have different rate limiters, so they could exceed the
* allowed rate. This should be rare though - because {@link DnsQueue#leaseTasks} is only used in
* {@link ReadDnsQueueAction}, which is run as a cron job with running time shorter than the cron
* repeat time - meaning there should never be two instances running at once.
*
* @see google.registry.config.RegistryConfig.ConfigModule#provideReadDnsQueueRuntime
*/
public class DnsQueue {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private final Queue queue;
final Clock clock;
// Queue.leaseTasks is limited to 10 requests per second as per
// https://cloud.google.com/appengine/docs/standard/java/javadoc/com/google/appengine/api/taskqueue/Queue.html
// "If you generate more than 10 LeaseTasks requests per second, only the first 10 requests will
// return results. The others will return no results."
private static final RateLimiter rateLimiter = RateLimiter.create(9);
@Inject
public DnsQueue(@Named(DNS_PULL_QUEUE_NAME) Queue queue, Clock clock) {
this.queue = queue;
this.clock = clock;
}
@VisibleForTesting
public static DnsQueue createForTesting(Clock clock) {
return new DnsQueue(getQueue(DNS_PULL_QUEUE_NAME), clock);
}
@NonFinalForTesting
@VisibleForTesting
long leaseTasksBatchSize = QueueConstants.maxLeaseCount();
/** Enqueues the given task type with the given target name to the DNS queue. */
private TaskHandle addToQueue(
TargetType targetType, String targetName, String tld, Duration countdown) {
logger.atInfo().log(
"Adding task type=%s, target=%s, tld=%s to pull queue %s (%d tasks currently on queue).",
targetType, targetName, tld, DNS_PULL_QUEUE_NAME, queue.fetchStatistics().getNumTasks());
return queue.add(
TaskOptions.Builder.withDefaults()
.method(Method.PULL)
.countdownMillis(countdown.getMillis())
.param(DNS_TARGET_TYPE_PARAM, targetType.toString())
.param(DNS_TARGET_NAME_PARAM, targetName)
.param(DNS_TARGET_CREATE_TIME_PARAM, clock.nowUtc().toString())
.param(PARAM_TLD, tld));
}
/** Adds a task to the queue to refresh the DNS information for the specified subordinate host. */
public TaskHandle addHostRefreshTask(String hostName) {
Optional<InternetDomainName> tld = Registries.findTldForName(InternetDomainName.from(hostName));
checkArgument(
tld.isPresent(), String.format("%s is not a subordinate host to a known tld", hostName));
return addToQueue(TargetType.HOST, hostName, tld.get().toString(), Duration.ZERO);
}
/** Enqueues a task to refresh DNS for the specified domain now. */
public TaskHandle addDomainRefreshTask(String domainName) {
return addDomainRefreshTask(domainName, Duration.ZERO);
}
/** Enqueues a task to refresh DNS for the specified domain at some point in the future. */
public TaskHandle addDomainRefreshTask(String domainName, Duration countdown) {
return addToQueue(
TargetType.DOMAIN,
domainName,
assertTldExists(getTldFromDomainName(domainName)),
countdown);
}
/** Adds a task to the queue to refresh the DNS information for the specified zone. */
public TaskHandle addZoneRefreshTask(String zoneName) {
return addToQueue(TargetType.ZONE, zoneName, zoneName, Duration.ZERO);
}
/**
* Returns the maximum number of tasks that can be leased with {@link #leaseTasks}.
*
* <p>If this many tasks are returned, then there might be more tasks still waiting in the queue.
*
* <p>If less than this number of tasks are returned, then there are no more items in the queue.
*/
public long getLeaseTasksBatchSize() {
return leaseTasksBatchSize;
}
/** Returns handles for a batch of tasks, leased for the specified duration. */
public List<TaskHandle> leaseTasks(Duration leaseDuration) {
try {
rateLimiter.acquire();
int numTasks = queue.fetchStatistics().getNumTasks();
logger.at((numTasks >= leaseTasksBatchSize) ? Level.WARNING : Level.INFO).log(
"There are %d tasks in the DNS queue '%s'.", numTasks, DNS_PULL_QUEUE_NAME);
return queue.leaseTasks(leaseDuration.getMillis(), MILLISECONDS, leaseTasksBatchSize);
} catch (TransientFailureException | DeadlineExceededException e) {
logger.atSevere().withCause(e).log("Failed leasing tasks too fast.");
return ImmutableList.of();
}
}
/** Delete a list of tasks, removing them from the queue permanently. */
public void deleteTasks(List<TaskHandle> tasks) {
try {
queue.deleteTask(tasks);
} catch (TransientFailureException | DeadlineExceededException e) {
logger.atSevere().withCause(e).log("Failed deleting tasks too fast.");
}
}
}
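The class above is removed in this change; the rate-limiting pattern described in its class comment is simply a Guava RateLimiter gating each lease call. A minimal sketch, independent of the deleted class:

// Sketch only: cap leaseTasks calls at 9 QPS to stay under the documented 10 QPS limit.
RateLimiter rateLimiter = RateLimiter.create(9);
rateLimiter.acquire();  // blocks until a permit is available
// queue.leaseTasks(leaseDuration.getMillis(), MILLISECONDS, leaseTasksBatchSize);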

View File

@@ -0,0 +1,133 @@
// Copyright 2023 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.dns;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import com.google.common.collect.ImmutableList;
import com.google.common.net.InternetDomainName;
import google.registry.model.common.DnsRefreshRequest;
import google.registry.model.tld.Tld;
import google.registry.model.tld.Tlds;
import java.util.Collection;
import java.util.Optional;
import org.joda.time.DateTime;
import org.joda.time.Duration;
/** Utility class to handle DNS refresh requests. */
public final class DnsUtils {
/** The name of the DNS publish push queue. */
public static final String DNS_PUBLISH_PUSH_QUEUE_NAME = "dns-publish"; // See queue.xml.
private DnsUtils() {}
private static void requestDnsRefresh(String name, TargetType type, Duration delay) {
// Throws an IllegalArgumentException if the name is not under a managed TLD -- we only update
// DNS for names that are under our management.
String tld = Tlds.findTldForNameOrThrow(InternetDomainName.from(name)).toString();
tm().transact(
() ->
tm().insert(
new DnsRefreshRequest(
type, name, tld, tm().getTransactionTime().plus(delay))));
}
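/** Enqueues a request to refresh DNS for the given domain after the given delay. */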
public static void requestDomainDnsRefresh(String domainName, Duration delay) {
requestDnsRefresh(domainName, TargetType.DOMAIN, delay);
}
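/** Enqueues a request to refresh DNS for the given domain immediately. */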
public static void requestDomainDnsRefresh(String domainName) {
requestDomainDnsRefresh(domainName, Duration.ZERO);
}
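/** Enqueues a request to refresh DNS for the given subordinate host immediately. */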
public static void requestHostDnsRefresh(String hostName) {
requestDnsRefresh(hostName, TargetType.HOST, Duration.ZERO);
}
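The three helpers above are the new entry points that replace DnsQueue's add*RefreshTask methods. A minimal usage sketch (hypothetical names, not code from this change):

// Each call runs in a transaction and throws IllegalArgumentException if the name is not
// under a managed TLD.
DnsUtils.requestDomainDnsRefresh("foo.example");                               // refresh now
DnsUtils.requestDomainDnsRefresh("bar.example", Duration.standardMinutes(5));  // refresh in 5 minutes
DnsUtils.requestHostDnsRefresh("ns1.foo.example");                             // refresh a subordinate host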
/**
* Returns up to {@code batchSize} pending DNS update requests that need further processing, in
* ascending order of their request time, and updates their last process time to now.
*
* <p>The criteria to pick the requests to include are:
*
* <ul>
* <li>They are for the given TLD.
* <li>Their request time is not in the future.
* <li>Their last process time is earlier than the cooldown cutoff (now minus the cooldown).
* </ul>
*/
public static ImmutableList<DnsRefreshRequest> readAndUpdateRequestsWithLatestProcessTime(
String tld, Duration cooldown, int batchSize) {
return tm().transact(
() -> {
DateTime transactionTime = tm().getTransactionTime();
ImmutableList<DnsRefreshRequest> requests =
tm().query(
"FROM DnsRefreshRequest WHERE tld = :tld "
+ "AND requestTime <= :now AND lastProcessTime < :cutoffTime "
+ "ORDER BY requestTime ASC, id ASC",
DnsRefreshRequest.class)
.setParameter("tld", tld)
.setParameter("now", transactionTime)
.setParameter("cutoffTime", transactionTime.minus(cooldown))
.setMaxResults(batchSize)
.getResultStream()
// Note that the process time is when the request was last read, batched and
// queued up for publishing, not when it is actually published by the DNS
// writer. This timestamp acts as a cooldown so the same request will not be
// retried too frequently. See DnsRefreshRequest.getLastProcessTime for a
// detailed explanation.
.map(e -> e.updateProcessTime(transactionTime))
.collect(toImmutableList());
tm().updateAll(requests);
return requests;
});
}
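A minimal sketch of the cooldown semantics described above (hypothetical TLD name and values): because the last process time is bumped to the transaction time, a request returned by one call is excluded from later calls until the cooldown elapses (unless it is deleted first).

Duration cooldown = Duration.standardMinutes(5);  // hypothetical value
ImmutableList<DnsRefreshRequest> first =
    DnsUtils.readAndUpdateRequestsWithLatestProcessTime("example", cooldown, 100);
ImmutableList<DnsRefreshRequest> second =
    DnsUtils.readAndUpdateRequestsWithLatestProcessTime("example", cooldown, 100);
// Within the five-minute cooldown, "second" does not include the requests already returned in "first".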
/**
* Removes the requests that have been processed.
*
* <p>Note that if a request entity has already been deleted, the method still succeeds without
* error because all we care about is that it no longer exists after the method runs.
*/
public static void deleteRequests(Collection<DnsRefreshRequest> requests) {
tm().transact(
() ->
tm().delete(
requests.stream()
.map(DnsRefreshRequest::createVKey)
.collect(toImmutableList())));
}
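/** Returns the A/AAAA record TTL (in seconds) for the given host, preferring the TTL configured on the host's TLD over the given default. */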
public static long getDnsAPlusAAAATtlForHost(String host, Duration dnsDefaultATtl) {
Optional<InternetDomainName> tldName = Tlds.findTldForName(InternetDomainName.from(host));
Duration dnsAPlusAaaaTtl = dnsDefaultATtl;
if (tldName.isPresent()) {
Tld tld = Tld.get(tldName.get().toString());
if (tld.getDnsAPlusAaaaTtl().isPresent()) {
dnsAPlusAaaaTtl = tld.getDnsAPlusAaaaTtl().get();
}
}
return dnsAPlusAaaaTtl.getStandardSeconds();
}
/** The type of entity (domain or host) that a DNS refresh request targets. */
public enum TargetType {
DOMAIN,
HOST
}
}

View File

@@ -19,7 +19,7 @@ import static com.google.common.base.Preconditions.checkState;
import com.google.common.collect.ImmutableMap;
import com.google.common.flogger.FluentLogger;
import google.registry.dns.writer.DnsWriter;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import java.util.Map;
import javax.inject.Inject;
@@ -41,7 +41,7 @@ public final class DnsWriterProxy {
* If the DnsWriter doesn't belong to this TLD, will return null.
*/
public DnsWriter getByClassNameForTld(String className, String tld) {
if (!Registry.get(tld).getDnsWriters().contains(className)) {
if (!Tld.get(tld).getDnsWriters().contains(className)) {
logger.atWarning().log(
"Loaded potentially stale DNS writer %s which is not active on TLD %s.", className, tld);
return null;

View File

@@ -15,14 +15,16 @@
package google.registry.dns;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.dns.DnsConstants.DNS_PUBLISH_PUSH_QUEUE_NAME;
import static google.registry.dns.DnsModule.PARAM_DNS_WRITER;
import static google.registry.dns.DnsModule.PARAM_DOMAINS;
import static google.registry.dns.DnsModule.PARAM_HOSTS;
import static google.registry.dns.DnsModule.PARAM_LOCK_INDEX;
import static google.registry.dns.DnsModule.PARAM_NUM_PUBLISH_LOCKS;
import static google.registry.dns.DnsModule.PARAM_PUBLISH_TASK_ENQUEUED;
import static google.registry.dns.DnsModule.PARAM_REFRESH_REQUEST_CREATED;
import static google.registry.dns.DnsModule.PARAM_REFRESH_REQUEST_TIME;
import static google.registry.dns.DnsUtils.DNS_PUBLISH_PUSH_QUEUE_NAME;
import static google.registry.dns.DnsUtils.requestDomainDnsRefresh;
import static google.registry.dns.DnsUtils.requestHostDnsRefresh;
import static google.registry.model.EppResourceUtils.loadByForeignKey;
import static google.registry.request.Action.Method.POST;
import static google.registry.request.RequestParameters.PARAM_TLD;
@@ -35,6 +37,7 @@ import com.google.common.collect.ImmutableMultimap;
import com.google.common.flogger.FluentLogger;
import com.google.common.net.InternetDomainName;
import dagger.Lazy;
import google.registry.batch.CloudTasksUtils;
import google.registry.config.RegistryConfig.Config;
import google.registry.dns.DnsMetrics.ActionStatus;
import google.registry.dns.DnsMetrics.CommitStatus;
@@ -44,7 +47,7 @@ import google.registry.model.domain.Domain;
import google.registry.model.host.Host;
import google.registry.model.registrar.Registrar;
import google.registry.model.registrar.RegistrarPoc;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Header;
@@ -54,7 +57,6 @@ import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.request.lock.LockHandler;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import google.registry.util.DomainNameUtils;
import google.registry.util.EmailMessage;
import google.registry.util.SendEmailService;
@@ -85,7 +87,6 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private final DnsQueue dnsQueue;
private final DnsWriterProxy dnsWriterProxy;
private final DnsMetrics dnsMetrics;
private final Duration timeout;
@@ -94,9 +95,9 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
/**
* The DNS writer to use for this batch.
*
* <p>This comes from the fanout in {@link ReadDnsQueueAction} which dispatches each batch to be
* published by each DNS writer on the TLD. So this field contains the value of one of the DNS
* writers configured in {@link Registry#getDnsWriters()}, as of the time the batch was written
* <p>This comes from the fanout in {@link ReadDnsRefreshRequestsAction} which dispatches each
* batch to be published by each DNS writer on the TLD. So this field contains the value of one of
* the DNS writers configured in {@link Tld#getDnsWriters()}, as of the time the batch was written
* out (and not necessarily currently).
*/
private final String dnsWriter;
@@ -124,7 +125,7 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
public PublishDnsUpdatesAction(
@Parameter(PARAM_DNS_WRITER) String dnsWriter,
@Parameter(PARAM_PUBLISH_TASK_ENQUEUED) DateTime enqueuedTime,
@Parameter(PARAM_REFRESH_REQUEST_CREATED) DateTime itemsCreateTime,
@Parameter(PARAM_REFRESH_REQUEST_TIME) DateTime itemsCreateTime,
@Parameter(PARAM_LOCK_INDEX) int lockIndex,
@Parameter(PARAM_NUM_PUBLISH_LOCKS) int numPublishLocks,
@Parameter(PARAM_DOMAINS) Set<String> domains,
@@ -139,7 +140,6 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
@Config("gSuiteOutgoingEmailAddress") InternetAddress gSuiteOutgoingEmailAddress,
@Header(APP_ENGINE_RETRY_HEADER) Optional<Integer> appEngineRetryCount,
@Header(CLOUD_TASKS_RETRY_HEADER) Optional<Integer> cloudTasksRetryCount,
DnsQueue dnsQueue,
DnsWriterProxy dnsWriterProxy,
DnsMetrics dnsMetrics,
LockHandler lockHandler,
@@ -147,12 +147,11 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
CloudTasksUtils cloudTasksUtils,
SendEmailService sendEmailService,
Response response) {
this.dnsQueue = dnsQueue;
this.dnsWriterProxy = dnsWriterProxy;
this.dnsMetrics = dnsMetrics;
this.timeout = timeout;
this.sendEmailService = sendEmailService;
this.retryCount =
retryCount =
cloudTasksRetryCount.orElse(
appEngineRetryCount.orElseThrow(
() -> new IllegalStateException("Missing a valid retry count header")));
@@ -276,7 +275,7 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
return null;
}
private InternetAddress emailToInternetAddress(String email) {
private static InternetAddress emailToInternetAddress(String email) {
try {
return new InternetAddress(email, true);
} catch (Exception e) {
@@ -306,7 +305,7 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
registrar.get().getContacts().stream()
.filter(c -> c.getTypes().contains(RegistrarPoc.Type.ADMIN))
.map(RegistrarPoc::getEmailAddress)
.map(this::emailToInternetAddress)
.map(PublishDnsUpdatesAction::emailToInternetAddress)
.collect(toImmutableList());
sendEmailService.sendEmail(
@@ -339,14 +338,14 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
DNS_PUBLISH_PUSH_QUEUE_NAME,
cloudTasksUtils.createPostTask(
PATH,
Service.BACKEND.toString(),
Service.BACKEND,
ImmutableMultimap.<String, String>builder()
.put(PARAM_TLD, tld)
.put(PARAM_DNS_WRITER, dnsWriter)
.put(PARAM_LOCK_INDEX, Integer.toString(lockIndex))
.put(PARAM_NUM_PUBLISH_LOCKS, Integer.toString(numPublishLocks))
.put(PARAM_PUBLISH_TASK_ENQUEUED, clock.nowUtc().toString())
.put(PARAM_REFRESH_REQUEST_CREATED, itemsCreateTime.toString())
.put(PARAM_REFRESH_REQUEST_TIME, itemsCreateTime.toString())
.put(PARAM_DOMAINS, Joiner.on(",").join(domains))
.put(PARAM_HOSTS, Joiner.on(",").join(hosts))
.build()));
@@ -356,10 +355,10 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
private void requeueBatch() {
logger.atInfo().log("Requeueing batch for retry.");
for (String domain : nullToEmpty(domains)) {
dnsQueue.addDomainRefreshTask(domain);
requestDomainDnsRefresh(domain);
}
for (String host : nullToEmpty(hosts)) {
dnsQueue.addHostRefreshTask(host);
requestHostDnsRefresh(host);
}
}
@@ -372,11 +371,11 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
return false;
}
// Check if the Registry object's num locks has changed since this task was batched
int registryNumPublishLocks = Registry.get(tld).getNumDnsPublishLocks();
if (registryNumPublishLocks != numPublishLocks) {
int tldNumPublishLocks = Tld.get(tld).getNumDnsPublishLocks();
if (tldNumPublishLocks != numPublishLocks) {
logger.atWarning().log(
"Registry numDnsPublishLocks %d out of sync with parameter %d.",
registryNumPublishLocks, numPublishLocks);
tldNumPublishLocks, numPublishLocks);
return false;
}
return true;

View File

@@ -1,401 +0,0 @@
// Copyright 2017 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.dns;
import static com.google.common.collect.ImmutableSetMultimap.toImmutableSetMultimap;
import static com.google.common.collect.Sets.difference;
import static google.registry.dns.DnsConstants.DNS_PUBLISH_PUSH_QUEUE_NAME;
import static google.registry.dns.DnsConstants.DNS_TARGET_CREATE_TIME_PARAM;
import static google.registry.dns.DnsConstants.DNS_TARGET_NAME_PARAM;
import static google.registry.dns.DnsConstants.DNS_TARGET_TYPE_PARAM;
import static google.registry.dns.DnsModule.PARAM_DNS_WRITER;
import static google.registry.dns.DnsModule.PARAM_DOMAINS;
import static google.registry.dns.DnsModule.PARAM_HOSTS;
import static google.registry.dns.DnsModule.PARAM_LOCK_INDEX;
import static google.registry.dns.DnsModule.PARAM_NUM_PUBLISH_LOCKS;
import static google.registry.dns.DnsModule.PARAM_PUBLISH_TASK_ENQUEUED;
import static google.registry.dns.DnsModule.PARAM_REFRESH_REQUEST_CREATED;
import static google.registry.request.RequestParameters.PARAM_TLD;
import static google.registry.util.DomainNameUtils.getSecondLevelDomain;
import static java.nio.charset.StandardCharsets.UTF_8;
import com.google.appengine.api.taskqueue.TaskHandle;
import com.google.auto.value.AutoValue;
import com.google.cloud.tasks.v2.Task;
import com.google.common.collect.ComparisonChain;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.ImmutableSetMultimap;
import com.google.common.collect.Iterables;
import com.google.common.collect.Ordering;
import com.google.common.flogger.FluentLogger;
import com.google.common.hash.HashFunction;
import com.google.common.hash.Hashing;
import google.registry.config.RegistryConfig.Config;
import google.registry.dns.DnsConstants.TargetType;
import google.registry.model.tld.Registries;
import google.registry.model.tld.Registry;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import java.io.UnsupportedEncodingException;
import java.util.Collection;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Collectors;
import javax.inject.Inject;
import org.joda.time.DateTime;
import org.joda.time.Duration;
/**
* Action for fanning out DNS refresh tasks by TLD, using data taken from the DNS pull queue.
*
* <h3>Parameters Reference</h3>
*
* <ul>
* <li>{@code jitterSeconds} Randomly delay each task by up to this many seconds.
* </ul>
*/
@Action(
service = Action.Service.BACKEND,
path = "/_dr/cron/readDnsQueue",
automaticallyPrintOk = true,
auth = Auth.AUTH_INTERNAL_OR_ADMIN)
public final class ReadDnsQueueAction implements Runnable {
private static final String PARAM_JITTER_SECONDS = "jitterSeconds";
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
/**
* Buffer time since the end of this action until retriable tasks are available again.
*
* <p>We read batches of tasks from the queue in a loop. Any task that will need to be retried has
* to be kept out of the queue for the duration of this action - otherwise we will lease it again
* in a subsequent loop.
*
* <p>The 'requestedMaximumDuration' value is the maximum delay between the first and last calls
* to lease tasks, hence we want the lease duration to be (slightly) longer than that.
* LEASE_PADDING is the value we add to {@link #requestedMaximumDuration} to make sure the lease
* duration is indeed longer.
*/
private static final Duration LEASE_PADDING = Duration.standardMinutes(1);
private final int tldUpdateBatchSize;
private final Duration requestedMaximumDuration;
private final Optional<Integer> jitterSeconds;
private final Clock clock;
private final DnsQueue dnsQueue;
private final HashFunction hashFunction;
private final CloudTasksUtils cloudTasksUtils;
@Inject
ReadDnsQueueAction(
@Config("dnsTldUpdateBatchSize") int tldUpdateBatchSize,
@Config("readDnsQueueActionRuntime") Duration requestedMaximumDuration,
@Parameter(PARAM_JITTER_SECONDS) Optional<Integer> jitterSeconds,
Clock clock,
DnsQueue dnsQueue,
HashFunction hashFunction,
CloudTasksUtils cloudTasksUtils) {
this.tldUpdateBatchSize = tldUpdateBatchSize;
this.requestedMaximumDuration = requestedMaximumDuration;
this.jitterSeconds = jitterSeconds;
this.clock = clock;
this.dnsQueue = dnsQueue;
this.hashFunction = hashFunction;
this.cloudTasksUtils = cloudTasksUtils;
}
/** Container for items we pull out of the DNS pull queue and process for fanout. */
@AutoValue
abstract static class RefreshItem implements Comparable<RefreshItem> {
static RefreshItem create(TargetType type, String name, DateTime creationTime) {
return new AutoValue_ReadDnsQueueAction_RefreshItem(type, name, creationTime);
}
abstract TargetType type();
abstract String name();
abstract DateTime creationTime();
@Override
public int compareTo(RefreshItem other) {
return ComparisonChain.start()
.compare(this.type(), other.type())
.compare(this.name(), other.name())
.compare(this.creationTime(), other.creationTime())
.result();
}
}
/** Leases all tasks from the pull queue and creates per-tld update actions for them. */
@Override
public void run() {
DateTime requestedEndTime = clock.nowUtc().plus(requestedMaximumDuration);
ImmutableSet<String> tlds = Registries.getTlds();
while (requestedEndTime.isAfterNow()) {
List<TaskHandle> tasks = dnsQueue.leaseTasks(requestedMaximumDuration.plus(LEASE_PADDING));
logger.atInfo().log("Leased %d DNS update tasks.", tasks.size());
if (!tasks.isEmpty()) {
dispatchTasks(ImmutableSet.copyOf(tasks), tlds);
}
if (tasks.size() < dnsQueue.getLeaseTasksBatchSize()) {
return;
}
}
}
/** A set of tasks grouped based on the action to take on them. */
@AutoValue
abstract static class ClassifiedTasks {
/**
* List of tasks we want to keep in the queue (want to retry in the future).
*
* <p>Normally, any task we lease from the queue will be deleted - either because we are going
* to process it now (these tasks are part of refreshItemsByTld), or because these tasks are
* "corrupt" in some way (don't parse, don't have the required parameters etc.).
*
* <p>Some tasks however are valid, but can't be processed at the moment. These tasks will be
* kept in the queue for future processing.
*
* <p>This includes tasks belonging to paused TLDs (which we want to process once the TLD is
* unpaused) and tasks belonging to (currently) unknown TLDs.
*/
abstract ImmutableSet<TaskHandle> tasksToKeep();
/** The paused TLDs for which we found at least one refresh request. */
abstract ImmutableSet<String> pausedTlds();
/**
* The unknown TLDs for which we found at least one refresh request.
*
* <p>Unknown TLDs might have valid requests because the list of TLDs is heavily cached. Hence,
* when we add a new TLD - it is possible that some instances will not have that TLD in their
* list yet. We don't want to discard these tasks, so we wait a bit and retry them again.
*
* <p>This is less likely for production TLDs but is quite likely for test TLDs where we might
* create a TLD and then use it within seconds.
*/
abstract ImmutableSet<String> unknownTlds();
/**
* All the refresh items we need to actually fulfill, grouped by TLD.
*
* <p>By default, the multimap is ordered - this can be changed with the builder.
*
* <p>The items for each TLD will be grouped together, and domains and hosts will be grouped
* within a TLD.
*
* <p>The grouping and ordering of domains and hosts is not technically necessary, but a
* predictable ordering makes it possible to write detailed tests.
*/
abstract ImmutableSetMultimap<String, RefreshItem> refreshItemsByTld();
static Builder builder() {
Builder builder = new AutoValue_ReadDnsQueueAction_ClassifiedTasks.Builder();
builder
.refreshItemsByTldBuilder()
.orderKeysBy(Ordering.natural())
.orderValuesBy(Ordering.natural());
return builder;
}
@AutoValue.Builder
abstract static class Builder {
abstract ImmutableSet.Builder<TaskHandle> tasksToKeepBuilder();
abstract ImmutableSet.Builder<String> pausedTldsBuilder();
abstract ImmutableSet.Builder<String> unknownTldsBuilder();
abstract ImmutableSetMultimap.Builder<String, RefreshItem> refreshItemsByTldBuilder();
abstract ClassifiedTasks build();
}
}
/**
* Creates per-tld update actions for tasks leased from the pull queue.
*
* <p>Will return "irrelevant" tasks to the queue for future processing. "Irrelevant" tasks are
* tasks for paused TLDs or tasks for TLDs not part of {@link Registries#getTlds()}.
*/
private void dispatchTasks(ImmutableSet<TaskHandle> tasks, ImmutableSet<String> tlds) {
ClassifiedTasks classifiedTasks = classifyTasks(tasks, tlds);
if (!classifiedTasks.pausedTlds().isEmpty()) {
logger.atInfo().log(
"The dns-pull queue is paused for TLDs: %s.", classifiedTasks.pausedTlds());
}
if (!classifiedTasks.unknownTlds().isEmpty()) {
logger.atWarning().log(
"The dns-pull queue has unknown TLDs: %s.", classifiedTasks.unknownTlds());
}
bucketRefreshItems(classifiedTasks.refreshItemsByTld());
if (!classifiedTasks.tasksToKeep().isEmpty()) {
logger.atWarning().log(
"Keeping %d DNS update tasks in the queue.", classifiedTasks.tasksToKeep().size());
}
// Delete the tasks we don't want to see again from the queue.
//
// tasksToDelete includes both the tasks that we already fulfilled in this call (were part of
// refreshItemsByTld) and "corrupt" tasks we weren't able to parse correctly (this shouldn't
// happen, and we logged a "severe" error)
//
// We let the lease on the rest of the tasks (the tasks we want to keep) expire on its own. The
// reason we don't release these tasks back to the queue immediately is that we are going to
// immediately read another batch of tasks from the queue - and we don't want to get the same
// tasks again.
ImmutableSet<TaskHandle> tasksToDelete =
difference(tasks, classifiedTasks.tasksToKeep()).immutableCopy();
logger.atInfo().log("Removing %d DNS update tasks from the queue.", tasksToDelete.size());
dnsQueue.deleteTasks(tasksToDelete.asList());
logger.atInfo().log("Done processing DNS tasks.");
}
/**
* Classifies the given tasks based on what action we need to take on them.
*
* <p>Note that some tasks might appear in multiple categories (if multiple actions are to be
* taken on them) or in no category (if no action is to be taken on them)
*/
private static ClassifiedTasks classifyTasks(
ImmutableSet<TaskHandle> tasks, ImmutableSet<String> tlds) {
ClassifiedTasks.Builder classifiedTasksBuilder = ClassifiedTasks.builder();
// Read all tasks on the DNS pull queue and load them into the refresh item multimap.
for (TaskHandle task : tasks) {
try {
Map<String, String> params = ImmutableMap.copyOf(task.extractParams());
DateTime creationTime = DateTime.parse(params.get(DNS_TARGET_CREATE_TIME_PARAM));
String tld = params.get(PARAM_TLD);
if (tld == null) {
logger.atSevere().log(
"Discarding invalid DNS refresh request %s; no TLD specified.", task);
} else if (!tlds.contains(tld)) {
classifiedTasksBuilder.tasksToKeepBuilder().add(task);
classifiedTasksBuilder.unknownTldsBuilder().add(tld);
} else if (Registry.get(tld).getDnsPaused()) {
classifiedTasksBuilder.tasksToKeepBuilder().add(task);
classifiedTasksBuilder.pausedTldsBuilder().add(tld);
} else {
String typeString = params.get(DNS_TARGET_TYPE_PARAM);
String name = params.get(DNS_TARGET_NAME_PARAM);
TargetType type = TargetType.valueOf(typeString);
switch (type) {
case DOMAIN:
case HOST:
classifiedTasksBuilder
.refreshItemsByTldBuilder()
.put(tld, RefreshItem.create(type, name, creationTime));
break;
default:
logger.atSevere().log(
"Discarding DNS refresh request %s of type %s.", task, typeString);
break;
}
}
} catch (RuntimeException | UnsupportedEncodingException e) {
logger.atSevere().withCause(e).log("Discarding invalid DNS refresh request %s.", task);
}
}
return classifiedTasksBuilder.build();
}
/**
* Subdivides the tld to {@link RefreshItem} multimap into buckets by lock index, if applicable.
*
* <p>If the tld has numDnsPublishLocks <= 1, we enqueue all updates on the default lock 1 of 1.
*/
private void bucketRefreshItems(ImmutableSetMultimap<String, RefreshItem> refreshItemsByTld) {
// Loop through the multimap by TLD and generate refresh tasks for the hosts and domains for
// each configured DNS writer.
for (Map.Entry<String, Collection<RefreshItem>> tldRefreshItemsEntry
: refreshItemsByTld.asMap().entrySet()) {
String tld = tldRefreshItemsEntry.getKey();
int numPublishLocks = Registry.get(tld).getNumDnsPublishLocks();
// 1 lock or less implies no TLD-wide locks, simply enqueue everything under lock 1 of 1
if (numPublishLocks <= 1) {
enqueueUpdates(tld, 1, 1, tldRefreshItemsEntry.getValue());
} else {
tldRefreshItemsEntry.getValue().stream()
.collect(
toImmutableSetMultimap(
refreshItem -> getLockIndex(tld, numPublishLocks, refreshItem),
refreshItem -> refreshItem))
.asMap()
.forEach((key, value) -> enqueueUpdates(tld, key, numPublishLocks, value));
}
}
}
/**
* Returns the lock index for a given refreshItem.
*
* <p>We hash the second level domain for all records, to group in-bailiwick hosts (the only ones
* we refresh DNS for) with their superordinate domains. We use consistent hashing to determine
* the lock index because it gives us [0,N) bucketing properties out of the box, then add 1 to
* make indexes within [1,N].
*/
private int getLockIndex(String tld, int numPublishLocks, RefreshItem refreshItem) {
String domain = getSecondLevelDomain(refreshItem.name(), tld);
return Hashing.consistentHash(hashFunction.hashString(domain, UTF_8), numPublishLocks) + 1;
}
/**
* Creates DNS refresh tasks for all writers for the tld within a lock index and batches large
* updates into smaller chunks.
*/
private void enqueueUpdates(
String tld, int lockIndex, int numPublishLocks, Collection<RefreshItem> items) {
for (List<RefreshItem> chunk : Iterables.partition(items, tldUpdateBatchSize)) {
DateTime earliestCreateTime =
chunk.stream().map(RefreshItem::creationTime).min(Comparator.naturalOrder()).get();
for (String dnsWriter : Registry.get(tld).getDnsWriters()) {
Task task =
cloudTasksUtils.createPostTaskWithJitter(
PublishDnsUpdatesAction.PATH,
Service.BACKEND.toString(),
ImmutableMultimap.<String, String>builder()
.put(PARAM_TLD, tld)
.put(PARAM_DNS_WRITER, dnsWriter)
.put(PARAM_LOCK_INDEX, Integer.toString(lockIndex))
.put(PARAM_NUM_PUBLISH_LOCKS, Integer.toString(numPublishLocks))
.put(PARAM_PUBLISH_TASK_ENQUEUED, clock.nowUtc().toString())
.put(PARAM_REFRESH_REQUEST_CREATED, earliestCreateTime.toString())
.put(
PARAM_DOMAINS,
chunk.stream()
.filter(item -> item.type() == TargetType.DOMAIN)
.map(RefreshItem::name)
.collect(Collectors.joining(",")))
.put(
PARAM_HOSTS,
chunk.stream()
.filter(item -> item.type() == TargetType.HOST)
.map(RefreshItem::name)
.collect(Collectors.joining(",")))
.build(),
jitterSeconds);
cloudTasksUtils.enqueue(DNS_PUBLISH_PUSH_QUEUE_NAME, task);
}
}
}
}

View File

@@ -0,0 +1,205 @@
// Copyright 2023 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.dns;
import static com.google.common.collect.ImmutableSetMultimap.toImmutableSetMultimap;
import static google.registry.dns.DnsModule.PARAM_DNS_JITTER_SECONDS;
import static google.registry.dns.DnsModule.PARAM_DNS_WRITER;
import static google.registry.dns.DnsModule.PARAM_DOMAINS;
import static google.registry.dns.DnsModule.PARAM_HOSTS;
import static google.registry.dns.DnsModule.PARAM_LOCK_INDEX;
import static google.registry.dns.DnsModule.PARAM_NUM_PUBLISH_LOCKS;
import static google.registry.dns.DnsModule.PARAM_PUBLISH_TASK_ENQUEUED;
import static google.registry.dns.DnsModule.PARAM_REFRESH_REQUEST_TIME;
import static google.registry.dns.DnsUtils.DNS_PUBLISH_PUSH_QUEUE_NAME;
import static google.registry.dns.DnsUtils.deleteRequests;
import static google.registry.dns.DnsUtils.readAndUpdateRequestsWithLatestProcessTime;
import static google.registry.request.Action.Method.POST;
import static google.registry.request.RequestParameters.PARAM_TLD;
import static google.registry.util.DateTimeUtils.END_OF_TIME;
import static google.registry.util.DomainNameUtils.getSecondLevelDomain;
import static java.nio.charset.StandardCharsets.UTF_8;
import com.google.cloud.tasks.v2.Task;
import com.google.common.base.Joiner;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.flogger.FluentLogger;
import com.google.common.hash.HashFunction;
import com.google.common.hash.Hashing;
import google.registry.batch.CloudTasksUtils;
import google.registry.config.RegistryConfig.Config;
import google.registry.dns.DnsUtils.TargetType;
import google.registry.model.common.DnsRefreshRequest;
import google.registry.model.tld.Tld;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import java.util.Collection;
import java.util.Optional;
import javax.inject.Inject;
import org.joda.time.DateTime;
import org.joda.time.Duration;
/**
* Action for fanning out DNS refresh tasks by TLD, using data taken from {@link DnsRefreshRequest}
* table.
*/
@Action(
service = Service.BACKEND,
path = "/_dr/task/readDnsRefreshRequests",
automaticallyPrintOk = true,
method = POST,
auth = Auth.AUTH_INTERNAL_OR_ADMIN)
public final class ReadDnsRefreshRequestsAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private final int tldUpdateBatchSize;
private final Duration requestedMaximumDuration;
private final Optional<Integer> jitterSeconds;
private final String tld;
private final Clock clock;
private final HashFunction hashFunction;
private final CloudTasksUtils cloudTasksUtils;
@Inject
ReadDnsRefreshRequestsAction(
@Config("dnsTldUpdateBatchSize") int tldUpdateBatchSize,
@Config("readDnsRefreshRequestsActionRuntime") Duration requestedMaximumDuration,
@Parameter(PARAM_DNS_JITTER_SECONDS) Optional<Integer> jitterSeconds,
@Parameter(PARAM_TLD) String tld,
Clock clock,
HashFunction hashFunction,
CloudTasksUtils cloudTasksUtils) {
this.tldUpdateBatchSize = tldUpdateBatchSize;
this.requestedMaximumDuration = requestedMaximumDuration;
this.jitterSeconds = jitterSeconds;
this.tld = tld;
this.clock = clock;
this.hashFunction = hashFunction;
this.cloudTasksUtils = cloudTasksUtils;
}
/**
* Reads requests, up to the maximum requested runtime, and enqueues update batches built from
* these requests.
*/
@Override
public void run() {
if (Tld.get(tld).getDnsPaused()) {
logger.atInfo().log("The queue updated is paused for TLD: %s.", tld);
return;
}
DateTime requestedEndTime = clock.nowUtc().plus(requestedMaximumDuration);
// See getLockIndex(): requests are evenly distributed across [1, numDnsPublishLocks], so each
// bucket will contain roughly tldUpdateBatchSize requests.
int processBatchSize = tldUpdateBatchSize * Tld.get(tld).getNumDnsPublishLocks();
while (requestedEndTime.isAfter(clock.nowUtc())) {
ImmutableList<DnsRefreshRequest> requests =
readAndUpdateRequestsWithLatestProcessTime(
tld, requestedMaximumDuration, processBatchSize);
logger.atInfo().log("Read %d DNS update requests for TLD %s.", requests.size(), tld);
if (!requests.isEmpty()) {
processRequests(requests);
}
if (requests.size() < processBatchSize) {
return;
}
}
}
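As a worked example of the batch sizing above (hypothetical configuration values): with dnsTldUpdateBatchSize set to 50 and a TLD configured with 4 DNS publish locks, each loop iteration reads up to 200 requests; after bucketing by lock index, each of the (up to) 4 lock buckets yields one PublishDnsUpdatesAction task per configured DNS writer, carrying roughly 50 names.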
/**
* Subdivides the {@link DnsRefreshRequest}s into buckets by lock index, enqueues a Cloud Tasks
* task per bucket, and then deletes the requests in each bucket.
*/
void processRequests(Collection<DnsRefreshRequest> requests) {
int numPublishLocks = Tld.get(tld).getNumDnsPublishLocks();
requests.stream()
.collect(
toImmutableSetMultimap(
request -> getLockIndex(numPublishLocks, request), request -> request))
.asMap()
.forEach(
(lockIndex, bucketedRequests) -> {
try {
enqueueUpdates(lockIndex, numPublishLocks, bucketedRequests);
deleteRequests(bucketedRequests);
logger.atInfo().log(
"Processed %d DNS update requests for TLD %s.", bucketedRequests.size(), tld);
} catch (Exception e) {
// Log but continue to process the next bucket. The failed tasks will NOT be
// deleted and will be retried after the cooldown period has passed.
logger.atSevere().withCause(e).log(
"Error processing DNS update requests: %s", bucketedRequests);
}
});
}
/**
* Returns the lock index for a given {@link DnsRefreshRequest}.
*
* <p>We hash the second level domain for all records, to group in-bailiwick hosts (the only ones
* we refresh DNS for) with their superordinate domains. We use consistent hashing to determine
* the lock index because it gives us [0,N) bucketing properties out of the box, then add 1 to
* make indexes within [1,N].
*/
int getLockIndex(int numPublishLocks, DnsRefreshRequest request) {
String domain = getSecondLevelDomain(request.getName(), tld);
return Hashing.consistentHash(hashFunction.hashString(domain, UTF_8), numPublishLocks) + 1;
}
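A minimal worked sketch of the bucketing described above (hypothetical domain and lock count), using the same Guava primitives the action injects:

// Both "foo.example" and its in-bailiwick host "ns1.foo.example" share the second-level domain,
// so they always land in the same lock bucket.
HashFunction hashFunction = Hashing.murmur3_32_fixed();  // as bound in DnsModule
int numPublishLocks = 4;                                  // hypothetical TLD configuration
String secondLevelDomain = "foo.example";
int lockIndex =
    Hashing.consistentHash(
            hashFunction.hashString(secondLevelDomain, StandardCharsets.UTF_8), numPublishLocks)
        + 1;  // shifts the [0, numPublishLocks) bucket into [1, numPublishLocks]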
/** Creates DNS refresh tasks for all writers for the tld within a lock index. */
void enqueueUpdates(int lockIndex, int numPublishLocks, Collection<DnsRefreshRequest> requests) {
ImmutableList.Builder<String> domainsBuilder = new ImmutableList.Builder<>();
ImmutableList.Builder<String> hostsBuilder = new ImmutableList.Builder<>();
DateTime earliestRequestTime = END_OF_TIME;
for (DnsRefreshRequest request : requests) {
if (request.getRequestTime().isBefore(earliestRequestTime)) {
earliestRequestTime = request.getRequestTime();
}
String name = request.getName();
if (request.getType().equals(TargetType.DOMAIN)) {
domainsBuilder.add(name);
} else {
hostsBuilder.add(name);
}
}
ImmutableList<String> domains = domainsBuilder.build();
ImmutableList<String> hosts = hostsBuilder.build();
for (String dnsWriter : Tld.get(tld).getDnsWriters()) {
Task task =
cloudTasksUtils.createPostTaskWithJitter(
PublishDnsUpdatesAction.PATH,
Service.BACKEND,
ImmutableMultimap.<String, String>builder()
.put(PARAM_TLD, tld)
.put(PARAM_DNS_WRITER, dnsWriter)
.put(PARAM_LOCK_INDEX, Integer.toString(lockIndex))
.put(PARAM_NUM_PUBLISH_LOCKS, Integer.toString(numPublishLocks))
.put(PARAM_PUBLISH_TASK_ENQUEUED, clock.nowUtc().toString())
.put(PARAM_REFRESH_REQUEST_TIME, earliestRequestTime.toString())
.put(PARAM_DOMAINS, Joiner.on(',').join(domains))
.put(PARAM_HOSTS, Joiner.on(',').join(hosts))
.build(),
jitterSeconds);
cloudTasksUtils.enqueue(DNS_PUBLISH_PUSH_QUEUE_NAME, task);
logger.atInfo().log(
"Enqueued DNS update request for (TLD %s, lock %d) with %d domains and %d hosts.",
tld, lockIndex, domains.size(), hosts.size());
}
}
}

View File

@@ -14,9 +14,11 @@
package google.registry.dns;
import static google.registry.dns.DnsUtils.requestDomainDnsRefresh;
import static google.registry.dns.DnsUtils.requestHostDnsRefresh;
import static google.registry.model.EppResourceUtils.loadByForeignKey;
import google.registry.dns.DnsConstants.TargetType;
import google.registry.dns.DnsUtils.TargetType;
import google.registry.model.EppResource;
import google.registry.model.EppResource.ForeignKeyedEppResource;
import google.registry.model.annotations.ExternalMessagingName;
@@ -39,7 +41,6 @@ import javax.inject.Inject;
public final class RefreshDnsAction implements Runnable {
private final Clock clock;
private final DnsQueue dnsQueue;
private final String domainOrHostName;
private final TargetType type;
@@ -47,12 +48,10 @@ public final class RefreshDnsAction implements Runnable {
RefreshDnsAction(
@Parameter("domainOrHostName") String domainOrHostName,
@Parameter("type") TargetType type,
Clock clock,
DnsQueue dnsQueue) {
Clock clock) {
this.domainOrHostName = domainOrHostName;
this.type = type;
this.clock = clock;
this.dnsQueue = dnsQueue;
}
@Override
@@ -63,11 +62,11 @@ public final class RefreshDnsAction implements Runnable {
switch (type) {
case DOMAIN:
loadAndVerifyExistence(Domain.class, domainOrHostName);
dnsQueue.addDomainRefreshTask(domainOrHostName);
requestDomainDnsRefresh(domainOrHostName);
break;
case HOST:
verifyHostIsSubordinate(loadAndVerifyExistence(Host.class, domainOrHostName));
dnsQueue.addHostRefreshTask(domainOrHostName);
requestHostDnsRefresh(domainOrHostName);
break;
default:
throw new BadRequestException("Unsupported type: " + type);

View File

@@ -14,6 +14,7 @@
package google.registry.dns;
import static google.registry.dns.DnsUtils.requestDomainDnsRefresh;
import static google.registry.dns.RefreshDnsOnHostRenameAction.PATH;
import static google.registry.model.EppResourceUtils.getLinkedDomainKeys;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
@@ -45,14 +46,11 @@ public class RefreshDnsOnHostRenameAction implements Runnable {
private final VKey<Host> hostKey;
private final Response response;
private final DnsQueue dnsQueue;
@Inject
RefreshDnsOnHostRenameAction(
@Parameter(PARAM_HOST_KEY) String hostKey, Response response, DnsQueue dnsQueue) {
RefreshDnsOnHostRenameAction(@Parameter(PARAM_HOST_KEY) String hostKey, Response response) {
this.hostKey = VKey.createEppVKeyFromString(hostKey);
this.response = response;
this.dnsQueue = dnsQueue;
}
@Override
@@ -76,7 +74,7 @@ public class RefreshDnsOnHostRenameAction implements Runnable {
.stream()
.map(domainKey -> tm().loadByKey(domainKey))
.filter(Domain::shouldPublishToDns)
.forEach(domain -> dnsQueue.addDomainRefreshTask(domain.getDomainName()));
.forEach(domain -> requestDomainDnsRefresh(domain.getDomainName()));
}
if (!hostValid) {

View File

@@ -16,6 +16,7 @@ package google.registry.dns.writer.clouddns;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static google.registry.dns.DnsUtils.getDnsAPlusAAAATtlForHost;
import static google.registry.model.EppResourceUtils.loadByForeignKey;
import static google.registry.util.DomainNameUtils.getSecondLevelDomain;
@@ -39,7 +40,8 @@ import google.registry.dns.writer.DnsWriterZone;
import google.registry.model.domain.Domain;
import google.registry.model.domain.secdns.DomainDsData;
import google.registry.model.host.Host;
import google.registry.model.tld.Registries;
import google.registry.model.tld.Tld;
import google.registry.model.tld.Tlds;
import google.registry.util.Clock;
import google.registry.util.Concurrent;
import google.registry.util.Retrier;
@@ -131,6 +133,7 @@ public class CloudDnsWriter extends BaseDnsWriter {
return;
}
Tld tld = Tld.get(domain.get().getTld());
ImmutableSet.Builder<ResourceRecordSet> domainRecords = new ImmutableSet.Builder<>();
// Construct DS records (if any).
@@ -145,7 +148,7 @@ public class CloudDnsWriter extends BaseDnsWriter {
domainRecords.add(
new ResourceRecordSet()
.setName(absoluteDomainName)
.setTtl((int) defaultDsTtl.getStandardSeconds())
.setTtl((int) tld.getDnsDsTtl().orElse(defaultDsTtl).getStandardSeconds())
.setType("DS")
.setKind("dns#resourceRecordSet")
.setRrdatas(ImmutableList.copyOf(dsRrData)));
@@ -170,7 +173,7 @@ public class CloudDnsWriter extends BaseDnsWriter {
domainRecords.add(
new ResourceRecordSet()
.setName(absoluteDomainName)
.setTtl((int) defaultNsTtl.getStandardSeconds())
.setTtl((int) tld.getDnsNsTtl().orElse(defaultNsTtl).getStandardSeconds())
.setType("NS")
.setKind("dns#resourceRecordSet")
.setRrdatas(ImmutableList.copyOf(nsRrData)));
@@ -216,7 +219,7 @@ public class CloudDnsWriter extends BaseDnsWriter {
domainRecords.add(
new ResourceRecordSet()
.setName(absoluteHostName)
.setTtl((int) defaultATtl.getStandardSeconds())
.setTtl((int) getDnsAPlusAAAATtlForHost(hostName, defaultATtl))
.setType("A")
.setKind("dns#resourceRecordSet")
.setRrdatas(ImmutableList.copyOf(aRrData)));
@@ -226,7 +229,7 @@ public class CloudDnsWriter extends BaseDnsWriter {
domainRecords.add(
new ResourceRecordSet()
.setName(absoluteHostName)
.setTtl((int) defaultATtl.getStandardSeconds())
.setTtl((int) getDnsAPlusAAAATtlForHost(hostName, defaultATtl))
.setType("AAAA")
.setKind("dns#resourceRecordSet")
.setRrdatas(ImmutableList.copyOf(aaaaRrData)));
@@ -245,7 +248,7 @@ public class CloudDnsWriter extends BaseDnsWriter {
public void publishHost(String hostName) {
// Get the superordinate domain name of the host.
InternetDomainName host = InternetDomainName.from(hostName);
Optional<InternetDomainName> tld = Registries.findTldForName(host);
Optional<InternetDomainName> tld = Tlds.findTldForName(host);
// Host not managed by our registry, no need to update DNS.
if (!tld.isPresent()) {
@@ -276,11 +279,12 @@ public class CloudDnsWriter extends BaseDnsWriter {
}
/** Returns the glue records for in-bailiwick nameservers for the given domain+records. */
private Stream<String> filterGlueRecords(String domainName, Stream<ResourceRecordSet> records) {
private static Stream<String> filterGlueRecords(
String domainName, Stream<ResourceRecordSet> records) {
return records
.filter(record -> record.getType().equals("NS"))
.filter(record -> "NS".equals(record.getType()))
.flatMap(record -> record.getRrdatas().stream())
.filter(hostName -> hostName.endsWith("." + domainName) && !hostName.equals(domainName));
.filter(hostName -> hostName.endsWith('.' + domainName) && !hostName.equals(domainName));
}
/** Mutate the zone with the provided map of hostnames to desired DNS records. */
@@ -363,8 +367,8 @@ public class CloudDnsWriter extends BaseDnsWriter {
* <p>This call should be used in conjunction with {@link #getResourceRecordsForDomains} in a
* get-and-set retry loop.
*
* <p>See {@link "https://cloud.google.com/dns/troubleshooting"} for a list of errors produced by
* the Google Cloud DNS API.
* <p>See {@link "<a href="https://cloud.google.com/dns/troubleshooting">Troubleshoot Cloud
* DNS</a>"} for a list of errors produced by the Google Cloud DNS API.
*
* @throws ZoneStateException if the operation could not be completely successfully because the
* records to delete do not exist, already exist or have been modified with different
@@ -417,12 +421,12 @@ public class CloudDnsWriter extends BaseDnsWriter {
* @param hostName the fully qualified hostname
*/
private static String getAbsoluteHostName(String hostName) {
return hostName.endsWith(".") ? hostName : hostName + ".";
return hostName.endsWith(".") ? hostName : hostName + '.';
}
/** Zone state on Cloud DNS does not match the expected state. */
static class ZoneStateException extends RuntimeException {
public ZoneStateException(String reason) {
ZoneStateException(String reason) {
super("Zone state on Cloud DNS does not match the expected state: " + reason);
}
}

View File

@@ -22,7 +22,7 @@ import dagger.Provides;
import dagger.multibindings.IntoMap;
import dagger.multibindings.IntoSet;
import dagger.multibindings.StringKey;
import google.registry.config.CredentialModule.DefaultCredential;
import google.registry.config.CredentialModule.ApplicationDefaultCredential;
import google.registry.config.RegistryConfig.Config;
import google.registry.dns.writer.DnsWriter;
import google.registry.util.GoogleCredentialsBundle;
@@ -35,7 +35,7 @@ public abstract class CloudDnsWriterModule {
@Provides
static Dns provideDns(
@DefaultCredential GoogleCredentialsBundle credentialsBundle,
@ApplicationDefaultCredential GoogleCredentialsBundle credentialsBundle,
@Config("projectId") String projectId,
@Config("cloudDnsRootUrl") Optional<String> rootUrl,
@Config("cloudDnsServicePath") Optional<String> servicePath) {

View File

@@ -18,6 +18,7 @@ import static com.google.common.base.Preconditions.checkState;
import static com.google.common.base.Verify.verify;
import static com.google.common.collect.Sets.intersection;
import static com.google.common.collect.Sets.union;
import static google.registry.dns.DnsUtils.getDnsAPlusAAAATtlForHost;
import static google.registry.model.EppResourceUtils.loadByForeignKey;
import com.google.common.base.Joiner;
@@ -30,7 +31,8 @@ import google.registry.dns.writer.DnsWriterZone;
import google.registry.model.domain.Domain;
import google.registry.model.domain.secdns.DomainDsData;
import google.registry.model.host.Host;
import google.registry.model.tld.Registries;
import google.registry.model.tld.Tld;
import google.registry.model.tld.Tlds;
import google.registry.util.Clock;
import java.io.IOException;
import java.net.Inet4Address;
@@ -152,7 +154,7 @@ public class DnsUpdateWriter extends BaseDnsWriter {
// Get the superordinate domain name of the host.
InternetDomainName host = InternetDomainName.from(hostName);
ImmutableList<String> hostParts = host.parts();
Optional<InternetDomainName> tld = Registries.findTldForName(host);
Optional<InternetDomainName> tld = Tlds.findTldForName(host);
// host not managed by our registry, no need to update DNS.
if (!tld.isPresent()) {
@@ -185,12 +187,13 @@ public class DnsUpdateWriter extends BaseDnsWriter {
private RRset makeDelegationSignerSet(Domain domain) {
RRset signerSet = new RRset();
Tld tld = Tld.get(domain.getTld());
for (DomainDsData signerData : domain.getDsData()) {
DSRecord dsRecord =
new DSRecord(
toAbsoluteName(domain.getDomainName()),
DClass.IN,
dnsDefaultDsTtl.getStandardSeconds(),
tld.getDnsDsTtl().orElse(dnsDefaultDsTtl).getStandardSeconds(),
signerData.getKeyTag(),
signerData.getAlgorithm(),
signerData.getDigestType(),
@@ -224,12 +227,13 @@ public class DnsUpdateWriter extends BaseDnsWriter {
private RRset makeNameServerSet(Domain domain) {
RRset nameServerSet = new RRset();
Tld tld = Tld.get(domain.getTld());
for (String hostName : domain.loadNameserverHostNames()) {
NSRecord record =
new NSRecord(
toAbsoluteName(domain.getDomainName()),
DClass.IN,
dnsDefaultNsTtl.getStandardSeconds(),
tld.getDnsNsTtl().orElse(dnsDefaultNsTtl).getStandardSeconds(),
toAbsoluteName(hostName));
nameServerSet.addRR(record);
}
@@ -244,7 +248,7 @@ public class DnsUpdateWriter extends BaseDnsWriter {
new ARecord(
toAbsoluteName(host.getHostName()),
DClass.IN,
dnsDefaultATtl.getStandardSeconds(),
getDnsAPlusAAAATtlForHost(host.getHostName(), dnsDefaultATtl),
address);
addressSet.addRR(record);
}
@@ -260,7 +264,7 @@ public class DnsUpdateWriter extends BaseDnsWriter {
new AAAARecord(
toAbsoluteName(host.getHostName()),
DClass.IN,
dnsDefaultATtl.getStandardSeconds(),
getDnsAPlusAAAATtlForHost(host.getHostName(), dnsDefaultATtl),
address);
addressSet.addRR(record);
}
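
The DnsUpdateWriter hunks above stop using the flat dnsDefaultDsTtl/dnsDefaultNsTtl/dnsDefaultATtl values directly and instead prefer a per-TLD (or per-host, for A/AAAA) override, falling back to the configured default. A toy sketch of that fallback, using Joda-Time as the surrounding code does (method and variable names here are hypothetical):

// Sketch only: shows the Optional-override-with-default pattern, not the real TTL plumbing.
import java.util.Optional;
import org.joda.time.Duration;

class TtlFallbackSketch {
  static long effectiveTtlSeconds(Optional<Duration> perTldOverride, Duration configuredDefault) {
    return perTldOverride.orElse(configuredDefault).getStandardSeconds();
  }

  public static void main(String[] args) {
    Duration defaultNsTtl = Duration.standardHours(3);
    // No override configured for this TLD: fall back to the default (prints 10800).
    System.out.println(effectiveTtlSeconds(Optional.empty(), defaultNsTtl));
    // Override configured for this TLD: it wins (prints 300).
    System.out.println(effectiveTtlSeconds(Optional.of(Duration.standardMinutes(5)), defaultNsTtl));
  }
}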


@@ -1,5 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<taskentries>
<entries>
<task>
<url>/_dr/task/rdeStaging</url>
<name>rdeStaging</name>
@@ -80,12 +80,13 @@
</task>
<task>
<url><![CDATA[/_dr/task/expandRecurringBillingEvents?advanceCursor]]></url>
<name>expandRecurringBillingEvents</name>
<url><![CDATA[/_dr/task/expandBillingRecurrences?advanceCursor]]></url>
<name>expandBillingRecurrences</name>
<description>
This job runs an action that creates synthetic OneTime billing events from Recurring billing
events. Events are created for all instances of Recurring billing events that should exist
between the RECURRING_BILLING cursor's time and the execution time of the action.
This job runs an action that creates synthetic one-time billing events
from billing recurrences. Events are created for all recurrences that
should exist between the RECURRING_BILLING cursor's time and the execution
time of the action.
</description>
<schedule>0 3 * * *</schedule>
</task>
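
The entry above describes a cursor-bounded expansion: only recurrence firings that fall between the RECURRING_BILLING cursor and the action's execution time produce synthetic one-time events. A toy sketch of that window check, with hypothetical types that are not the real billing model:

// Sketch only: illustrates the [cursor, now) window described above.
import java.time.Instant;
import java.util.List;

class RecurrenceWindowSketch {
  record Recurrence(String domain, Instant nextFiring) {}

  static List<String> expand(List<Recurrence> recurrences, Instant cursor, Instant now) {
    return recurrences.stream()
        .filter(r -> !r.nextFiring().isBefore(cursor) && r.nextFiring().isBefore(now))
        .map(r -> "synthetic one-time event for " + r.domain())
        .toList();
  }

  public static void main(String[] args) {
    Instant cursor = Instant.parse("2023-05-01T03:00:00Z");
    Instant now = Instant.parse("2023-05-02T03:00:00Z");
    List<Recurrence> recurrences =
        List.of(
            new Recurrence("in-window.tld", Instant.parse("2023-05-01T12:00:00Z")),
            new Recurrence("not-yet-due.tld", Instant.parse("2023-06-01T12:00:00Z")));
    expand(recurrences, cursor, now).forEach(System.out::println); // Only in-window.tld is expanded.
  }
}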
@@ -129,12 +130,12 @@
</task>
<task>
<url><![CDATA[/_dr/cron/readDnsQueue?jitterSeconds=45]]></url>
<name>readDnsQueue</name>
<url>
<![CDATA[/_dr/cron/fanout?queue=dns-refresh&forEachRealTld&forEachTestTld&endpoint=/_dr/task/readDnsRefreshRequests&dnsJitterSeconds=45]]></url>
<name>readDnsRefreshRequests</name>
<description>
Lease all tasks from the dns-pull queue, group by TLD, and invoke PublishDnsUpdates for each
group.
Enqueue a ReadDnsRefreshRequestAction for each TLD.
</description>
<schedule>*/1 * * * *</schedule>
</task>
</taskentries>
</entries>


@@ -1,5 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--TODO: ptkach - Remove this file after cloud scheduler is fully up and running -->
<cronentries>
</cronentries>


@@ -151,10 +151,10 @@
<url-pattern>/_dr/task/nordnVerify</url-pattern>
</servlet-mapping>
<!-- Reads the DNS push and pull queues and kick off the appropriate tasks to update zone. -->
<!-- Reads the DNS refresh requests and kick off the appropriate tasks to update zone. -->
<servlet-mapping>
<servlet-name>backend-servlet</servlet-name>
<url-pattern>/_dr/cron/readDnsQueue</url-pattern>
<url-pattern>/_dr/task/readDnsRefreshRequests</url-pattern>
</servlet-mapping>
<!-- Publishes DNS updates. -->
@@ -240,10 +240,10 @@
<url-pattern>/_dr/task/refreshDnsOnHostRename</url-pattern>
</servlet-mapping>
<!-- Action to expand recurring billing events into OneTimes. -->
<!-- Action to expand BillingRecurrences into BillingEvents. -->
<servlet-mapping>
<servlet-name>backend-servlet</servlet-name>
<url-pattern>/_dr/task/expandRecurringBillingEvents</url-pattern>
<url-pattern>/_dr/task/expandBillingRecurrences</url-pattern>
</servlet-mapping>
<!-- Background action to delete domains past end of autorenewal. -->


@@ -0,0 +1,113 @@
<?xml version="1.0" encoding="UTF-8"?>
<entries>
<!-- Queue template with all supported params -->
<!-- More information - https://cloud.google.com/sdk/gcloud/reference/tasks/queues/create -->
<!--
<queue>
<name></name>
<max-attempts></max-attempts>
<max-backoff></max-backoff>
<max-concurrent-dispatches></max-concurrent-dispatches>
<max-dispatches-per-second></max-dispatches-per-second>
<max-doublings></max-doublings>
<max-retry-duration></max-retry-duration>
<min-backoff></min-backoff>
</queue>
-->
<!-- Queue for reading DNS update requests and batching them off to the dns-publish queue. -->
<queue>
<name>dns-refresh</name>
<max-dispatches-per-second>100</max-dispatches-per-second>
</queue>
<!-- Queue for publishing DNS updates in batches. -->
<queue>
<name>dns-publish</name>
<max-dispatches-per-second>100</max-dispatches-per-second>
<!-- 30 sec backoff increasing linearly up to 30 minutes. -->
<min-backoff>30s</min-backoff>
<max-backoff>1800s</max-backoff>
<max-doublings>0</max-doublings>
</queue>
<!-- Queue for uploading RDE deposits to the escrow provider. -->
<queue>
<name>rde-upload</name>
<max-dispatches-per-second>0.166666667</max-dispatches-per-second>
<max-concurrent-dispatches>5</max-concurrent-dispatches>
<max-retry-duration>14400s</max-retry-duration>
</queue>
<!-- Queue for uploading RDE reports to ICANN. -->
<queue>
<name>rde-report</name>
<max-dispatches-per-second>1</max-dispatches-per-second>
<max-concurrent-dispatches>1</max-concurrent-dispatches>
<max-retry-duration>14400s</max-retry-duration>
</queue>
<!-- Queue for copying BRDA deposits to GCS. -->
<queue>
<name>brda</name>
<max-dispatches-per-second>0.016666667</max-dispatches-per-second>
<max-concurrent-dispatches>10</max-concurrent-dispatches>
<max-retry-duration>82800s</max-retry-duration>
</queue>
<!-- Queue for tasks that trigger domain DNS update upon host rename. -->
<queue>
<name>async-host-rename</name>
<max-dispatches-per-second>1</max-dispatches-per-second>
</queue>
<!-- Queue for tasks that wait for a Beam pipeline to complete (i.e. Spec11 and invoicing). -->
<queue>
<name>beam-reporting</name>
<max-dispatches-per-second>0.016666667</max-dispatches-per-second>
<max-concurrent-dispatches>1</max-concurrent-dispatches>
<max-attempts>5</max-attempts>
<min-backoff>180s</min-backoff>
<max-backoff>180s</max-backoff>
</queue>
<!-- Queue for tasks that communicate with TMCH MarksDB webserver. -->
<queue>
<name>marksdb</name>
<max-dispatches-per-second>0.016666667</max-dispatches-per-second>
<max-concurrent-dispatches>1</max-concurrent-dispatches>
<max-retry-duration>39600s</max-retry-duration> <!-- cron interval minus hour -->
</queue>
<!-- Queue for tasks to produce LORDN CSV reports, populated by a Cloud Scheduler fanout job. -->
<queue>
<name>nordn</name>
<max-dispatches-per-second>1</max-dispatches-per-second>
<max-concurrent-dispatches>10</max-concurrent-dispatches>
<max-retry-duration>39600s</max-retry-duration> <!-- cron interval minus hour -->
</queue>
<!-- Queue for tasks that sync data to Google Spreadsheets. -->
<queue>
<name>sheet</name>
<max-dispatches-per-second>1</max-dispatches-per-second>
<!-- max-concurrent-dispatches is intentionally omitted. -->
<max-retry-duration>3600s</max-retry-duration>
</queue>
<!-- Queue for infrequent cron tasks (i.e. hourly or less often) that should retry three times on failure. -->
<queue>
<name>retryable-cron-tasks</name>
<max-dispatches-per-second>1</max-dispatches-per-second>
<max-attempts>3</max-attempts>
</queue>
<!-- Queue for async actions that should be run at some point in the future. -->
<queue>
<name>async-actions</name>
<max-dispatches-per-second>1</max-dispatches-per-second>
<max-concurrent-dispatches>5</max-concurrent-dispatches>
</queue>
</entries>
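
For reference, a hedged sketch of how one of the queue definitions above (dns-publish) would look if created through the Cloud Tasks v2 Java client; project and location are placeholders, and how these queues are actually provisioned is outside this diff:

// Sketch only: maps the dns-publish <queue> settings above onto the Cloud Tasks v2 client.
import com.google.cloud.tasks.v2.CloudTasksClient;
import com.google.cloud.tasks.v2.LocationName;
import com.google.cloud.tasks.v2.Queue;
import com.google.cloud.tasks.v2.QueueName;
import com.google.cloud.tasks.v2.RateLimits;
import com.google.cloud.tasks.v2.RetryConfig;
import com.google.protobuf.Duration;

class CreateDnsPublishQueueSketch {
  public static void main(String[] args) throws Exception {
    String project = "my-project"; // placeholder
    String location = "us-central1"; // placeholder
    try (CloudTasksClient client = CloudTasksClient.create()) {
      Queue queue =
          Queue.newBuilder()
              .setName(QueueName.of(project, location, "dns-publish").toString())
              .setRateLimits(RateLimits.newBuilder().setMaxDispatchesPerSecond(100).build())
              .setRetryConfig(
                  RetryConfig.newBuilder()
                      // 30 sec backoff increasing linearly (max-doublings 0) up to 30 minutes.
                      .setMinBackoff(Duration.newBuilder().setSeconds(30).build())
                      .setMaxBackoff(Duration.newBuilder().setSeconds(1800).build())
                      .setMaxDoublings(0)
                      .build())
              .build();
      client.createQueue(LocationName.of(project, location), queue);
    }
  }
}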


@@ -1,125 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<queue-entries>
<queue>
<name>dns-pull</name>
<mode>pull</mode>
</queue>
<queue>
<name>dns-publish</name>
<rate>100/s</rate>
<bucket-size>100</bucket-size>
<!-- 30 sec backoff increasing linearly up to 30 minutes. -->
<retry-parameters>
<min-backoff-seconds>30</min-backoff-seconds>
<max-backoff-seconds>1800</max-backoff-seconds>
<max-doublings>0</max-doublings>
</retry-parameters>
</queue>
<queue>
<name>rde-upload</name>
<rate>10/m</rate>
<bucket-size>50</bucket-size>
<max-concurrent-requests>5</max-concurrent-requests>
<retry-parameters>
<task-age-limit>4h</task-age-limit>
</retry-parameters>
</queue>
<queue>
<name>rde-report</name>
<rate>1/s</rate>
<max-concurrent-requests>1</max-concurrent-requests>
<retry-parameters>
<task-age-limit>4h</task-age-limit>
</retry-parameters>
</queue>
<queue>
<name>brda</name>
<rate>1/m</rate>
<max-concurrent-requests>10</max-concurrent-requests>
<retry-parameters>
<task-age-limit>23h</task-age-limit>
</retry-parameters>
</queue>
<!-- Queue for tasks that trigger domain DNS update upon host rename. -->
<queue>
<name>async-host-rename</name>
<rate>1/s</rate>
</queue>
<!-- Queue for tasks that wait for a Beam pipeline to complete (i.e. Spec11 and invoicing). -->
<queue>
<name>beam-reporting</name>
<rate>1/m</rate>
<max-concurrent-requests>1</max-concurrent-requests>
<retry-parameters>
<task-retry-limit>5</task-retry-limit>
<min-backoff-seconds>180</min-backoff-seconds>
<max-backoff-seconds>180</max-backoff-seconds>
</retry-parameters>
</queue>
<!-- Queue for tasks that communicate with TMCH MarksDB webserver. -->
<queue>
<name>marksdb</name>
<rate>1/m</rate>
<max-concurrent-requests>1</max-concurrent-requests>
<retry-parameters>
<task-age-limit>11h</task-age-limit> <!-- cron interval minus hour -->
</retry-parameters>
</queue>
<!-- Queue for tasks to produce LORDN CSV reports, either by the query or queue method. -->
<queue>
<name>nordn</name>
<rate>1/s</rate>
<max-concurrent-requests>10</max-concurrent-requests>
<retry-parameters>
<task-age-limit>11h</task-age-limit> <!-- cron interval minus hour -->
</retry-parameters>
</queue>
<!-- Queue for LORDN Claims CSV rows to be periodically queried and then uploaded in batches. -->
<queue>
<name>lordn-claims</name>
<mode>pull</mode>
</queue>
<!-- Queue for LORDN Sunrise CSV rows to be periodically queried and then uploaded in batches. -->
<queue>
<name>lordn-sunrise</name>
<mode>pull</mode>
</queue>
<!-- Queue for tasks that sync data to Google Spreadsheets. -->
<queue>
<name>sheet</name>
<rate>1/s</rate>
<!-- max-concurrent-requests is intentionally omitted. -->
<retry-parameters>
<task-age-limit>1h</task-age-limit>
</retry-parameters>
</queue>
<!-- Queue for infrequent cron tasks (i.e. hourly or less often) that should retry three times on failure. -->
<queue>
<name>retryable-cron-tasks</name>
<rate>1/s</rate>
<retry-parameters>
<task-retry-limit>3</task-retry-limit>
</retry-parameters>
</queue>
<!-- Queue for async actions that should be run at some point in the future. -->
<queue>
<name>async-actions</name>
<rate>1/s</rate>
<max-concurrent-requests>5</max-concurrent-requests>
</queue>
</queue-entries>


@@ -1,5 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<taskentries>
<entries>
<!--
/cron/fanout params:
@@ -130,11 +130,11 @@
</task>
<task>
<url><![CDATA[/_dr/cron/readDnsQueue?jitterSeconds=45]]></url>
<name>readDnsQueue</name>
<url>
<![CDATA[/_dr/cron/fanout?queue=dns-refresh&forEachRealTld&forEachTestTld&endpoint=/_dr/task/readDnsRefreshRequests&dnsJitterSeconds=45]]></url>
<name>readDnsRefreshRequests</name>
<description>
Lease all tasks from the dns-pull queue, group by TLD, and invoke PublishDnsUpdates for each
group.
Enqueue a ReadDnsRefreshRequestAction for each TLD.
</description>
<schedule>*/1 * * * *</schedule>
</task>
@@ -148,16 +148,4 @@
</description>
<schedule>7 3 * * *</schedule>
</task>
<!--
The next two wipeout jobs are required when crash has production data.
-->
<task>
<url><![CDATA[/_dr/task/wipeOutCloudSql]]></url>
<name>wipeOutCloudSql</name>
<description>
This job runs an action that deletes all data in Cloud SQL.
</description>
<schedule>7 3 * * 6</schedule>
</task>
</taskentries>
</entries>


@@ -1,5 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--TODO: ptkach - Remove this file after cloud scheduler is fully up and running -->
<cronentries>
</cronentries>


@@ -1,5 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<taskentries>
<entries>
<task>
<url>/_dr/task/rdeStaging</url>
<name>rdeStaging</name>
@@ -90,7 +90,18 @@
<schedule>0 */1 * * *</schedule>
</task>
<!-- TODO: @ptkach add resaveAllEppResourcesPipelineAction https://cs.opensource.google/nomulus/nomulus/+/master:core/src/main/java/google/registry/env/production/default/WEB-INF/cron.xml;l=105 -->
<task>
<url><![CDATA[/_dr/task/resaveAllEppResourcesPipeline?fast=true]]></url>
<name>resaveAllEppResourcesPipeline</name>
<description>
This job resaves all our resources, projected in time to "now".
</description>
<!--
Deviation from cron tasks schedule: 1st monday of month 09:00 is replaced
with 1st of the month 09:00
-->
<schedule>0 9 1 * *</schedule>
</task>
<task>
<url><![CDATA[/_dr/task/updateRegistrarRdapBaseUrls]]></url>
@@ -111,12 +122,13 @@
</task>
<task>
<url><![CDATA[/_dr/task/expandRecurringBillingEvents?advanceCursor]]></url>
<name>expandRecurringBillingEvents</name>
<url><![CDATA[/_dr/task/expandBillingRecurrences?advanceCursor]]></url>
<name>expandBillingRecurrences</name>
<description>
This job runs an action that creates synthetic OneTime billing events from Recurring billing
events. Events are created for all instances of Recurring billing events that should exist
between the RECURRING_BILLING cursor's time and the execution time of the action.
This job runs an action that creates synthetic one-time billing events
from billing recurrences. Events are created for all recurrences that
should exist between the RECURRING_BILLING cursor's time and the execution
time of the action.
</description>
<schedule>0 3 * * *</schedule>
</task>
@@ -162,30 +174,6 @@
<schedule>0 */12 * * *</schedule>
</task>
<task>
<url><![CDATA[/_dr/cron/fanout?queue=nordn&endpoint=/_dr/task/nordnUpload&forEachRealTld&lordnPhase=sunrise&pullQueue]]></url>
<name>nordnUploadSunrisePullQueue</name>
<description>
This job uploads LORDN Sunrise CSV files for each TLD to MarksDB using
pull queue. It should be run at most every three hours, or at absolute
minimum every 26 hours.
</description>
<!-- This may be set anywhere between "every 3 hours" and "every 25 hours". -->
<schedule>0 */12 * * *</schedule>
</task>
<task>
<url><![CDATA[/_dr/cron/fanout?queue=nordn&endpoint=/_dr/task/nordnUpload&forEachRealTld&lordnPhase=claims&pullQueue]]></url>
<name>nordnUploadClaimsPullQueue</name>
<description>
This job uploads LORDN Claims CSV files for each TLD to MarksDB using pull
queue. It should be run at most every three hours, or at absolute minimum
every 26 hours.
</description>
<!-- This may be set anywhere between "every 3 hours" and "every 25 hours". -->
<schedule>0 */12 * * *</schedule>
</task>
<task>
<url><![CDATA[/_dr/cron/fanout?queue=retryable-cron-tasks&endpoint=/_dr/task/deleteProberData&runInEmpty]]></url>
<name>deleteProberData</name>
@@ -214,11 +202,11 @@
</task>
<task>
<url><![CDATA[/_dr/cron/readDnsQueue?jitterSeconds=45]]></url>
<name>readDnsQueue</name>
<url>
<![CDATA[/_dr/cron/fanout?queue=dns-refresh&forEachRealTld&forEachTestTld&endpoint=/_dr/task/readDnsRefreshRequests&dnsJitterSeconds=45]]></url>
<name>readDnsRefreshRequests</name>
<description>
Lease all tasks from the dns-pull queue, group by TLD, and invoke PublishDnsUpdates for each
group.
Enqueue a ReadDnsRefreshRequestAction for each TLD.
</description>
<schedule>*/1 * * * *</schedule>
</task>
@@ -256,7 +244,7 @@
reports to the associated registrars' drive folders.
See GenerateInvoicesAction for more details.
</description>
<!--WARNING: This must occur AFTER expandRecurringBillingEvents and AFTER exportSnapshot, as
<!--WARNING: This must occur AFTER expandBillingRecurrences and AFTER exportSnapshot, as
it uses Bigquery as the source of truth for billable events. ExportSnapshot usually takes
about 2 hours to complete, so we give 11 hours to be safe. Normally, we give 24+ hours (see
icannReportingStaging), but the invoicing team prefers receiving the e-mail on the first of
@@ -285,4 +273,4 @@
</description>
<schedule>0 15 * * 1</schedule>
</task>
</taskentries>
</entries>


@@ -1,4 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--TODO: ptkach - Remove this file after cloud scheduler is fully up and running -->
<cronentries>
</cronentries>


@@ -1,5 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<taskentries>
<entries>
<task>
<url>/_dr/task/rdeStaging</url>
<name>rdeStaging</name>
@@ -12,11 +12,11 @@
</task>
<task>
<url><![CDATA[/_dr/cron/readDnsQueue?jitterSeconds=45]]></url>
<name>readDnsQueue</name>
<url>
<![CDATA[/_dr/cron/fanout?queue=dns-refresh&forEachRealTld&forEachTestTld&endpoint=/_dr/task/readDnsRefreshRequests&dnsJitterSeconds=45]]></url>
<name>readDnsRefreshRequests</name>
<description>
Lease all tasks from the dns-pull queue, group by TLD, and invoke PublishDnsUpdates for each
group.
Enqueue a ReadDnsRefreshRequestAction for each TLD.
</description>
<schedule>*/1 * * * *</schedule>
</task>
@@ -68,4 +68,4 @@
</description>
<schedule>7 3 * * 6</schedule>
</task>
</taskentries>
</entries>


@@ -1,5 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--TODO: ptkach - Remove this file after cloud scheduler is fully up and running -->
<cronentries>
</cronentries>


@@ -1,5 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<taskentries>
<entries>
<task>
<url>/_dr/task/rdeStaging</url>
<name>rdeStaging</name>
@@ -11,9 +11,6 @@
<!--
This only needs to run once per day, but we launch additional jobs in case the
cursor is lagging behind, so it'll catch up to the current date eventually.
See <a href="../../../production/default/WEB-INF/cron.xml">production config</a> for an
explanation of job starting times.
-->
<schedule>7 */12 * * *</schedule>
</task>
@@ -85,16 +82,30 @@
</task>
<task>
<url><![CDATA[/_dr/task/expandRecurringBillingEvents?advanceCursor]]></url>
<name>expandRecurringBillingEvents</name>
<url><![CDATA[/_dr/task/expandBillingRecurrences?advanceCursor]]></url>
<name>expandBillingRecurrences</name>
<description>
This job runs an action that creates synthetic OneTime billing events from Recurring billing
events. Events are created for all instances of Recurring billing events that should exist
between the RECURRING_BILLING cursor's time and the execution time of the action.
This job runs an action that creates synthetic one-time billing events
from billing recurrences. Events are created for all recurrences that
should exist between the RECURRING_BILLING cursor's time and the execution
time of the action.
</description>
<schedule>0 3 * * *</schedule>
</task>
<task>
<url><![CDATA[/_dr/task/resaveAllEppResourcesPipeline?fast=true]]></url>
<name>resaveAllEppResourcesPipeline</name>
<description>
This job resaves all our resources, projected in time to "now".
</description>
<!--
Deviation from cron tasks schedule: 1st monday of month 09:00 is replaced
with 1st of the month 09:00
-->
<schedule>0 9 1 * *</schedule>
</task>
<task>
<url><![CDATA[/_dr/task/deleteExpiredDomains]]></url>
<name>deleteExpiredDomains</name>
@@ -132,14 +143,12 @@
<schedule>0 5 * * *</schedule>
</task>
<!-- TODO: @ptkach add resaveAllEppResourcesPipelineAction https://cs.opensource.google/nomulus/nomulus/+/master:core/src/main/java/google/registry/env/sandbox/default/WEB-INF/cron.xml;l=89 -->
<task>
<url><![CDATA[/_dr/cron/readDnsQueue?jitterSeconds=45]]></url>
<name>readDnsQueue</name>
<url>
<![CDATA[/_dr/cron/fanout?queue=dns-refresh&forEachRealTld&forEachTestTld&endpoint=/_dr/task/readDnsRefreshRequests&dnsJitterSeconds=45]]></url>
<name>readDnsRefreshRequests</name>
<description>
Lease all tasks from the dns-pull queue, group by TLD, and invoke PublishDnsUpdates for each
group.
Enqueue a ReadDnsRefreshRequestAction for each TLD.
</description>
<schedule>*/1 * * * *</schedule>
</task>
@@ -153,4 +162,4 @@
</description>
<schedule>0 15 * * 1</schedule>
</task>
</taskentries>
</entries>


@@ -1,5 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--TODO: ptkach - Remove this file after cloud scheduler is fully up and running -->
<cronentries>
</cronentries>


@@ -15,7 +15,7 @@
package google.registry.export;
import static com.google.common.base.Verify.verifyNotNull;
import static google.registry.model.tld.Registries.getTldsOfType;
import static google.registry.model.tld.Tlds.getTldsOfType;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.request.Action.Method.POST;
import static java.nio.charset.StandardCharsets.UTF_8;
@@ -27,8 +27,8 @@ import com.google.common.flogger.FluentLogger;
import com.google.common.net.MediaType;
import google.registry.config.RegistryConfig.Config;
import google.registry.gcs.GcsUtils;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Registry.TldType;
import google.registry.model.tld.Tld;
import google.registry.model.tld.Tld.TldType;
import google.registry.request.Action;
import google.registry.request.auth.Auth;
import google.registry.storage.drive.DriveConnection;
@@ -106,27 +106,28 @@ public class ExportDomainListsAction implements Runnable {
}
protected static boolean exportToDrive(
String tld, String domains, DriveConnection driveConnection) {
String tldStr, String domains, DriveConnection driveConnection) {
verifyNotNull(driveConnection, "Expecting non-null driveConnection");
try {
Registry registry = Registry.get(tld);
if (registry.getDriveFolderId() == null) {
Tld tld = Tld.get(tldStr);
if (tld.getDriveFolderId() == null) {
logger.atInfo().log(
"Skipping registered domains export for TLD %s because Drive folder isn't specified.",
tld);
tldStr);
} else {
String resultMsg =
driveConnection.createOrUpdateFile(
REGISTERED_DOMAINS_FILENAME,
MediaType.PLAIN_TEXT_UTF_8,
registry.getDriveFolderId(),
tld.getDriveFolderId(),
domains.getBytes(UTF_8));
logger.atInfo().log(
"Exporting registered domains succeeded for TLD %s, response was: %s", tld, resultMsg);
"Exporting registered domains succeeded for TLD %s, response was: %s",
tldStr, resultMsg);
}
} catch (Throwable e) {
logger.atSevere().withCause(e).log(
"Error exporting registered domains for TLD %s to Drive, skipping...", tld);
"Error exporting registered domains for TLD %s to Drive, skipping...", tldStr);
return false;
}
return true;


@@ -29,7 +29,7 @@ import com.google.common.collect.Iterables;
import com.google.common.flogger.FluentLogger;
import com.google.common.net.MediaType;
import google.registry.config.RegistryConfig.Config;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.model.tld.label.PremiumList.PremiumEntry;
import google.registry.model.tld.label.PremiumListDao;
import google.registry.request.Action;
@@ -61,7 +61,10 @@ public class ExportPremiumTermsAction implements Runnable {
@Config("premiumTermsExportDisclaimer")
String exportDisclaimer;
@Inject @Parameter(RequestParameters.PARAM_TLD) String tld;
@Inject
@Parameter(RequestParameters.PARAM_TLD)
String tldStr;
@Inject Response response;
@Inject
@@ -88,59 +91,59 @@ public class ExportPremiumTermsAction implements Runnable {
public void run() {
response.setContentType(PLAIN_TEXT_UTF_8);
try {
Registry registry = Registry.get(tld);
String resultMsg = checkConfig(registry).orElseGet(() -> exportPremiumTerms(registry));
Tld tld = Tld.get(tldStr);
String resultMsg = checkConfig(tld).orElseGet(() -> exportPremiumTerms(tld));
response.setStatus(SC_OK);
response.setPayload(resultMsg);
} catch (Throwable e) {
response.setStatus(SC_INTERNAL_SERVER_ERROR);
response.setPayload(e.getMessage());
throw new RuntimeException(
String.format("Exception occurred while exporting premium terms for TLD %s.", tld), e);
String.format("Exception occurred while exporting premium terms for TLD %s.", tldStr), e);
}
}
/**
* Checks if {@code registry} is properly configured to export premium terms.
* Checks if {@link Tld} is properly configured to export premium terms.
*
* @return {@link Optional#empty()} if {@code registry} export may proceed. Otherwise returns an
* error message
* @return {@link Optional#empty()} if {@link Tld} export may proceed. Otherwise returns an error
* message
*/
private Optional<String> checkConfig(Registry registry) {
if (isNullOrEmpty(registry.getDriveFolderId())) {
private Optional<String> checkConfig(Tld tld) {
if (isNullOrEmpty(tld.getDriveFolderId())) {
logger.atInfo().log(
"Skipping premium terms export for TLD %s because Drive folder isn't specified.", tld);
"Skipping premium terms export for TLD %s because Drive folder isn't specified.", tldStr);
return Optional.of("Skipping export because no Drive folder is associated with this TLD");
}
if (!registry.getPremiumListName().isPresent()) {
logger.atInfo().log("No premium terms to export for TLD '%s'.", tld);
if (!tld.getPremiumListName().isPresent()) {
logger.atInfo().log("No premium terms to export for TLD '%s'.", tldStr);
return Optional.of("No premium lists configured");
}
return Optional.empty();
}
private String exportPremiumTerms(Registry registry) {
private String exportPremiumTerms(Tld tld) {
try {
String fileId =
driveConnection.createOrUpdateFile(
PREMIUM_TERMS_FILENAME,
EXPORT_MIME_TYPE,
registry.getDriveFolderId(),
getFormattedPremiumTerms(registry).getBytes(UTF_8));
tld.getDriveFolderId(),
getFormattedPremiumTerms(tld).getBytes(UTF_8));
logger.atInfo().log(
"Exporting premium terms succeeded for TLD %s, file ID is: %s", tld, fileId);
"Exporting premium terms succeeded for TLD %s, file ID is: %s", tldStr, fileId);
return fileId;
} catch (IOException e) {
throw new RuntimeException("Error exporting premium terms file to Drive.", e);
}
}
private String getFormattedPremiumTerms(Registry registry) {
checkState(registry.getPremiumListName().isPresent(), "%s does not have a premium list", tld);
String premiumListName = registry.getPremiumListName().get();
private String getFormattedPremiumTerms(Tld tld) {
checkState(tld.getPremiumListName().isPresent(), "%s does not have a premium list", tldStr);
String premiumListName = tld.getPremiumListName().get();
checkState(
PremiumListDao.getLatestRevision(premiumListName).isPresent(),
"Could not load premium list for " + tld);
"Could not load premium list for " + tldStr);
SortedSet<String> premiumTerms =
PremiumListDao.loadAllPremiumEntries(premiumListName).stream()
.map(PremiumEntry::toString)


@@ -23,7 +23,7 @@ import static javax.servlet.http.HttpServletResponse.SC_OK;
import com.google.common.flogger.FluentLogger;
import com.google.common.net.MediaType;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.request.Action;
import google.registry.request.Parameter;
import google.registry.request.RequestParameters;
@@ -46,7 +46,11 @@ public class ExportReservedTermsAction implements Runnable {
@Inject DriveConnection driveConnection;
@Inject ExportUtils exportUtils;
@Inject @Parameter(RequestParameters.PARAM_TLD) String tld;
@Inject
@Parameter(RequestParameters.PARAM_TLD)
String tldStr;
@Inject Response response;
@Inject ExportReservedTermsAction() {}
@@ -61,23 +65,25 @@ public class ExportReservedTermsAction implements Runnable {
public void run() {
response.setContentType(PLAIN_TEXT_UTF_8);
try {
Registry registry = Registry.get(tld);
Tld tld = Tld.get(tldStr);
String resultMsg;
if (registry.getReservedListNames().isEmpty() && isNullOrEmpty(registry.getDriveFolderId())) {
if (tld.getReservedListNames().isEmpty() && isNullOrEmpty(tld.getDriveFolderId())) {
resultMsg = "No reserved lists configured";
logger.atInfo().log("No reserved terms to export for TLD '%s'.", tld);
} else if (registry.getDriveFolderId() == null) {
logger.atInfo().log("No reserved terms to export for TLD '%s'.", tldStr);
} else if (tld.getDriveFolderId() == null) {
resultMsg = "Skipping export because no Drive folder is associated with this TLD";
logger.atInfo().log(
"Skipping reserved terms export for TLD %s because Drive folder isn't specified.", tld);
"Skipping reserved terms export for TLD %s because Drive folder isn't specified.",
tldStr);
} else {
resultMsg = driveConnection.createOrUpdateFile(
RESERVED_TERMS_FILENAME,
EXPORT_MIME_TYPE,
registry.getDriveFolderId(),
exportUtils.exportReservedTerms(registry).getBytes(UTF_8));
resultMsg =
driveConnection.createOrUpdateFile(
RESERVED_TERMS_FILENAME,
EXPORT_MIME_TYPE,
tld.getDriveFolderId(),
exportUtils.exportReservedTerms(tld).getBytes(UTF_8));
logger.atInfo().log(
"Exporting reserved terms succeeded for TLD %s, response was: %s", tld, resultMsg);
"Exporting reserved terms succeeded for TLD %s, response was: %s", tldStr, resultMsg);
}
response.setStatus(SC_OK);
response.setPayload(resultMsg);
@@ -85,7 +91,8 @@ public class ExportReservedTermsAction implements Runnable {
response.setStatus(SC_INTERNAL_SERVER_ERROR);
response.setPayload(e.getMessage());
throw new RuntimeException(
String.format("Exception occurred while exporting reserved terms for TLD %s.", tld), e);
String.format("Exception occurred while exporting reserved terms for TLD %s.", tldStr),
e);
}
}
}


@@ -16,7 +16,7 @@ package google.registry.export;
import com.google.common.base.Joiner;
import google.registry.config.RegistryConfig.Config;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.model.tld.label.ReservedList;
import google.registry.model.tld.label.ReservedList.ReservedListEntry;
import google.registry.model.tld.label.ReservedListDao;
@@ -36,10 +36,10 @@ public final class ExportUtils {
}
/** Returns the file contents of the auto-export reserved terms document for the given TLD. */
public String exportReservedTerms(Registry registry) {
public String exportReservedTerms(Tld tld) {
StringBuilder termsBuilder = new StringBuilder(reservedTermsExportDisclaimer).append("\n");
Set<String> reservedTerms = new TreeSet<>();
for (String reservedListName : registry.getReservedListNames()) {
for (String reservedListName : tld.getReservedListNames()) {
ReservedList reservedList =
ReservedListDao.getLatestRevision(reservedListName)
.orElseThrow(


@@ -46,7 +46,7 @@ import google.registry.flows.domain.DomainFlowUtils.BadCommandForRegistryPhaseEx
import google.registry.flows.domain.DomainFlowUtils.InvalidIdnDomainLabelException;
import google.registry.model.ForeignKeyUtils;
import google.registry.model.domain.Domain;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.model.tld.label.ReservationType;
import google.registry.monitoring.whitebox.CheckApiMetric;
import google.registry.monitoring.whitebox.CheckApiMetric.Availability;
@@ -116,9 +116,9 @@ public class CheckApiAction implements Runnable {
validateDomainNameWithIdnTables(domainName);
DateTime now = clock.nowUtc();
Registry registry = Registry.get(domainName.parent().toString());
Tld tld = Tld.get(domainName.parent().toString());
try {
verifyNotInPredelegation(registry, now);
verifyNotInPredelegation(tld, now);
} catch (BadCommandForRegistryPhaseException e) {
metricBuilder.status(INVALID_REGISTRY_PHASE);
return fail("Check in this TLD is not allowed in the current registry phase");


@@ -23,7 +23,7 @@ import google.registry.flows.domain.DomainPricingLogic;
import google.registry.flows.domain.FeesAndCredits;
import google.registry.model.ImmutableObject;
import google.registry.model.eppinput.EppInput;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import javax.annotation.Nullable;
import org.joda.time.DateTime;
@@ -81,7 +81,7 @@ public class DomainPricingCustomLogic extends BaseFlowCustomLogic {
public abstract FeesAndCredits feesAndCredits();
public abstract Registry registry();
public abstract Tld tld();
public abstract InternetDomainName domainName();
@@ -99,7 +99,7 @@ public class DomainPricingCustomLogic extends BaseFlowCustomLogic {
public abstract Builder setFeesAndCredits(FeesAndCredits feesAndCredits);
public abstract Builder setRegistry(Registry registry);
public abstract Builder setTld(Tld tld);
public abstract Builder setDomainName(InternetDomainName domainName);
@@ -117,7 +117,7 @@ public class DomainPricingCustomLogic extends BaseFlowCustomLogic {
public abstract FeesAndCredits feesAndCredits();
public abstract Registry registry();
public abstract Tld tld();
public abstract InternetDomainName domainName();
@@ -135,7 +135,7 @@ public class DomainPricingCustomLogic extends BaseFlowCustomLogic {
public abstract Builder setFeesAndCredits(FeesAndCredits feesAndCredits);
public abstract Builder setRegistry(Registry registry);
public abstract Builder setTld(Tld tld);
public abstract Builder setDomainName(InternetDomainName domainName);
@@ -153,7 +153,7 @@ public class DomainPricingCustomLogic extends BaseFlowCustomLogic {
public abstract FeesAndCredits feesAndCredits();
public abstract Registry registry();
public abstract Tld tld();
public abstract InternetDomainName domainName();
@@ -169,7 +169,7 @@ public class DomainPricingCustomLogic extends BaseFlowCustomLogic {
public abstract Builder setFeesAndCredits(FeesAndCredits feesAndCredits);
public abstract Builder setRegistry(Registry registry);
public abstract Builder setTld(Tld tld);
public abstract Builder setDomainName(InternetDomainName domainName);
@@ -185,7 +185,7 @@ public class DomainPricingCustomLogic extends BaseFlowCustomLogic {
public abstract FeesAndCredits feesAndCredits();
public abstract Registry registry();
public abstract Tld tld();
public abstract InternetDomainName domainName();
@@ -201,7 +201,7 @@ public class DomainPricingCustomLogic extends BaseFlowCustomLogic {
public abstract Builder setFeesAndCredits(FeesAndCredits feesAndCredits);
public abstract Builder setRegistry(Registry registry);
public abstract Builder setTld(Tld tld);
public abstract Builder setDomainName(InternetDomainName domainName);
@@ -217,7 +217,7 @@ public class DomainPricingCustomLogic extends BaseFlowCustomLogic {
public abstract FeesAndCredits feesAndCredits();
public abstract Registry registry();
public abstract Tld tld();
public abstract InternetDomainName domainName();
@@ -233,7 +233,7 @@ public class DomainPricingCustomLogic extends BaseFlowCustomLogic {
public abstract Builder setFeesAndCredits(FeesAndCredits feesAndCredits);
public abstract Builder setRegistry(Registry registry);
public abstract Builder setTld(Tld tld);
public abstract Builder setDomainName(InternetDomainName domainName);


@@ -30,7 +30,7 @@ import static google.registry.flows.domain.DomainFlowUtils.isValidReservedCreate
import static google.registry.flows.domain.DomainFlowUtils.validateDomainName;
import static google.registry.flows.domain.DomainFlowUtils.validateDomainNameWithIdnTables;
import static google.registry.flows.domain.DomainFlowUtils.verifyNotInPredelegation;
import static google.registry.model.tld.Registry.TldState.START_DATE_SUNRISE;
import static google.registry.model.tld.Tld.TldState.START_DATE_SUNRISE;
import static google.registry.model.tld.label.ReservationType.getTypeOfHighestSeverity;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
@@ -50,11 +50,17 @@ import google.registry.flows.annotations.ReportingSpec;
import google.registry.flows.custom.DomainCheckFlowCustomLogic;
import google.registry.flows.custom.DomainCheckFlowCustomLogic.BeforeResponseParameters;
import google.registry.flows.custom.DomainCheckFlowCustomLogic.BeforeResponseReturnData;
import google.registry.flows.domain.DomainPricingLogic.AllocationTokenInvalidForPremiumNameException;
import google.registry.flows.domain.token.AllocationTokenDomainCheckResults;
import google.registry.flows.domain.token.AllocationTokenFlowUtils;
import google.registry.flows.domain.token.AllocationTokenFlowUtils.AllocationTokenNotInPromotionException;
import google.registry.flows.domain.token.AllocationTokenFlowUtils.AllocationTokenNotValidForCommandException;
import google.registry.flows.domain.token.AllocationTokenFlowUtils.AllocationTokenNotValidForDomainException;
import google.registry.flows.domain.token.AllocationTokenFlowUtils.AllocationTokenNotValidForRegistrarException;
import google.registry.flows.domain.token.AllocationTokenFlowUtils.AllocationTokenNotValidForTldException;
import google.registry.model.EppResource;
import google.registry.model.ForeignKeyUtils;
import google.registry.model.billing.BillingEvent;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.domain.Domain;
import google.registry.model.domain.DomainCommand.Check;
import google.registry.model.domain.fee.FeeCheckCommandExtension;
@@ -72,10 +78,11 @@ import google.registry.model.eppoutput.CheckData.DomainCheckData;
import google.registry.model.eppoutput.EppResponse;
import google.registry.model.eppoutput.EppResponse.ResponseExtension;
import google.registry.model.reporting.IcannReportingTypes.ActivityReportField;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Registry.TldState;
import google.registry.model.tld.Tld;
import google.registry.model.tld.Tld.TldState;
import google.registry.model.tld.label.ReservationType;
import google.registry.persistence.VKey;
import google.registry.pricing.PricingEngineProxy;
import google.registry.util.Clock;
import java.util.HashSet;
import java.util.Optional;
@@ -159,7 +166,7 @@ public final class DomainCheckFlow implements Flow {
if (tldFirstTimeSeen && !isSuperuser) {
checkAllowedAccessToTld(registrarId, tld);
checkHasBillingAccount(registrarId, tld);
verifyNotInPredelegation(Registry.get(tld), now);
verifyNotInPredelegation(Tld.get(tld), now);
}
}
ImmutableMap<String, InternetDomainName> parsedDomains = parsedDomainsBuilder.build();
@@ -185,7 +192,7 @@ public final class DomainCheckFlow implements Flow {
ImmutableList.Builder<DomainCheck> checksBuilder = new ImmutableList.Builder<>();
ImmutableSet.Builder<String> availableDomains = new ImmutableSet.Builder<>();
ImmutableMap<String, TldState> tldStates =
Maps.toMap(seenTlds, tld -> Registry.get(tld).getTldState(now));
Maps.toMap(seenTlds, tld -> Tld.get(tld).getTldState(now));
ImmutableMap<InternetDomainName, String> domainCheckResults =
tokenDomainCheckResults
.map(AllocationTokenDomainCheckResults::domainCheckResults)
@@ -266,25 +273,62 @@ public final class DomainCheckFlow implements Flow {
new ImmutableList.Builder<>();
ImmutableMap<String, Domain> domainObjs =
loadDomainsForRestoreChecks(feeCheck, domainNames, existingDomains);
ImmutableMap<String, BillingEvent.Recurring> recurrences =
loadRecurrencesForDomains(domainObjs);
ImmutableMap<String, BillingRecurrence> recurrences = loadRecurrencesForDomains(domainObjs);
for (FeeCheckCommandExtensionItem feeCheckItem : feeCheck.getItems()) {
for (String domainName : getDomainNamesToCheckForFee(feeCheckItem, domainNames.keySet())) {
Optional<AllocationToken> defaultToken =
DomainFlowUtils.checkForDefaultToken(
Tld.get(InternetDomainName.from(domainName).parent().toString()),
domainName,
feeCheckItem.getCommandName(),
registrarId,
now);
FeeCheckResponseExtensionItem.Builder<?> builder = feeCheckItem.createResponseBuilder();
Optional<Domain> domain = Optional.ofNullable(domainObjs.get(domainName));
handleFeeRequest(
feeCheckItem,
builder,
domainNames.get(domainName),
domain,
feeCheck.getCurrency(),
now,
pricingLogic,
allocationToken,
availableDomains.contains(domainName),
recurrences.getOrDefault(domainName, null));
responseItems.add(builder.setDomainNameIfSupported(domainName).build());
try {
if (allocationToken.isPresent()) {
AllocationTokenFlowUtils.validateToken(
InternetDomainName.from(domainName),
allocationToken.get(),
feeCheckItem.getCommandName(),
registrarId,
PricingEngineProxy.isDomainPremium(domainName, now),
now);
}
handleFeeRequest(
feeCheckItem,
builder,
domainNames.get(domainName),
domain,
feeCheck.getCurrency(),
now,
pricingLogic,
allocationToken.isPresent() ? allocationToken : defaultToken,
availableDomains.contains(domainName),
recurrences.getOrDefault(domainName, null));
responseItems.add(builder.setDomainNameIfSupported(domainName).build());
} catch (AllocationTokenInvalidForPremiumNameException
| AllocationTokenNotValidForCommandException
| AllocationTokenNotValidForDomainException
| AllocationTokenNotValidForRegistrarException
| AllocationTokenNotValidForTldException
| AllocationTokenNotInPromotionException e) {
// Allocation token is either not an active token or it is not valid for the EPP command,
// registrar, domain, or TLD.
Tld tld = Tld.get(InternetDomainName.from(domainName).parent().toString());
responseItems.add(
builder
.setDomainNameIfSupported(domainName)
.setPeriod(feeCheckItem.getPeriod())
.setCommand(
feeCheckItem.getCommandName(),
feeCheckItem.getPhase(),
feeCheckItem.getSubphase())
.setCurrencyIfSupported(tld.getCurrency())
.setClass("token-not-supported")
.build());
}
}
}
return ImmutableList.of(feeCheck.createResponse(responseItems.build()));
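
In the fee-check hunk above, a token that fails validation no longer aborts the whole check; the affected domain's response item is instead tagged with the "token-not-supported" fee class. A schematic sketch of that per-domain catch-and-tag pattern (hypothetical types, not the EPP flow code):

// Sketch only: per-domain validation failures are reported in the response, not propagated.
import java.util.ArrayList;
import java.util.List;

class TokenNotSupportedSketch {
  static class TokenValidationException extends Exception {}

  record FeeResponse(String domain, String feeClass) {}

  // Stand-in for the real validation, which checks the command, registrar, domain, and TLD.
  static void validateToken(String domain) throws TokenValidationException {
    if (!domain.endsWith(".tld")) { // hypothetical rule: the token only covers .tld names
      throw new TokenValidationException();
    }
  }

  static List<FeeResponse> checkFees(List<String> domains) {
    List<FeeResponse> responses = new ArrayList<>();
    for (String domain : domains) {
      try {
        validateToken(domain);
        responses.add(new FeeResponse(domain, "standard"));
      } catch (TokenValidationException e) {
        // The token cannot be used for this domain; mark the response rather than failing the check.
        responses.add(new FeeResponse(domain, "token-not-supported"));
      }
    }
    return responses;
  }

  public static void main(String[] args) {
    // Prints a "standard" entry for covered.tld and "token-not-supported" for other.example.
    checkFees(List.of("covered.tld", "other.example")).forEach(System.out::println);
  }
}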
@@ -336,16 +380,15 @@ public final class DomainCheckFlow implements Flow {
Maps.transformEntries(existingDomainsToLoad, (k, v) -> (Domain) loadedDomains.get(v)));
}
private ImmutableMap<String, BillingEvent.Recurring> loadRecurrencesForDomains(
private ImmutableMap<String, BillingRecurrence> loadRecurrencesForDomains(
ImmutableMap<String, Domain> domainObjs) {
return tm().transact(
() -> {
ImmutableMap<VKey<? extends BillingEvent.Recurring>, BillingEvent.Recurring>
recurrences =
tm().loadByKeys(
domainObjs.values().stream()
.map(Domain::getAutorenewBillingEvent)
.collect(toImmutableSet()));
ImmutableMap<VKey<? extends BillingRecurrence>, BillingRecurrence> recurrences =
tm().loadByKeys(
domainObjs.values().stream()
.map(Domain::getAutorenewBillingEvent)
.collect(toImmutableSet()));
return ImmutableMap.copyOf(
Maps.transformValues(
domainObjs, d -> recurrences.get(d.getAutorenewBillingEvent())));


@@ -45,7 +45,7 @@ import google.registry.model.eppinput.EppInput;
import google.registry.model.eppinput.ResourceCommand;
import google.registry.model.eppoutput.EppResponse;
import google.registry.model.reporting.IcannReportingTypes.ActivityReportField;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.model.tmch.ClaimsListDao;
import google.registry.util.Clock;
import java.util.HashSet;
@@ -96,16 +96,16 @@ public final class DomainClaimsCheckFlow implements Flow {
for (String domainName : ImmutableSet.copyOf(domainNames)) {
InternetDomainName parsedDomain = validateDomainName(domainName);
validateDomainNameWithIdnTables(parsedDomain);
String tld = parsedDomain.parent().toString();
String tldStr = parsedDomain.parent().toString();
// Only validate access to a TLD the first time it is encountered.
if (seenTlds.add(tld)) {
if (seenTlds.add(tldStr)) {
if (!isSuperuser) {
checkAllowedAccessToTld(registrarId, tld);
checkHasBillingAccount(registrarId, tld);
Registry registry = Registry.get(tld);
checkAllowedAccessToTld(registrarId, tldStr);
checkHasBillingAccount(registrarId, tldStr);
Tld tld = Tld.get(tldStr);
DateTime now = clock.nowUtc();
verifyNotInPredelegation(registry, now);
verifyClaimsPeriodNotEnded(registry, now);
verifyNotInPredelegation(tld, now);
verifyClaimsPeriodNotEnded(tld, now);
}
}
Optional<String> claimKey = ClaimsListDao.get().getClaimKey(parsedDomain.parts().get(0));


@@ -15,9 +15,8 @@
package google.registry.flows.domain;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Preconditions.checkState;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static google.registry.dns.DnsUtils.requestDomainDnsRefresh;
import static google.registry.flows.FlowUtils.persistEntityChanges;
import static google.registry.flows.FlowUtils.validateRegistrarIsLoggedIn;
import static google.registry.flows.ResourceFlowUtils.verifyResourceDoesNotExist;
@@ -49,12 +48,11 @@ import static google.registry.model.EppResourceUtils.createDomainRepoId;
import static google.registry.model.IdService.allocateId;
import static google.registry.model.eppcommon.StatusValue.SERVER_HOLD;
import static google.registry.model.reporting.HistoryEntry.Type.DOMAIN_CREATE;
import static google.registry.model.tld.Registry.TldState.GENERAL_AVAILABILITY;
import static google.registry.model.tld.Registry.TldState.QUIET_PERIOD;
import static google.registry.model.tld.Registry.TldState.START_DATE_SUNRISE;
import static google.registry.model.tld.Tld.TldState.GENERAL_AVAILABILITY;
import static google.registry.model.tld.Tld.TldState.QUIET_PERIOD;
import static google.registry.model.tld.Tld.TldState.START_DATE_SUNRISE;
import static google.registry.model.tld.label.ReservationType.NAME_COLLISION;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.util.CollectionUtils.isNullOrEmpty;
import static google.registry.util.DateTimeUtils.END_OF_TIME;
import static google.registry.util.DateTimeUtils.leapSafeAddYears;
@@ -62,9 +60,7 @@ import com.google.auto.value.AutoValue;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.google.common.net.InternetDomainName;
import google.registry.dns.DnsQueue;
import google.registry.flows.EppException;
import google.registry.flows.EppException.AssociationProhibitsOperationException;
import google.registry.flows.EppException.CommandUseErrorException;
import google.registry.flows.EppException.ParameterValuePolicyErrorException;
import google.registry.flows.ExtensionManager;
@@ -81,13 +77,11 @@ import google.registry.flows.domain.token.AllocationTokenFlowUtils;
import google.registry.flows.exceptions.ResourceAlreadyExistsForThisClientException;
import google.registry.flows.exceptions.ResourceCreateContentionException;
import google.registry.model.ImmutableObject;
import google.registry.model.billing.BillingBase.Flag;
import google.registry.model.billing.BillingBase.Reason;
import google.registry.model.billing.BillingBase.RenewalPriceBehavior;
import google.registry.model.billing.BillingEvent;
import google.registry.model.billing.BillingEvent.Flag;
import google.registry.model.billing.BillingEvent.Reason;
import google.registry.model.billing.BillingEvent.Recurring;
import google.registry.model.billing.BillingEvent.RenewalPriceBehavior;
import google.registry.model.common.DatabaseMigrationStateSchedule;
import google.registry.model.common.DatabaseMigrationStateSchedule.MigrationState;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.domain.Domain;
import google.registry.model.domain.DomainCommand;
import google.registry.model.domain.DomainCommand.Create;
@@ -95,6 +89,7 @@ import google.registry.model.domain.DomainHistory;
import google.registry.model.domain.GracePeriod;
import google.registry.model.domain.Period;
import google.registry.model.domain.fee.FeeCreateCommandExtension;
import google.registry.model.domain.fee.FeeQueryCommandExtensionItem.CommandName;
import google.registry.model.domain.fee.FeeTransformResponseExtension;
import google.registry.model.domain.launch.LaunchCreateExtension;
import google.registry.model.domain.metadata.MetadataExtension;
@@ -117,16 +112,13 @@ import google.registry.model.reporting.DomainTransactionRecord.TransactionReport
import google.registry.model.reporting.HistoryEntry;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import google.registry.model.reporting.IcannReportingTypes.ActivityReportField;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Registry.TldState;
import google.registry.model.tld.Registry.TldType;
import google.registry.model.tld.Tld;
import google.registry.model.tld.Tld.TldState;
import google.registry.model.tld.Tld.TldType;
import google.registry.model.tld.label.ReservationType;
import google.registry.model.tmch.ClaimsList;
import google.registry.model.tmch.ClaimsListDao;
import google.registry.persistence.VKey;
import google.registry.tmch.LordnTaskUtils;
import google.registry.tmch.LordnTaskUtils.LordnPhase;
import java.util.Map;
import java.util.Optional;
import javax.annotation.Nullable;
import javax.inject.Inject;
@@ -235,7 +227,7 @@ public final class DomainCreateFlow implements TransactionalFlow {
@Inject DomainCreateFlowCustomLogic flowCustomLogic;
@Inject DomainFlowTmchUtils tmchUtils;
@Inject DomainPricingLogic pricingLogic;
@Inject DnsQueue dnsQueue;
@Inject DomainCreateFlow() {}
@Override
@@ -260,9 +252,9 @@ public final class DomainCreateFlow implements TransactionalFlow {
// Validate that this is actually a legal domain name on a TLD that the registrar has access to.
InternetDomainName domainName = validateDomainName(command.getDomainName());
String domainLabel = domainName.parts().get(0);
Registry registry = Registry.get(domainName.parent().toString());
validateCreateCommandContactsAndNameservers(command, registry, domainName);
TldState tldState = registry.getTldState(now);
Tld tld = Tld.get(domainName.parent().toString());
validateCreateCommandContactsAndNameservers(command, tld, domainName);
TldState tldState = tld.getTldState(now);
Optional<LaunchCreateExtension> launchCreate =
eppInput.getSingleExtension(LaunchCreateExtension.class);
boolean hasSignedMarks =
@@ -273,16 +265,19 @@ public final class DomainCreateFlow implements TransactionalFlow {
validateLaunchCreateNotice(launchCreate.get().getNotice(), domainLabel, isSuperuser, now);
}
boolean isSunriseCreate = hasSignedMarks && (tldState == START_DATE_SUNRISE);
// TODO(sarahbot@): Add check for valid EPP actions on the token
Optional<AllocationToken> allocationToken =
allocationTokenFlowUtils.verifyAllocationTokenCreateIfPresent(
command,
registry,
tld,
registrarId,
now,
eppInput.getSingleExtension(AllocationTokenExtension.class));
boolean defaultTokenUsed = false;
if (!allocationToken.isPresent() && !registry.getDefaultPromoTokens().isEmpty()) {
allocationToken = checkForDefaultToken(registry, command);
if (!allocationToken.isPresent()) {
allocationToken =
DomainFlowUtils.checkForDefaultToken(
tld, command.getDomainName(), CommandName.CREATE, registrarId, now);
if (allocationToken.isPresent()) {
defaultTokenUsed = true;
}
@@ -295,12 +290,12 @@ public final class DomainCreateFlow implements TransactionalFlow {
// notice without specifying a claims key, ignore the registry phase, and override blocks on
// registering premium domains.
if (!isSuperuser) {
checkAllowedAccessToTld(registrarId, registry.getTldStr());
checkHasBillingAccount(registrarId, registry.getTldStr());
checkAllowedAccessToTld(registrarId, tld.getTldStr());
checkHasBillingAccount(registrarId, tld.getTldStr());
boolean isValidReservedCreate = isValidReservedCreate(domainName, allocationToken);
ClaimsList claimsList = ClaimsListDao.get();
verifyIsGaOrSpecialCase(
registry,
tld,
claimsList,
now,
domainLabel,
@@ -309,15 +304,15 @@ public final class DomainCreateFlow implements TransactionalFlow {
isValidReservedCreate,
hasSignedMarks);
if (launchCreate.isPresent()) {
verifyLaunchPhaseMatchesRegistryPhase(registry, launchCreate.get(), now);
verifyLaunchPhaseMatchesRegistryPhase(tld, launchCreate.get(), now);
}
if (!isAnchorTenant && !isValidReservedCreate) {
verifyNotReserved(domainName, isSunriseCreate);
}
if (hasClaimsNotice) {
verifyClaimsPeriodNotEnded(registry, now);
verifyClaimsPeriodNotEnded(tld, now);
}
if (now.isBefore(registry.getClaimsPeriodEnd())) {
if (now.isBefore(tld.getClaimsPeriodEnd())) {
verifyClaimsNoticeIfAndOnlyIfNeeded(
domainName, claimsList, hasSignedMarks, hasClaimsNotice);
}
@@ -342,20 +337,19 @@ public final class DomainCreateFlow implements TransactionalFlow {
Optional<FeeCreateCommandExtension> feeCreate =
eppInput.getSingleExtension(FeeCreateCommandExtension.class);
FeesAndCredits feesAndCredits =
pricingLogic.getCreatePrice(
registry, targetId, now, years, isAnchorTenant, allocationToken);
pricingLogic.getCreatePrice(tld, targetId, now, years, isAnchorTenant, allocationToken);
validateFeeChallenge(feeCreate, feesAndCredits, defaultTokenUsed);
Optional<SecDnsCreateExtension> secDnsCreate =
validateSecDnsExtension(eppInput.getSingleExtension(SecDnsCreateExtension.class));
DateTime registrationExpirationTime = leapSafeAddYears(now, years);
String repoId = createDomainRepoId(allocateId(), registry.getTldStr());
String repoId = createDomainRepoId(allocateId(), tld.getTldStr());
long historyRevisionId = allocateId();
HistoryEntryId domainHistoryId = new HistoryEntryId(repoId, historyRevisionId);
historyBuilder.setRevisionId(historyRevisionId);
// Bill for the create.
BillingEvent.OneTime createBillingEvent =
createOneTimeBillingEvent(
registry,
BillingEvent createBillingEvent =
createBillingEvent(
tld,
isAnchorTenant,
isSunriseCreate,
isReserved(domainName, isSunriseCreate),
@@ -365,7 +359,7 @@ public final class DomainCreateFlow implements TransactionalFlow {
allocationToken,
now);
// Create a new autorenew billing event and poll message starting at the expiration time.
BillingEvent.Recurring autorenewBillingEvent =
BillingRecurrence autorenewBillingEvent =
createAutorenewBillingEvent(
domainHistoryId,
registrationExpirationTime,
@@ -405,12 +399,9 @@ public final class DomainCreateFlow implements TransactionalFlow {
.addGracePeriod(
GracePeriod.forBillingEvent(GracePeriodStatus.ADD, repoId, createBillingEvent))
.setLordnPhase(
!DatabaseMigrationStateSchedule.getValueAtTime(tm().getTransactionTime())
.equals(MigrationState.NORDN_SQL)
? LordnPhase.NONE
: hasSignedMarks
? LordnPhase.SUNRISE
: hasClaimsNotice ? LordnPhase.CLAIMS : LordnPhase.NONE);
hasSignedMarks
? LordnPhase.SUNRISE
: hasClaimsNotice ? LordnPhase.CLAIMS : LordnPhase.NONE);
Domain domain = domainBuilder.build();
if (allocationToken.isPresent()
&& allocationToken.get().getTokenType().equals(TokenType.PACKAGE)) {
@@ -421,7 +412,7 @@ public final class DomainCreateFlow implements TransactionalFlow {
domain.asBuilder().setCurrentPackageToken(allocationToken.get().createVKey()).build();
}
DomainHistory domainHistory =
buildDomainHistory(domain, registry, now, period, registry.getAddGracePeriodLength());
buildDomainHistory(domain, tld, now, period, tld.getAddGracePeriodLength());
if (reservationTypes.contains(NAME_COLLISION)) {
entitiesToSave.add(
createNameCollisionOneTimePollMessage(targetId, domainHistory, registrarId, now));
@@ -433,8 +424,9 @@ public final class DomainCreateFlow implements TransactionalFlow {
allocationTokenFlowUtils.redeemToken(
allocationToken.get(), domainHistory.getHistoryEntryId()));
}
enqueueTasks(domain, hasSignedMarks, hasClaimsNotice);
if (domain.shouldPublishToDns()) {
requestDomainDnsRefresh(domain.getDomainName());
}
EntityChanges entityChanges =
flowCustomLogic.beforeSave(
DomainCreateFlowCustomLogic.BeforeSaveParameters.newBuilder()
@@ -458,36 +450,6 @@ public final class DomainCreateFlow implements TransactionalFlow {
.build();
}
private Optional<AllocationToken> checkForDefaultToken(
Registry registry, DomainCommand.Create command) throws EppException {
Map<VKey<AllocationToken>, Optional<AllocationToken>> tokens =
AllocationToken.getAll(registry.getDefaultPromoTokens());
ImmutableList<Optional<AllocationToken>> tokenList =
registry.getDefaultPromoTokens().stream()
.map(tokens::get)
.filter(Optional::isPresent)
.collect(toImmutableList());
checkState(
!isNullOrEmpty(tokenList),
"Failure while loading default TLD promotions from the database");
// Check if any of the tokens are valid for this domain registration
for (Optional<AllocationToken> token : tokenList) {
try {
AllocationTokenFlowUtils.validateToken(
InternetDomainName.from(command.getDomainName()),
token.get(),
registrarId,
tm().getTransactionTime());
} catch (AssociationProhibitsOperationException e) {
// Allocation token was not valid for this registration, continue to check the next token in
// the list
continue;
}
// Only use the first valid token in the list
return token;
}
return Optional.empty();
}
/**
* Verifies that signed marks are only sent during sunrise.
*
@@ -529,7 +491,7 @@ public final class DomainCreateFlow implements TransactionalFlow {
* non-superusers.
*/
private void verifyIsGaOrSpecialCase(
Registry registry,
Tld tld,
ClaimsList claimsList,
DateTime now,
String domainLabel,
@@ -541,7 +503,7 @@ public final class DomainCreateFlow implements TransactionalFlow {
MustHaveSignedMarksInCurrentPhaseException,
NoTrademarkedRegistrationsBeforeSunriseException {
// We allow general registration during GA.
TldState currentState = registry.getTldState(now);
TldState currentState = tld.getTldState(now);
if (currentState.equals(GENERAL_AVAILABILITY)) {
return;
}
@@ -562,7 +524,7 @@ public final class DomainCreateFlow implements TransactionalFlow {
// Trademarked domains cannot be registered until after the sunrise period has ended, unless
// a valid signed mark is provided. Signed marks can only be provided during sunrise.
// Thus, when bypassing TLD state checks, a post-sunrise state is always fine.
if (registry.getTldStateTransitions().headMap(now).containsValue(START_DATE_SUNRISE)) {
if (tld.getTldStateTransitions().headMap(now).containsValue(START_DATE_SUNRISE)) {
return;
} else {
// If sunrise hasn't happened yet, trademarked domains are unavailable
@@ -595,23 +557,22 @@ public final class DomainCreateFlow implements TransactionalFlow {
}
private DomainHistory buildDomainHistory(
Domain domain, Registry registry, DateTime now, Period period, Duration addGracePeriod) {
Domain domain, Tld tld, DateTime now, Period period, Duration addGracePeriod) {
// We ignore prober transactions
if (registry.getTldType() == TldType.REAL) {
historyBuilder
.setDomainTransactionRecords(
ImmutableSet.of(
DomainTransactionRecord.create(
registry.getTldStr(),
now.plus(addGracePeriod),
TransactionReportField.netAddsFieldFromYears(period.getValue()),
1)));
if (tld.getTldType() == TldType.REAL) {
historyBuilder.setDomainTransactionRecords(
ImmutableSet.of(
DomainTransactionRecord.create(
tld.getTldStr(),
now.plus(addGracePeriod),
TransactionReportField.netAddsFieldFromYears(period.getValue()),
1)));
}
return historyBuilder.setType(DOMAIN_CREATE).setPeriod(period).setDomain(domain).build();
}
private BillingEvent.OneTime createOneTimeBillingEvent(
Registry registry,
private BillingEvent createBillingEvent(
Tld tld,
boolean isAnchorTenant,
boolean isSunriseCreate,
boolean isReserved,
@@ -632,7 +593,7 @@ public final class DomainCreateFlow implements TransactionalFlow {
// it if it's reserved for other reasons.
flagsBuilder.add(Flag.RESERVED);
}
return new BillingEvent.OneTime.Builder()
return new BillingEvent.Builder()
.setReason(Reason.CREATE)
.setTargetId(targetId)
.setRegistrarId(registrarId)
@@ -643,18 +604,18 @@ public final class DomainCreateFlow implements TransactionalFlow {
.setBillingTime(
now.plus(
isAnchorTenant
? registry.getAnchorTenantAddGracePeriodLength()
: registry.getAddGracePeriodLength()))
? tld.getAnchorTenantAddGracePeriodLength()
: tld.getAddGracePeriodLength()))
.setFlags(flagsBuilder.build())
.setDomainHistoryId(domainHistoryId)
.build();
}
private Recurring createAutorenewBillingEvent(
private BillingRecurrence createAutorenewBillingEvent(
HistoryEntryId domainHistoryId,
DateTime registrationExpirationTime,
RenewalPriceInfo renewalpriceInfo) {
return new BillingEvent.Recurring.Builder()
return new BillingRecurrence.Builder()
.setReason(Reason.RENEW)
.setFlags(ImmutableSet.of(Flag.AUTO_RENEW))
.setTargetId(targetId)
@@ -678,9 +639,9 @@ public final class DomainCreateFlow implements TransactionalFlow {
.build();
}
private static BillingEvent.OneTime createEapBillingEvent(
FeesAndCredits feesAndCredits, BillingEvent.OneTime createBillingEvent) {
return new BillingEvent.OneTime.Builder()
private static BillingEvent createEapBillingEvent(
FeesAndCredits feesAndCredits, BillingEvent createBillingEvent) {
return new BillingEvent.Builder()
.setReason(Reason.FEE_EARLY_ACCESS)
.setTargetId(createBillingEvent.getTargetId())
.setRegistrarId(createBillingEvent.getRegistrarId())
@@ -707,20 +668,9 @@ public final class DomainCreateFlow implements TransactionalFlow {
.build();
}
private void enqueueTasks(Domain newDomain, boolean hasSignedMarks, boolean hasClaimsNotice) {
if (newDomain.shouldPublishToDns()) {
dnsQueue.addDomainRefreshTask(newDomain.getDomainName());
}
if (!DatabaseMigrationStateSchedule.getValueAtTime(tm().getTransactionTime())
.equals(MigrationState.NORDN_SQL)
&& (hasClaimsNotice || hasSignedMarks)) {
LordnTaskUtils.enqueueDomainTask(newDomain);
}
}
/**
* Determines the {@link RenewalPriceBehavior} and the renewal price that needs to be stored in the
* {@link Recurring} billing events.
* {@link BillingRecurrence} billing events.
*
* <p>By default, the renewal price is calculated during the process of renewal. Renewal price
* should be the createCost if and only if the renewal price behavior in the {@link
@@ -746,7 +696,7 @@ public final class DomainCreateFlow implements TransactionalFlow {
}
}
/** A class to store renewal info used in {@link Recurring} billing events. */
/** A class to store renewal info used in {@link BillingRecurrence} billing events. */
@AutoValue
public abstract static class RenewalPriceInfo {
static DomainCreateFlow.RenewalPriceInfo create(

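The truncated javadoc above describes which renewal price gets stored on the autorenew BillingRecurrence at create time. A minimal standalone sketch of that decision (editor-added, assuming joda-money on the classpath and the behavior values shown elsewhere in this diff: DEFAULT, NONPREMIUM, SPECIFIED); it is not the flow's actual code:

import java.math.BigDecimal;
import java.util.Optional;
import org.joda.money.CurrencyUnit;
import org.joda.money.Money;

public class RenewalPriceSketch {
  enum RenewalPriceBehavior { DEFAULT, NONPREMIUM, SPECIFIED }

  // Only SPECIFIED pins a concrete price onto the recurrence; DEFAULT and NONPREMIUM leave the
  // price to be computed at renewal time.
  static Optional<Money> renewalPriceToStore(RenewalPriceBehavior behavior, Money createCost) {
    return behavior == RenewalPriceBehavior.SPECIFIED ? Optional.of(createCost) : Optional.empty();
  }

  public static void main(String[] args) {
    Money createCost = Money.of(CurrencyUnit.USD, new BigDecimal("13.00"));
    System.out.println(renewalPriceToStore(RenewalPriceBehavior.DEFAULT, createCost));   // Optional.empty
    System.out.println(renewalPriceToStore(RenewalPriceBehavior.SPECIFIED, createCost)); // Optional[USD 13.00]
  }
}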

@@ -16,6 +16,7 @@ package google.registry.flows.domain;
import static com.google.common.base.Preconditions.checkNotNull;
import static com.google.common.base.Strings.isNullOrEmpty;
import static google.registry.dns.DnsUtils.requestDomainDnsRefresh;
import static google.registry.flows.FlowUtils.createHistoryEntryId;
import static google.registry.flows.FlowUtils.persistEntityChanges;
import static google.registry.flows.FlowUtils.validateRegistrarIsLoggedIn;
@@ -44,7 +45,6 @@ import com.google.common.collect.ImmutableSet;
import com.google.common.collect.ImmutableSortedSet;
import com.google.common.collect.Sets;
import google.registry.batch.AsyncTaskEnqueuer;
import google.registry.dns.DnsQueue;
import google.registry.flows.EppException;
import google.registry.flows.EppException.AssociationProhibitsOperationException;
import google.registry.flows.ExtensionManager;
@@ -61,8 +61,9 @@ import google.registry.flows.custom.DomainDeleteFlowCustomLogic.BeforeResponseRe
import google.registry.flows.custom.DomainDeleteFlowCustomLogic.BeforeSaveParameters;
import google.registry.flows.custom.EntityChanges;
import google.registry.model.ImmutableObject;
import google.registry.model.billing.BillingCancellation;
import google.registry.model.billing.BillingEvent;
import google.registry.model.billing.BillingEvent.Recurring;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.domain.Domain;
import google.registry.model.domain.DomainHistory;
import google.registry.model.domain.GracePeriod;
@@ -88,8 +89,8 @@ import google.registry.model.reporting.DomainTransactionRecord;
import google.registry.model.reporting.DomainTransactionRecord.TransactionReportField;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import google.registry.model.reporting.IcannReportingTypes.ActivityReportField;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Registry.TldType;
import google.registry.model.tld.Tld;
import google.registry.model.tld.Tld.TldType;
import google.registry.model.transfer.TransferStatus;
import java.util.Collections;
import java.util.Optional;
@@ -129,7 +130,6 @@ public final class DomainDeleteFlow implements TransactionalFlow {
@Inject @TargetId String targetId;
@Inject @Superuser boolean isSuperuser;
@Inject DomainHistory.Builder historyBuilder;
@Inject DnsQueue dnsQueue;
@Inject Trid trid;
@Inject AsyncTaskEnqueuer asyncTaskEnqueuer;
@Inject EppResponse.Builder responseBuilder;
@@ -146,8 +146,8 @@ public final class DomainDeleteFlow implements TransactionalFlow {
DateTime now = tm().getTransactionTime();
// Loads the target resource if it exists
Domain existingDomain = loadAndVerifyExistence(Domain.class, targetId, now);
Registry registry = Registry.get(existingDomain.getTld());
verifyDeleteAllowed(existingDomain, registry, now);
Tld tld = Tld.get(existingDomain.getTld());
verifyDeleteAllowed(existingDomain, tld, now);
flowCustomLogic.afterValidation(
AfterValidationParameters.newBuilder().setExistingDomain(existingDomain).build());
ImmutableSet.Builder<ImmutableObject> entitiesToSave = new ImmutableSet.Builder<>();
@@ -160,8 +160,8 @@ public final class DomainDeleteFlow implements TransactionalFlow {
builder = existingDomain.asBuilder();
}
builder.setLastEppUpdateTime(now).setLastEppUpdateRegistrarId(registrarId);
Duration redemptionGracePeriodLength = registry.getRedemptionGracePeriodLength();
Duration pendingDeleteLength = registry.getPendingDeleteLength();
Duration redemptionGracePeriodLength = tld.getRedemptionGracePeriodLength();
Duration pendingDeleteLength = tld.getPendingDeleteLength();
Optional<DomainDeleteSuperuserExtension> domainDeleteSuperuserExtension =
eppInput.getSingleExtension(DomainDeleteSuperuserExtension.class);
if (domainDeleteSuperuserExtension.isPresent()) {
@@ -232,15 +232,15 @@ public final class DomainDeleteFlow implements TransactionalFlow {
// No cancellation is written if the grace period was not for a billable event.
if (gracePeriod.hasBillingEvent()) {
entitiesToSave.add(
BillingEvent.Cancellation.forGracePeriod(gracePeriod, now, domainHistoryId, targetId));
if (gracePeriod.getOneTimeBillingEvent() != null) {
BillingCancellation.forGracePeriod(gracePeriod, now, domainHistoryId, targetId));
if (gracePeriod.getBillingEvent() != null) {
// Take the amount of registration time being refunded off the expiration time.
// This can be either add grace periods or renew grace periods.
BillingEvent.OneTime oneTime = tm().loadByKey(gracePeriod.getOneTimeBillingEvent());
newExpirationTime = newExpirationTime.minusYears(oneTime.getPeriodYears());
} else if (gracePeriod.getRecurringBillingEvent() != null) {
BillingEvent billingEvent = tm().loadByKey(gracePeriod.getBillingEvent());
newExpirationTime = newExpirationTime.minusYears(billingEvent.getPeriodYears());
} else if (gracePeriod.getBillingRecurrence() != null) {
// Take 1 year off the registration if in the autorenew grace period (no need to load the
// recurring billing event; all autorenews are for 1 year).
// recurrence billing event; all autorenews are for 1 year).
newExpirationTime = newExpirationTime.minusYears(1);
}
}
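A minimal standalone sketch of the expiration rollback described in the comments above, using joda-time: the refunded add/renew grace period takes its billed years off the expiration, while an autorenew grace period always rolls back exactly one year. This is an illustration, not the flow's code:

import org.joda.time.DateTime;

public class ExpirationRollbackSketch {
  // billedYears is the period of the refunded add/renew billing event, or null if the grace
  // period carries no one-time billing event.
  static DateTime rollBack(DateTime expiration, Integer billedYears, boolean inAutorenewGracePeriod) {
    if (billedYears != null) {
      return expiration.minusYears(billedYears);
    }
    return inAutorenewGracePeriod ? expiration.minusYears(1) : expiration;
  }

  public static void main(String[] args) {
    DateTime expiration = DateTime.parse("2026-05-20T00:00:00Z");
    System.out.println(rollBack(expiration, 2, false));   // two billed years refunded
    System.out.println(rollBack(expiration, null, true)); // autorenew grace period: minus one year
  }
}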
@@ -249,18 +249,19 @@ public final class DomainDeleteFlow implements TransactionalFlow {
Domain newDomain = builder.build();
DomainHistory domainHistory =
buildDomainHistory(newDomain, registry, now, durationUntilDelete, inAddGracePeriod);
buildDomainHistory(newDomain, tld, now, durationUntilDelete, inAddGracePeriod);
handlePendingTransferOnDelete(existingDomain, newDomain, now, domainHistory);
// Close the autorenew billing event and poll message. This may delete the poll message. Store
// the updated recurring billing event, we'll need it later and can't reload it.
Recurring existingRecurring = tm().loadByKey(existingDomain.getAutorenewBillingEvent());
BillingEvent.Recurring recurringBillingEvent =
// the updated recurrence billing event, we'll need it later and can't reload it.
BillingRecurrence existingBillingRecurrence =
tm().loadByKey(existingDomain.getAutorenewBillingEvent());
BillingRecurrence billingRecurrence =
updateAutorenewRecurrenceEndTime(
existingDomain, existingRecurring, now, domainHistory.getHistoryEntryId());
existingDomain, existingBillingRecurrence, now, domainHistory.getHistoryEntryId());
// If there's a pending transfer, the gaining client's autorenew billing
// event and poll message will already have been deleted in
// ResourceDeleteFlow since it's listed in serverApproveEntities.
dnsQueue.addDomainRefreshTask(existingDomain.getDomainName());
requestDomainDnsRefresh(existingDomain.getDomainName());
entitiesToSave.add(newDomain, domainHistory);
EntityChanges entityChanges =
@@ -280,7 +281,7 @@ public final class DomainDeleteFlow implements TransactionalFlow {
? SUCCESS_WITH_ACTION_PENDING
: SUCCESS)
.setResponseExtensions(
getResponseExtensions(recurringBillingEvent, existingDomain, now))
getResponseExtensions(billingRecurrence, existingDomain, now))
.build());
persistEntityChanges(entityChanges);
return responseBuilder
@@ -289,14 +290,14 @@ public final class DomainDeleteFlow implements TransactionalFlow {
.build();
}
private void verifyDeleteAllowed(Domain existingDomain, Registry registry, DateTime now)
private void verifyDeleteAllowed(Domain existingDomain, Tld tld, DateTime now)
throws EppException {
verifyNoDisallowedStatuses(existingDomain, DISALLOWED_STATUSES);
verifyOptionalAuthInfo(authInfo, existingDomain);
if (!isSuperuser) {
verifyResourceOwnership(registrarId, existingDomain);
verifyNotInPredelegation(registry, now);
checkAllowedAccessToTld(registrarId, registry.getTld().toString());
verifyNotInPredelegation(tld, now);
checkAllowedAccessToTld(registrarId, tld.getTld().toString());
}
if (!existingDomain.getSubordinateHosts().isEmpty()) {
throw new DomainToDeleteHasHostsException();
@@ -305,17 +306,18 @@ public final class DomainDeleteFlow implements TransactionalFlow {
private DomainHistory buildDomainHistory(
Domain domain,
Registry registry,
Tld tld,
DateTime now,
Duration durationUntilDelete,
boolean inAddGracePeriod) {
// We ignore prober transactions
if (registry.getTldType() == TldType.REAL) {
Duration maxGracePeriod = Collections.max(
ImmutableSet.of(
registry.getAddGracePeriodLength(),
registry.getAutoRenewGracePeriodLength(),
registry.getRenewGracePeriodLength()));
if (tld.getTldType() == TldType.REAL) {
Duration maxGracePeriod =
Collections.max(
ImmutableSet.of(
tld.getAddGracePeriodLength(),
tld.getAutoRenewGracePeriodLength(),
tld.getRenewGracePeriodLength()));
ImmutableSet<DomainTransactionRecord> cancelledRecords =
createCancelingRecords(
domain,
@@ -379,7 +381,7 @@ public final class DomainDeleteFlow implements TransactionalFlow {
@Nullable
private ImmutableList<FeeTransformResponseExtension> getResponseExtensions(
BillingEvent.Recurring recurringBillingEvent, Domain existingDomain, DateTime now) {
BillingRecurrence billingRecurrence, Domain existingDomain, DateTime now) {
FeeTransformResponseExtension.Builder feeResponseBuilder = getDeleteResponseBuilder();
if (feeResponseBuilder == null) {
return ImmutableList.of();
@@ -387,7 +389,7 @@ public final class DomainDeleteFlow implements TransactionalFlow {
ImmutableList.Builder<Credit> creditsBuilder = new ImmutableList.Builder<>();
for (GracePeriod gracePeriod : existingDomain.getGracePeriods()) {
if (gracePeriod.hasBillingEvent()) {
Money cost = getGracePeriodCost(recurringBillingEvent, gracePeriod, now);
Money cost = getGracePeriodCost(billingRecurrence, gracePeriod, now);
creditsBuilder.add(Credit.create(
cost.negated().getAmount(), FeeType.CREDIT, gracePeriod.getType().getXmlName()));
feeResponseBuilder.setCurrency(checkNotNull(cost.getCurrencyUnit()));
@@ -401,14 +403,14 @@ public final class DomainDeleteFlow implements TransactionalFlow {
}
private Money getGracePeriodCost(
BillingEvent.Recurring recurringBillingEvent, GracePeriod gracePeriod, DateTime now) {
BillingRecurrence billingRecurrence, GracePeriod gracePeriod, DateTime now) {
if (gracePeriod.getType() == GracePeriodStatus.AUTO_RENEW) {
// If we updated the autorenew billing event, reuse it.
DateTime autoRenewTime =
recurringBillingEvent.getRecurrenceTimeOfYear().getLastInstanceBeforeOrAt(now);
billingRecurrence.getRecurrenceTimeOfYear().getLastInstanceBeforeOrAt(now);
return getDomainRenewCost(targetId, autoRenewTime, 1);
}
return tm().loadByKey(checkNotNull(gracePeriod.getOneTimeBillingEvent())).getCost();
return tm().loadByKey(checkNotNull(gracePeriod.getBillingEvent())).getCost();
}
@Nullable
@@ -428,7 +430,7 @@ public final class DomainDeleteFlow implements TransactionalFlow {
/** Domain to be deleted has subordinate hosts. */
static class DomainToDeleteHasHostsException extends AssociationProhibitsOperationException {
public DomainToDeleteHasHostsException() {
DomainToDeleteHasHostsException() {
super("Domain to be deleted has subordinate hosts");
}
}


@@ -26,12 +26,12 @@ import static com.google.common.collect.Sets.difference;
import static com.google.common.collect.Sets.intersection;
import static com.google.common.collect.Sets.union;
import static google.registry.model.domain.Domain.MAX_REGISTRATION_YEARS;
import static google.registry.model.tld.Registries.findTldForName;
import static google.registry.model.tld.Registries.getTlds;
import static google.registry.model.tld.Registry.TldState.GENERAL_AVAILABILITY;
import static google.registry.model.tld.Registry.TldState.PREDELEGATION;
import static google.registry.model.tld.Registry.TldState.QUIET_PERIOD;
import static google.registry.model.tld.Registry.TldState.START_DATE_SUNRISE;
import static google.registry.model.tld.Tld.TldState.GENERAL_AVAILABILITY;
import static google.registry.model.tld.Tld.TldState.PREDELEGATION;
import static google.registry.model.tld.Tld.TldState.QUIET_PERIOD;
import static google.registry.model.tld.Tld.TldState.START_DATE_SUNRISE;
import static google.registry.model.tld.Tlds.findTldForName;
import static google.registry.model.tld.Tlds.getTlds;
import static google.registry.model.tld.label.ReservationType.ALLOWED_IN_SUNRISE;
import static google.registry.model.tld.label.ReservationType.FULLY_BLOCKED;
import static google.registry.model.tld.label.ReservationType.NAME_COLLISION;
@@ -39,6 +39,7 @@ import static google.registry.model.tld.label.ReservationType.RESERVED_FOR_ANCHO
import static google.registry.model.tld.label.ReservationType.RESERVED_FOR_SPECIFIC_USE;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.pricing.PricingEngineProxy.isDomainPremium;
import static google.registry.util.CollectionUtils.isNullOrEmpty;
import static google.registry.util.CollectionUtils.nullToEmpty;
import static google.registry.util.DateTimeUtils.END_OF_TIME;
import static google.registry.util.DateTimeUtils.isAtOrAfter;
@@ -63,6 +64,7 @@ import com.google.common.collect.Sets;
import com.google.common.collect.Streams;
import com.google.common.net.InternetDomainName;
import google.registry.flows.EppException;
import google.registry.flows.EppException.AssociationProhibitsOperationException;
import google.registry.flows.EppException.AuthorizationErrorException;
import google.registry.flows.EppException.CommandUseErrorException;
import google.registry.flows.EppException.ObjectDoesNotExistException;
@@ -72,12 +74,13 @@ import google.registry.flows.EppException.ParameterValueSyntaxErrorException;
import google.registry.flows.EppException.RequiredParameterMissingException;
import google.registry.flows.EppException.StatusProhibitsOperationException;
import google.registry.flows.EppException.UnimplementedOptionException;
import google.registry.flows.domain.DomainPricingLogic.AllocationTokenInvalidForPremiumNameException;
import google.registry.flows.domain.token.AllocationTokenFlowUtils;
import google.registry.flows.exceptions.ResourceHasClientUpdateProhibitedException;
import google.registry.model.EppResource;
import google.registry.model.billing.BillingEvent;
import google.registry.model.billing.BillingEvent.Flag;
import google.registry.model.billing.BillingEvent.Reason;
import google.registry.model.billing.BillingEvent.Recurring;
import google.registry.model.billing.BillingBase.Flag;
import google.registry.model.billing.BillingBase.Reason;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.contact.Contact;
import google.registry.model.domain.DesignatedContact;
import google.registry.model.domain.DesignatedContact.Type;
@@ -94,6 +97,7 @@ import google.registry.model.domain.fee.BaseFee.FeeType;
import google.registry.model.domain.fee.Credit;
import google.registry.model.domain.fee.Fee;
import google.registry.model.domain.fee.FeeQueryCommandExtensionItem;
import google.registry.model.domain.fee.FeeQueryCommandExtensionItem.CommandName;
import google.registry.model.domain.fee.FeeQueryResponseExtensionItem;
import google.registry.model.domain.fee.FeeTransformCommandExtension;
import google.registry.model.domain.fee.FeeTransformResponseExtension;
@@ -120,9 +124,9 @@ import google.registry.model.registrar.Registrar.State;
import google.registry.model.reporting.DomainTransactionRecord;
import google.registry.model.reporting.DomainTransactionRecord.TransactionReportField;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Registry.TldState;
import google.registry.model.tld.Registry.TldType;
import google.registry.model.tld.Tld;
import google.registry.model.tld.Tld.TldState;
import google.registry.model.tld.Tld.TldType;
import google.registry.model.tld.label.ReservationType;
import google.registry.model.tld.label.ReservedList;
import google.registry.model.tmch.ClaimsList;
@@ -172,8 +176,7 @@ public class DomainFlowUtils {
CharMatcher.inRange('a', 'z').or(CharMatcher.inRange('0', '9').or(CharMatcher.anyOf("-.")));
/** Default validator used to determine if an IDN name can be provisioned on a TLD. */
private static final IdnLabelValidator IDN_LABEL_VALIDATOR =
IdnLabelValidator.createDefaultIdnLabelValidator();
private static final IdnLabelValidator IDN_LABEL_VALIDATOR = new IdnLabelValidator();
/** The maximum number of DS records allowed on a domain. */
private static final int MAX_DS_RECORDS_PER_DOMAIN = 8;
@@ -299,17 +302,17 @@ public class DomainFlowUtils {
}
/** Check if the registrar has the correct billing account map configured. */
public static void checkHasBillingAccount(String registrarId, String tld) throws EppException {
Registry registry = Registry.get(tld);
public static void checkHasBillingAccount(String registrarId, String tldStr) throws EppException {
Tld tld = Tld.get(tldStr);
// Don't enforce the billing account check on test (i.e. prober/OT&E) TLDs.
if (registry.getTldType() == TldType.TEST) {
if (tld.getTldType() == TldType.TEST) {
return;
}
if (!Registrar.loadByRegistrarIdCached(registrarId)
.get()
.getBillingAccountMap()
.containsKey(registry.getCurrency())) {
throw new DomainFlowUtils.MissingBillingAccountMapException(registry.getCurrency());
.containsKey(tld.getCurrency())) {
throw new DomainFlowUtils.MissingBillingAccountMapException(tld.getCurrency());
}
}
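Condensing the check above into a predicate (an editor's sketch using only the types and calls visible in this file, not an actual helper in the codebase): prober/OT&E TLDs are exempt, otherwise the registrar's billing account map must contain an entry for the TLD's currency.

static boolean hasRequiredBillingAccount(Registrar registrar, Tld tld) {
  // TEST TLDs skip the check entirely; everyone else needs the TLD's currency mapped.
  return tld.getTldType() == TldType.TEST
      || registrar.getBillingAccountMap().containsKey(tld.getCurrency());
}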
@@ -409,8 +412,7 @@ public class DomainFlowUtils {
static void validateNameserversCountForTld(String tld, InternetDomainName domainName, int count)
throws EppException {
// For TLDs with a nameserver allow list, all domains must have at least 1 nameserver.
ImmutableSet<String> tldNameserversAllowList =
Registry.get(tld).getAllowedFullyQualifiedHostNames();
ImmutableSet<String> tldNameserversAllowList = Tld.get(tld).getAllowedFullyQualifiedHostNames();
if (!tldNameserversAllowList.isEmpty() && count == 0) {
throw new NameserversNotSpecifiedForTldWithNameserverAllowListException(
domainName.toString());
@@ -467,7 +469,7 @@ public class DomainFlowUtils {
static void validateRegistrantAllowedOnTld(String tld, String registrantContactId)
throws RegistrantNotAllowedException {
ImmutableSet<String> allowedRegistrants = Registry.get(tld).getAllowedRegistrantContactIds();
ImmutableSet<String> allowedRegistrants = Tld.get(tld).getAllowedRegistrantContactIds();
// Empty allow list or null registrantContactId are ignored.
if (registrantContactId != null
&& !allowedRegistrants.isEmpty()
@@ -478,7 +480,7 @@ public class DomainFlowUtils {
static void validateNameserversAllowedOnTld(String tld, Set<String> fullyQualifiedHostNames)
throws EppException {
ImmutableSet<String> allowedHostNames = Registry.get(tld).getAllowedFullyQualifiedHostNames();
ImmutableSet<String> allowedHostNames = Tld.get(tld).getAllowedFullyQualifiedHostNames();
Set<String> hostnames = nullToEmpty(fullyQualifiedHostNames);
if (!allowedHostNames.isEmpty()) { // Empty allow list is ignored.
Set<String> disallowedNameservers = difference(hostnames, allowedHostNames);
@@ -511,13 +513,13 @@ public class DomainFlowUtils {
domainName.parts().get(0), domainName.parent().toString());
}
/** Verifies that a launch extension's specified phase matches the specified registry's phase. */
/** Verifies that a launch extension's specified phase matches the specified tld's phase. */
static void verifyLaunchPhaseMatchesRegistryPhase(
Registry registry, LaunchExtension launchExtension, DateTime now) throws EppException {
Tld tld, LaunchExtension launchExtension, DateTime now) throws EppException {
if (!LAUNCH_PHASE_TO_TLD_STATES.containsKey(launchExtension.getPhase())
|| !LAUNCH_PHASE_TO_TLD_STATES
.get(launchExtension.getPhase())
.contains(registry.getTldState(now))) {
.contains(tld.getTldState(now))) {
// No launch operations are allowed during the quiet period or predelegation.
throw new LaunchPhaseMismatchException();
}
@@ -554,8 +556,8 @@ public class DomainFlowUtils {
* Fills in a builder with the data needed for an autorenew billing event for this domain. This
* does not copy over the id of the current autorenew billing event.
*/
public static BillingEvent.Recurring.Builder newAutorenewBillingEvent(Domain domain) {
return new BillingEvent.Recurring.Builder()
public static BillingRecurrence.Builder newAutorenewBillingEvent(Domain domain) {
return new BillingRecurrence.Builder()
.setReason(Reason.RENEW)
.setFlags(ImmutableSet.of(Flag.AUTO_RENEW))
.setTargetId(domain.getDomainName())
@@ -582,11 +584,11 @@ public class DomainFlowUtils {
* (if opening the message interval). This may cause an autorenew billing event to have an end
* time earlier than its event time (i.e. if it's being ended before it was ever triggered).
*
* <p>Returns the new autorenew recurring billing event.
* <p>Returns the new autorenew recurrence billing event.
*/
public static Recurring updateAutorenewRecurrenceEndTime(
public static BillingRecurrence updateAutorenewRecurrenceEndTime(
Domain domain,
Recurring existingRecurring,
BillingRecurrence existingBillingRecurrence,
DateTime newEndTime,
@Nullable HistoryEntryId historyId) {
Optional<PollMessage.Autorenew> autorenewPollMessage =
@@ -621,9 +623,10 @@ public class DomainFlowUtils {
tm().put(updatedAutorenewPollMessage);
}
Recurring newRecurring = existingRecurring.asBuilder().setRecurrenceEndTime(newEndTime).build();
tm().put(newRecurring);
return newRecurring;
BillingRecurrence newBillingRecurrence =
existingBillingRecurrence.asBuilder().setRecurrenceEndTime(newEndTime).build();
tm().put(newBillingRecurrence);
return newBillingRecurrence;
}
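A typical call site, as the delete and renew flows elsewhere in this diff use it: close the existing recurrence as of now, and keep the returned, updated entity rather than reloading it.

BillingRecurrence closedRecurrence =
    updateAutorenewRecurrenceEndTime(
        existingDomain, existingBillingRecurrence, now, domainHistory.getHistoryEntryId());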
/**
@@ -640,7 +643,7 @@ public class DomainFlowUtils {
DomainPricingLogic pricingLogic,
Optional<AllocationToken> allocationToken,
boolean isAvailable,
@Nullable Recurring recurringBillingEvent)
@Nullable BillingRecurrence billingRecurrence)
throws EppException {
DateTime now = currentDate;
// Use the custom effective date specified in the fee check request, if there is one.
@@ -649,9 +652,9 @@ public class DomainFlowUtils {
builder.setEffectiveDateIfSupported(now);
}
String domainNameString = domainName.toString();
Registry registry = Registry.get(domainName.parent().toString());
Tld tld = Tld.get(domainName.parent().toString());
int years = verifyUnitIsYears(feeRequest.getPeriod()).getValue();
boolean isSunrise = (registry.getTldState(now) == START_DATE_SUNRISE);
boolean isSunrise = (tld.getTldState(now) == START_DATE_SUNRISE);
if (feeRequest.getPhase() != null || feeRequest.getSubphase() != null) {
throw new FeeChecksDontSupportPhasesException();
@@ -659,13 +662,13 @@ public class DomainFlowUtils {
CurrencyUnit currency =
feeRequest.getCurrency() != null ? feeRequest.getCurrency() : topLevelCurrency;
if ((currency != null) && !currency.equals(registry.getCurrency())) {
if ((currency != null) && !currency.equals(tld.getCurrency())) {
throw new CurrencyUnitMismatchException();
}
builder
.setCommand(feeRequest.getCommandName(), feeRequest.getPhase(), feeRequest.getSubphase())
.setCurrencyIfSupported(registry.getCurrency())
.setCurrencyIfSupported(tld.getCurrency())
.setPeriod(feeRequest.getPeriod());
String feeClass = null;
@@ -682,7 +685,7 @@ public class DomainFlowUtils {
fees =
pricingLogic
.getCreatePrice(
registry,
tld,
domainNameString,
now,
years,
@@ -695,7 +698,8 @@ public class DomainFlowUtils {
builder.setAvailIfSupported(true);
fees =
pricingLogic
.getRenewPrice(registry, domainNameString, now, years, recurringBillingEvent)
.getRenewPrice(
tld, domainNameString, now, years, billingRecurrence, allocationToken)
.getFees();
break;
case RESTORE:
@@ -713,7 +717,7 @@ public class DomainFlowUtils {
// restore because they can't be restored in the first place.
boolean isExpired =
domain.isPresent() && domain.get().getRegistrationExpirationTime().isBefore(now);
fees = pricingLogic.getRestorePrice(registry, domainNameString, now, isExpired).getFees();
fees = pricingLogic.getRestorePrice(tld, domainNameString, now, isExpired).getFees();
break;
case TRANSFER:
if (years != 1) {
@@ -721,13 +725,11 @@ public class DomainFlowUtils {
}
builder.setAvailIfSupported(true);
fees =
pricingLogic
.getTransferPrice(registry, domainNameString, now, recurringBillingEvent)
.getFees();
pricingLogic.getTransferPrice(tld, domainNameString, now, billingRecurrence).getFees();
break;
case UPDATE:
builder.setAvailIfSupported(true);
fees = pricingLogic.getUpdatePrice(registry, domainNameString, now).getFees();
fees = pricingLogic.getUpdatePrice(tld, domainNameString, now).getFees();
break;
default:
throw new UnknownFeeCommandException(feeRequest.getUnparsedCommandName());
@@ -738,7 +740,7 @@ public class DomainFlowUtils {
// are returning any premium fees, but only if the fee class isn't already set (i.e. because
// the domain is reserved, which overrides any other classes).
boolean isNameCollisionInSunrise =
registry.getTldState(now).equals(START_DATE_SUNRISE)
tld.getTldState(now).equals(START_DATE_SUNRISE)
&& getReservationTypes(domainName).contains(NAME_COLLISION);
boolean isPremium = fees.stream().anyMatch(BaseFee::isPremium);
feeClass =
@@ -989,7 +991,7 @@ public class DomainFlowUtils {
}
/** Check that the registry phase is not predelegation, during which some flows are forbidden. */
public static void verifyNotInPredelegation(Registry registry, DateTime now)
public static void verifyNotInPredelegation(Tld registry, DateTime now)
throws BadCommandForRegistryPhaseException {
if (registry.getTldState(now) == PREDELEGATION) {
throw new BadCommandForRegistryPhaseException();
@@ -998,17 +1000,17 @@ public class DomainFlowUtils {
/** Validate the contacts and nameservers specified in a domain create command. */
static void validateCreateCommandContactsAndNameservers(
Create command, Registry registry, InternetDomainName domainName) throws EppException {
Create command, Tld tld, InternetDomainName domainName) throws EppException {
verifyNotInPendingDelete(
command.getContacts(), command.getRegistrant(), command.getNameservers());
validateContactsHaveTypes(command.getContacts());
String tld = registry.getTldStr();
validateRegistrantAllowedOnTld(tld, command.getRegistrantContactId());
String tldStr = tld.getTldStr();
validateRegistrantAllowedOnTld(tldStr, command.getRegistrantContactId());
validateNoDuplicateContacts(command.getContacts());
validateRequiredContactsPresent(command.getRegistrant(), command.getContacts());
ImmutableSet<String> hostNames = command.getNameserverHostNames();
validateNameserversCountForTld(tld, domainName, hostNames.size());
validateNameserversAllowedOnTld(tld, hostNames);
validateNameserversCountForTld(tldStr, domainName, hostNames.size());
validateNameserversAllowedOnTld(tldStr, hostNames);
}
/** Validate the secDNS extension, if present. */
@@ -1057,10 +1059,9 @@ public class DomainFlowUtils {
}
/** Check that the claims period hasn't ended. */
static void verifyClaimsPeriodNotEnded(Registry registry, DateTime now)
throws ClaimsPeriodEndedException {
if (isAtOrAfter(now, registry.getClaimsPeriodEnd())) {
throw new ClaimsPeriodEndedException(registry.getTldStr());
static void verifyClaimsPeriodNotEnded(Tld tld, DateTime now) throws ClaimsPeriodEndedException {
if (isAtOrAfter(now, tld.getClaimsPeriodEnd())) {
throw new ClaimsPeriodEndedException(tld.getTldStr());
}
}
@@ -1191,6 +1192,52 @@ public class DomainFlowUtils {
.getResultList();
}
/**
* Checks if there is a valid default token to be used for a domain create command.
*
* <p>If there is more than one valid default token for the registration, only the first valid
* token found on the TLD's default token list will be returned.
*/
public static Optional<AllocationToken> checkForDefaultToken(
Tld tld, String domainName, CommandName commandName, String registrarId, DateTime now)
throws EppException {
if (isNullOrEmpty(tld.getDefaultPromoTokens())) {
return Optional.empty();
}
Map<VKey<AllocationToken>, Optional<AllocationToken>> tokens =
AllocationToken.getAll(tld.getDefaultPromoTokens());
ImmutableList<Optional<AllocationToken>> tokenList =
tld.getDefaultPromoTokens().stream()
.map(tokens::get)
.filter(Optional::isPresent)
.collect(toImmutableList());
checkState(
!isNullOrEmpty(tokenList),
"Failure while loading default TLD promotions from the database");
// Check if any of the tokens are valid for this domain registration
for (Optional<AllocationToken> token : tokenList) {
try {
AllocationTokenFlowUtils.validateToken(
InternetDomainName.from(domainName),
token.get(),
commandName,
registrarId,
isDomainPremium(domainName, now),
now);
} catch (AssociationProhibitsOperationException
| StatusProhibitsOperationException
| AllocationTokenInvalidForPremiumNameException e) {
// Allocation token was not valid for this registration, continue to check the next token in
// the list
continue;
}
// Only use the first valid token in the list
return token;
}
// No valid default token found
return Optional.empty();
}
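A usage sketch mirroring the fallback that the DomainRenewFlow change later in this diff adopts; tld, domainName, registrarId, and now are supplied by the surrounding flow, and allocationToken starts as whatever the client-provided token extension yielded (possibly empty).

boolean defaultTokenUsed = false;
if (!allocationToken.isPresent()) {
  allocationToken =
      DomainFlowUtils.checkForDefaultToken(tld, domainName, CommandName.RENEW, registrarId, now);
  // Track that a default promo token, not a client-supplied one, is being applied.
  defaultTokenUsed = allocationToken.isPresent();
}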
/** Resource linked to this domain does not exist. */
static class LinkedResourcesDoNotExistException extends ObjectDoesNotExistException {
public LinkedResourcesDoNotExistException(Class<?> type, ImmutableSet<String> resourceIds) {


@@ -16,6 +16,7 @@ package google.registry.flows.domain;
import static com.google.common.base.Preconditions.checkArgument;
import static google.registry.flows.domain.DomainFlowUtils.zeroInCurrency;
import static google.registry.flows.domain.token.AllocationTokenFlowUtils.validateTokenForPossiblePremiumName;
import static google.registry.pricing.PricingEngineProxy.getPricesForDomainName;
import static google.registry.util.DomainNameUtils.getTldFromDomainName;
import static google.registry.util.PreconditionsUtils.checkArgumentPresent;
@@ -29,13 +30,14 @@ import google.registry.flows.custom.DomainPricingCustomLogic.RenewPriceParameter
import google.registry.flows.custom.DomainPricingCustomLogic.RestorePriceParameters;
import google.registry.flows.custom.DomainPricingCustomLogic.TransferPriceParameters;
import google.registry.flows.custom.DomainPricingCustomLogic.UpdatePriceParameters;
import google.registry.model.billing.BillingEvent.Recurring;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.domain.fee.BaseFee;
import google.registry.model.domain.fee.BaseFee.FeeType;
import google.registry.model.domain.fee.Fee;
import google.registry.model.domain.token.AllocationToken;
import google.registry.model.domain.token.AllocationToken.TokenBehavior;
import google.registry.model.pricing.PremiumPricingEngine.DomainPrices;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import java.math.RoundingMode;
import java.util.Optional;
import javax.annotation.Nullable;
@@ -65,14 +67,14 @@ public final class DomainPricingLogic {
* applied to the first year.
*/
FeesAndCredits getCreatePrice(
Registry registry,
Tld tld,
String domainName,
DateTime dateTime,
int years,
boolean isAnchorTenant,
Optional<AllocationToken> allocationToken)
throws EppException {
CurrencyUnit currency = registry.getCurrency();
CurrencyUnit currency = tld.getCurrency();
BaseFee createFeeOrCredit;
// Domain create cost is always zero for anchor tenants
@@ -87,7 +89,7 @@ public final class DomainPricingLogic {
}
// Create fees for the cost and the EAP fee, if any.
Fee eapFee = registry.getEapFeeFor(dateTime);
Fee eapFee = tld.getEapFeeFor(dateTime);
FeesAndCredits.Builder feesBuilder =
new FeesAndCredits.Builder().setCurrency(currency).addFeeOrCredit(createFeeOrCredit);
// Don't charge anchor tenants EAP fees.
@@ -99,7 +101,7 @@ public final class DomainPricingLogic {
return customLogic.customizeCreatePrice(
CreatePriceParameters.newBuilder()
.setFeesAndCredits(feesBuilder.build())
.setRegistry(registry)
.setTld(tld)
.setDomainName(InternetDomainName.from(domainName))
.setAsOfDate(dateTime)
.setYears(years)
@@ -108,51 +110,55 @@ public final class DomainPricingLogic {
/** Returns a new renewal cost for the pricer. */
public FeesAndCredits getRenewPrice(
Registry registry,
Tld tld,
String domainName,
DateTime dateTime,
int years,
@Nullable Recurring recurringBillingEvent) {
@Nullable BillingRecurrence billingRecurrence,
Optional<AllocationToken> allocationToken)
throws AllocationTokenInvalidForPremiumNameException {
checkArgument(years > 0, "Number of years must be positive");
Money renewCost;
DomainPrices domainPrices = getPricesForDomainName(domainName, dateTime);
boolean isRenewCostPremiumPrice;
// recurring billing event is null if the domain is still available. Billing events are created
// recurrence is null if the domain is still available. Billing events are created
// in the process of domain creation.
if (recurringBillingEvent == null) {
DomainPrices domainPrices = getPricesForDomainName(domainName, dateTime);
renewCost = domainPrices.getRenewCost().multipliedBy(years);
if (billingRecurrence == null) {
renewCost = getDomainRenewCostWithDiscount(domainPrices, years, allocationToken);
isRenewCostPremiumPrice = domainPrices.isPremium();
} else {
switch (recurringBillingEvent.getRenewalPriceBehavior()) {
switch (billingRecurrence.getRenewalPriceBehavior()) {
case DEFAULT:
DomainPrices domainPrices = getPricesForDomainName(domainName, dateTime);
renewCost = domainPrices.getRenewCost().multipliedBy(years);
renewCost = getDomainRenewCostWithDiscount(domainPrices, years, allocationToken);
isRenewCostPremiumPrice = domainPrices.isPremium();
break;
// if the renewal price behavior is specified, then the renewal price should be the same
// as the creation price, which is stored in the billing event as the renewal price
case SPECIFIED:
checkArgumentPresent(
recurringBillingEvent.getRenewalPrice(),
billingRecurrence.getRenewalPrice(),
"Unexpected behavior: renewal price cannot be null when renewal behavior is"
+ " SPECIFIED");
renewCost = recurringBillingEvent.getRenewalPrice().get().multipliedBy(years);
// Don't apply allocation token to renewal price when SPECIFIED
renewCost = billingRecurrence.getRenewalPrice().get().multipliedBy(years);
isRenewCostPremiumPrice = false;
break;
// if the renewal price behavior is nonpremium, it means that the domain should be renewed
// at standard price of domains at the time, even if the domain is premium
case NONPREMIUM:
renewCost =
Registry.get(getTldFromDomainName(domainName))
.getStandardRenewCost(dateTime)
.multipliedBy(years);
getDomainCostWithDiscount(
false,
years,
allocationToken,
Tld.get(getTldFromDomainName(domainName)).getStandardRenewCost(dateTime));
isRenewCostPremiumPrice = false;
break;
default:
throw new IllegalArgumentException(
String.format(
"Unknown RenewalPriceBehavior enum value: %s",
recurringBillingEvent.getRenewalPriceBehavior()));
billingRecurrence.getRenewalPriceBehavior()));
}
}
return customLogic.customizeRenewPrice(
@@ -163,7 +169,7 @@ public final class DomainPricingLogic {
.addFeeOrCredit(
Fee.create(renewCost.getAmount(), FeeType.RENEW, isRenewCostPremiumPrice))
.build())
.setRegistry(registry)
.setTld(tld)
.setDomainName(InternetDomainName.from(domainName))
.setAsOfDate(dateTime)
.setYears(years)
@@ -171,15 +177,14 @@ public final class DomainPricingLogic {
}
/** Returns a new restore price for the pricer. */
FeesAndCredits getRestorePrice(
Registry registry, String domainName, DateTime dateTime, boolean isExpired)
FeesAndCredits getRestorePrice(Tld tld, String domainName, DateTime dateTime, boolean isExpired)
throws EppException {
DomainPrices domainPrices = getPricesForDomainName(domainName, dateTime);
FeesAndCredits.Builder feesAndCredits =
new FeesAndCredits.Builder()
.setCurrency(registry.getCurrency())
.setCurrency(tld.getCurrency())
.addFeeOrCredit(
Fee.create(registry.getStandardRestoreCost().getAmount(), FeeType.RESTORE, false));
Fee.create(tld.getStandardRestoreCost().getAmount(), FeeType.RESTORE, false));
if (isExpired) {
feesAndCredits.addFeeOrCredit(
Fee.create(
@@ -188,7 +193,7 @@ public final class DomainPricingLogic {
return customLogic.customizeRestorePrice(
RestorePriceParameters.newBuilder()
.setFeesAndCredits(feesAndCredits.build())
.setRegistry(registry)
.setTld(tld)
.setDomainName(InternetDomainName.from(domainName))
.setAsOfDate(dateTime)
.build());
@@ -196,34 +201,30 @@ public final class DomainPricingLogic {
/** Returns a new transfer price for the pricer. */
FeesAndCredits getTransferPrice(
Registry registry,
String domainName,
DateTime dateTime,
@Nullable Recurring recurringBillingEvent)
Tld tld, String domainName, DateTime dateTime, @Nullable BillingRecurrence billingRecurrence)
throws EppException {
FeesAndCredits renewPrice =
getRenewPrice(registry, domainName, dateTime, 1, recurringBillingEvent);
getRenewPrice(tld, domainName, dateTime, 1, billingRecurrence, Optional.empty());
return customLogic.customizeTransferPrice(
TransferPriceParameters.newBuilder()
.setFeesAndCredits(
new FeesAndCredits.Builder()
.setCurrency(registry.getCurrency())
.setCurrency(tld.getCurrency())
.addFeeOrCredit(
Fee.create(
renewPrice.getRenewCost().getAmount(),
FeeType.RENEW,
renewPrice.hasAnyPremiumFees()))
.build())
.setRegistry(registry)
.setTld(tld)
.setDomainName(InternetDomainName.from(domainName))
.setAsOfDate(dateTime)
.build());
}
/** Returns a new update price for the pricer. */
FeesAndCredits getUpdatePrice(Registry registry, String domainName, DateTime dateTime)
throws EppException {
CurrencyUnit currency = registry.getCurrency();
FeesAndCredits getUpdatePrice(Tld tld, String domainName, DateTime dateTime) throws EppException {
CurrencyUnit currency = tld.getCurrency();
BaseFee feeOrCredit = Fee.create(zeroInCurrency(currency), FeeType.UPDATE, false);
return customLogic.customizeUpdatePrice(
UpdatePriceParameters.newBuilder()
@@ -232,7 +233,7 @@ public final class DomainPricingLogic {
.setCurrency(currency)
.setFeesAndCredits(feeOrCredit)
.build())
.setRegistry(registry)
.setTld(tld)
.setDomainName(InternetDomainName.from(domainName))
.setAsOfDate(dateTime)
.build());
@@ -242,32 +243,42 @@ public final class DomainPricingLogic {
private Money getDomainCreateCostWithDiscount(
DomainPrices domainPrices, int years, Optional<AllocationToken> allocationToken)
throws EppException {
if (allocationToken.isPresent()
&& allocationToken.get().getDiscountFraction() != 0.0
&& domainPrices.isPremium()
&& !allocationToken.get().shouldDiscountPremiums()) {
throw new AllocationTokenInvalidForPremiumNameException();
}
Money oneYearCreateCost = domainPrices.getCreateCost();
Money totalDomainCreateCost = oneYearCreateCost.multipliedBy(years);
return getDomainCostWithDiscount(
domainPrices.isPremium(), years, allocationToken, domainPrices.getCreateCost());
}
/** Returns the domain renew cost with allocation-token-related discounts applied. */
private Money getDomainRenewCostWithDiscount(
DomainPrices domainPrices, int years, Optional<AllocationToken> allocationToken)
throws AllocationTokenInvalidForPremiumNameException {
return getDomainCostWithDiscount(
domainPrices.isPremium(), years, allocationToken, domainPrices.getRenewCost());
}
private Money getDomainCostWithDiscount(
boolean isPremium, int years, Optional<AllocationToken> allocationToken, Money oneYearCost)
throws AllocationTokenInvalidForPremiumNameException {
validateTokenForPossiblePremiumName(allocationToken, isPremium);
Money totalDomainFlowCost = oneYearCost.multipliedBy(years);
// Apply the allocation token discount, if applicable.
if (allocationToken.isPresent()) {
if (allocationToken.isPresent()
&& allocationToken.get().getTokenBehavior().equals(TokenBehavior.DEFAULT)) {
int discountedYears = Math.min(years, allocationToken.get().getDiscountYears());
Money discount =
oneYearCreateCost.multipliedBy(
oneYearCost.multipliedBy(
discountedYears * allocationToken.get().getDiscountFraction(),
RoundingMode.HALF_EVEN);
totalDomainCreateCost = totalDomainCreateCost.minus(discount);
totalDomainFlowCost = totalDomainFlowCost.minus(discount);
}
return totalDomainCreateCost;
return totalDomainFlowCost;
}
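A worked standalone example of the discount arithmetic in getDomainCostWithDiscount above (editor-added, joda-money assumed on the classpath): the discount is the one-year cost times min(years, discountYears) times the token's discount fraction, taken off the undiscounted multi-year total.

import java.math.RoundingMode;
import org.joda.money.CurrencyUnit;
import org.joda.money.Money;

public class TokenDiscountSketch {
  static Money discountedTotal(
      Money oneYearCost, int years, int discountYears, double discountFraction) {
    Money total = oneYearCost.multipliedBy(years);
    int discounted = Math.min(years, discountYears);
    Money discount =
        oneYearCost.multipliedBy(discounted * discountFraction, RoundingMode.HALF_EVEN);
    return total.minus(discount);
  }

  public static void main(String[] args) {
    Money oneYear = Money.of(CurrencyUnit.USD, 10.00);
    // Three-year create with a token that discounts one year at 50%: 30.00 - 5.00 = 25.00.
    System.out.println(discountedTotal(oneYear, 3, 1, 0.5));
  }
}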
/** An allocation token was provided that is invalid for premium domains. */
public static class AllocationTokenInvalidForPremiumNameException
extends CommandUseErrorException {
AllocationTokenInvalidForPremiumNameException() {
super("A nonzero discount code cannot be applied to premium domains");
public AllocationTokenInvalidForPremiumNameException() {
super("Token not valid for premium name");
}
}
}

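Pulling the getRenewPrice switch above into one place: a condensed sketch (editor-added, names simplified) of which base price feeds the renewal fee for each RenewalPriceBehavior; the token-discount step handled by getDomainRenewCostWithDiscount is omitted here.

import java.util.Optional;
import org.joda.money.Money;

public class RenewPriceSourceSketch {
  enum RenewalPriceBehavior { DEFAULT, SPECIFIED, NONPREMIUM }

  static Money renewBasePrice(
      RenewalPriceBehavior behavior,
      Money listRenewCost,         // from the premium/standard price list for the domain
      Money standardRenewCost,     // the TLD's standard (non-premium) renew cost
      Optional<Money> storedPrice, // the price pinned on the recurrence, if any
      int years) {
    switch (behavior) {
      case SPECIFIED:
        return storedPrice.get().multipliedBy(years); // never token-discounted
      case NONPREMIUM:
        return standardRenewCost.multipliedBy(years); // token discount may still apply
      case DEFAULT:
      default:
        return listRenewCost.multipliedBy(years);     // token discount may still apply
    }
  }
}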

@@ -54,10 +54,9 @@ import google.registry.flows.custom.DomainRenewFlowCustomLogic.BeforeSaveParamet
import google.registry.flows.custom.EntityChanges;
import google.registry.flows.domain.token.AllocationTokenFlowUtils;
import google.registry.model.ImmutableObject;
import google.registry.model.billing.BillingBase.Reason;
import google.registry.model.billing.BillingEvent;
import google.registry.model.billing.BillingEvent.OneTime;
import google.registry.model.billing.BillingEvent.Reason;
import google.registry.model.billing.BillingEvent.Recurring;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.domain.Domain;
import google.registry.model.domain.DomainCommand.Renew;
import google.registry.model.domain.DomainHistory;
@@ -66,6 +65,7 @@ import google.registry.model.domain.GracePeriod;
import google.registry.model.domain.Period;
import google.registry.model.domain.fee.BaseFee.FeeType;
import google.registry.model.domain.fee.Fee;
import google.registry.model.domain.fee.FeeQueryCommandExtensionItem.CommandName;
import google.registry.model.domain.fee.FeeRenewCommandExtension;
import google.registry.model.domain.fee.FeeTransformResponseExtension;
import google.registry.model.domain.metadata.MetadataExtension;
@@ -83,7 +83,7 @@ import google.registry.model.reporting.DomainTransactionRecord;
import google.registry.model.reporting.DomainTransactionRecord.TransactionReportField;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import google.registry.model.reporting.IcannReportingTypes.ActivityReportField;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import java.util.Optional;
import javax.inject.Inject;
import org.joda.money.Money;
@@ -172,13 +172,25 @@ public final class DomainRenewFlow implements TransactionalFlow {
Renew command = (Renew) resourceCommand;
// Loads the target resource if it exists
Domain existingDomain = loadAndVerifyExistence(Domain.class, targetId, now);
String tldStr = existingDomain.getTld();
Tld tld = Tld.get(tldStr);
Optional<AllocationToken> allocationToken =
allocationTokenFlowUtils.verifyAllocationTokenIfPresent(
existingDomain,
Registry.get(existingDomain.getTld()),
tld,
registrarId,
now,
CommandName.RENEW,
eppInput.getSingleExtension(AllocationTokenExtension.class));
boolean defaultTokenUsed = false;
if (!allocationToken.isPresent()) {
allocationToken =
DomainFlowUtils.checkForDefaultToken(
tld, existingDomain.getDomainName(), CommandName.RENEW, registrarId, now);
if (allocationToken.isPresent()) {
defaultTokenUsed = true;
}
}
verifyRenewAllowed(authInfo, existingDomain, command, allocationToken);
// If client passed an applicable static token this updates the domain
@@ -190,16 +202,17 @@ public final class DomainRenewFlow implements TransactionalFlow {
validateRegistrationPeriod(now, newExpirationTime);
Optional<FeeRenewCommandExtension> feeRenew =
eppInput.getSingleExtension(FeeRenewCommandExtension.class);
Recurring existingRecurringBillingEvent =
BillingRecurrence existingBillingRecurrence =
tm().loadByKey(existingDomain.getAutorenewBillingEvent());
FeesAndCredits feesAndCredits =
pricingLogic.getRenewPrice(
Registry.get(existingDomain.getTld()),
Tld.get(existingDomain.getTld()),
targetId,
now,
years,
existingRecurringBillingEvent);
validateFeeChallenge(feeRenew, feesAndCredits, false);
existingBillingRecurrence,
allocationToken);
validateFeeChallenge(feeRenew, feesAndCredits, defaultTokenUsed);
flowCustomLogic.afterValidation(
AfterValidationParameters.newBuilder()
.setExistingDomain(existingDomain)
@@ -208,17 +221,16 @@ public final class DomainRenewFlow implements TransactionalFlow {
.build());
HistoryEntryId domainHistoryId = createHistoryEntryId(existingDomain);
historyBuilder.setRevisionId(domainHistoryId.getRevisionId());
String tld = existingDomain.getTld();
// Bill for this explicit renew itself.
BillingEvent.OneTime explicitRenewEvent =
BillingEvent explicitRenewEvent =
createRenewBillingEvent(
tld, feesAndCredits.getTotalCost(), years, domainHistoryId, allocationToken, now);
tldStr, feesAndCredits.getTotalCost(), years, domainHistoryId, allocationToken, now);
// Create a new autorenew billing event and poll message starting at the new expiration time.
BillingEvent.Recurring newAutorenewEvent =
BillingRecurrence newAutorenewEvent =
newAutorenewBillingEvent(existingDomain)
.setEventTime(newExpirationTime)
.setRenewalPrice(existingRecurringBillingEvent.getRenewalPrice().orElse(null))
.setRenewalPriceBehavior(existingRecurringBillingEvent.getRenewalPriceBehavior())
.setRenewalPrice(existingBillingRecurrence.getRenewalPrice().orElse(null))
.setRenewalPriceBehavior(existingBillingRecurrence.getRenewalPriceBehavior())
.setDomainHistoryId(domainHistoryId)
.build();
PollMessage.Autorenew newAutorenewPollMessage =
@@ -227,8 +239,8 @@ public final class DomainRenewFlow implements TransactionalFlow {
.setDomainHistoryId(domainHistoryId)
.build();
// End the old autorenew billing event and poll message now. This may delete the poll message.
Recurring existingRecurring = tm().loadByKey(existingDomain.getAutorenewBillingEvent());
updateAutorenewRecurrenceEndTime(existingDomain, existingRecurring, now, domainHistoryId);
updateAutorenewRecurrenceEndTime(
existingDomain, existingBillingRecurrence, now, domainHistoryId);
Domain newDomain =
existingDomain
.asBuilder()
@@ -241,10 +253,8 @@ public final class DomainRenewFlow implements TransactionalFlow {
GracePeriod.forBillingEvent(
GracePeriodStatus.RENEW, existingDomain.getRepoId(), explicitRenewEvent))
.build();
Registry registry = Registry.get(existingDomain.getTld());
DomainHistory domainHistory =
buildDomainHistory(
newDomain, now, command.getPeriod(), registry.getRenewGracePeriodLength());
buildDomainHistory(newDomain, now, command.getPeriod(), tld.getRenewGracePeriodLength());
ImmutableSet.Builder<ImmutableObject> entitiesToSave = new ImmutableSet.Builder<>();
entitiesToSave.add(
newDomain, domainHistory, explicitRenewEvent, newAutorenewEvent, newAutorenewPollMessage);
@@ -327,14 +337,14 @@ public final class DomainRenewFlow implements TransactionalFlow {
}
}
private OneTime createRenewBillingEvent(
private BillingEvent createRenewBillingEvent(
String tld,
Money renewCost,
int years,
HistoryEntryId domainHistoryId,
Optional<AllocationToken> allocationToken,
DateTime now) {
return new BillingEvent.OneTime.Builder()
return new BillingEvent.Builder()
.setReason(Reason.RENEW)
.setTargetId(targetId)
.setRegistrarId(registrarId)
@@ -346,7 +356,7 @@ public final class DomainRenewFlow implements TransactionalFlow {
.filter(t -> AllocationToken.TokenBehavior.DEFAULT.equals(t.getTokenBehavior()))
.map(AllocationToken::createVKey)
.orElse(null))
.setBillingTime(now.plus(Registry.get(tld).getRenewGracePeriodLength()))
.setBillingTime(now.plus(Tld.get(tld).getRenewGracePeriodLength()))
.setDomainHistoryId(domainHistoryId)
.build();
}


@@ -14,6 +14,7 @@
package google.registry.flows.domain;
import static google.registry.dns.DnsUtils.requestDomainDnsRefresh;
import static google.registry.flows.FlowUtils.createHistoryEntryId;
import static google.registry.flows.FlowUtils.validateRegistrarIsLoggedIn;
import static google.registry.flows.ResourceFlowUtils.loadAndVerifyExistence;
@@ -34,7 +35,6 @@ import static google.registry.util.DateTimeUtils.END_OF_TIME;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.google.common.net.InternetDomainName;
import google.registry.dns.DnsQueue;
import google.registry.flows.EppException;
import google.registry.flows.EppException.CommandUseErrorException;
import google.registry.flows.EppException.StatusProhibitsOperationException;
@@ -45,9 +45,9 @@ import google.registry.flows.FlowModule.TargetId;
import google.registry.flows.TransactionalFlow;
import google.registry.flows.annotations.ReportingSpec;
import google.registry.model.ImmutableObject;
import google.registry.model.billing.BillingBase.Reason;
import google.registry.model.billing.BillingEvent;
import google.registry.model.billing.BillingEvent.OneTime;
import google.registry.model.billing.BillingEvent.Reason;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.domain.Domain;
import google.registry.model.domain.DomainCommand.Update;
import google.registry.model.domain.DomainHistory;
@@ -67,7 +67,7 @@ import google.registry.model.reporting.DomainTransactionRecord;
import google.registry.model.reporting.DomainTransactionRecord.TransactionReportField;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import google.registry.model.reporting.IcannReportingTypes.ActivityReportField;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import java.util.Optional;
import javax.inject.Inject;
import org.joda.money.Money;
@@ -122,7 +122,6 @@ public final class DomainRestoreRequestFlow implements TransactionalFlow {
@Inject @TargetId String targetId;
@Inject @Superuser boolean isSuperuser;
@Inject DomainHistory.Builder historyBuilder;
@Inject DnsQueue dnsQueue;
@Inject EppResponse.Builder responseBuilder;
@Inject DomainPricingLogic pricingLogic;
@Inject DomainRestoreRequestFlow() {}
@@ -141,8 +140,7 @@ public final class DomainRestoreRequestFlow implements TransactionalFlow {
Domain existingDomain = loadAndVerifyExistence(Domain.class, targetId, now);
boolean isExpired = existingDomain.getRegistrationExpirationTime().isBefore(now);
FeesAndCredits feesAndCredits =
pricingLogic.getRestorePrice(
Registry.get(existingDomain.getTld()), targetId, now, isExpired);
pricingLogic.getRestorePrice(Tld.get(existingDomain.getTld()), targetId, now, isExpired);
Optional<FeeUpdateCommandExtension> feeUpdate =
eppInput.getSingleExtension(FeeUpdateCommandExtension.class);
verifyRestoreAllowed(command, existingDomain, feeUpdate, feesAndCredits, now);
@@ -162,7 +160,7 @@ public final class DomainRestoreRequestFlow implements TransactionalFlow {
entitiesToSave.add(
createRestoreBillingEvent(domainHistoryId, feesAndCredits.getRestoreCost(), now));
BillingEvent.Recurring autorenewEvent =
BillingRecurrence autorenewEvent =
newAutorenewBillingEvent(existingDomain)
.setEventTime(newExpirationTime)
.setRecurrenceEndTime(END_OF_TIME)
@@ -186,7 +184,7 @@ public final class DomainRestoreRequestFlow implements TransactionalFlow {
entitiesToSave.add(newDomain, domainHistory, autorenewEvent, autorenewPollMessage);
tm().putAll(entitiesToSave.build());
tm().delete(existingDomain.getDeletePollMessage());
dnsQueue.addDomainRefreshTask(existingDomain.getDomainName());
requestDomainDnsRefresh(existingDomain.getDomainName());
return responseBuilder
.setExtensions(createResponseExtensions(feesAndCredits, feeUpdate, isExpired))
.build();
@@ -232,7 +230,7 @@ public final class DomainRestoreRequestFlow implements TransactionalFlow {
private static Domain performRestore(
Domain existingDomain,
DateTime newExpirationTime,
BillingEvent.Recurring autorenewEvent,
BillingRecurrence autorenewEvent,
PollMessage.Autorenew autorenewPollMessage,
DateTime now,
String registrarId) {
@@ -253,19 +251,19 @@ public final class DomainRestoreRequestFlow implements TransactionalFlow {
.build();
}
private OneTime createRenewBillingEvent(
private BillingEvent createRenewBillingEvent(
HistoryEntryId domainHistoryId, Money renewCost, DateTime now) {
return prepareBillingEvent(domainHistoryId, renewCost, now).setReason(Reason.RENEW).build();
}
private BillingEvent.OneTime createRestoreBillingEvent(
private BillingEvent createRestoreBillingEvent(
HistoryEntryId domainHistoryId, Money restoreCost, DateTime now) {
return prepareBillingEvent(domainHistoryId, restoreCost, now).setReason(Reason.RESTORE).build();
}
private OneTime.Builder prepareBillingEvent(
private BillingEvent.Builder prepareBillingEvent(
HistoryEntryId domainHistoryId, Money cost, DateTime now) {
return new BillingEvent.OneTime.Builder()
return new BillingEvent.Builder()
.setTargetId(targetId)
.setRegistrarId(registrarId)
.setEventTime(now)
@@ -306,14 +304,14 @@ public final class DomainRestoreRequestFlow implements TransactionalFlow {
/** Restore command cannot have other changes specified. */
static class RestoreCommandIncludesChangesException extends CommandUseErrorException {
public RestoreCommandIncludesChangesException() {
RestoreCommandIncludesChangesException() {
super("Restore command cannot have other changes specified");
}
}
/** Domain is not eligible for restore. */
static class DomainNotEligibleForRestoreException extends StatusProhibitsOperationException {
public DomainNotEligibleForRestoreException() {
DomainNotEligibleForRestoreException() {
super("Domain is not eligible for restore");
}
}

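A minimal sketch (not code from this PR) of what a caller of the consolidated builder looks like: BillingEvent.OneTime.Builder becomes BillingEvent.Builder, and Reason now comes from BillingBase. Setter names are taken from the hunks above; the domain and registrar values are invented for illustration and imports are omitted.

private static BillingEvent restoreChargeSketch(
    HistoryEntryId domainHistoryId, Money restoreCost, DateTime now) {
  // Previously: new BillingEvent.OneTime.Builder(); Reason now comes from BillingBase.
  return new BillingEvent.Builder()
      .setReason(Reason.RESTORE)
      .setTargetId("example.tld")       // hypothetical domain name
      .setRegistrarId("TheRegistrar")   // hypothetical registrar id
      .setCost(restoreCost)
      .setEventTime(now)
      .setBillingTime(now)
      .setDomainHistoryId(domainHistoryId)
      .build();
}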

@@ -45,14 +45,16 @@ import google.registry.flows.TransactionalFlow;
import google.registry.flows.annotations.ReportingSpec;
import google.registry.flows.domain.token.AllocationTokenFlowUtils;
import google.registry.model.ImmutableObject;
import google.registry.model.billing.BillingBase.Flag;
import google.registry.model.billing.BillingBase.Reason;
import google.registry.model.billing.BillingBase.RenewalPriceBehavior;
import google.registry.model.billing.BillingCancellation;
import google.registry.model.billing.BillingEvent;
import google.registry.model.billing.BillingEvent.Flag;
import google.registry.model.billing.BillingEvent.Reason;
import google.registry.model.billing.BillingEvent.Recurring;
import google.registry.model.billing.BillingEvent.RenewalPriceBehavior;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.domain.Domain;
import google.registry.model.domain.DomainHistory;
import google.registry.model.domain.GracePeriod;
import google.registry.model.domain.fee.FeeQueryCommandExtensionItem.CommandName;
import google.registry.model.domain.metadata.MetadataExtension;
import google.registry.model.domain.rgp.GracePeriodStatus;
import google.registry.model.domain.token.AllocationTokenExtension;
@@ -63,7 +65,7 @@ import google.registry.model.poll.PollMessage;
import google.registry.model.reporting.DomainTransactionRecord;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import google.registry.model.reporting.IcannReportingTypes.ActivityReportField;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.model.transfer.DomainTransferData;
import google.registry.model.transfer.TransferStatus;
import java.util.Optional;
@@ -132,31 +134,34 @@ public final class DomainTransferApproveFlow implements TransactionalFlow {
Domain existingDomain = loadAndVerifyExistence(Domain.class, targetId, now);
allocationTokenFlowUtils.verifyAllocationTokenIfPresent(
existingDomain,
Registry.get(existingDomain.getTld()),
Tld.get(existingDomain.getTld()),
registrarId,
now,
CommandName.TRANSFER,
eppInput.getSingleExtension(AllocationTokenExtension.class));
verifyOptionalAuthInfo(authInfo, existingDomain);
verifyHasPendingTransfer(existingDomain);
verifyResourceOwnership(registrarId, existingDomain);
String tld = existingDomain.getTld();
String tldStr = existingDomain.getTld();
if (!isSuperuser) {
checkAllowedAccessToTld(registrarId, tld);
checkAllowedAccessToTld(registrarId, tldStr);
}
DomainTransferData transferData = existingDomain.getTransferData();
String gainingRegistrarId = transferData.getGainingRegistrarId();
// Create a transfer billing event for 1 year, unless the superuser extension was used to set
// the transfer period to zero. There is not a transfer cost if the transfer period is zero.
Recurring existingRecurring = tm().loadByKey(existingDomain.getAutorenewBillingEvent());
BillingRecurrence existingBillingRecurrence =
tm().loadByKey(existingDomain.getAutorenewBillingEvent());
HistoryEntryId domainHistoryId = createHistoryEntryId(existingDomain);
historyBuilder.setRevisionId(domainHistoryId.getRevisionId());
boolean hasPackageToken = existingDomain.getCurrentPackageToken().isPresent();
Money renewalPrice = hasPackageToken ? null : existingRecurring.getRenewalPrice().orElse(null);
Optional<BillingEvent.OneTime> billingEvent =
Money renewalPrice =
hasPackageToken ? null : existingBillingRecurrence.getRenewalPrice().orElse(null);
Optional<BillingEvent> billingEvent =
transferData.getTransferPeriod().getValue() == 0
? Optional.empty()
: Optional.of(
new BillingEvent.OneTime.Builder()
new BillingEvent.Builder()
.setReason(Reason.TRANSFER)
.setTargetId(targetId)
.setRegistrarId(gainingRegistrarId)
@@ -164,16 +169,16 @@ public final class DomainTransferApproveFlow implements TransactionalFlow {
.setCost(
pricingLogic
.getTransferPrice(
Registry.get(tld),
Tld.get(tldStr),
targetId,
transferData.getTransferRequestTime(),
// When removing a domain from a package it should return to the
// default recurring billing behavior so the existing recurring
// default recurrence billing behavior so the existing recurrence
// billing event should not be passed in.
hasPackageToken ? null : existingRecurring)
hasPackageToken ? null : existingBillingRecurrence)
.getRenewCost())
.setEventTime(now)
.setBillingTime(now.plus(Registry.get(tld).getTransferGracePeriodLength()))
.setBillingTime(now.plus(Tld.get(tldStr).getTransferGracePeriodLength()))
.setDomainHistoryId(domainHistoryId)
.build());
@@ -190,18 +195,18 @@ public final class DomainTransferApproveFlow implements TransactionalFlow {
// still needs to be charged for the auto-renew.
if (billingEvent.isPresent()) {
entitiesToSave.add(
BillingEvent.Cancellation.forGracePeriod(
autorenewGrace, now, domainHistoryId, targetId));
BillingCancellation.forGracePeriod(autorenewGrace, now, domainHistoryId, targetId));
}
}
// Close the old autorenew event and poll message at the transfer time (aka now). This may end
// up deleting the poll message.
updateAutorenewRecurrenceEndTime(existingDomain, existingRecurring, now, domainHistoryId);
updateAutorenewRecurrenceEndTime(
existingDomain, existingBillingRecurrence, now, domainHistoryId);
DateTime newExpirationTime =
computeExDateForApprovalTime(existingDomain, now, transferData.getTransferPeriod());
// Create a new autorenew event starting at the expiration time.
BillingEvent.Recurring autorenewEvent =
new BillingEvent.Recurring.Builder()
BillingRecurrence autorenewEvent =
new BillingRecurrence.Builder()
.setReason(Reason.RENEW)
.setFlags(ImmutableSet.of(Flag.AUTO_RENEW))
.setTargetId(targetId)
@@ -210,7 +215,7 @@ public final class DomainTransferApproveFlow implements TransactionalFlow {
.setRenewalPriceBehavior(
hasPackageToken
? RenewalPriceBehavior.DEFAULT
: existingRecurring.getRenewalPriceBehavior())
: existingBillingRecurrence.getRenewalPriceBehavior())
.setRenewalPrice(renewalPrice)
.setRecurrenceEndTime(END_OF_TIME)
.setDomainHistoryId(domainHistoryId)
@@ -246,12 +251,10 @@ public final class DomainTransferApproveFlow implements TransactionalFlow {
.setGracePeriods(
billingEvent
.map(
oneTime ->
event ->
ImmutableSet.of(
GracePeriod.forBillingEvent(
GracePeriodStatus.TRANSFER,
existingDomain.getRepoId(),
oneTime)))
GracePeriodStatus.TRANSFER, existingDomain.getRepoId(), event)))
.orElseGet(ImmutableSet::of))
.setLastEppUpdateTime(now)
.setLastEppUpdateRegistrarId(registrarId)
@@ -260,8 +263,8 @@ public final class DomainTransferApproveFlow implements TransactionalFlow {
.setCurrentPackageToken(null)
.build();
Registry registry = Registry.get(existingDomain.getTld());
DomainHistory domainHistory = buildDomainHistory(newDomain, registry, now, gainingRegistrarId);
Tld tld = Tld.get(existingDomain.getTld());
DomainHistory domainHistory = buildDomainHistory(newDomain, tld, now, gainingRegistrarId);
// Create a poll message for the gaining client.
PollMessage gainingClientPollMessage =
createGainingTransferPollMessage(
@@ -284,12 +287,12 @@ public final class DomainTransferApproveFlow implements TransactionalFlow {
}
private DomainHistory buildDomainHistory(
Domain newDomain, Registry registry, DateTime now, String gainingRegistrarId) {
Domain newDomain, Tld tld, DateTime now, String gainingRegistrarId) {
ImmutableSet<DomainTransactionRecord> cancelingRecords =
createCancelingRecords(
newDomain,
now,
registry.getAutomaticTransferLength().plus(registry.getTransferGracePeriodLength()),
tld.getAutomaticTransferLength().plus(tld.getTransferGracePeriodLength()),
ImmutableSet.of(TRANSFER_SUCCESSFUL));
return historyBuilder
.setType(DOMAIN_TRANSFER_APPROVE)
@@ -300,7 +303,7 @@ public final class DomainTransferApproveFlow implements TransactionalFlow {
cancelingRecords,
DomainTransactionRecord.create(
newDomain.getTld(),
now.plus(registry.getTransferGracePeriodLength()),
now.plus(tld.getTransferGracePeriodLength()),
TRANSFER_SUCCESSFUL,
1)))
.build();

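A minimal sketch (not code from this PR) of a per-TLD lookup after the Registries-to-Tlds rename, using only accessors that appear in the hunks above; the TLD string is invented.

Tld tld = Tld.get("app");  // hypothetical TLD string
DateTime automaticTransferTime = now.plus(tld.getAutomaticTransferLength());
DateTime transferBillingTime = now.plus(tld.getTransferGracePeriodLength());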

@@ -39,7 +39,7 @@ import google.registry.flows.FlowModule.Superuser;
import google.registry.flows.FlowModule.TargetId;
import google.registry.flows.TransactionalFlow;
import google.registry.flows.annotations.ReportingSpec;
import google.registry.model.billing.BillingEvent.Recurring;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.domain.Domain;
import google.registry.model.domain.DomainHistory;
import google.registry.model.domain.metadata.MetadataExtension;
@@ -48,7 +48,7 @@ import google.registry.model.eppoutput.EppResponse;
import google.registry.model.reporting.DomainTransactionRecord;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import google.registry.model.reporting.IcannReportingTypes.ActivityReportField;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.model.transfer.TransferStatus;
import java.util.Optional;
import javax.inject.Inject;
@@ -100,7 +100,7 @@ public final class DomainTransferCancelFlow implements TransactionalFlow {
if (!isSuperuser) {
checkAllowedAccessToTld(registrarId, existingDomain.getTld());
}
Registry registry = Registry.get(existingDomain.getTld());
Tld tld = Tld.get(existingDomain.getTld());
HistoryEntryId domainHistoryId = createHistoryEntryId(existingDomain);
historyBuilder
@@ -109,7 +109,7 @@ public final class DomainTransferCancelFlow implements TransactionalFlow {
Domain newDomain =
denyPendingTransfer(existingDomain, TransferStatus.CLIENT_CANCELLED, now, registrarId);
DomainHistory domainHistory = buildDomainHistory(newDomain, registry, now);
DomainHistory domainHistory = buildDomainHistory(newDomain, tld, now);
tm().putAll(
newDomain,
domainHistory,
@@ -117,9 +117,10 @@ public final class DomainTransferCancelFlow implements TransactionalFlow {
targetId, newDomain.getTransferData(), null, domainHistoryId));
// Reopen the autorenew event and poll message that we closed for the implicit transfer. This
// may recreate the autorenew poll message if it was deleted when the transfer request was made.
Recurring existingRecurring = tm().loadByKey(existingDomain.getAutorenewBillingEvent());
BillingRecurrence existingBillingRecurrence =
tm().loadByKey(existingDomain.getAutorenewBillingEvent());
updateAutorenewRecurrenceEndTime(
existingDomain, existingRecurring, END_OF_TIME, domainHistory.getHistoryEntryId());
existingDomain, existingBillingRecurrence, END_OF_TIME, domainHistory.getHistoryEntryId());
// Delete the billing event and poll messages that were written in case the transfer would have
// been implicitly server approved.
tm().delete(existingDomain.getTransferData().getServerApproveEntities());
@@ -128,12 +129,12 @@ public final class DomainTransferCancelFlow implements TransactionalFlow {
.build();
}
private DomainHistory buildDomainHistory(Domain newDomain, Registry registry, DateTime now) {
private DomainHistory buildDomainHistory(Domain newDomain, Tld tld, DateTime now) {
ImmutableSet<DomainTransactionRecord> cancelingRecords =
createCancelingRecords(
newDomain,
now,
registry.getAutomaticTransferLength().plus(registry.getTransferGracePeriodLength()),
tld.getAutomaticTransferLength().plus(tld.getTransferGracePeriodLength()),
ImmutableSet.of(TRANSFER_SUCCESSFUL));
return historyBuilder
.setType(DOMAIN_TRANSFER_CANCEL)


@@ -41,7 +41,7 @@ import google.registry.flows.FlowModule.Superuser;
import google.registry.flows.FlowModule.TargetId;
import google.registry.flows.TransactionalFlow;
import google.registry.flows.annotations.ReportingSpec;
import google.registry.model.billing.BillingEvent.Recurring;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.domain.Domain;
import google.registry.model.domain.DomainHistory;
import google.registry.model.domain.metadata.MetadataExtension;
@@ -50,7 +50,7 @@ import google.registry.model.eppoutput.EppResponse;
import google.registry.model.reporting.DomainTransactionRecord;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import google.registry.model.reporting.IcannReportingTypes.ActivityReportField;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.model.transfer.TransferStatus;
import java.util.Optional;
import javax.inject.Inject;
@@ -94,7 +94,7 @@ public final class DomainTransferRejectFlow implements TransactionalFlow {
extensionManager.validate();
DateTime now = tm().getTransactionTime();
Domain existingDomain = loadAndVerifyExistence(Domain.class, targetId, now);
Registry registry = Registry.get(existingDomain.getTld());
Tld tld = Tld.get(existingDomain.getTld());
HistoryEntryId domainHistoryId = createHistoryEntryId(existingDomain);
historyBuilder
.setRevisionId(domainHistoryId.getRevisionId())
@@ -108,7 +108,7 @@ public final class DomainTransferRejectFlow implements TransactionalFlow {
}
Domain newDomain =
denyPendingTransfer(existingDomain, TransferStatus.CLIENT_REJECTED, now, registrarId);
DomainHistory domainHistory = buildDomainHistory(newDomain, registry, now);
DomainHistory domainHistory = buildDomainHistory(newDomain, tld, now);
tm().putAll(
newDomain,
domainHistory,
@@ -116,9 +116,10 @@ public final class DomainTransferRejectFlow implements TransactionalFlow {
targetId, newDomain.getTransferData(), null, now, domainHistoryId));
// Reopen the autorenew event and poll message that we closed for the implicit transfer. This
// may end up recreating the poll message if it was deleted upon the transfer request.
Recurring existingRecurring = tm().loadByKey(existingDomain.getAutorenewBillingEvent());
BillingRecurrence existingBillingRecurrence =
tm().loadByKey(existingDomain.getAutorenewBillingEvent());
updateAutorenewRecurrenceEndTime(
existingDomain, existingRecurring, END_OF_TIME, domainHistory.getHistoryEntryId());
existingDomain, existingBillingRecurrence, END_OF_TIME, domainHistory.getHistoryEntryId());
// Delete the billing event and poll messages that were written in case the transfer would have
// been implicitly server approved.
tm().delete(existingDomain.getTransferData().getServerApproveEntities());
@@ -127,12 +128,12 @@ public final class DomainTransferRejectFlow implements TransactionalFlow {
.build();
}
private DomainHistory buildDomainHistory(Domain newDomain, Registry registry, DateTime now) {
private DomainHistory buildDomainHistory(Domain newDomain, Tld tld, DateTime now) {
ImmutableSet<DomainTransactionRecord> cancelingRecords =
createCancelingRecords(
newDomain,
now,
registry.getAutomaticTransferLength().plus(registry.getTransferGracePeriodLength()),
tld.getAutomaticTransferLength().plus(tld.getTransferGracePeriodLength()),
ImmutableSet.of(TRANSFER_SUCCESSFUL));
return historyBuilder
.setType(DOMAIN_TRANSFER_REJECT)


@@ -52,11 +52,12 @@ import google.registry.flows.exceptions.InvalidTransferPeriodValueException;
import google.registry.flows.exceptions.ObjectAlreadySponsoredException;
import google.registry.flows.exceptions.TransferPeriodMustBeOneYearException;
import google.registry.flows.exceptions.TransferPeriodZeroAndFeeTransferExtensionException;
import google.registry.model.billing.BillingEvent.Recurring;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.domain.Domain;
import google.registry.model.domain.DomainCommand.Transfer;
import google.registry.model.domain.DomainHistory;
import google.registry.model.domain.Period;
import google.registry.model.domain.fee.FeeQueryCommandExtensionItem.CommandName;
import google.registry.model.domain.fee.FeeTransferCommandExtension;
import google.registry.model.domain.fee.FeeTransformResponseExtension;
import google.registry.model.domain.metadata.MetadataExtension;
@@ -73,7 +74,7 @@ import google.registry.model.reporting.DomainTransactionRecord;
import google.registry.model.reporting.DomainTransactionRecord.TransactionReportField;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import google.registry.model.reporting.IcannReportingTypes.ActivityReportField;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.model.transfer.DomainTransferData;
import google.registry.model.transfer.TransferData.TransferServerApproveEntity;
import google.registry.model.transfer.TransferResponse.DomainTransferResponse;
@@ -170,9 +171,10 @@ public final class DomainTransferRequestFlow implements TransactionalFlow {
Domain existingDomain = loadAndVerifyExistence(Domain.class, targetId, now);
allocationTokenFlowUtils.verifyAllocationTokenIfPresent(
existingDomain,
Registry.get(existingDomain.getTld()),
Tld.get(existingDomain.getTld()),
gainingClientId,
now,
CommandName.TRANSFER,
eppInput.getSingleExtension(AllocationTokenExtension.class));
Optional<DomainTransferRequestSuperuserExtension> superuserExtension =
eppInput.getSingleExtension(DomainTransferRequestSuperuserExtension.class);
@@ -182,8 +184,8 @@ public final class DomainTransferRequestFlow implements TransactionalFlow {
: ((Transfer) resourceCommand).getPeriod();
verifyTransferAllowed(existingDomain, period, now, superuserExtension);
String tld = existingDomain.getTld();
Registry registry = Registry.get(tld);
String tldStr = existingDomain.getTld();
Tld tld = Tld.get(tldStr);
// An optional extension from the client specifying what they think the transfer should cost.
Optional<FeeTransferCommandExtension> feeTransfer =
eppInput.getSingleExtension(FeeTransferCommandExtension.class);
@@ -193,20 +195,21 @@ public final class DomainTransferRequestFlow implements TransactionalFlow {
throw new TransferPeriodZeroAndFeeTransferExtensionException();
}
// If the period is zero, then there is no fee for the transfer.
Recurring existingRecurring = tm().loadByKey(existingDomain.getAutorenewBillingEvent());
BillingRecurrence existingBillingRecurrence =
tm().loadByKey(existingDomain.getAutorenewBillingEvent());
Optional<FeesAndCredits> feesAndCredits;
if (period.getValue() == 0) {
feesAndCredits = Optional.empty();
} else if (!existingDomain.getCurrentPackageToken().isPresent()) {
feesAndCredits =
Optional.of(pricingLogic.getTransferPrice(registry, targetId, now, existingRecurring));
Optional.of(pricingLogic.getTransferPrice(tld, targetId, now, existingBillingRecurrence));
} else {
// If existing domain is in a package, calculate the transfer price with default renewal price
// behavior
feesAndCredits =
period.getValue() == 0
? Optional.empty()
: Optional.of(pricingLogic.getTransferPrice(registry, targetId, now, null));
: Optional.of(pricingLogic.getTransferPrice(tld, targetId, now, null));
}
if (feesAndCredits.isPresent()) {
@@ -222,7 +225,7 @@ public final class DomainTransferRequestFlow implements TransactionalFlow {
domainTransferRequestSuperuserExtension ->
now.plusDays(
domainTransferRequestSuperuserExtension.getAutomaticTransferLength()))
.orElseGet(() -> now.plus(registry.getAutomaticTransferLength()));
.orElseGet(() -> now.plus(tld.getAutomaticTransferLength()));
// If the domain will be in the auto-renew grace period at the moment of transfer, the transfer
// will subsume the autorenew, so we don't add the normal extra year from the transfer.
// The gaining registrar is still billed for the extra year; the losing registrar will get a
@@ -241,7 +244,7 @@ public final class DomainTransferRequestFlow implements TransactionalFlow {
serverApproveNewExpirationTime,
domainHistoryId,
existingDomain,
existingRecurring,
existingBillingRecurrence,
trid,
gainingClientId,
feesAndCredits.map(FeesAndCredits::getTotalCost),
@@ -272,7 +275,7 @@ public final class DomainTransferRequestFlow implements TransactionalFlow {
// cloneProjectedAtTime() will replace these old autorenew entities with the server approve ones
// that we've created in this flow and stored in pendingTransferData.
updateAutorenewRecurrenceEndTime(
existingDomain, existingRecurring, automaticTransferTime, domainHistoryId);
existingDomain, existingBillingRecurrence, automaticTransferTime, domainHistoryId);
Domain newDomain =
existingDomain
.asBuilder()
@@ -281,7 +284,7 @@ public final class DomainTransferRequestFlow implements TransactionalFlow {
.setLastEppUpdateTime(now)
.setLastEppUpdateRegistrarId(gainingClientId)
.build();
DomainHistory domainHistory = buildDomainHistory(newDomain, registry, now, period);
DomainHistory domainHistory = buildDomainHistory(newDomain, tld, now, period);
asyncTaskEnqueuer.enqueueAsyncResave(newDomain.createVKey(), now, automaticTransferTime);
tm().putAll(
@@ -361,8 +364,7 @@ public final class DomainTransferRequestFlow implements TransactionalFlow {
}
}
private DomainHistory buildDomainHistory(
Domain newDomain, Registry registry, DateTime now, Period period) {
private DomainHistory buildDomainHistory(Domain newDomain, Tld tld, DateTime now, Period period) {
return historyBuilder
.setType(DOMAIN_TRANSFER_REQUEST)
.setPeriod(period)
@@ -370,9 +372,9 @@ public final class DomainTransferRequestFlow implements TransactionalFlow {
.setDomainTransactionRecords(
ImmutableSet.of(
DomainTransactionRecord.create(
registry.getTldStr(),
now.plus(registry.getAutomaticTransferLength())
.plus(registry.getTransferGracePeriodLength()),
tld.getTldStr(),
now.plus(tld.getAutomaticTransferLength())
.plus(tld.getTransferGracePeriodLength()),
TransactionReportField.TRANSFER_SUCCESSFUL,
1)))
.build();

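A minimal sketch (not code from this PR) of the two getTransferPrice() call shapes used above: passing the existing recurrence prices the transfer against the domain's current renewal behavior, while passing null (the package case) prices it with default renewal behavior.

// Normal transfer: price against the domain's existing autorenew recurrence.
FeesAndCredits regularPrice =
    pricingLogic.getTransferPrice(tld, targetId, now, existingBillingRecurrence);
// Domain currently in a package: price with default renewal behavior by passing null.
FeesAndCredits packagePrice =
    pricingLogic.getTransferPrice(tld, targetId, now, null);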

@@ -20,11 +20,12 @@ import static google.registry.util.DateTimeUtils.END_OF_TIME;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import google.registry.model.billing.BillingBase.Flag;
import google.registry.model.billing.BillingBase.Reason;
import google.registry.model.billing.BillingBase.RenewalPriceBehavior;
import google.registry.model.billing.BillingCancellation;
import google.registry.model.billing.BillingEvent;
import google.registry.model.billing.BillingEvent.Flag;
import google.registry.model.billing.BillingEvent.Reason;
import google.registry.model.billing.BillingEvent.Recurring;
import google.registry.model.billing.BillingEvent.RenewalPriceBehavior;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.domain.Domain;
import google.registry.model.domain.GracePeriod;
import google.registry.model.domain.Period;
@@ -33,7 +34,7 @@ import google.registry.model.eppcommon.Trid;
import google.registry.model.poll.PendingActionNotificationResponse.DomainPendingActionNotificationResponse;
import google.registry.model.poll.PollMessage;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.model.transfer.DomainTransferData;
import google.registry.model.transfer.TransferData;
import google.registry.model.transfer.TransferData.TransferServerApproveEntity;
@@ -66,8 +67,8 @@ public final class DomainTransferUtils {
// Unless superuser sets period to 0, add a transfer billing event.
transferDataBuilder.setServerApproveBillingEvent(
serverApproveEntities.stream()
.filter(BillingEvent.OneTime.class::isInstance)
.map(BillingEvent.OneTime.class::cast)
.filter(BillingEvent.class::isInstance)
.map(BillingEvent.class::cast)
.collect(onlyElement())
.createVKey());
}
@@ -75,8 +76,8 @@ public final class DomainTransferUtils {
.setTransferStatus(TransferStatus.PENDING)
.setServerApproveAutorenewEvent(
serverApproveEntities.stream()
.filter(BillingEvent.Recurring.class::isInstance)
.map(BillingEvent.Recurring.class::cast)
.filter(BillingRecurrence.class::isInstance)
.map(BillingRecurrence.class::cast)
.collect(onlyElement())
.createVKey())
.setServerApproveAutorenewPollMessage(
@@ -110,7 +111,7 @@ public final class DomainTransferUtils {
DateTime serverApproveNewExpirationTime,
HistoryEntryId domainHistoryId,
Domain existingDomain,
Recurring existingRecurring,
BillingRecurrence existingBillingRecurrence,
Trid trid,
String gainingRegistrarId,
Optional<Money> transferCost,
@@ -127,7 +128,7 @@ public final class DomainTransferUtils {
.setTransferredRegistrationExpirationTime(serverApproveNewExpirationTime)
.setTransferStatus(TransferStatus.SERVER_APPROVED)
.build();
Registry registry = Registry.get(existingDomain.getTld());
Tld tld = Tld.get(existingDomain.getTld());
ImmutableSet.Builder<TransferServerApproveEntity> builder = new ImmutableSet.Builder<>();
transferCost.ifPresent(
cost ->
@@ -137,7 +138,7 @@ public final class DomainTransferUtils {
domainHistoryId,
targetId,
gainingRegistrarId,
registry,
tld,
cost)));
createOptionalAutorenewCancellation(
automaticTransferTime, now, domainHistoryId, targetId, existingDomain, transferCost)
@@ -146,12 +147,12 @@ public final class DomainTransferUtils {
.add(
createGainingClientAutorenewEvent(
existingDomain.getCurrentPackageToken().isPresent()
? existingRecurring
? existingBillingRecurrence
.asBuilder()
.setRenewalPriceBehavior(RenewalPriceBehavior.DEFAULT)
.setRenewalPrice(null)
.build()
: existingRecurring,
: existingBillingRecurrence,
serverApproveNewExpirationTime,
domainHistoryId,
targetId,
@@ -246,21 +247,21 @@ public final class DomainTransferUtils {
.build();
}
private static BillingEvent.Recurring createGainingClientAutorenewEvent(
Recurring existingRecurring,
private static BillingRecurrence createGainingClientAutorenewEvent(
BillingRecurrence existingBillingRecurrence,
DateTime serverApproveNewExpirationTime,
HistoryEntryId domainHistoryId,
String targetId,
String gainingRegistrarId) {
return new BillingEvent.Recurring.Builder()
return new BillingRecurrence.Builder()
.setReason(Reason.RENEW)
.setFlags(ImmutableSet.of(Flag.AUTO_RENEW))
.setTargetId(targetId)
.setRegistrarId(gainingRegistrarId)
.setEventTime(serverApproveNewExpirationTime)
.setRecurrenceEndTime(END_OF_TIME)
.setRenewalPriceBehavior(existingRecurring.getRenewalPriceBehavior())
.setRenewalPrice(existingRecurring.getRenewalPrice().orElse(null))
.setRenewalPriceBehavior(existingBillingRecurrence.getRenewalPriceBehavior())
.setRenewalPrice(existingBillingRecurrence.getRenewalPrice().orElse(null))
.setDomainHistoryId(domainHistoryId)
.build();
}
@@ -281,7 +282,7 @@ public final class DomainTransferUtils {
* <p>For details on the policy justification, see b/19430703#comment17 and <a
* href="https://www.icann.org/news/advisory-2002-06-06-en">this ICANN advisory</a>.
*/
private static Optional<BillingEvent.Cancellation> createOptionalAutorenewCancellation(
private static Optional<BillingCancellation> createOptionalAutorenewCancellation(
DateTime automaticTransferTime,
DateTime now,
HistoryEntryId domainHistoryId,
@@ -294,8 +295,7 @@ public final class DomainTransferUtils {
domainAtTransferTime.getGracePeriodsOfType(GracePeriodStatus.AUTO_RENEW), null);
if (autorenewGracePeriod != null && transferCost.isPresent()) {
return Optional.of(
BillingEvent.Cancellation.forGracePeriod(
autorenewGracePeriod, now, domainHistoryId, targetId)
BillingCancellation.forGracePeriod(autorenewGracePeriod, now, domainHistoryId, targetId)
.asBuilder()
.setEventTime(automaticTransferTime)
.build());
@@ -303,14 +303,14 @@ public final class DomainTransferUtils {
return Optional.empty();
}
private static BillingEvent.OneTime createTransferBillingEvent(
private static BillingEvent createTransferBillingEvent(
DateTime automaticTransferTime,
HistoryEntryId domainHistoryId,
String targetId,
String gainingRegistrarId,
Registry registry,
Tld registry,
Money transferCost) {
return new BillingEvent.OneTime.Builder()
return new BillingEvent.Builder()
.setReason(Reason.TRANSFER)
.setTargetId(targetId)
.setRegistrarId(gainingRegistrarId)

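A minimal sketch (not code from this PR) of how the server-approve entity set is interrogated after the rename: the single transfer charge and the single autorenew recurrence are picked out by class, mirroring the filter/cast chains above.

// onlyElement() is Guava's MoreCollectors.onlyElement(); it throws unless the
// stream contains exactly one matching entity.
BillingEvent transferCharge =
    serverApproveEntities.stream()
        .filter(BillingEvent.class::isInstance)
        .map(BillingEvent.class::cast)
        .collect(onlyElement());
BillingRecurrence gainingAutorenew =
    serverApproveEntities.stream()
        .filter(BillingRecurrence.class::isInstance)
        .map(BillingRecurrence.class::cast)
        .collect(onlyElement());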

@@ -19,6 +19,7 @@ import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static com.google.common.collect.ImmutableSortedSet.toImmutableSortedSet;
import static com.google.common.collect.Sets.symmetricDifference;
import static com.google.common.collect.Sets.union;
import static google.registry.dns.DnsUtils.requestDomainDnsRefresh;
import static google.registry.flows.FlowUtils.persistEntityChanges;
import static google.registry.flows.FlowUtils.validateRegistrarIsLoggedIn;
import static google.registry.flows.ResourceFlowUtils.checkSameValuesNotAddedAndRemoved;
@@ -49,7 +50,6 @@ import com.google.common.collect.ImmutableSortedSet;
import com.google.common.collect.Ordering;
import com.google.common.collect.Sets;
import com.google.common.net.InternetDomainName;
import google.registry.dns.DnsQueue;
import google.registry.flows.EppException;
import google.registry.flows.ExtensionManager;
import google.registry.flows.FlowModule.RegistrarId;
@@ -64,8 +64,8 @@ import google.registry.flows.custom.EntityChanges;
import google.registry.flows.domain.DomainFlowUtils.MissingRegistrantException;
import google.registry.flows.domain.DomainFlowUtils.NameserversNotSpecifiedForTldWithNameserverAllowListException;
import google.registry.model.ImmutableObject;
import google.registry.model.billing.BillingBase.Reason;
import google.registry.model.billing.BillingEvent;
import google.registry.model.billing.BillingEvent.Reason;
import google.registry.model.domain.DesignatedContact;
import google.registry.model.domain.Domain;
import google.registry.model.domain.DomainCommand.Update;
@@ -86,7 +86,7 @@ import google.registry.model.eppoutput.EppResponse;
import google.registry.model.poll.PendingActionNotificationResponse.DomainPendingActionNotificationResponse;
import google.registry.model.poll.PollMessage;
import google.registry.model.reporting.IcannReportingTypes.ActivityReportField;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import java.util.Objects;
import java.util.Optional;
import javax.inject.Inject;
@@ -156,7 +156,6 @@ public final class DomainUpdateFlow implements TransactionalFlow {
@Inject @Superuser boolean isSuperuser;
@Inject Trid trid;
@Inject DomainHistory.Builder historyBuilder;
@Inject DnsQueue dnsQueue;
@Inject EppResponse.Builder responseBuilder;
@Inject DomainUpdateFlowCustomLogic flowCustomLogic;
@Inject DomainPricingLogic pricingLogic;
@@ -183,11 +182,11 @@ public final class DomainUpdateFlow implements TransactionalFlow {
historyBuilder.setType(DOMAIN_UPDATE).setDomain(newDomain).build();
validateNewState(newDomain);
if (requiresDnsUpdate(existingDomain, newDomain)) {
dnsQueue.addDomainRefreshTask(targetId);
requestDomainDnsRefresh(targetId);
}
ImmutableSet.Builder<ImmutableObject> entitiesToSave = new ImmutableSet.Builder<>();
entitiesToSave.add(newDomain, domainHistory);
Optional<BillingEvent.OneTime> statusUpdateBillingEvent =
Optional<BillingEvent> statusUpdateBillingEvent =
createBillingEventForStatusUpdates(existingDomain, newDomain, domainHistory, now);
statusUpdateBillingEvent.ifPresent(entitiesToSave::add);
Optional<PollMessage.OneTime> serverStatusUpdatePollMessage =
@@ -207,7 +206,7 @@ public final class DomainUpdateFlow implements TransactionalFlow {
}
/** Determines if any of the changes to new domain should trigger DNS update. */
private boolean requiresDnsUpdate(Domain existingDomain, Domain newDomain) {
private static boolean requiresDnsUpdate(Domain existingDomain, Domain newDomain) {
return existingDomain.shouldPublishToDns() != newDomain.shouldPublishToDns()
|| !Objects.equals(newDomain.getDsData(), existingDomain.getDsData())
|| !Objects.equals(newDomain.getNsHosts(), existingDomain.getNsHosts());
@@ -219,18 +218,18 @@ public final class DomainUpdateFlow implements TransactionalFlow {
verifyOptionalAuthInfo(authInfo, existingDomain);
AddRemove add = command.getInnerAdd();
AddRemove remove = command.getInnerRemove();
String tld = existingDomain.getTld();
String tldStr = existingDomain.getTld();
if (!isSuperuser) {
verifyNoDisallowedStatuses(existingDomain, UPDATE_DISALLOWED_STATUSES);
verifyResourceOwnership(registrarId, existingDomain);
verifyClientUpdateNotProhibited(command, existingDomain);
verifyAllStatusesAreClientSettable(union(add.getStatusValues(), remove.getStatusValues()));
checkAllowedAccessToTld(registrarId, tld);
checkAllowedAccessToTld(registrarId, tldStr);
}
Registry registry = Registry.get(tld);
Tld tld = Tld.get(tldStr);
Optional<FeeUpdateCommandExtension> feeUpdate =
eppInput.getSingleExtension(FeeUpdateCommandExtension.class);
FeesAndCredits feesAndCredits = pricingLogic.getUpdatePrice(registry, targetId, now);
FeesAndCredits feesAndCredits = pricingLogic.getUpdatePrice(tld, targetId, now);
validateFeesAckedIfPresent(feeUpdate, feesAndCredits, false);
verifyNotInPendingDelete(
add.getContacts(),
@@ -238,8 +237,8 @@ public final class DomainUpdateFlow implements TransactionalFlow {
add.getNameservers());
validateContactsHaveTypes(add.getContacts());
validateContactsHaveTypes(remove.getContacts());
validateRegistrantAllowedOnTld(tld, command.getInnerChange().getRegistrantContactId());
validateNameserversAllowedOnTld(tld, add.getNameserverHostNames());
validateRegistrantAllowedOnTld(tldStr, command.getInnerChange().getRegistrantContactId());
validateNameserversAllowedOnTld(tldStr, add.getNameserverHostNames());
}
private Domain performUpdate(Update command, Domain domain, DateTime now) throws EppException {
@@ -256,7 +255,7 @@ public final class DomainUpdateFlow implements TransactionalFlow {
// We have to verify no duplicate contacts _before_ constructing the domain because it is
// illegal to construct a domain with duplicate contacts.
Sets.SetView<DesignatedContact> newContacts =
Sets.union(Sets.difference(domain.getContacts(), remove.getContacts()), add.getContacts());
union(Sets.difference(domain.getContacts(), remove.getContacts()), add.getContacts());
validateNoDuplicateContacts(newContacts);
Domain.Builder domainBuilder =
@@ -301,7 +300,7 @@ public final class DomainUpdateFlow implements TransactionalFlow {
return domainBuilder.build();
}
private void validateRegistrantIsntBeingRemoved(Change change) throws EppException {
private static void validateRegistrantIsntBeingRemoved(Change change) throws EppException {
if (change.getRegistrantContactId() != null && change.getRegistrantContactId().isEmpty()) {
throw new MissingRegistrantException();
}
@@ -314,7 +313,7 @@ public final class DomainUpdateFlow implements TransactionalFlow {
* compliant with the additions or amendments, otherwise existing data can become invalid and
* cause Domain update failure.
*/
private void validateNewState(Domain newDomain) throws EppException {
private static void validateNewState(Domain newDomain) throws EppException {
validateRequiredContactsPresent(newDomain.getRegistrant(), newDomain.getContacts());
validateDsData(newDomain.getDsData());
validateNameserversCountForTld(
@@ -324,7 +323,7 @@ public final class DomainUpdateFlow implements TransactionalFlow {
}
/** Some status updates cost money. Bill only once no matter how many of them are changed. */
private Optional<BillingEvent.OneTime> createBillingEventForStatusUpdates(
private Optional<BillingEvent> createBillingEventForStatusUpdates(
Domain existingDomain, Domain newDomain, DomainHistory historyEntry, DateTime now) {
Optional<MetadataExtension> metadataExtension =
eppInput.getSingleExtension(MetadataExtension.class);
@@ -334,11 +333,11 @@ public final class DomainUpdateFlow implements TransactionalFlow {
if (statusValue.isChargedStatus()) {
// Only charge once.
return Optional.of(
new BillingEvent.OneTime.Builder()
new BillingEvent.Builder()
.setReason(Reason.SERVER_STATUS)
.setTargetId(targetId)
.setRegistrarId(registrarId)
.setCost(Registry.get(existingDomain.getTld()).getServerStatusChangeCost())
.setCost(Tld.get(existingDomain.getTld()).getServerStatusChangeCost())
.setEventTime(now)
.setBillingTime(now)
.setDomainHistory(historyEntry)
@@ -368,17 +367,17 @@ public final class DomainUpdateFlow implements TransactionalFlow {
.collect(toImmutableSortedSet(Ordering.natural()));
String msg;
if (addedServerStatuses.size() > 0 && removedServerStatuses.size() > 0) {
if (!addedServerStatuses.isEmpty() && !removedServerStatuses.isEmpty()) {
msg =
String.format(
"The registry administrator has added the status(es) %s and removed the status(es)"
+ " %s.",
addedServerStatuses, removedServerStatuses);
} else if (addedServerStatuses.size() > 0) {
} else if (!addedServerStatuses.isEmpty()) {
msg =
String.format(
"The registry administrator has added the status(es) %s.", addedServerStatuses);
} else if (removedServerStatuses.size() > 0) {
} else if (!removedServerStatuses.isEmpty()) {
msg =
String.format(
"The registry administrator has removed the status(es) %s.", removedServerStatuses);


@@ -22,7 +22,7 @@ import google.registry.flows.EppException;
import google.registry.model.domain.Domain;
import google.registry.model.domain.DomainCommand;
import google.registry.model.domain.token.AllocationToken;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import org.joda.time.DateTime;
/**
@@ -36,7 +36,7 @@ public class AllocationTokenCustomLogic {
public AllocationToken validateToken(
DomainCommand.Create command,
AllocationToken token,
Registry registry,
Tld tld,
String registrarId,
DateTime now)
throws EppException {
@@ -46,7 +46,7 @@ public class AllocationTokenCustomLogic {
/** Performs additional custom logic for validating a token on an existing domain. */
public AllocationToken validateToken(
Domain domain, AllocationToken token, Registry registry, String registrarId, DateTime now)
Domain domain, AllocationToken token, Tld tld, String registrarId, DateTime now)
throws EppException {
// Do nothing.
return token;


@@ -16,6 +16,7 @@ package google.registry.flows.domain.token;
import static com.google.common.base.Preconditions.checkArgument;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.pricing.PricingEngineProxy.isDomainPremium;
import com.google.common.base.Strings;
import com.google.common.collect.ImmutableList;
@@ -26,17 +27,19 @@ import google.registry.flows.EppException;
import google.registry.flows.EppException.AssociationProhibitsOperationException;
import google.registry.flows.EppException.AuthorizationErrorException;
import google.registry.flows.EppException.StatusProhibitsOperationException;
import google.registry.model.billing.BillingEvent.Recurring;
import google.registry.model.billing.BillingEvent.RenewalPriceBehavior;
import google.registry.flows.domain.DomainPricingLogic.AllocationTokenInvalidForPremiumNameException;
import google.registry.model.billing.BillingBase.RenewalPriceBehavior;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.domain.Domain;
import google.registry.model.domain.DomainCommand;
import google.registry.model.domain.fee.FeeQueryCommandExtensionItem.CommandName;
import google.registry.model.domain.token.AllocationToken;
import google.registry.model.domain.token.AllocationToken.TokenBehavior;
import google.registry.model.domain.token.AllocationToken.TokenStatus;
import google.registry.model.domain.token.AllocationToken.TokenType;
import google.registry.model.domain.token.AllocationTokenExtension;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.persistence.VKey;
import java.util.List;
import java.util.Optional;
@@ -78,7 +81,13 @@ public class AllocationTokenFlowUtils {
ImmutableMap.Builder<InternetDomainName, String> resultsBuilder = new ImmutableMap.Builder<>();
for (InternetDomainName domainName : domainNames) {
try {
validateToken(domainName, tokenEntity, registrarId, now);
validateToken(
domainName,
tokenEntity,
CommandName.CREATE,
registrarId,
isDomainPremium(domainName.toString(), now),
now);
validDomainNames.add(domainName);
} catch (EppException e) {
resultsBuilder.put(domainName, e.getMessage());
@@ -104,34 +113,57 @@ public class AllocationTokenFlowUtils {
/**
* Validates a given token. The token could be invalid if it has allowed client IDs or TLDs that
* do not include this client ID / TLD, or if the token has a promotion that is not currently
* running.
* running, or the token is not valid for a premium name when necessary.
*
* @throws EppException if the token is invalid in any way
*/
public static void validateToken(
InternetDomainName domainName, AllocationToken token, String registrarId, DateTime now)
InternetDomainName domainName,
AllocationToken token,
CommandName commandName,
String registrarId,
boolean isPremium,
DateTime now)
throws EppException {
// Only tokens with default behavior require validation
if (TokenBehavior.DEFAULT.equals(token.getTokenBehavior())) {
if (!token.getAllowedRegistrarIds().isEmpty()
&& !token.getAllowedRegistrarIds().contains(registrarId)) {
throw new AllocationTokenNotValidForRegistrarException();
}
if (!token.getAllowedTlds().isEmpty()
&& !token.getAllowedTlds().contains(domainName.parent().toString())) {
throw new AllocationTokenNotValidForTldException();
}
if (token.getDomainName().isPresent()
&& !token.getDomainName().get().equals(domainName.toString())) {
throw new AllocationTokenNotValidForDomainException();
}
// Tokens without status transitions will just have a single-entry NOT_STARTED map, so only
// check the status transitions map if it's non-trivial.
if (token.getTokenStatusTransitions().size() > 1
&& !TokenStatus.VALID.equals(token.getTokenStatusTransitions().getValueAtTime(now))) {
throw new AllocationTokenNotInPromotionException();
}
if (!TokenBehavior.DEFAULT.equals(token.getTokenBehavior())) {
return;
}
validateTokenForPossiblePremiumName(Optional.of(token), isPremium);
if (!token.getAllowedEppActions().isEmpty()
&& !token.getAllowedEppActions().contains(commandName)) {
throw new AllocationTokenNotValidForCommandException();
}
if (!token.getAllowedRegistrarIds().isEmpty()
&& !token.getAllowedRegistrarIds().contains(registrarId)) {
throw new AllocationTokenNotValidForRegistrarException();
}
if (!token.getAllowedTlds().isEmpty()
&& !token.getAllowedTlds().contains(domainName.parent().toString())) {
throw new AllocationTokenNotValidForTldException();
}
if (token.getDomainName().isPresent()
&& !token.getDomainName().get().equals(domainName.toString())) {
throw new AllocationTokenNotValidForDomainException();
}
// Tokens without status transitions will just have a single-entry NOT_STARTED map, so only
// check the status transitions map if it's non-trivial.
if (token.getTokenStatusTransitions().size() > 1
&& !TokenStatus.VALID.equals(token.getTokenStatusTransitions().getValueAtTime(now))) {
throw new AllocationTokenNotInPromotionException();
}
}
/** Validates that the given token is valid for a premium name if the name is premium. */
public static void validateTokenForPossiblePremiumName(
Optional<AllocationToken> token, boolean isPremium)
throws AllocationTokenInvalidForPremiumNameException {
if (token.isPresent()
&& token.get().getDiscountFraction() != 0.0
&& isPremium
&& !token.get().shouldDiscountPremiums()) {
throw new AllocationTokenInvalidForPremiumNameException();
}
}
@@ -164,24 +196,7 @@ public class AllocationTokenFlowUtils {
/** Verifies and returns the allocation token if one is specified, otherwise does nothing. */
public Optional<AllocationToken> verifyAllocationTokenCreateIfPresent(
DomainCommand.Create command,
Registry registry,
String registrarId,
DateTime now,
Optional<AllocationTokenExtension> extension)
throws EppException {
if (!extension.isPresent()) {
return Optional.empty();
}
AllocationToken tokenEntity = loadToken(extension.get().getAllocationToken());
validateToken(InternetDomainName.from(command.getDomainName()), tokenEntity, registrarId, now);
return Optional.of(
tokenCustomLogic.validateToken(command, tokenEntity, registry, registrarId, now));
}
/** Verifies and returns the allocation token if one is specified, otherwise does nothing. */
public Optional<AllocationToken> verifyAllocationTokenIfPresent(
Domain existingDomain,
Registry registry,
Tld tld,
String registrarId,
DateTime now,
Optional<AllocationTokenExtension> extension)
@@ -191,9 +206,37 @@ public class AllocationTokenFlowUtils {
}
AllocationToken tokenEntity = loadToken(extension.get().getAllocationToken());
validateToken(
InternetDomainName.from(existingDomain.getDomainName()), tokenEntity, registrarId, now);
InternetDomainName.from(command.getDomainName()),
tokenEntity,
CommandName.CREATE,
registrarId,
isDomainPremium(command.getDomainName(), now),
now);
return Optional.of(tokenCustomLogic.validateToken(command, tokenEntity, tld, registrarId, now));
}
/** Verifies and returns the allocation token if one is specified, otherwise does nothing. */
public Optional<AllocationToken> verifyAllocationTokenIfPresent(
Domain existingDomain,
Tld tld,
String registrarId,
DateTime now,
CommandName commandName,
Optional<AllocationTokenExtension> extension)
throws EppException {
if (!extension.isPresent()) {
return Optional.empty();
}
AllocationToken tokenEntity = loadToken(extension.get().getAllocationToken());
validateToken(
InternetDomainName.from(existingDomain.getDomainName()),
tokenEntity,
commandName,
registrarId,
isDomainPremium(existingDomain.getDomainName(), now),
now);
return Optional.of(
tokenCustomLogic.validateToken(existingDomain, tokenEntity, registry, registrarId, now));
tokenCustomLogic.validateToken(existingDomain, tokenEntity, tld, registrarId, now));
}
public static void verifyTokenAllowedOnDomain(
@@ -218,16 +261,16 @@ public class AllocationTokenFlowUtils {
return domain;
}
Recurring newRecurringBillingEvent =
BillingRecurrence newBillingRecurrence =
tm().loadByKey(domain.getAutorenewBillingEvent())
.asBuilder()
.setRenewalPriceBehavior(RenewalPriceBehavior.DEFAULT)
.setRenewalPrice(null)
.build();
// the Recurring billing event is reloaded later in the renew flow, so we synchronize changed
// RecurringBillingEvent with storage manually
tm().put(newRecurringBillingEvent);
// the Recurrence is reloaded later in the renew flow, so we synchronize changed
// Recurrences with storage manually
tm().put(newBillingRecurrence);
tm().getEntityManager().flush();
tm().getEntityManager().clear();
@@ -235,7 +278,7 @@ public class AllocationTokenFlowUtils {
return domain
.asBuilder()
.setCurrentPackageToken(null)
.setAutorenewBillingEvent(newRecurringBillingEvent.createVKey())
.setAutorenewBillingEvent(newBillingRecurrence.createVKey())
.build();
}
@@ -280,6 +323,14 @@ public class AllocationTokenFlowUtils {
}
}
/** The allocation token is not valid for this EPP command. */
public static class AllocationTokenNotValidForCommandException
extends AssociationProhibitsOperationException {
AllocationTokenNotValidForCommandException() {
super("Allocation token not valid for the EPP command");
}
}
/** The allocation token is invalid. */
public static class InvalidAllocationTokenException extends AuthorizationErrorException {
InvalidAllocationTokenException() {

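A minimal sketch (not code from this PR) of a call to the expanded validateToken() above: the EPP command and a premium-name flag are now passed alongside the registrar id, and the method throws an EppException subclass when the token is not usable. The token variable, registrar id, and domain name here are invented.

InternetDomainName domainName = InternetDomainName.from("pricey.example");  // hypothetical
AllocationTokenFlowUtils.validateToken(
    domainName,
    token,                                        // an AllocationToken loaded elsewhere
    CommandName.TRANSFER,                         // the EPP action being attempted
    "TheRegistrar",                               // hypothetical registrar id
    isDomainPremium(domainName.toString(), now),  // premium check, as used above
    now);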

@@ -14,6 +14,7 @@
package google.registry.flows.host;
import static google.registry.dns.DnsUtils.requestHostDnsRefresh;
import static google.registry.flows.FlowUtils.validateRegistrarIsLoggedIn;
import static google.registry.flows.ResourceFlowUtils.verifyResourceDoesNotExist;
import static google.registry.flows.host.HostFlowUtils.lookupSuperordinateDomain;
@@ -28,7 +29,6 @@ import static google.registry.util.CollectionUtils.isNullOrEmpty;
import com.google.common.collect.ImmutableSet;
import google.registry.config.RegistryConfig.Config;
import google.registry.dns.DnsQueue;
import google.registry.flows.EppException;
import google.registry.flows.EppException.ParameterValueRangeErrorException;
import google.registry.flows.EppException.RequiredParameterMissingException;
@@ -85,7 +85,6 @@ public final class HostCreateFlow implements TransactionalFlow {
@Inject @RegistrarId String registrarId;
@Inject @TargetId String targetId;
@Inject HostHistory.Builder historyBuilder;
@Inject DnsQueue dnsQueue;
@Inject EppResponse.Builder responseBuilder;
@Inject
@@ -138,7 +137,7 @@ public final class HostCreateFlow implements TransactionalFlow {
.build());
// Only update DNS if this is a subordinate host. External hosts have no glue to write, so
// they are only written as NS records from the referencing domain.
dnsQueue.addHostRefreshTask(targetId);
requestHostDnsRefresh(targetId);
}
tm().insertAll(entitiesToSave);
return responseBuilder.setResData(HostCreateData.create(targetId, now)).build();


@@ -14,6 +14,7 @@
package google.registry.flows.host;
import static google.registry.dns.DnsUtils.requestHostDnsRefresh;
import static google.registry.flows.FlowUtils.validateRegistrarIsLoggedIn;
import static google.registry.flows.ResourceFlowUtils.checkLinkedDomains;
import static google.registry.flows.ResourceFlowUtils.loadAndVerifyExistence;
@@ -24,7 +25,6 @@ import static google.registry.model.eppoutput.Result.Code.SUCCESS;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import com.google.common.collect.ImmutableSet;
import google.registry.dns.DnsQueue;
import google.registry.flows.EppException;
import google.registry.flows.ExtensionManager;
import google.registry.flows.FlowModule.RegistrarId;
@@ -71,7 +71,6 @@ public final class HostDeleteFlow implements TransactionalFlow {
StatusValue.PENDING_DELETE,
StatusValue.SERVER_DELETE_PROHIBITED);
@Inject DnsQueue dnsQueue;
@Inject ExtensionManager extensionManager;
@Inject @RegistrarId String registrarId;
@Inject @TargetId String targetId;
@@ -104,7 +103,7 @@ public final class HostDeleteFlow implements TransactionalFlow {
}
Host newHost = existingHost.asBuilder().setStatusValues(null).setDeletionTime(now).build();
if (existingHost.isSubordinate()) {
dnsQueue.addHostRefreshTask(existingHost.getHostName());
requestHostDnsRefresh(existingHost.getHostName());
tm().update(
tm().loadByKey(existingHost.getSuperordinateDomain())
.asBuilder()


@@ -16,7 +16,7 @@ package google.registry.flows.host;
import static google.registry.model.EppResourceUtils.isActive;
import static google.registry.model.EppResourceUtils.loadByForeignKey;
import static google.registry.model.tld.Registries.findTldForName;
import static google.registry.model.tld.Tlds.findTldForName;
import static google.registry.util.PreconditionsUtils.checkArgumentNotNull;
import static java.util.stream.Collectors.joining;


@@ -16,6 +16,7 @@ package google.registry.flows.host;
import static com.google.common.base.MoreObjects.firstNonNull;
import static com.google.common.collect.Sets.union;
import static google.registry.dns.DnsUtils.requestHostDnsRefresh;
import static google.registry.dns.RefreshDnsOnHostRenameAction.PARAM_HOST_KEY;
import static google.registry.dns.RefreshDnsOnHostRenameAction.QUEUE_HOST_RENAME;
import static google.registry.flows.FlowUtils.validateRegistrarIsLoggedIn;
@@ -36,7 +37,7 @@ import com.google.cloud.tasks.v2.Task;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.collect.ImmutableSet;
import google.registry.batch.AsyncTaskEnqueuer;
import google.registry.dns.DnsQueue;
import google.registry.batch.CloudTasksUtils;
import google.registry.dns.RefreshDnsOnHostRenameAction;
import google.registry.flows.EppException;
import google.registry.flows.EppException.ObjectAlreadyExistsException;
@@ -65,7 +66,6 @@ import google.registry.model.host.HostHistory;
import google.registry.model.reporting.IcannReportingTypes.ActivityReportField;
import google.registry.persistence.VKey;
import google.registry.request.Action.Service;
import google.registry.util.CloudTasksUtils;
import java.util.Objects;
import java.util.Optional;
import javax.inject.Inject;
@@ -124,7 +124,6 @@ public final class HostUpdateFlow implements TransactionalFlow {
@Inject @Superuser boolean isSuperuser;
@Inject HostHistory.Builder historyBuilder;
@Inject AsyncTaskEnqueuer asyncTaskEnqueuer;
@Inject DnsQueue dnsQueue;
@Inject EppResponse.Builder responseBuilder;
@Inject CloudTasksUtils cloudTasksUtils;
@@ -241,7 +240,7 @@ public final class HostUpdateFlow implements TransactionalFlow {
verifyNoDisallowedStatuses(existingHost, DISALLOWED_STATUSES);
}
private void verifyHasIpsIffIsExternal(Update command, Host existingHost, Host newHost)
private static void verifyHasIpsIffIsExternal(Update command, Host existingHost, Host newHost)
throws EppException {
boolean wasSubordinate = existingHost.isSubordinate();
boolean willBeSubordinate = newHost.isSubordinate();
@@ -266,21 +265,21 @@ public final class HostUpdateFlow implements TransactionalFlow {
// Only update DNS for subordinate hosts. External hosts have no glue to write, so they
// are only written as NS records from the referencing domain.
if (existingHost.isSubordinate()) {
dnsQueue.addHostRefreshTask(existingHost.getHostName());
requestHostDnsRefresh(existingHost.getHostName());
}
// In case of a rename, there are many updates we need to queue up.
if (((Update) resourceCommand).getInnerChange().getHostName() != null) {
// If the renamed host is also subordinate, then we must enqueue an update to write the new
// glue.
if (newHost.isSubordinate()) {
dnsQueue.addHostRefreshTask(newHost.getHostName());
requestHostDnsRefresh(newHost.getHostName());
}
// We must also enqueue updates for all domains that use this host as their nameserver so
// that their NS records can be updated to point at the new name.
Task task =
cloudTasksUtils.createPostTask(
RefreshDnsOnHostRenameAction.PATH,
Service.BACKEND.toString(),
Service.BACKEND,
ImmutableMultimap.of(PARAM_HOST_KEY, existingHost.createVKey().stringify()));
cloudTasksUtils.enqueue(QUEUE_HOST_RENAME, task);
}
@@ -317,14 +316,14 @@ public final class HostUpdateFlow implements TransactionalFlow {
/** Host with specified name already exists. */
static class HostAlreadyExistsException extends ObjectAlreadyExistsException {
public HostAlreadyExistsException(String hostName) {
HostAlreadyExistsException(String hostName) {
super(String.format("Object with given ID (%s) already exists", hostName));
}
}
/** Cannot add IP addresses to an external host. */
static class CannotAddIpToExternalHostException extends ParameterValueRangeErrorException {
public CannotAddIpToExternalHostException() {
CannotAddIpToExternalHostException() {
super("Cannot add IP addresses to external hosts");
}
}
@@ -332,14 +331,14 @@ public final class HostUpdateFlow implements TransactionalFlow {
/** Cannot remove all IP addresses from a subordinate host. */
static class CannotRemoveSubordinateHostLastIpException
extends StatusProhibitsOperationException {
public CannotRemoveSubordinateHostLastIpException() {
CannotRemoveSubordinateHostLastIpException() {
super("Cannot remove all IP addresses from a subordinate host");
}
}
/** Cannot rename an external host. */
static class CannotRenameExternalHostException extends StatusProhibitsOperationException {
public CannotRenameExternalHostException() {
CannotRenameExternalHostException() {
super("Cannot rename an external host");
}
}

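A minimal sketch (not code from this PR) isolating the host-rename enqueue above; the only change in call shape is that the service is now passed as the Service enum instead of Service.BACKEND.toString().

Task task =
    cloudTasksUtils.createPostTask(
        RefreshDnsOnHostRenameAction.PATH,
        Service.BACKEND,  // previously Service.BACKEND.toString()
        ImmutableMultimap.of(PARAM_HOST_KEY, existingHost.createVKey().stringify()));
cloudTasksUtils.enqueue(QUEUE_HOST_RENAME, task);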

@@ -31,7 +31,7 @@ import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Streams;
import com.google.common.flogger.FluentLogger;
import com.google.common.net.MediaType;
import google.registry.config.CredentialModule.DefaultCredential;
import google.registry.config.CredentialModule.ApplicationDefaultCredential;
import google.registry.util.GoogleCredentialsBundle;
import java.io.IOException;
import java.io.InputStream;
@@ -64,7 +64,7 @@ public class GcsUtils implements Serializable {
}
@Inject
public GcsUtils(@DefaultCredential GoogleCredentialsBundle credentialsBundle) {
public GcsUtils(@ApplicationDefaultCredential GoogleCredentialsBundle credentialsBundle) {
this(
StorageOptions.newBuilder()
.setCredentials(credentialsBundle.getGoogleCredentials())

File diff suppressed because it is too large

View File

@@ -1,21 +1,23 @@
# Registry: Charleston Road Registry Inc.
#
# Script: Latn
#
# Version: 1.0
#
# Effective Date: 04-12-2012
#
# Address: 1600 Amphitheatre Parkway Mountain View, CA 94043, USA
#
# Version: 2.0
# Effective Date: 2023-04-04
# URL: https://www.iana.org/domains/idn-tables/tables/google_latn_2.0.txt
# Policy: https://www.registry.google/about/policies/domainabuse/
# Contact Name: CRR Tech
# Email address: crr-tech@google.com
# Telephone: +1 (650) 253-0000
#
# Website: www.charlestonroadregistry.com
# Code points requiring context rules
#
# Code point Description of rule/Reference
#
# U+002D Label must neither start nor end with U+002D. Label
# HYPHEN-MINUS must not have U+002D in both third and fourth
# position. RFC 5891 (sec 4.2.3.1)
#
# Notes: This table describes codepoints allowed for the Latin script.
U+002D # HYPHEN-MINUS
#
U+0030 # DIGIT ZERO
U+0031 # DIGIT ONE
U+0032 # DIGIT TWO
@@ -26,12 +28,7 @@ U+0036 # DIGIT SIX
U+0037 # DIGIT SEVEN
U+0038 # DIGIT EIGHT
U+0039 # DIGIT NINE
#
# The following code points are listed according to the
# European Ordering Rules (ENV 13710).
#
U+0061 # LATIN SMALL LETTER A
U+00E1 # LATIN SMALL LETTER A WITH ACUTE
U+00E0 # LATIN SMALL LETTER A WITH GRAVE
U+0103 # LATIN SMALL LETTER A WITH BREVE
U+00E2 # LATIN SMALL LETTER A WITH CIRCUMFLEX
@@ -39,14 +36,11 @@ U+00E5 # LATIN SMALL LETTER A WITH RING ABOVE
U+00E4 # LATIN SMALL LETTER A WITH DIAERESIS
U+00E3 # LATIN SMALL LETTER A WITH TILDE
U+0105 # LATIN SMALL LETTER A WITH OGONEK
U+0101 # LATIN SMALL LETTER A WITH MACRON
U+01CE # LATIN SMALL LETTER A WITH CARON
U+00E6 # LATIN SMALL LETTER AE
U+0062 # LATIN SMALL LETTER B
U+0063 # LATIN SMALL LETTER C
U+0107 # LATIN SMALL LETTER C WITH ACUTE
U+010D # LATIN SMALL LETTER C WITH CARON
U+010B # LATIN SMALL LETTER C WITH DOT ABOVE
U+00E7 # LATIN SMALL LETTER C WITH CEDILLA
U+0064 # LATIN SMALL LETTER D
U+010F # LATIN SMALL LETTER D WITH CARON
@@ -65,20 +59,14 @@ U+0259 # LATIN SMALL LETTER SCHWA
U+0066 # LATIN SMALL LETTER F
U+0067 # LATIN SMALL LETTER G
U+011F # LATIN SMALL LETTER G WITH BREVE
U+01E7 # LATIN SMALL LETTER G WITH CARON
U+0121 # LATIN SMALL LETTER G WITH DOT ABOVE
U+0123 # LATIN SMALL LETTER G WITH CEDILLA
U+0068 # LATIN SMALL LETTER H
U+0127 # LATIN SMALL LETTER H WITH STROKE
U+0069 # LATIN SMALL LETTER I
U+0131 # LATIN SMALL LETTER DOTLESS I
U+00ED # LATIN SMALL LETTER I WITH ACUTE
U+00EC # LATIN SMALL LETTER I WITH GRAVE
U+00EE # LATIN SMALL LETTER I WITH CIRCUMFLEX
U+00EF # LATIN SMALL LETTER I WITH DIAERESIS
U+012F # LATIN SMALL LETTER I WITH OGONEK
U+012B # LATIN SMALL LETTER I WITH MACRON
U+01D0 # LATIN SMALL LETTER I WITH CARON
U+006A # LATIN SMALL LETTER J
U+006B # LATIN SMALL LETTER K
U+01E9 # LATIN SMALL LETTER K WITH CARON
@@ -90,20 +78,15 @@ U+013C # LATIN SMALL LETTER L WITH CEDILLA
U+0142 # LATIN SMALL LETTER L WITH STROKE
U+006D # LATIN SMALL LETTER M
U+006E # LATIN SMALL LETTER N
U+0144 # LATIN SMALL LETTER N WITH ACUTE
U+0148 # LATIN SMALL LETTER N WITH CARON
U+00F1 # LATIN SMALL LETTER N WITH TILDE
U+0146 # LATIN SMALL LETTER N WITH CEDILLA
U+014B # LATIN SMALL LETTER ENG
U+006F # LATIN SMALL LETTER O
U+00F3 # LATIN SMALL LETTER O WITH ACUTE
U+00F2 # LATIN SMALL LETTER O WITH GRAVE
U+00F4 # LATIN SMALL LETTER O WITH CIRCUMFLEX
U+00F6 # LATIN SMALL LETTER O WITH DIAERESIS
U+0151 # LATIN SMALL LETTER O WITH DOUBLE ACUTE
U+00F5 # LATIN SMALL LETTER O WITH TILDE
U+014D # LATIN SMALL LETTER O WITH MACRON
U+01D2 # LATIN SMALL LETTER O WITH CARON
U+00F8 # LATIN SMALL LETTER O WITH STROKE
U+0153 # LATIN SMALL LIGATURE OE
U+0070 # LATIN SMALL LETTER P
@@ -111,41 +94,31 @@ U+0071 # LATIN SMALL LETTER Q
U+0072 # LATIN SMALL LETTER R
U+0155 # LATIN SMALL LETTER R WITH ACUTE
U+0159 # LATIN SMALL LETTER R WITH CARON
U+0157 # LATIN SMALL LETTER R WITH CEDILLA
U+0073 # LATIN SMALL LETTER S
U+015B # LATIN SMALL LETTER S WITH ACUTE
U+0161 # LATIN SMALL LETTER S WITH CARON
U+015F # LATIN SMALL LETTER S WITH CEDILLA
U+0074 # LATIN SMALL LETTER T
U+0165 # LATIN SMALL LETTER T WITH CARON
U+0163 # LATIN SMALL LETTER T WITH CEDILLA
U+0167 # LATIN SMALL LETTER T WITH STROKE
U+0075 # LATIN SMALL LETTER U
U+00FA # LATIN SMALL LETTER U WITH ACUTE
U+00F9 # LATIN SMALL LETTER U WITH GRAVE
U+00FB # LATIN SMALL LETTER U WITH CIRCUMFLEX
U+016F # LATIN SMALL LETTER U WITH RING ABOVE
U+00FC # LATIN SMALL LETTER U WITH DIAERESIS
U+0171 # LATIN SMALL LETTER U WITH DOUBLE ACUTE
U+0173 # LATIN SMALL LETTER U WITH OGONEK
U+016B # LATIN SMALL LETTER U WITH MACRON
U+01D4 # LATIN SMALL LETTER U WITH CARON
U+0076 # LATIN SMALL LETTER V
U+0077 # LATIN SMALL LETTER W
U+1E83 # LATIN SMALL LETTER W WITH ACUTE
U+1E81 # LATIN SMALL LETTER W WITH GRAVE
U+0175 # LATIN SMALL LETTER W WITH CIRCUMFLEX
U+1E85 # LATIN SMALL LETTER W WITH DIAERESIS
U+0078 # LATIN SMALL LETTER X
U+0079 # LATIN SMALL LETTER Y
U+00FD # LATIN SMALL LETTER Y WITH ACUTE
U+1EF3 # LATIN SMALL LETTER Y WITH GRAVE
U+0177 # LATIN SMALL LETTER Y WITH CIRCUMFLEX
U+00FF # LATIN SMALL LETTER Y WITH DIAERESIS
U+007A # LATIN SMALL LETTER Z
U+017A # LATIN SMALL LETTER Z WITH ACUTE
U+017E # LATIN SMALL LETTER Z WITH CARON
U+017C # LATIN SMALL LETTER Z WITH DOT ABOVE
U+0292 # LATIN SMALL LETTER EZH
U+01EF # LATIN SMALL LETTER EZH WITH CARON
U+00FE # LATIN SMALL LETTER THORN

View File

@@ -0,0 +1,7 @@
# IDN tables
This directory contains the most recent version of our approved IDN tables. They
should match what is published (or will be published soon) by IANA. In the case
of IDN tables that have had multiple revisions over time, this directory should
contain only the latest version, not previous ones, even if those earlier versions
are still in use by previously launched TLDs.

View File

@@ -28,13 +28,13 @@ import com.google.common.collect.ImmutableMultimap;
import com.google.common.collect.Iterators;
import com.google.common.flogger.FluentLogger;
import com.google.protobuf.Timestamp;
import google.registry.batch.CloudTasksUtils;
import google.registry.config.RegistryEnvironment;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.auth.Auth;
import google.registry.security.XsrfTokenManager;
import google.registry.util.CloudTasksUtils;
import java.time.Instant;
import java.util.Arrays;
import java.util.Iterator;
@@ -337,7 +337,7 @@ public class LoadTestAction implements Runnable {
cloudTasksUtils
.createPostTask(
"/_dr/epptool",
Service.TOOLS.toString(),
Service.TOOLS,
ImmutableMultimap.of(
"clientId",
registrarId,

View File

@@ -403,7 +403,7 @@ public abstract class EppResource extends UpdateAutoTimestampEntity implements B
public static ImmutableMap<VKey<? extends EppResource>, EppResource> loadCached(
Iterable<VKey<? extends EppResource>> keys) {
if (!RegistryConfig.isEppResourceCachingEnabled()) {
return tm().loadByKeys(keys);
return tm().transact(() -> tm().loadByKeys(keys));
}
return ImmutableMap.copyOf(cacheEppResources.getAll(keys));
}
@@ -416,7 +416,7 @@ public abstract class EppResource extends UpdateAutoTimestampEntity implements B
*/
public static <T extends EppResource> T loadCached(VKey<T> key) {
if (!RegistryConfig.isEppResourceCachingEnabled()) {
return tm().loadByKey(key);
return tm().transact(() -> tm().loadByKey(key));
}
// Safe to cast because loading a Key<T> returns an entity of type T.
@SuppressWarnings("unchecked")

View File

@@ -35,7 +35,7 @@ import google.registry.model.eppcommon.StatusValue;
import google.registry.model.host.Host;
import google.registry.model.reporting.HistoryEntry;
import google.registry.model.reporting.HistoryEntryDao;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.model.transfer.DomainTransferData;
import google.registry.model.transfer.TransferData;
import google.registry.model.transfer.TransferStatus;
@@ -73,7 +73,7 @@ public final class EppResourceUtils {
/** Returns the full domain repoId in the format HEX-TLD for the specified long id and tld. */
public static String createDomainRepoId(long repoId, String tld) {
return createRepoId(repoId, Registry.get(tld).getRoidSuffix());
return createRepoId(repoId, Tld.get(tld).getRoidSuffix());
}
/** Returns the full repoId in the format HEX-TLD for the specified long id and ROID suffix. */

View File

@@ -17,8 +17,8 @@ package google.registry.model;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Preconditions.checkState;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.model.tld.Registry.TldState.GENERAL_AVAILABILITY;
import static google.registry.model.tld.Registry.TldState.START_DATE_SUNRISE;
import static google.registry.model.tld.Tld.TldState.GENERAL_AVAILABILITY;
import static google.registry.model.tld.Tld.TldState.START_DATE_SUNRISE;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.util.DateTimeUtils.START_OF_TIME;
@@ -33,9 +33,9 @@ import google.registry.model.pricing.StaticPremiumListPricingEngine;
import google.registry.model.registrar.Registrar;
import google.registry.model.registrar.RegistrarAddress;
import google.registry.model.registrar.RegistrarPoc;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Registry.TldState;
import google.registry.model.tld.Registry.TldType;
import google.registry.model.tld.Tld;
import google.registry.model.tld.Tld.TldState;
import google.registry.model.tld.Tld.TldType;
import google.registry.model.tld.label.PremiumList;
import google.registry.model.tld.label.PremiumListDao;
import google.registry.persistence.VKey;
@@ -121,9 +121,9 @@ public final class OteAccountBuilder {
ImmutableMap.of(CurrencyUnit.USD, "123", CurrencyUnit.JPY, "456");
private final ImmutableMap<String, String> registrarIdToTld;
private final Registry sunriseTld;
private final Registry gaTld;
private final Registry eapTld;
private final Tld sunriseTld;
private final Tld gaTld;
private final Tld eapTld;
private final ImmutableList.Builder<RegistrarPoc> contactsBuilder = new ImmutableList.Builder<>();
private ImmutableList<Registrar> registrars;
@@ -247,7 +247,7 @@ public final class OteAccountBuilder {
/** Saves all the OT&amp;E entities we created. */
private void saveAllEntities() {
// use ImmutableObject instead of Registry so that the Key generation doesn't break
ImmutableList<Registry> registries = ImmutableList.of(sunriseTld, gaTld, eapTld);
ImmutableList<Tld> registries = ImmutableList.of(sunriseTld, gaTld, eapTld);
ImmutableList<RegistrarPoc> contacts = contactsBuilder.build();
tm().transact(
@@ -255,8 +255,7 @@ public final class OteAccountBuilder {
if (!replaceExisting) {
ImmutableList<VKey<? extends ImmutableObject>> keys =
Streams.concat(
registries.stream()
.map(registry -> Registry.createVKey(registry.getTldStr())),
registries.stream().map(tld -> Tld.createVKey(tld.getTldStr())),
registrars.stream().map(Registrar::createVKey),
contacts.stream().map(RegistrarPoc::createVKey))
.collect(toImmutableList());
@@ -291,16 +290,13 @@ public final class OteAccountBuilder {
.build();
}
private static Registry createTld(
String tldName,
TldState initialTldState,
boolean isEarlyAccess,
int roidSuffix) {
private static Tld createTld(
String tldName, TldState initialTldState, boolean isEarlyAccess, int roidSuffix) {
String tldNameAlphaNumerical = tldName.replaceAll("[^a-z\\d]", "");
Optional<PremiumList> premiumList = PremiumListDao.getLatestRevision(DEFAULT_PREMIUM_LIST);
checkState(premiumList.isPresent(), "Couldn't find premium list %s.", DEFAULT_PREMIUM_LIST);
Registry.Builder builder =
new Registry.Builder()
Tld.Builder builder =
new Tld.Builder()
.setTldStr(tldName)
.setPremiumPricingEngine(StaticPremiumListPricingEngine.NAME)
.setTldStateTransitions(ImmutableSortedMap.of(START_OF_TIME, initialTldState))

View File

@@ -0,0 +1,264 @@
// Copyright 2017 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.model.billing;
import static com.google.common.base.Preconditions.checkNotNull;
import static google.registry.util.CollectionUtils.forceEmptyToNull;
import static google.registry.util.CollectionUtils.nullToEmptyImmutableCopy;
import static google.registry.util.PreconditionsUtils.checkArgumentNotNull;
import com.google.common.collect.ImmutableSet;
import google.registry.model.Buildable;
import google.registry.model.ImmutableObject;
import google.registry.model.UnsafeSerializable;
import google.registry.model.annotations.IdAllocation;
import google.registry.model.domain.DomainHistory;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import google.registry.model.transfer.TransferData.TransferServerApproveEntity;
import google.registry.persistence.VKey;
import java.util.Set;
import javax.annotation.Nullable;
import javax.persistence.Column;
import javax.persistence.EnumType;
import javax.persistence.Enumerated;
import javax.persistence.Id;
import javax.persistence.MappedSuperclass;
import org.joda.time.DateTime;
/** A billable event in a domain's lifecycle. */
@MappedSuperclass
public abstract class BillingBase extends ImmutableObject
implements Buildable, TransferServerApproveEntity, UnsafeSerializable {
/** The reason for the bill, which maps 1:1 to skus in go/registry-billing-skus. */
public enum Reason {
CREATE(true),
@Deprecated // DO NOT USE THIS REASON. IT REMAINS BECAUSE OF HISTORICAL DATA. SEE b/31676071.
ERROR(false),
FEE_EARLY_ACCESS(true),
RENEW(true),
RESTORE(true),
SERVER_STATUS(false),
TRANSFER(true);
private final boolean requiresPeriod;
Reason(boolean requiresPeriod) {
this.requiresPeriod = requiresPeriod;
}
/**
* Returns whether billing events with this reason have a period years associated with them.
*
* <p>Note that this is an "if and only if" condition.
*/
public boolean hasPeriodYears() {
return requiresPeriod;
}
}
/** Set of flags that can be applied to billing events. */
public enum Flag {
ALLOCATION,
ANCHOR_TENANT,
AUTO_RENEW,
/** Landrush billing events are historical only and are no longer created. */
LANDRUSH,
/**
* This flag is used on create {@link BillingEvent} billing events for domains that were
* reserved.
*
* <p>This can happen when allocation tokens are used or superusers override a domain
* reservation. These cases may need special handling in billing/invoicing. Anchor tenants will
* never have this flag applied; they will have ANCHOR_TENANT instead.
*/
RESERVED,
SUNRISE,
/**
* This flag will be added to any {@link BillingEvent} events that are created via, e.g., an
* automated process to expand {@link BillingRecurrence} events.
*/
SYNTHETIC
}
/**
* Sets of renewal price behaviors that can be applied to billing recurrences.
*
* <p>When a client renews a domain, they could be charged differently, depending on factors such
* as the client type and the domain itself.
*/
public enum RenewalPriceBehavior {
/**
* This indicates the renewal price is the default price.
*
* <p>By default, if the domain is premium, then premium price will be used. Otherwise, the
* standard price of the TLD will be used.
*/
DEFAULT,
/**
* This indicates the domain will be renewed at standard price even if it's a premium domain.
*
* <p>We chose to name this "NONPREMIUM" rather than simply "STANDARD" to avoid confusion
* between "STANDARD" and "DEFAULT".
*
* <p>This price behavior is used with anchor tenants.
*/
NONPREMIUM,
/**
* This indicates that the renewalPrice in {@link BillingRecurrence} will be used for domain
* renewal.
*
* <p>The renewalPrice has a non-null value iff the price behavior is set to "SPECIFIED". This
* behavior is used with internal registrations.
*/
SPECIFIED
}
/** Entity id. */
@IdAllocation @Id Long id;
/** The registrar to bill. */
@Column(name = "registrarId", nullable = false)
String clientId;
/** Revision id of the entry in the DomainHistory table that this bill belongs to. */
@Column(nullable = false)
Long domainHistoryRevisionId;
/** ID of the EPP resource that the bill is for. */
@Column(nullable = false)
String domainRepoId;
/** When this event was created. For recurrence events, this is also the recurrence start time. */
@Column(nullable = false)
DateTime eventTime;
/** The reason for the bill. */
@Enumerated(EnumType.STRING)
@Column(nullable = false)
Reason reason;
/** The fully qualified domain name of the domain that the bill is for. */
@Column(name = "domain_name", nullable = false)
String targetId;
@Nullable Set<Flag> flags;
public String getRegistrarId() {
return clientId;
}
public long getDomainHistoryRevisionId() {
return domainHistoryRevisionId;
}
public String getDomainRepoId() {
return domainRepoId;
}
public DateTime getEventTime() {
return eventTime;
}
public long getId() {
return id;
}
public Reason getReason() {
return reason;
}
public String getTargetId() {
return targetId;
}
public HistoryEntryId getHistoryEntryId() {
return new HistoryEntryId(domainRepoId, domainHistoryRevisionId);
}
public ImmutableSet<Flag> getFlags() {
return nullToEmptyImmutableCopy(flags);
}
@Override
public abstract VKey<? extends BillingBase> createVKey();
/** Override Buildable.asBuilder() to give this method stronger typing. */
@Override
public abstract Builder<?, ?> asBuilder();
/** An abstract builder for {@link BillingBase}. */
public abstract static class Builder<T extends BillingBase, B extends Builder<?, ?>>
extends GenericBuilder<T, B> {
protected Builder() {}
protected Builder(T instance) {
super(instance);
}
public B setReason(Reason reason) {
getInstance().reason = reason;
return thisCastToDerived();
}
public B setId(long id) {
getInstance().id = id;
return thisCastToDerived();
}
public B setRegistrarId(String registrarId) {
getInstance().clientId = registrarId;
return thisCastToDerived();
}
public B setEventTime(DateTime eventTime) {
getInstance().eventTime = eventTime;
return thisCastToDerived();
}
public B setTargetId(String targetId) {
getInstance().targetId = targetId;
return thisCastToDerived();
}
public B setFlags(ImmutableSet<Flag> flags) {
getInstance().flags = forceEmptyToNull(checkArgumentNotNull(flags, "flags"));
return thisCastToDerived();
}
public B setDomainHistoryId(HistoryEntryId domainHistoryId) {
getInstance().domainHistoryRevisionId = domainHistoryId.getRevisionId();
getInstance().domainRepoId = domainHistoryId.getRepoId();
return thisCastToDerived();
}
public B setDomainHistory(DomainHistory domainHistory) {
return setDomainHistoryId(domainHistory.getHistoryEntryId());
}
@Override
public T build() {
T instance = getInstance();
checkNotNull(instance.reason, "Reason must be set");
checkNotNull(instance.clientId, "Registrar ID must be set");
checkNotNull(instance.eventTime, "Event time must be set");
checkNotNull(instance.targetId, "Target ID must be set");
checkNotNull(instance.domainHistoryRevisionId, "Domain History Revision ID must be set");
checkNotNull(instance.domainRepoId, "Domain Repo ID must be set");
return super.build();
}
}
}
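
As a usage note for the builder contract above: BillingBase.Builder.build() requires the reason, registrar ID, event time, target ID, and domain history ID, and the concrete subclasses shown later in this comparison add their own fields on top. A minimal sketch, assuming the BillingEvent.Builder setters that appear later in this diff; all literal values are placeholders.

// Minimal sketch, not Nomulus source: satisfying the BillingBase.Builder.build() checks via the
// one-time BillingEvent.Builder shown later in this comparison.
import google.registry.model.billing.BillingBase.Reason;
import google.registry.model.billing.BillingEvent;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import org.joda.money.Money;
import org.joda.time.DateTime;

final class BillingEventBuilderSketch {
  private BillingEventBuilderSketch() {}

  /** Builds a one-time CREATE charge; CREATE requires period years per Reason.hasPeriodYears(). */
  static BillingEvent newCreateCharge(HistoryEntryId domainHistoryId) {
    return new BillingEvent.Builder()
        .setReason(Reason.CREATE)
        .setRegistrarId("TheRegistrar")                        // placeholder registrar ID
        .setTargetId("example.tld")                            // placeholder domain name
        .setEventTime(DateTime.parse("2023-05-01T00:00:00Z"))
        .setBillingTime(DateTime.parse("2023-05-06T00:00:00Z"))
        .setPeriodYears(1)
        .setCost(Money.parse("USD 10.00"))
        .setDomainHistoryId(domainHistoryId)
        .build();
  }
}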

View File

@@ -0,0 +1,166 @@
// Copyright 2023 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.model.billing;
import static com.google.common.base.MoreObjects.firstNonNull;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Preconditions.checkNotNull;
import static com.google.common.base.Preconditions.checkState;
import com.google.common.collect.ImmutableMap;
import google.registry.model.domain.GracePeriod;
import google.registry.model.domain.rgp.GracePeriodStatus;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import google.registry.persistence.VKey;
import google.registry.persistence.WithVKey;
import javax.persistence.AttributeOverride;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Index;
import javax.persistence.Table;
import org.joda.time.DateTime;
/**
* An event representing a cancellation of one of the other two billable event types.
*
* <p>This is implemented as a separate event rather than a bit on BillingEvent in order to preserve
* the immutability of billing events.
*/
@Entity
@Table(
indexes = {
@Index(columnList = "registrarId"),
@Index(columnList = "eventTime"),
@Index(columnList = "domainRepoId"),
@Index(columnList = "billingTime"),
@Index(columnList = "billing_event_id"),
@Index(columnList = "billing_recurrence_id")
})
@AttributeOverride(name = "id", column = @Column(name = "billing_cancellation_id"))
@WithVKey(Long.class)
public class BillingCancellation extends BillingBase {
/** The billing time of the charge that is being cancelled. */
DateTime billingTime;
/** The one-time billing event to cancel, or null for autorenew cancellations. */
@Column(name = "billing_event_id")
VKey<BillingEvent> billingEvent;
/** The Recurrence to cancel, or null for non-autorenew cancellations. */
@Column(name = "billing_recurrence_id")
VKey<BillingRecurrence> billingRecurrence;
public DateTime getBillingTime() {
return billingTime;
}
public VKey<? extends BillingBase> getEventKey() {
return firstNonNull(billingEvent, billingRecurrence);
}
/** The mapping from billable grace period types to originating billing event reasons. */
static final ImmutableMap<GracePeriodStatus, Reason> GRACE_PERIOD_TO_REASON =
ImmutableMap.of(
GracePeriodStatus.ADD, Reason.CREATE,
GracePeriodStatus.AUTO_RENEW, Reason.RENEW,
GracePeriodStatus.RENEW, Reason.RENEW,
GracePeriodStatus.TRANSFER, Reason.TRANSFER);
/**
* Creates a cancellation billing event (parented on the provided history key, and with the
* corresponding event time) that will cancel out the provided grace period's billing event, using
* the supplied targetId and deriving other metadata (clientId, billing time, and the cancellation
* reason) from the grace period.
*/
public static google.registry.model.billing.BillingCancellation forGracePeriod(
GracePeriod gracePeriod,
DateTime eventTime,
HistoryEntryId domainHistoryId,
String targetId) {
checkArgument(
gracePeriod.hasBillingEvent(),
"Cannot create cancellation for grace period without billing event");
Builder builder =
new Builder()
.setReason(checkNotNull(GRACE_PERIOD_TO_REASON.get(gracePeriod.getType())))
.setTargetId(targetId)
.setRegistrarId(gracePeriod.getRegistrarId())
.setEventTime(eventTime)
// The charge being cancelled will take place at the grace period's expiration time.
.setBillingTime(gracePeriod.getExpirationTime())
.setDomainHistoryId(domainHistoryId);
// Set the grace period's billing event using the appropriate Cancellation builder method.
if (gracePeriod.getBillingEvent() != null) {
builder.setBillingEvent(gracePeriod.getBillingEvent());
} else if (gracePeriod.getBillingRecurrence() != null) {
builder.setBillingRecurrence(gracePeriod.getBillingRecurrence());
}
return builder.build();
}
@Override
public VKey<google.registry.model.billing.BillingCancellation> createVKey() {
return createVKey(getId());
}
public static VKey<google.registry.model.billing.BillingCancellation> createVKey(long id) {
return VKey.create(google.registry.model.billing.BillingCancellation.class, id);
}
@Override
public Builder asBuilder() {
return new Builder(clone(this));
}
/**
* A builder for {@link google.registry.model.billing.BillingCancellation} since it is immutable.
*/
public static class Builder
extends BillingBase.Builder<google.registry.model.billing.BillingCancellation, Builder> {
public Builder() {}
private Builder(google.registry.model.billing.BillingCancellation instance) {
super(instance);
}
public Builder setBillingTime(DateTime billingTime) {
getInstance().billingTime = billingTime;
return this;
}
public Builder setBillingEvent(VKey<BillingEvent> billingEvent) {
getInstance().billingEvent = billingEvent;
return this;
}
public Builder setBillingRecurrence(VKey<BillingRecurrence> billingRecurrence) {
getInstance().billingRecurrence = billingRecurrence;
return this;
}
@Override
public google.registry.model.billing.BillingCancellation build() {
google.registry.model.billing.BillingCancellation instance = getInstance();
checkNotNull(instance.billingTime, "Must set billing time");
checkNotNull(instance.reason, "Must set reason");
checkState(
(instance.billingEvent == null) != (instance.billingRecurrence == null),
"Cancellations must have exactly one billing event key set");
return super.build();
}
}
}
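
For completeness, a minimal sketch (not Nomulus source) of constructing a BillingCancellation directly, illustrating the build() invariant above that exactly one of the billing event key or the billing recurrence key is set; literal values are placeholders. The forGracePeriod() factory above is the path the flows normally take.

// Minimal sketch, not Nomulus source: cancelling an autorenew charge by pointing the
// cancellation at the BillingRecurrence being undone.
import google.registry.model.billing.BillingBase.Reason;
import google.registry.model.billing.BillingCancellation;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import google.registry.persistence.VKey;
import org.joda.time.DateTime;

final class BillingCancellationSketch {
  private BillingCancellationSketch() {}

  static BillingCancellation cancelAutorenew(
      VKey<BillingRecurrence> recurrenceKey, HistoryEntryId domainHistoryId) {
    return new BillingCancellation.Builder()
        .setReason(Reason.RENEW)                               // autorenew charges map to RENEW
        .setRegistrarId("TheRegistrar")                        // placeholder
        .setTargetId("example.tld")                            // placeholder
        .setEventTime(DateTime.parse("2023-05-01T00:00:00Z"))
        .setBillingTime(DateTime.parse("2023-06-15T00:00:00Z"))
        .setBillingRecurrence(recurrenceKey)                   // exactly one of event/recurrence keys
        .setDomainHistoryId(domainHistoryId)
        .build();
  }
}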

View File

@@ -1,4 +1,4 @@
// Copyright 2017 The Nomulus Authors. All Rights Reserved.
// Copyright 2023 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -14,740 +14,197 @@
package google.registry.model.billing;
import static com.google.common.base.MoreObjects.firstNonNull;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Preconditions.checkNotNull;
import static com.google.common.base.Preconditions.checkState;
import static google.registry.util.CollectionUtils.forceEmptyToNull;
import static google.registry.util.CollectionUtils.nullToEmptyImmutableCopy;
import static google.registry.util.DateTimeUtils.END_OF_TIME;
import static google.registry.util.PreconditionsUtils.checkArgumentNotNull;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import google.registry.model.Buildable;
import google.registry.model.ImmutableObject;
import google.registry.model.UnsafeSerializable;
import google.registry.model.annotations.IdAllocation;
import google.registry.model.common.TimeOfYear;
import google.registry.model.domain.DomainHistory;
import google.registry.model.domain.GracePeriod;
import google.registry.model.domain.rgp.GracePeriodStatus;
import google.registry.model.domain.token.AllocationToken;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import google.registry.model.transfer.TransferData.TransferServerApproveEntity;
import google.registry.persistence.VKey;
import google.registry.persistence.WithVKey;
import google.registry.persistence.converter.JodaMoneyType;
import java.util.Optional;
import java.util.Set;
import javax.annotation.Nullable;
import javax.persistence.AttributeOverride;
import javax.persistence.AttributeOverrides;
import javax.persistence.Column;
import javax.persistence.Embedded;
import javax.persistence.Entity;
import javax.persistence.EnumType;
import javax.persistence.Enumerated;
import javax.persistence.Id;
import javax.persistence.Index;
import javax.persistence.MappedSuperclass;
import javax.persistence.Table;
import org.hibernate.annotations.Columns;
import org.hibernate.annotations.Type;
import org.joda.money.Money;
import org.joda.time.DateTime;
/** A billable event in a domain's lifecycle. */
@MappedSuperclass
public abstract class BillingEvent extends ImmutableObject
implements Buildable, TransferServerApproveEntity, UnsafeSerializable {
/** A one-time billable event. */
@Entity
@Table(
indexes = {
@Index(columnList = "registrarId"),
@Index(columnList = "eventTime"),
@Index(columnList = "billingTime"),
@Index(columnList = "syntheticCreationTime"),
@Index(columnList = "domainRepoId"),
@Index(columnList = "allocationToken"),
@Index(columnList = "cancellation_matching_billing_recurrence_id")
})
@AttributeOverride(name = "id", column = @Column(name = "billing_event_id"))
@WithVKey(Long.class)
public class BillingEvent extends BillingBase {
/** The reason for the bill, which maps 1:1 to skus in go/registry-billing-skus. */
public enum Reason {
CREATE(true),
@Deprecated // DO NOT USE THIS REASON. IT REMAINS BECAUSE OF HISTORICAL DATA. SEE b/31676071.
ERROR(false),
FEE_EARLY_ACCESS(true),
RENEW(true),
RESTORE(true),
SERVER_STATUS(false),
TRANSFER(true);
/** The billable value. */
@Type(type = JodaMoneyType.TYPE_NAME)
@Columns(columns = {@Column(name = "cost_amount"), @Column(name = "cost_currency")})
Money cost;
private final boolean requiresPeriod;
Reason(boolean requiresPeriod) {
this.requiresPeriod = requiresPeriod;
}
/**
* Returns whether billing events with this reason have a period years associated with them.
*
* <p>Note that this is an "if and only if" condition.
*/
public boolean hasPeriodYears() {
return requiresPeriod;
}
}
/** Set of flags that can be applied to billing events. */
public enum Flag {
ALLOCATION,
ANCHOR_TENANT,
AUTO_RENEW,
/** Landrush billing events are historical only and are no longer created. */
LANDRUSH,
/**
* This flag is used on create {@link OneTime} billing events for domains that were reserved.
*
* <p>This can happen when allocation tokens are used or superusers override a domain
* reservation. These cases may need special handling in billing/invoicing. Anchor tenants will
* never have this flag applied; they will have ANCHOR_TENANT instead.
*/
RESERVED,
SUNRISE,
/**
* This flag will be added to any {@link OneTime} events that are created via, e.g., an
* automated process to expand {@link Recurring} events.
*/
SYNTHETIC
}
/** When the cost should be billed. */
DateTime billingTime;
/**
* Sets of renewal price behaviors that can be applied to billing recurrences.
*
* <p>When a client renews a domain, they could be charged differently, depending on factors such
* as the client type and the domain itself.
* The period in years of the action being billed for, if applicable, otherwise null. Used for
* financial reporting.
*/
public enum RenewalPriceBehavior {
/**
* This indicates the renewal price is the default price.
*
* <p>By default, if the domain is premium, then premium price will be used. Otherwise, the
* standard price of the TLD will be used.
*/
DEFAULT,
/**
* This indicates the domain will be renewed at standard price even if it's a premium domain.
*
* <p>We chose to name this "NONPREMIUM" rather than simply "STANDARD" to avoid confusion
* between "STANDARD" and "DEFAULT".
*
* <p>This price behavior is used with anchor tenants.
*/
NONPREMIUM,
/**
* This indicates that the renewalPrice in {@link Recurring} will be used for domain renewal.
*
* <p>The renewalPrice has a non-null value iff the price behavior is set to "SPECIFIED". This
* behavior is used with internal registrations.
*/
SPECIFIED
Integer periodYears;
/**
* For {@link Flag#SYNTHETIC} events, when this event was persisted to the database (i.e. the
* cursor position at the time the recurrence expansion job was last run). In the event a job
* needs to be undone, a query on this field will return the complete set of potentially bad
* events.
*/
DateTime syntheticCreationTime;
/**
* For {@link Flag#SYNTHETIC} events, a {@link VKey} to the {@link BillingRecurrence} from which
* this {@link google.registry.model.billing.BillingEvent} was created. This is needed in order to
* properly match billing events against {@link BillingCancellation}s.
*/
@Column(name = "cancellation_matching_billing_recurrence_id")
VKey<BillingRecurrence> cancellationMatchingBillingEvent;
/**
* For {@link Flag#SYNTHETIC} events, the {@link DomainHistory} revision ID of the {@link
* BillingRecurrence} from which this {@link google.registry.model.billing.BillingEvent} was
* created. This is needed in order to recreate the {@link VKey} when reading from SQL.
*/
@Column(name = "recurrence_history_revision_id")
Long recurrenceHistoryRevisionId;
/**
* The {@link AllocationToken} used in the creation of this event, or null if one was not used.
*/
@Nullable VKey<AllocationToken> allocationToken;
public Money getCost() {
return cost;
}
/** Entity id. */
@IdAllocation @Id Long id;
/** The registrar to bill. */
@Column(name = "registrarId", nullable = false)
String clientId;
/** Revision id of the entry in the DomainHistory table that this bill belongs to. */
@Column(nullable = false)
Long domainHistoryRevisionId;
/** ID of the EPP resource that the bill is for. */
@Column(nullable = false)
String domainRepoId;
/** When this event was created. For recurring events, this is also the recurrence start time. */
@Column(nullable = false)
DateTime eventTime;
/** The reason for the bill. */
@Enumerated(EnumType.STRING)
@Column(nullable = false)
Reason reason;
/** The fully qualified domain name of the domain that the bill is for. */
@Column(name = "domain_name", nullable = false)
String targetId;
@Nullable Set<Flag> flags;
public String getRegistrarId() {
return clientId;
public DateTime getBillingTime() {
return billingTime;
}
public long getDomainHistoryRevisionId() {
return domainHistoryRevisionId;
public Integer getPeriodYears() {
return periodYears;
}
public String getDomainRepoId() {
return domainRepoId;
public DateTime getSyntheticCreationTime() {
return syntheticCreationTime;
}
public DateTime getEventTime() {
return eventTime;
public VKey<BillingRecurrence> getCancellationMatchingBillingEvent() {
return cancellationMatchingBillingEvent;
}
public long getId() {
return id;
public Long getRecurrenceHistoryRevisionId() {
return recurrenceHistoryRevisionId;
}
public Reason getReason() {
return reason;
}
public String getTargetId() {
return targetId;
}
public HistoryEntryId getHistoryEntryId() {
return new HistoryEntryId(domainRepoId, domainHistoryRevisionId);
}
public ImmutableSet<Flag> getFlags() {
return nullToEmptyImmutableCopy(flags);
public Optional<VKey<AllocationToken>> getAllocationToken() {
return Optional.ofNullable(allocationToken);
}
@Override
public abstract VKey<? extends BillingEvent> createVKey();
public VKey<google.registry.model.billing.BillingEvent> createVKey() {
return createVKey(getId());
}
public static VKey<google.registry.model.billing.BillingEvent> createVKey(long id) {
return VKey.create(google.registry.model.billing.BillingEvent.class, id);
}
/** Override Buildable.asBuilder() to give this method stronger typing. */
@Override
public abstract Builder<?, ?> asBuilder();
public Builder asBuilder() {
return new Builder(clone(this));
}
/** An abstract builder for {@link BillingEvent}. */
public abstract static class Builder<T extends BillingEvent, B extends Builder<?, ?>>
extends GenericBuilder<T, B> {
/** A builder for {@link google.registry.model.billing.BillingEvent} since it is immutable. */
public static class Builder
extends BillingBase.Builder<google.registry.model.billing.BillingEvent, Builder> {
protected Builder() {}
public Builder() {}
protected Builder(T instance) {
private Builder(google.registry.model.billing.BillingEvent instance) {
super(instance);
}
public B setReason(Reason reason) {
getInstance().reason = reason;
return thisCastToDerived();
public Builder setCost(Money cost) {
getInstance().cost = cost;
return this;
}
public B setId(long id) {
getInstance().id = id;
return thisCastToDerived();
public Builder setPeriodYears(Integer periodYears) {
checkNotNull(periodYears);
checkArgument(periodYears > 0);
getInstance().periodYears = periodYears;
return this;
}
public B setRegistrarId(String registrarId) {
getInstance().clientId = registrarId;
return thisCastToDerived();
public Builder setBillingTime(DateTime billingTime) {
getInstance().billingTime = billingTime;
return this;
}
public B setEventTime(DateTime eventTime) {
getInstance().eventTime = eventTime;
return thisCastToDerived();
public Builder setSyntheticCreationTime(DateTime syntheticCreationTime) {
getInstance().syntheticCreationTime = syntheticCreationTime;
return this;
}
public B setTargetId(String targetId) {
getInstance().targetId = targetId;
return thisCastToDerived();
public Builder setCancellationMatchingBillingEvent(
BillingRecurrence cancellationMatchingBillingEvent) {
getInstance().cancellationMatchingBillingEvent =
cancellationMatchingBillingEvent.createVKey();
getInstance().recurrenceHistoryRevisionId =
cancellationMatchingBillingEvent.getDomainHistoryRevisionId();
return this;
}
public B setFlags(ImmutableSet<Flag> flags) {
getInstance().flags = forceEmptyToNull(checkArgumentNotNull(flags, "flags"));
return thisCastToDerived();
}
public B setDomainHistoryId(HistoryEntryId domainHistoryId) {
getInstance().domainHistoryRevisionId = domainHistoryId.getRevisionId();
getInstance().domainRepoId = domainHistoryId.getRepoId();
return thisCastToDerived();
}
public B setDomainHistory(DomainHistory domainHistory) {
return setDomainHistoryId(domainHistory.getHistoryEntryId());
public Builder setAllocationToken(@Nullable VKey<AllocationToken> allocationToken) {
getInstance().allocationToken = allocationToken;
return this;
}
@Override
public T build() {
T instance = getInstance();
checkNotNull(instance.reason, "Reason must be set");
checkNotNull(instance.clientId, "Registrar ID must be set");
checkNotNull(instance.eventTime, "Event time must be set");
checkNotNull(instance.targetId, "Target ID must be set");
checkNotNull(instance.domainHistoryRevisionId, "Domain History Revision ID must be set");
checkNotNull(instance.domainRepoId, "Domain Repo ID must be set");
public google.registry.model.billing.BillingEvent build() {
google.registry.model.billing.BillingEvent instance = getInstance();
checkNotNull(instance.billingTime);
checkNotNull(instance.cost);
checkState(!instance.cost.isNegative(), "Costs should be non-negative.");
// TODO(mcilwain): Enforce this check on all billing events (not just more recent ones)
// post-migration after we add the missing period years values in SQL.
if (instance.eventTime.isAfter(DateTime.parse("2019-01-01T00:00:00Z"))) {
checkState(
instance.reason.hasPeriodYears() == (instance.periodYears != null),
"Period years must be set if and only if reason is "
+ "CREATE, FEE_EARLY_ACCESS, RENEW, RESTORE or TRANSFER.");
}
checkState(
instance.getFlags().contains(Flag.SYNTHETIC) == (instance.syntheticCreationTime != null),
"Synthetic creation time must be set if and only if the SYNTHETIC flag is set.");
checkState(
instance.getFlags().contains(Flag.SYNTHETIC)
== (instance.cancellationMatchingBillingEvent != null),
"Cancellation matching billing event must be set if and only if the SYNTHETIC flag "
+ "is set.");
return super.build();
}
}
/** A one-time billable event. */
@Entity(name = "BillingEvent")
@Table(
indexes = {
@Index(columnList = "registrarId"),
@Index(columnList = "eventTime"),
@Index(columnList = "billingTime"),
@Index(columnList = "syntheticCreationTime"),
@Index(columnList = "domainRepoId"),
@Index(columnList = "allocationToken"),
@Index(columnList = "cancellation_matching_billing_recurrence_id")
})
@AttributeOverride(name = "id", column = @Column(name = "billing_event_id"))
@WithVKey(Long.class)
public static class OneTime extends BillingEvent {
/** The billable value. */
@Type(type = JodaMoneyType.TYPE_NAME)
@Columns(columns = {@Column(name = "cost_amount"), @Column(name = "cost_currency")})
Money cost;
/** When the cost should be billed. */
DateTime billingTime;
/**
* The period in years of the action being billed for, if applicable, otherwise null. Used for
* financial reporting.
*/
Integer periodYears;
/**
* For {@link Flag#SYNTHETIC} events, when this event was persisted to the database (i.e. the
* cursor position at the time the recurrence expansion job was last run). In the event a job
* needs to be undone, a query on this field will return the complete set of potentially bad
* events.
*/
DateTime syntheticCreationTime;
/**
* For {@link Flag#SYNTHETIC} events, a {@link VKey} to the {@link Recurring} from which this
* {@link OneTime} was created. This is needed in order to properly match billing events against
* {@link Cancellation}s.
*/
@Column(name = "cancellation_matching_billing_recurrence_id")
VKey<Recurring> cancellationMatchingBillingEvent;
/**
* For {@link Flag#SYNTHETIC} events, the {@link DomainHistory} revision ID of the {@link
* Recurring} from which this {@link OneTime} was created. This is needed in order to recreate
* the {@link VKey} when reading from SQL.
*/
@Column(name = "recurrence_history_revision_id")
Long recurringEventHistoryRevisionId;
/**
* The {@link AllocationToken} used in the creation of this event, or null if one was not used.
*/
@Nullable VKey<AllocationToken> allocationToken;
public Money getCost() {
return cost;
}
public DateTime getBillingTime() {
return billingTime;
}
public Integer getPeriodYears() {
return periodYears;
}
public DateTime getSyntheticCreationTime() {
return syntheticCreationTime;
}
public VKey<Recurring> getCancellationMatchingBillingEvent() {
return cancellationMatchingBillingEvent;
}
public Long getRecurringEventHistoryRevisionId() {
return recurringEventHistoryRevisionId;
}
public Optional<VKey<AllocationToken>> getAllocationToken() {
return Optional.ofNullable(allocationToken);
}
@Override
public VKey<OneTime> createVKey() {
return createVKey(getId());
}
public static VKey<OneTime> createVKey(long id) {
return VKey.create(OneTime.class, id);
}
@Override
public Builder asBuilder() {
return new Builder(clone(this));
}
/** A builder for {@link OneTime} since it is immutable. */
public static class Builder extends BillingEvent.Builder<OneTime, Builder> {
public Builder() {}
private Builder(OneTime instance) {
super(instance);
}
public Builder setCost(Money cost) {
getInstance().cost = cost;
return this;
}
public Builder setPeriodYears(Integer periodYears) {
checkNotNull(periodYears);
checkArgument(periodYears > 0);
getInstance().periodYears = periodYears;
return this;
}
public Builder setBillingTime(DateTime billingTime) {
getInstance().billingTime = billingTime;
return this;
}
public Builder setSyntheticCreationTime(DateTime syntheticCreationTime) {
getInstance().syntheticCreationTime = syntheticCreationTime;
return this;
}
public Builder setCancellationMatchingBillingEvent(
Recurring cancellationMatchingBillingEvent) {
getInstance().cancellationMatchingBillingEvent =
cancellationMatchingBillingEvent.createVKey();
getInstance().recurringEventHistoryRevisionId =
cancellationMatchingBillingEvent.getDomainHistoryRevisionId();
return this;
}
public Builder setAllocationToken(@Nullable VKey<AllocationToken> allocationToken) {
getInstance().allocationToken = allocationToken;
return this;
}
@Override
public OneTime build() {
OneTime instance = getInstance();
checkNotNull(instance.billingTime);
checkNotNull(instance.cost);
checkState(!instance.cost.isNegative(), "Costs should be non-negative.");
// TODO(mcilwain): Enforce this check on all billing events (not just more recent ones)
// post-migration after we add the missing period years values in SQL.
if (instance.eventTime.isAfter(DateTime.parse("2019-01-01T00:00:00Z"))) {
checkState(
instance.reason.hasPeriodYears() == (instance.periodYears != null),
"Period years must be set if and only if reason is "
+ "CREATE, FEE_EARLY_ACCESS, RENEW, RESTORE or TRANSFER.");
}
checkState(
instance.getFlags().contains(Flag.SYNTHETIC)
== (instance.syntheticCreationTime != null),
"Synthetic creation time must be set if and only if the SYNTHETIC flag is set.");
checkState(
instance.getFlags().contains(Flag.SYNTHETIC)
== (instance.cancellationMatchingBillingEvent != null),
"Cancellation matching billing event must be set if and only if the SYNTHETIC flag "
+ "is set.");
return super.build();
}
}
}
/**
* A recurring billable event.
*
* <p>Unlike {@link OneTime} events, these do not store an explicit cost, since the cost of the
* recurring event might change and each time we bill for it, we need to bill at the current cost,
* not the value that was in use at the time the recurrence was created.
*/
@Entity(name = "BillingRecurrence")
@Table(
indexes = {
@Index(columnList = "registrarId"),
@Index(columnList = "eventTime"),
@Index(columnList = "domainRepoId"),
@Index(columnList = "recurrenceEndTime"),
@Index(columnList = "recurrenceLastExpansion"),
@Index(columnList = "recurrence_time_of_year")
})
@AttributeOverride(name = "id", column = @Column(name = "billing_recurrence_id"))
@WithVKey(Long.class)
public static class Recurring extends BillingEvent {
/**
* The billing event recurs every year between {@link #eventTime} and this time on the [month,
* day, time] specified in {@link #recurrenceTimeOfYear}.
*/
DateTime recurrenceEndTime;
/**
* The most recent {@link DateTime} when this recurrence was expanded.
*
* <p>We only bother checking recurrences for potential expansion if this is at least one year
* in the past. If it's more recent than that, it means that the recurrence was already expanded
* too recently to need to be checked again (as domains autorenew each year).
*/
@Column(nullable = false)
DateTime recurrenceLastExpansion;
/**
* The eventTime recurs every year on this [month, day, time] between {@link #eventTime} and
* {@link #recurrenceEndTime}, inclusive of the start but not of the end.
*
* <p>This field is denormalized from {@link #eventTime} to allow for an efficient index, but it
* always has the same data as that field.
*
* <p>Note that this is a recurrence of the event time, not the billing time. The billing time
* can be calculated by adding the relevant grace period length to this date. The reason for
* this requirement is that the event time recurs on a {@link org.joda.time.Period} schedule
* (same day of year, which can be 365 or 366 days later) which is what {@link TimeOfYear} can
* model, whereas the billing time is a fixed {@link org.joda.time.Duration} later.
*/
@Embedded
@AttributeOverrides(
@AttributeOverride(name = "timeString", column = @Column(name = "recurrence_time_of_year")))
TimeOfYear recurrenceTimeOfYear;
/**
* The renewal price for domain renewal if and only if it's specified.
*
* <p>This price column remains null except when the renewal price behavior of the billing is
* SPECIFIED. This column is used for internal registrations.
*/
@Nullable
@Type(type = JodaMoneyType.TYPE_NAME)
@Columns(
columns = {@Column(name = "renewalPriceAmount"), @Column(name = "renewalPriceCurrency")})
Money renewalPrice;
@Enumerated(EnumType.STRING)
@Column(name = "renewalPriceBehavior", nullable = false)
RenewalPriceBehavior renewalPriceBehavior = RenewalPriceBehavior.DEFAULT;
public DateTime getRecurrenceEndTime() {
return recurrenceEndTime;
}
public DateTime getRecurrenceLastExpansion() {
return recurrenceLastExpansion;
}
public TimeOfYear getRecurrenceTimeOfYear() {
return recurrenceTimeOfYear;
}
public RenewalPriceBehavior getRenewalPriceBehavior() {
return renewalPriceBehavior;
}
public Optional<Money> getRenewalPrice() {
return Optional.ofNullable(renewalPrice);
}
@Override
public VKey<Recurring> createVKey() {
return createVKey(getId());
}
public static VKey<Recurring> createVKey(Long id) {
return VKey.create(Recurring.class, id);
}
@Override
public Builder asBuilder() {
return new Builder(clone(this));
}
/** A builder for {@link Recurring} since it is immutable. */
public static class Builder extends BillingEvent.Builder<Recurring, Builder> {
public Builder() {}
private Builder(Recurring instance) {
super(instance);
}
public Builder setRecurrenceEndTime(DateTime recurrenceEndTime) {
getInstance().recurrenceEndTime = recurrenceEndTime;
return this;
}
public Builder setRecurrenceLastExpansion(DateTime recurrenceLastExpansion) {
getInstance().recurrenceLastExpansion = recurrenceLastExpansion;
return this;
}
public Builder setRenewalPriceBehavior(RenewalPriceBehavior renewalPriceBehavior) {
getInstance().renewalPriceBehavior = renewalPriceBehavior;
return this;
}
public Builder setRenewalPrice(@Nullable Money renewalPrice) {
getInstance().renewalPrice = renewalPrice;
return this;
}
@Override
public Recurring build() {
Recurring instance = getInstance();
checkNotNull(instance.eventTime);
checkNotNull(instance.reason);
// Don't require recurrenceLastExpansion to be individually set on every new Recurrence.
// The correct default value if not otherwise set is the event time of the recurrence minus
// 1 year.
instance.recurrenceLastExpansion =
Optional.ofNullable(instance.recurrenceLastExpansion)
.orElse(instance.eventTime.minusYears(1));
checkArgument(
instance.renewalPriceBehavior == RenewalPriceBehavior.SPECIFIED
^ instance.renewalPrice == null,
"Renewal price can have a value if and only if the renewal price behavior is"
+ " SPECIFIED");
instance.recurrenceTimeOfYear = TimeOfYear.fromDateTime(instance.eventTime);
instance.recurrenceEndTime =
Optional.ofNullable(instance.recurrenceEndTime).orElse(END_OF_TIME);
return super.build();
}
}
}
/**
* An event representing a cancellation of one of the other two billable event types.
*
* <p>This is implemented as a separate event rather than a bit on BillingEvent in order to
* preserve the immutability of billing events.
*/
@Entity(name = "BillingCancellation")
@Table(
indexes = {
@Index(columnList = "registrarId"),
@Index(columnList = "eventTime"),
@Index(columnList = "domainRepoId"),
@Index(columnList = "billingTime"),
@Index(columnList = "billing_event_id"),
@Index(columnList = "billing_recurrence_id")
})
@AttributeOverride(name = "id", column = @Column(name = "billing_cancellation_id"))
@WithVKey(Long.class)
public static class Cancellation extends BillingEvent {
/** The billing time of the charge that is being cancelled. */
DateTime billingTime;
/**
* The one-time billing event to cancel, or null for autorenew cancellations.
*
* <p>Although the type is {@link VKey} the name "ref" is preserved for historical reasons.
*/
@Column(name = "billing_event_id")
VKey<OneTime> refOneTime;
/**
* The recurring billing event to cancel, or null for non-autorenew cancellations.
*
* <p>Although the type is {@link VKey} the name "ref" is preserved for historical reasons.
*/
@Column(name = "billing_recurrence_id")
VKey<Recurring> refRecurring;
public DateTime getBillingTime() {
return billingTime;
}
public VKey<? extends BillingEvent> getEventKey() {
return firstNonNull(refOneTime, refRecurring);
}
/** The mapping from billable grace period types to originating billing event reasons. */
static final ImmutableMap<GracePeriodStatus, Reason> GRACE_PERIOD_TO_REASON =
ImmutableMap.of(
GracePeriodStatus.ADD, Reason.CREATE,
GracePeriodStatus.AUTO_RENEW, Reason.RENEW,
GracePeriodStatus.RENEW, Reason.RENEW,
GracePeriodStatus.TRANSFER, Reason.TRANSFER);
/**
* Creates a cancellation billing event (parented on the provided history key, and with the
* corresponding event time) that will cancel out the provided grace period's billing event,
* using the supplied targetId and deriving other metadata (clientId, billing time, and the
* cancellation reason) from the grace period.
*/
public static Cancellation forGracePeriod(
GracePeriod gracePeriod,
DateTime eventTime,
HistoryEntryId domainHistoryId,
String targetId) {
checkArgument(
gracePeriod.hasBillingEvent(),
"Cannot create cancellation for grace period without billing event");
Builder builder =
new Builder()
.setReason(checkNotNull(GRACE_PERIOD_TO_REASON.get(gracePeriod.getType())))
.setTargetId(targetId)
.setRegistrarId(gracePeriod.getRegistrarId())
.setEventTime(eventTime)
// The charge being cancelled will take place at the grace period's expiration time.
.setBillingTime(gracePeriod.getExpirationTime())
.setDomainHistoryId(domainHistoryId);
// Set the grace period's billing event using the appropriate Cancellation builder method.
if (gracePeriod.getOneTimeBillingEvent() != null) {
builder.setOneTimeEventKey(gracePeriod.getOneTimeBillingEvent());
} else if (gracePeriod.getRecurringBillingEvent() != null) {
builder.setRecurringEventKey(gracePeriod.getRecurringBillingEvent());
}
return builder.build();
}
@Override
public VKey<Cancellation> createVKey() {
return createVKey(getId());
}
public static VKey<Cancellation> createVKey(long id) {
return VKey.create(Cancellation.class, id);
}
@Override
public Builder asBuilder() {
return new Builder(clone(this));
}
/** A builder for {@link Cancellation} since it is immutable. */
public static class Builder extends BillingEvent.Builder<Cancellation, Builder> {
public Builder() {}
private Builder(Cancellation instance) {
super(instance);
}
public Builder setBillingTime(DateTime billingTime) {
getInstance().billingTime = billingTime;
return this;
}
public Builder setOneTimeEventKey(VKey<OneTime> eventKey) {
getInstance().refOneTime = eventKey;
return this;
}
public Builder setRecurringEventKey(VKey<Recurring> eventKey) {
getInstance().refRecurring = eventKey;
return this;
}
@Override
public Cancellation build() {
Cancellation instance = getInstance();
checkNotNull(instance.billingTime, "Must set billing time");
checkNotNull(instance.reason, "Must set reason");
checkState(
(instance.refOneTime == null) != (instance.refRecurring == null),
"Cancellations must have exactly one billing event key set");
return super.build();
}
}
}
}
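
One further illustration of the subtler invariant in the new BillingEvent.build() shown earlier in this file's diff: the SYNTHETIC flag, the synthetic creation time, and the cancellation-matching recurrence must all be present together, which is what lets recurrence-expansion runs be undone and matched against cancellations. A minimal sketch, assuming the setters of the new BillingEvent.Builder; values are placeholders.

// Minimal sketch, not Nomulus source: a SYNTHETIC one-time event created by expanding a
// BillingRecurrence, satisfying the build() checks that tie the SYNTHETIC flag to
// syntheticCreationTime and cancellationMatchingBillingEvent.
import com.google.common.collect.ImmutableSet;
import google.registry.model.billing.BillingBase.Flag;
import google.registry.model.billing.BillingBase.Reason;
import google.registry.model.billing.BillingEvent;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import org.joda.money.Money;
import org.joda.time.DateTime;

final class SyntheticRenewalSketch {
  private SyntheticRenewalSketch() {}

  static BillingEvent expandOneYear(
      BillingRecurrence recurrence, HistoryEntryId domainHistoryId, DateTime expansionRunTime) {
    return new BillingEvent.Builder()
        .setReason(Reason.RENEW)
        .setFlags(ImmutableSet.of(Flag.AUTO_RENEW, Flag.SYNTHETIC))
        .setSyntheticCreationTime(expansionRunTime)
        .setCancellationMatchingBillingEvent(recurrence)
        .setRegistrarId(recurrence.getRegistrarId())
        .setTargetId(recurrence.getTargetId())
        .setEventTime(recurrence.getEventTime().plusYears(1))                 // placeholder recurrence instant
        .setBillingTime(recurrence.getEventTime().plusYears(1).plusDays(45))  // placeholder grace offset
        .setPeriodYears(1)
        .setCost(Money.parse("USD 10.00"))                     // placeholder; real jobs compute the renew cost
        .setDomainHistoryId(domainHistoryId)
        .build();
  }
}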

View File

@@ -0,0 +1,199 @@
// Copyright 2023 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.model.billing;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Preconditions.checkNotNull;
import static google.registry.util.DateTimeUtils.END_OF_TIME;
import google.registry.model.common.TimeOfYear;
import google.registry.persistence.VKey;
import google.registry.persistence.WithVKey;
import google.registry.persistence.converter.JodaMoneyType;
import java.util.Optional;
import javax.annotation.Nullable;
import javax.persistence.AttributeOverride;
import javax.persistence.AttributeOverrides;
import javax.persistence.Column;
import javax.persistence.Embedded;
import javax.persistence.Entity;
import javax.persistence.EnumType;
import javax.persistence.Enumerated;
import javax.persistence.Index;
import javax.persistence.Table;
import org.hibernate.annotations.Columns;
import org.hibernate.annotations.Type;
import org.joda.money.Money;
import org.joda.time.DateTime;
/**
* A recurring billable event.
*
* <p>Unlike {@link BillingEvent} events, these do not store an explicit cost, since the cost of the
* recurring event might change and each time we bill for it, we need to bill at the current cost,
* not the value that was in use at the time the recurrence was created.
*/
@Entity
@Table(
indexes = {
@Index(columnList = "registrarId"),
@Index(columnList = "eventTime"),
@Index(columnList = "domainRepoId"),
@Index(columnList = "recurrenceEndTime"),
@Index(columnList = "recurrenceLastExpansion"),
@Index(columnList = "recurrence_time_of_year")
})
@AttributeOverride(name = "id", column = @Column(name = "billing_recurrence_id"))
@WithVKey(Long.class)
public class BillingRecurrence extends BillingBase {
/**
* The billing event recurs every year between {@link #eventTime} and this time on the [month,
* day, time] specified in {@link #recurrenceTimeOfYear}.
*/
DateTime recurrenceEndTime;
/**
* The most recent {@link DateTime} when this recurrence was expanded.
*
* <p>We only bother checking recurrences for potential expansion if this is at least one year in
* the past. If it's more recent than that, it means that the recurrence was already expanded too
* recently to need to be checked again (as domains autorenew each year).
*/
@Column(nullable = false)
DateTime recurrenceLastExpansion;
/**
* The eventTime recurs every year on this [month, day, time] between {@link #eventTime} and
* {@link #recurrenceEndTime}, inclusive of the start but not of the end.
*
* <p>This field is denormalized from {@link #eventTime} to allow for an efficient index, but it
* always has the same data as that field.
*
* <p>Note that this is a recurrence of the event time, not the billing time. The billing time can
* be calculated by adding the relevant grace period length to this date. The reason for this
* requirement is that the event time recurs on a {@link org.joda.time.Period} schedule (same day
* of year, which can be 365 or 366 days later) which is what {@link TimeOfYear} can model,
* whereas the billing time is a fixed {@link org.joda.time.Duration} later.
*/
@Embedded
@AttributeOverrides(
@AttributeOverride(name = "timeString", column = @Column(name = "recurrence_time_of_year")))
TimeOfYear recurrenceTimeOfYear;
/**
* The renewal price for domain renewal if and only if it's specified.
*
* <p>This column remains null except when the renewal price behavior of the billing event is
* SPECIFIED. It is used for internal registrations.
*/
@Nullable
@Type(type = JodaMoneyType.TYPE_NAME)
@Columns(columns = {@Column(name = "renewalPriceAmount"), @Column(name = "renewalPriceCurrency")})
Money renewalPrice;
@Enumerated(EnumType.STRING)
@Column(name = "renewalPriceBehavior", nullable = false)
RenewalPriceBehavior renewalPriceBehavior = RenewalPriceBehavior.DEFAULT;
public DateTime getRecurrenceEndTime() {
return recurrenceEndTime;
}
public DateTime getRecurrenceLastExpansion() {
return recurrenceLastExpansion;
}
public TimeOfYear getRecurrenceTimeOfYear() {
return recurrenceTimeOfYear;
}
public RenewalPriceBehavior getRenewalPriceBehavior() {
return renewalPriceBehavior;
}
public Optional<Money> getRenewalPrice() {
return Optional.ofNullable(renewalPrice);
}
@Override
public VKey<google.registry.model.billing.BillingRecurrence> createVKey() {
return createVKey(getId());
}
public static VKey<google.registry.model.billing.BillingRecurrence> createVKey(Long id) {
return VKey.create(google.registry.model.billing.BillingRecurrence.class, id);
}
@Override
public Builder asBuilder() {
return new Builder(clone(this));
}
/**
* A builder for {@link google.registry.model.billing.BillingRecurrence} since it is immutable.
*/
public static class Builder
extends BillingBase.Builder<google.registry.model.billing.BillingRecurrence, Builder> {
public Builder() {}
private Builder(google.registry.model.billing.BillingRecurrence instance) {
super(instance);
}
public Builder setRecurrenceEndTime(DateTime recurrenceEndTime) {
getInstance().recurrenceEndTime = recurrenceEndTime;
return this;
}
public Builder setRecurrenceLastExpansion(DateTime recurrenceLastExpansion) {
getInstance().recurrenceLastExpansion = recurrenceLastExpansion;
return this;
}
public Builder setRenewalPriceBehavior(RenewalPriceBehavior renewalPriceBehavior) {
getInstance().renewalPriceBehavior = renewalPriceBehavior;
return this;
}
public Builder setRenewalPrice(@Nullable Money renewalPrice) {
getInstance().renewalPrice = renewalPrice;
return this;
}
@Override
public google.registry.model.billing.BillingRecurrence build() {
google.registry.model.billing.BillingRecurrence instance = getInstance();
checkNotNull(instance.eventTime);
checkNotNull(instance.reason);
// Don't require recurrenceLastExpansion to be individually set on every new Recurrence.
// The correct default value if not otherwise set is the event time of the recurrence minus
// 1 year.
instance.recurrenceLastExpansion =
Optional.ofNullable(instance.recurrenceLastExpansion)
.orElse(instance.eventTime.minusYears(1));
checkArgument(
instance.renewalPriceBehavior == RenewalPriceBehavior.SPECIFIED
^ instance.renewalPrice == null,
"Renewal price can have a value if and only if the renewal price behavior is"
+ " SPECIFIED");
instance.recurrenceTimeOfYear = TimeOfYear.fromDateTime(instance.eventTime);
instance.recurrenceEndTime =
Optional.ofNullable(instance.recurrenceEndTime).orElse(END_OF_TIME);
return super.build();
}
}
}
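A minimal sketch of how build() above applies its defaults and enforces the renewal-price constraint. The base-class setters (setReason, setEventTime), the import paths for Reason and RenewalPriceBehavior, and the example values are assumptions; fields required by super.build() are omitted.

// Sketch only; not part of the diff. build() derives recurrenceTimeOfYear from eventTime,
// defaults recurrenceEndTime to END_OF_TIME, and defaults recurrenceLastExpansion to
// eventTime minus one year; renewalPrice must be set if and only if the behavior is SPECIFIED.
import google.registry.model.billing.BillingBase.Reason;
import google.registry.model.billing.BillingBase.RenewalPriceBehavior;
import google.registry.model.billing.BillingRecurrence;
import org.joda.money.CurrencyUnit;
import org.joda.money.Money;
import org.joda.time.DateTime;

class BillingRecurrenceSketch {
  static BillingRecurrence specifiedPriceRecurrence(DateTime eventTime) {
    return new BillingRecurrence.Builder()
        .setReason(Reason.RENEW)                                  // required by build()
        .setEventTime(eventTime)                                  // required by build()
        .setRenewalPriceBehavior(RenewalPriceBehavior.SPECIFIED)
        .setRenewalPrice(Money.of(CurrencyUnit.USD, 100))         // non-null only because SPECIFIED
        // Other base-class fields required by super.build() (registrar id, target id, etc.)
        // are omitted from this sketch.
        .build();
  }
}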


@@ -22,7 +22,7 @@ import google.registry.model.ImmutableObject;
import google.registry.model.UnsafeSerializable;
import google.registry.model.UpdateAutoTimestampEntity;
import google.registry.model.common.Cursor.CursorId;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.persistence.VKey;
import java.util.Optional;
import javax.persistence.AttributeOverride;
@@ -134,7 +134,7 @@ public class Cursor extends UpdateAutoTimestampEntity {
return createVKey(type, GLOBAL);
}
public static VKey<Cursor> createScopedVKey(CursorType type, Registry tld) {
public static VKey<Cursor> createScopedVKey(CursorType type, Tld tld) {
return createVKey(type, tld.getTldStr());
}
@@ -172,8 +172,8 @@ public class Cursor extends UpdateAutoTimestampEntity {
return create(cursorType, cursorTime, GLOBAL);
}
/** Creates a new cursor instance with a given {@link Registry} scope. */
public static Cursor createScoped(CursorType cursorType, DateTime cursorTime, Registry scope) {
/** Creates a new cursor instance with a given {@link Tld} scope. */
public static Cursor createScoped(CursorType cursorType, DateTime cursorTime, Tld scope) {
checkNotNull(scope, "Cursor scope cannot be null");
return create(cursorType, cursorTime, scope.getTldStr());
}
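The hunks above only swap Registry for Tld in the cursor-scoping signatures. A sketch of a call site under the new names, assuming CursorType.RDE_UPLOAD and Tld.get as used elsewhere in the codebase:

// Sketch only; not part of the diff.
import google.registry.model.common.Cursor;
import google.registry.model.common.Cursor.CursorType;
import google.registry.model.tld.Tld;
import google.registry.persistence.VKey;
import org.joda.time.DateTime;

class ScopedCursorSketch {
  static Cursor rdeUploadCursor(String tldStr, DateTime cursorTime) {
    Tld tld = Tld.get(tldStr);  // previously Registry.get(tldStr)
    VKey<Cursor> key = Cursor.createScopedVKey(CursorType.RDE_UPLOAD, tld);  // key used for loads
    return Cursor.createScoped(CursorType.RDE_UPLOAD, cursorTime, tld);
  }
}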


@@ -1,268 +0,0 @@
// Copyright 2021 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.model.common;
import static com.google.common.base.Preconditions.checkArgument;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.util.DateTimeUtils.START_OF_TIME;
import com.github.benmanes.caffeine.cache.LoadingCache;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.collect.ImmutableSortedMap;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryEnvironment;
import google.registry.model.CacheUtils;
import google.registry.model.annotations.DeleteAfterMigration;
import java.time.Duration;
import java.util.Arrays;
import javax.persistence.Entity;
import javax.persistence.PersistenceException;
import org.joda.time.DateTime;
/**
* A wrapper object representing the stage-to-time mapping of the Registry 3.0 Cloud SQL migration.
*
* <p>The entity is stored in SQL throughout the entire migration so as to have a single point of
* access.
*/
@DeleteAfterMigration
@Entity
public class DatabaseMigrationStateSchedule extends CrossTldSingleton {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static boolean useUncachedForTest = false;
public enum PrimaryDatabase {
CLOUD_SQL,
DATASTORE
}
public enum ReplayDirection {
NO_REPLAY,
DATASTORE_TO_SQL,
SQL_TO_DATASTORE
}
/**
* The current phase of the migration plus information about which database to use and whether or
* not the phase is read-only.
*/
public enum MigrationState {
/** Datastore is the only DB being used. */
DATASTORE_ONLY(PrimaryDatabase.DATASTORE, false, ReplayDirection.NO_REPLAY),
/** Datastore is the primary DB, with changes replicated to Cloud SQL. */
DATASTORE_PRIMARY(PrimaryDatabase.DATASTORE, false, ReplayDirection.DATASTORE_TO_SQL),
/** Datastore is the primary DB, with replication, and async actions are disallowed. */
DATASTORE_PRIMARY_NO_ASYNC(PrimaryDatabase.DATASTORE, false, ReplayDirection.DATASTORE_TO_SQL),
/** Datastore is the primary DB, with replication, and all mutating actions are disallowed. */
DATASTORE_PRIMARY_READ_ONLY(PrimaryDatabase.DATASTORE, true, ReplayDirection.DATASTORE_TO_SQL),
/**
* Cloud SQL is the primary DB, with replication back to Datastore, and all mutating actions are
* disallowed.
*/
SQL_PRIMARY_READ_ONLY(PrimaryDatabase.CLOUD_SQL, true, ReplayDirection.SQL_TO_DATASTORE),
/** Cloud SQL is the primary DB, with changes replicated to Datastore. */
SQL_PRIMARY(PrimaryDatabase.CLOUD_SQL, false, ReplayDirection.SQL_TO_DATASTORE),
/** Cloud SQL is the only DB being used. */
SQL_ONLY(PrimaryDatabase.CLOUD_SQL, false, ReplayDirection.NO_REPLAY),
/** Toggles SQL Sequence based allocateId */
SEQUENCE_BASED_ALLOCATE_ID(PrimaryDatabase.CLOUD_SQL, false, ReplayDirection.NO_REPLAY),
/** Use SQL-based Nordn upload flow instead of the pull queue-based one. */
NORDN_SQL(PrimaryDatabase.CLOUD_SQL, false, ReplayDirection.NO_REPLAY);
private final PrimaryDatabase primaryDatabase;
private final boolean isReadOnly;
private final ReplayDirection replayDirection;
public PrimaryDatabase getPrimaryDatabase() {
return primaryDatabase;
}
public boolean isReadOnly() {
return isReadOnly;
}
public ReplayDirection getReplayDirection() {
return replayDirection;
}
MigrationState(
PrimaryDatabase primaryDatabase, boolean isReadOnly, ReplayDirection replayDirection) {
this.primaryDatabase = primaryDatabase;
this.isReadOnly = isReadOnly;
this.replayDirection = replayDirection;
}
}
/**
* Cache of the current migration schedule. The key is meaningless; this is essentially a memoized
* Supplier that can be reset for testing purposes and after writes.
*/
@VisibleForTesting
public static final LoadingCache<
Class<DatabaseMigrationStateSchedule>, TimedTransitionProperty<MigrationState>>
// Each instance should cache the migration schedule for five minutes before reloading
CACHE =
CacheUtils.newCacheBuilder(Duration.ofMinutes(5))
.build(singletonClazz -> DatabaseMigrationStateSchedule.getUncached());
// Restrictions on the state transitions, e.g. no going from DATASTORE_ONLY to SQL_ONLY
private static final ImmutableMultimap<MigrationState, MigrationState> VALID_STATE_TRANSITIONS =
createValidStateTransitions();
/**
* The valid state transitions. Generally, one can advance the state one step or move backward any
* number of steps, as long as the step we're moving back to has the same primary database as the
* one we're in. Otherwise, we must move to the corresponding READ_ONLY stage first.
*/
private static ImmutableMultimap<MigrationState, MigrationState> createValidStateTransitions() {
ImmutableMultimap.Builder<MigrationState, MigrationState> builder =
new ImmutableMultimap.Builder<MigrationState, MigrationState>()
.put(MigrationState.DATASTORE_ONLY, MigrationState.DATASTORE_PRIMARY)
.putAll(
MigrationState.DATASTORE_PRIMARY,
MigrationState.DATASTORE_ONLY,
MigrationState.DATASTORE_PRIMARY_NO_ASYNC)
.putAll(
MigrationState.DATASTORE_PRIMARY_NO_ASYNC,
MigrationState.DATASTORE_ONLY,
MigrationState.DATASTORE_PRIMARY,
MigrationState.DATASTORE_PRIMARY_READ_ONLY)
.putAll(
MigrationState.DATASTORE_PRIMARY_READ_ONLY,
MigrationState.DATASTORE_ONLY,
MigrationState.DATASTORE_PRIMARY,
MigrationState.DATASTORE_PRIMARY_NO_ASYNC,
MigrationState.SQL_PRIMARY_READ_ONLY,
MigrationState.SQL_PRIMARY)
.putAll(
MigrationState.SQL_PRIMARY_READ_ONLY,
MigrationState.DATASTORE_PRIMARY_READ_ONLY,
MigrationState.SQL_PRIMARY)
.putAll(
MigrationState.SQL_PRIMARY,
MigrationState.SQL_PRIMARY_READ_ONLY,
MigrationState.SQL_ONLY)
.putAll(
MigrationState.SQL_ONLY,
MigrationState.SQL_PRIMARY_READ_ONLY,
MigrationState.SQL_PRIMARY)
.putAll(MigrationState.SQL_ONLY, MigrationState.SEQUENCE_BASED_ALLOCATE_ID)
.putAll(MigrationState.SEQUENCE_BASED_ALLOCATE_ID, MigrationState.NORDN_SQL)
.putAll(MigrationState.NORDN_SQL, MigrationState.SEQUENCE_BASED_ALLOCATE_ID);
// In addition, we can always transition from a state to itself (useful when updating the map).
Arrays.stream(MigrationState.values()).forEach(state -> builder.put(state, state));
return builder.build();
}
// Default map to return if we have never saved any -- only use Datastore.
@VisibleForTesting
public static final TimedTransitionProperty<MigrationState> DEFAULT_TRANSITION_MAP =
TimedTransitionProperty.fromValueMap(
ImmutableSortedMap.of(START_OF_TIME, MigrationState.DATASTORE_ONLY));
@VisibleForTesting
public TimedTransitionProperty<MigrationState> migrationTransitions =
TimedTransitionProperty.withInitialValue(MigrationState.DATASTORE_ONLY);
// Required for Hibernate initialization
protected DatabaseMigrationStateSchedule() {}
@VisibleForTesting
public DatabaseMigrationStateSchedule(
TimedTransitionProperty<MigrationState> migrationTransitions) {
this.migrationTransitions = migrationTransitions;
}
/** Sets and persists to SQL the provided migration transition schedule. */
public static void set(ImmutableSortedMap<DateTime, MigrationState> migrationTransitionMap) {
tm().assertInTransaction();
TimedTransitionProperty<MigrationState> transitions =
TimedTransitionProperty.make(
migrationTransitionMap,
VALID_STATE_TRANSITIONS,
"validStateTransitions",
MigrationState.DATASTORE_ONLY,
"migrationTransitionMap must start with DATASTORE_ONLY");
validateTransitionAtCurrentTime(transitions);
tm().put(new DatabaseMigrationStateSchedule(transitions));
CACHE.invalidateAll();
}
@VisibleForTesting
public static void useUncachedForTest() {
useUncachedForTest = true;
}
/** Loads the currently-set migration schedule from the cache, or the default if none exists. */
public static TimedTransitionProperty<MigrationState> get() {
return CACHE.get(DatabaseMigrationStateSchedule.class);
}
/** Returns the database migration status at the given time. */
public static MigrationState getValueAtTime(DateTime dateTime) {
return useUncachedForTest
? getUncached().getValueAtTime(dateTime)
: get().getValueAtTime(dateTime);
}
/** Loads the currently-set migration schedule from SQL, or the default if none exists. */
@VisibleForTesting
static TimedTransitionProperty<MigrationState> getUncached() {
return tm().transact(
() -> {
try {
return tm().loadSingleton(DatabaseMigrationStateSchedule.class)
.map(s -> s.migrationTransitions)
.orElse(DEFAULT_TRANSITION_MAP);
} catch (PersistenceException e) {
if (!RegistryEnvironment.get().equals(RegistryEnvironment.UNITTEST)) {
throw e;
}
logger.atWarning().withCause(e).log(
"Error when retrieving migration schedule; this should only happen in tests.");
return DEFAULT_TRANSITION_MAP;
}
});
}
/**
* A provided map of transitions may be valid by itself (i.e. it shifts states properly, doesn't
* skip states, and doesn't backtrack incorrectly) while still being invalid. In addition to the
* transitions in the map being valid, the single transition from the current map at the current
* time to the new map at the current time must also be valid.
*/
private static void validateTransitionAtCurrentTime(
TimedTransitionProperty<MigrationState> newTransitions) {
MigrationState currentValue = getUncached().getValueAtTime(tm().getTransactionTime());
MigrationState nextCurrentValue = newTransitions.getValueAtTime(tm().getTransactionTime());
checkArgument(
VALID_STATE_TRANSITIONS.get(currentValue).contains(nextCurrentValue),
"Cannot transition from current state-as-of-now %s to new state-as-of-now %s",
currentValue,
nextCurrentValue);
}
}


@@ -0,0 +1,139 @@
// Copyright 2023 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.model.common;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Preconditions.checkNotNull;
import static google.registry.util.DateTimeUtils.START_OF_TIME;
import google.registry.dns.DnsUtils.TargetType;
import google.registry.dns.PublishDnsUpdatesAction;
import google.registry.model.ImmutableObject;
import google.registry.persistence.VKey;
import javax.annotation.Nullable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.EnumType;
import javax.persistence.Enumerated;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Index;
import javax.persistence.Table;
import org.joda.time.DateTime;
@Entity
@Table(indexes = {@Index(columnList = "requestTime"), @Index(columnList = "lastProcessTime")})
public class DnsRefreshRequest extends ImmutableObject {
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Id
@SuppressWarnings("unused")
protected long id;
@Column(nullable = false)
@Enumerated(EnumType.STRING)
private TargetType type;
@Column(nullable = false)
private String name;
@Column(nullable = false)
private String tld;
@Column(nullable = false)
private DateTime requestTime;
@Column(nullable = false)
private DateTime lastProcessTime;
public TargetType getType() {
return type;
}
public String getName() {
return name;
}
public String getTld() {
return tld;
}
public DateTime getRequestTime() {
return requestTime;
}
/**
* The time at which the entity was last processed.
*
* <p>Note that "processed" means that it was read, not necessarily that the DNS request was
* processed successfully. The subsequent steps to bundle requests together and enqueue them in a
* Cloud Tasks queue for {@link PublishDnsUpdatesAction} to process can still fail.
*
* <p>This value lets us skip rows that were read too recently, whether because concurrent reads
* all attempt to process the rows with the oldest {@link #requestTime}, or because a subsequent
* read comes too soon after the previous one.
*/
public DateTime getLastProcessTime() {
return lastProcessTime;
}
@Override
public VKey<DnsRefreshRequest> createVKey() {
return VKey.create(DnsRefreshRequest.class, id);
}
protected DnsRefreshRequest() {}
private DnsRefreshRequest(
@Nullable Long id,
TargetType type,
String name,
String tld,
DateTime requestTime,
DateTime lastProcessTime) {
checkNotNull(type, "Target type cannot be null");
checkNotNull(name, "Domain/host name cannot be null");
checkNotNull(tld, "TLD cannot be null");
checkNotNull(requestTime, "Request time cannot be null");
checkNotNull(lastProcessTime, "Last process time cannot be null");
if (id != null) {
this.id = id;
}
this.type = type;
this.name = name;
this.tld = tld;
this.requestTime = requestTime;
this.lastProcessTime = lastProcessTime;
}
public DnsRefreshRequest(TargetType type, String name, String tld, DateTime requestTime) {
this(null, type, name, tld, requestTime, START_OF_TIME);
}
public DnsRefreshRequest updateProcessTime(DateTime processTime) {
checkArgument(
processTime.isAfter(getRequestTime()),
"Process time %s must be later than request time %s",
processTime,
getRequestTime());
checkArgument(
processTime.isAfter(getLastProcessTime()),
"New process time %s must be later than the old one %s",
processTime,
getLastProcessTime());
return new DnsRefreshRequest(id, getType(), getName(), getTld(), getRequestTime(), processTime);
}
}
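A sketch of the lifecycle implied by the constructors and updateProcessTime above; TargetType.DOMAIN and the example names are assumptions:

// Sketch only; not part of the diff. A new request starts with lastProcessTime =
// START_OF_TIME; each read bumps it via updateProcessTime, which rejects any time that
// is not strictly later than both the request time and the previous process time.
import google.registry.dns.DnsUtils.TargetType;
import google.registry.model.common.DnsRefreshRequest;
import org.joda.time.DateTime;

class DnsRefreshRequestSketch {
  static DnsRefreshRequest enqueueAndMarkRead(DateTime now) {
    DnsRefreshRequest request =
        new DnsRefreshRequest(TargetType.DOMAIN, "foo.example", "example", now);
    // A batch reader later records when it picked the row up.
    return request.updateProcessTime(now.plusMinutes(1));
  }
}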


@@ -16,6 +16,10 @@ package google.registry.model.console;
/** Permissions that users may have in the UI, either per-registrar or globally. */
public enum ConsolePermission {
/** View basic information about a registrar. */
VIEW_REGISTRAR_DETAILS,
/** Edit basic information about a registrar. */
EDIT_REGISTRAR_DETAILS,
/** Add, update, or remove other console users. */
MANAGE_USERS,
/** Add, update, or remove registrars. */


@@ -27,6 +27,8 @@ public class ConsoleRoleDefinitions {
/** Permissions for a registry support agent. */
static final ImmutableSet<ConsolePermission> SUPPORT_AGENT_PERMISSIONS =
ImmutableSet.of(
ConsolePermission.VIEW_REGISTRAR_DETAILS,
ConsolePermission.EDIT_REGISTRAR_DETAILS,
ConsolePermission.MANAGE_USERS,
ConsolePermission.MANAGE_ACCREDITATION,
ConsolePermission.CONFIGURE_EPP_CONNECTION,
@@ -69,6 +71,7 @@ public class ConsoleRoleDefinitions {
/** Permissions for a registrar partner account manager. */
static final ImmutableSet<ConsolePermission> ACCOUNT_MANAGER_PERMISSIONS =
ImmutableSet.of(
ConsolePermission.VIEW_REGISTRAR_DETAILS,
ConsolePermission.DOWNLOAD_DOMAINS,
ConsolePermission.VIEW_TLD_PORTFOLIO,
ConsolePermission.CONTACT_SUPPORT,
@@ -89,6 +92,7 @@ public class ConsoleRoleDefinitions {
new ImmutableSet.Builder<ConsolePermission>()
.addAll(ACCOUNT_MANAGER_WITH_REGISTRY_LOCK_PERMISSIONS)
.add(
ConsolePermission.EDIT_REGISTRAR_DETAILS,
ConsolePermission.MANAGE_ACCREDITATION,
ConsolePermission.CONFIGURE_EPP_CONNECTION,
ConsolePermission.CHANGE_NOMULUS_PASSWORD,


@@ -44,7 +44,7 @@ import com.google.gson.annotations.Expose;
import google.registry.flows.ResourceFlowUtils;
import google.registry.model.EppResource;
import google.registry.model.EppResource.ResourceWithTransferData;
import google.registry.model.billing.BillingEvent.Recurring;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.contact.Contact;
import google.registry.model.domain.launch.LaunchNotice;
import google.registry.model.domain.rgp.GracePeriodStatus;
@@ -56,7 +56,7 @@ import google.registry.model.host.Host;
import google.registry.model.poll.PollMessage;
import google.registry.model.poll.PollMessage.Autorenew;
import google.registry.model.poll.PollMessage.OneTime;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.model.transfer.DomainTransferData;
import google.registry.model.transfer.TransferStatus;
import google.registry.persistence.VKey;
@@ -200,7 +200,7 @@ public class DomainBase extends EppResource
* should be created, and this field should be updated to point to the new one.
*/
@Column(name = "billing_recurrence_id")
VKey<Recurring> autorenewBillingEvent;
VKey<BillingRecurrence> autorenewBillingEvent;
/**
* The recurring poll message associated with this domain's autorenewals.
@@ -286,7 +286,7 @@ public class DomainBase extends EppResource
return deletePollMessage;
}
public VKey<Recurring> getAutorenewBillingEvent() {
public VKey<BillingRecurrence> getAutorenewBillingEvent() {
return autorenewBillingEvent;
}
@@ -488,7 +488,7 @@ public class DomainBase extends EppResource
GracePeriodStatus.TRANSFER,
domain.getRepoId(),
transferExpirationTime.plus(
Registry.get(domain.getTld()).getTransferGracePeriodLength()),
Tld.get(domain.getTld()).getTransferGracePeriodLength()),
transferData.getGainingRegistrarId(),
transferData.getServerApproveBillingEvent())));
} else {
@@ -520,11 +520,10 @@ public class DomainBase extends EppResource
builder
.setRegistrationExpirationTime(newExpirationTime)
.addGracePeriod(
GracePeriod.createForRecurring(
GracePeriod.createForRecurrence(
GracePeriodStatus.AUTO_RENEW,
domain.getRepoId(),
lastAutorenewTime.plus(
Registry.get(domain.getTld()).getAutoRenewGracePeriodLength()),
lastAutorenewTime.plus(Tld.get(domain.getTld()).getAutoRenewGracePeriodLength()),
domain.getCurrentSponsorRegistrarId(),
domain.getAutorenewBillingEvent()));
newLastEppUpdateTime = Optional.of(lastAutorenewTime);
@@ -848,7 +847,7 @@ public class DomainBase extends EppResource
return thisCastToDerived();
}
public B setAutorenewBillingEvent(VKey<Recurring> autorenewBillingEvent) {
public B setAutorenewBillingEvent(VKey<BillingRecurrence> autorenewBillingEvent) {
getInstance().autorenewBillingEvent = autorenewBillingEvent;
return thisCastToDerived();
}
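The remaining hunks are mechanical renames (Recurring to BillingRecurrence, Registry to Tld, createForRecurring to createForRecurrence). A sketch of the grace-period arithmetic under the new names, using only the Tld getters that appear above:

// Sketch only; not part of the diff.
import google.registry.model.tld.Tld;
import org.joda.time.DateTime;

class AutoRenewGraceSketch {
  static DateTime autoRenewGraceEnd(String tldStr, DateTime lastAutorenewTime) {
    // Previously: Registry.get(tldStr).getAutoRenewGracePeriodLength()
    return lastAutorenewTime.plus(Tld.get(tldStr).getAutoRenewGracePeriodLength());
  }
}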

Some files were not shown because too many files have changed in this diff.