mirror of https://github.com/google/nomulus synced 2026-02-02 19:12:27 +00:00

Compare commits


120 Commits

Author SHA1 Message Date
sarahcaseybot
acdbc65c51 Change Registry object reference to Tld in configuration.md (#2021) 2023-05-12 12:32:02 -04:00
Weimin Yu
d510531f65 Remove the deprecated DefaultCredential (#2032)
Use the ApplicationDefaultCredential annotation instead.

The new annotation has been verified in sandbox and production using the
'executeCannedScript' endpoint. The verification code is removed in this
PR too.
2023-05-11 13:46:36 -04:00
Lai Jiang
0d4dd57fe7 Fix a typo (#2031) 2023-05-11 13:26:07 -04:00
Pavlo Tkach
2667a0e977 Expand nomulus get_domain command to load up deleted domain data too (#2018) 2023-05-10 16:05:03 -04:00
gbrodman
1aef31efff Allow usage of standard HTTP requests in CloudTasksUtils (#2013)
This adds a possible configuration point "defaultServiceAccount" (which
in GAE will be the standard GAE service account). If this is configured,
CloudTasksUtils can create tasks with standard HTTP requests with an
OIDC token corresponding to that service account, as opposed to using
the AppEngine-specific request methods.

This also works with IAP, in that if IAP is on and we specify the IAP
client ID in the config, CloudTasksUtils will use the IAP client ID as
the token audience and the request will successfully be passed through
the IAP layer.

Tested in QA.
2023-05-09 16:02:12 -04:00
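As an illustration of the mechanism described in #2013 above, here is a minimal sketch of creating a Cloud Tasks task that carries a standard HTTP request with an OIDC token, using the public google-cloud-tasks client. The project, queue, service-account email, IAP client ID, and URL values are placeholder assumptions, not values from this repository.

```java
import com.google.cloud.tasks.v2.CloudTasksClient;
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.cloud.tasks.v2.HttpRequest;
import com.google.cloud.tasks.v2.OidcToken;
import com.google.cloud.tasks.v2.QueueName;
import com.google.cloud.tasks.v2.Task;

public class OidcHttpTaskExample {
  public static void main(String[] args) throws Exception {
    // Placeholder values; real deployments read these from configuration.
    String project = "my-project";
    String location = "us-central1";
    String queue = "my-queue";
    String serviceAccount = "default-service-account@my-project.iam.gserviceaccount.com";
    String iapClientId = "1234567890-abc.apps.googleusercontent.com";

    try (CloudTasksClient client = CloudTasksClient.create()) {
      // A standard HTTP request task (not an App Engine task). The OIDC token is
      // minted for the configured service account; when IAP is on, the IAP client
      // ID is used as the token audience so the request passes through IAP.
      HttpRequest request =
          HttpRequest.newBuilder()
              .setUrl("https://backend.example.com/_dr/task/someAction")
              .setHttpMethod(HttpMethod.POST)
              .setOidcToken(
                  OidcToken.newBuilder()
                      .setServiceAccountEmail(serviceAccount)
                      .setAudience(iapClientId))
              .build();
      Task task = Task.newBuilder().setHttpRequest(request).build();
      client.createTask(QueueName.of(project, location, queue), task);
    }
  }
}
```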
Lai Jiang
4d19245c29 Change usage grouping key in the invoice CSV (#2024)
This column is used by the billing team to create invoices. Registrars
have asked that a single invoice be created for a given registrar,
instead of one per registrar-tld pair. This should have no other effect
on the billing pipeline as the invoice grouping key has a description
field that also contains the TLD, so the granularity as a whole does not
change.
2023-05-09 11:25:11 -04:00
Lai Jiang
4b34307a6e Delete DatabaseMigrationStateSchedule (#2001)
We have been using it as a poor man's timed flag that triggers a system
behavior change after a certain time. We have no foreseeable future use
for it now that the DNS pull queue-related code is deleted. If a need for
such a flag arises in the future, we are better off implementing a proper
flag system than hijacking this class anyway.
2023-05-08 14:36:28 -04:00
Pavlo Tkach
55243e7cf6 Adds cloud scheduler and tasks deployer (#1999) 2023-05-04 15:57:32 -04:00
Lai Jiang
e14764b4c8 Remove DNS pull queue (#2000)
This is the last dependency on the GAE pull queue, so we can delete the
pull queue config from queue.xml as well.
2023-05-04 13:21:53 -04:00
dependabot[bot]
68810f7a30 Bump engine.io and socket.io in /console-webapp (#2022)
Bumps [engine.io](https://github.com/socketio/engine.io) and [socket.io](https://github.com/socketio/socket.io). These dependencies needed to be updated together.

Updates `engine.io` from 6.2.1 to 6.4.2
- [Release notes](https://github.com/socketio/engine.io/releases)
- [Changelog](https://github.com/socketio/engine.io/blob/main/CHANGELOG.md)
- [Commits](https://github.com/socketio/engine.io/compare/6.2.1...6.4.2)

Updates `socket.io` from 4.5.2 to 4.6.1
- [Release notes](https://github.com/socketio/socket.io/releases)
- [Changelog](https://github.com/socketio/socket.io/blob/main/CHANGELOG.md)
- [Commits](https://github.com/socketio/socket.io/compare/4.5.2...4.6.1)

---
updated-dependencies:
- dependency-name: engine.io
  dependency-type: indirect
- dependency-name: socket.io
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-04 12:50:19 -04:00
Ben McIlwain
14d245b1e3 Remove duplicate info from create/update reserved list command output (#2020)
The output repeated the domain label for every reserved list entry. It used
to look like this:

baddies=baddies,FULLY_BLOCKED
2023-05-03 17:31:23 -04:00
Weimin Yu
61ab29ae9e Prober ssl cert update automation (#2019)
Defined a Cloud Build script and Docker image that automatically
update probers' SSL certs.
2023-05-03 15:57:50 -04:00
Weimin Yu
6742e5bf23 Remove CloudSql wipeout cron job in crash (#2017)
No more production data in crash. This allows us to repopulate crash
with test data.
2023-05-02 14:44:09 -04:00
Weimin Yu
c7f69eba1d Prepare switch of credential annotation (#2014)
* Prepare switch of credential annotation

Prepare the switch from DefaultCredential to ApplicationCredential.

In nomulus tools, start using the new annotation. This is tested by
successfully using the nomulus curl command, which actually needs a
valid credential to work.

For the remaining use cases of the old annotation in the Nomulus server, add
some code that relies on the new credential to work. Once this code
is tested in sandbox and production, we will switch the annotations.
2023-05-01 11:23:19 -04:00
gbrodman
578988d5ea Don't allow a list of the empty string in List<String> fields (#2011)
If the user passes, e.g., `--allowed_nameservers=` (or the equivalent for
contact IDs), that shouldn't mean a list consisting solely of the empty string.

Using this parameter/converter allows us to ensure that lists of
strings look reasonable.
2023-04-28 17:59:17 -04:00
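To illustrate the safeguard described in #2011 above, here is a small hypothetical JCommander-style converter that makes a bare `--allowed_nameservers=` yield an empty list rather than a list containing one empty string. The class name and splitting rules are assumptions, not the project's actual converter.

```java
import com.beust.jcommander.IStringConverter;
import com.google.common.base.Splitter;
import com.google.common.collect.ImmutableList;
import java.util.List;

/** Converts a comma-separated argument into a list, dropping empty entries. */
public class NonEmptyStringListConverter implements IStringConverter<List<String>> {
  @Override
  public List<String> convert(String value) {
    // "--allowed_nameservers=" yields [] rather than [""].
    return ImmutableList.copyOf(
        Splitter.on(',').trimResults().omitEmptyStrings().split(value));
  }
}
```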
sarahcaseybot
c17b8285f9 Don't apply non-premium default tokens to premium names (#2007)
* Don't apply non-premium default tokens to premium names

* Add test for renew

* Remove premium check from try/catch block

* Add check in validateToken

* Update docs

* Add validateForPremiums

* Better method name

* Shorten error message to fit as reason

* Add missing extension catch

* Remove extra javadoc

* Fix merge conflicts and change error message

* Update flow docs
2023-04-28 17:56:15 -04:00
gbrodman
ff8a08f40e Fix typo in pipeline name (#2016) 2023-04-28 17:05:24 -04:00
gbrodman
a341058282 Refactor / rename Billing object classes (#1993)
This includes renaming the billing classes to match the SQL table names,
as well as splitting them out into their own separate top-level classes.
The rest of the changes are mostly renaming variables and comments etc.

We now use `BillingBase` as the name of the common billing superclass,
because one-time events are called BillingEvents.
2023-04-28 14:27:37 -04:00
Weimin Yu
16758879f0 Allow rotation when updating registrar cert (#2012)
* Allow rotation when updating registrar cert

When updating a registrar's primary cert, add a flag to activate
rotation of the previous primary cert to failover.

This functionality is part of the prober SSL cert renewal automation.
2023-04-27 14:42:11 -04:00
Lai Jiang
2021247ab4 Update README on how to manually push schema (#2009) 2023-04-26 16:32:15 -04:00
Lai Jiang
4fc7038690 Make a few minor changes to make the linter happy (#2010)
2023-04-26 15:49:32 -04:00
Weimin Yu
9272e7fd14 Add a test of failover certificate (#2008)
Verifies that client can log in with correct failover certificate.
2023-04-26 15:47:47 -04:00
sarahcaseybot
e1afe00758 Require token transition schedules for default tokens (#2005) 2023-04-21 17:38:10 -04:00
sarahcaseybot
203c20c040 Use a TLD's configured TTLs if they are present (#1992)
* Use tld's configured TTLs if they are present

* Change to optional

* Use optionals better
2023-04-21 13:47:10 -04:00
Lai Jiang
bd0cea0d87 Remove AppEngineServiceUtils (#2003)
The only method that is called from this class is setNumInstances. However, we
don't currently use `nomulus set_num_instances` anywhere. If we need to change
the number of instances, it is done either by updating appengine-web.xml, which
is deployed by Spinnaker, or manually as a break-glass fix via gcloud or on
Pantheon.
2023-04-21 10:11:12 -04:00
sarahcaseybot
23fb69a682 Fix parameter description for type in GenerateAllocationTokensCommand (#1998) 2023-04-19 17:32:09 -04:00
Lai Jiang
597f63a603 Fix URL parameter to the DNS refresh fanout job (#1997)
2023-04-19 15:32:41 -04:00
Lai Jiang
5ec73f3809 Refactor contact history PII wipeout logic into a Beam pipeline (#1994)
Because we need to check whether a contact history is the most recent one for
its underlying contact resource, the query-wipe-repeat loop no longer works
well due to the added overhead of the query.

Instead, we refactor the logic into a Beam pipeline where the query only
needs to be performed once and history entries eligible for wipe out are
handled individually in their own transforms. Because history entries
are otherwise immutable, we can run the pipeline at the relatively relaxed
REPEATABLE READ isolation level. We also do not worry about batching for
performance, as we do not anticipate this operation putting a lot of
strain on the particular table.
2023-04-19 13:04:45 -04:00
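A minimal sketch of the per-entry transform shape described in #1994 above, assuming a hypothetical `wipePiiInTransaction` helper; it only illustrates handling each eligible history revision in its own DoFn, not the project's actual pipeline.

```java
import org.apache.beam.sdk.transforms.DoFn;

/** Wipes PII from one contact history revision per element. */
class WipeOutContactHistoryPiiFn extends DoFn<Long, Void> {
  @ProcessElement
  public void processElement(@Element Long historyRevisionId) {
    // Hypothetical helper: runs a REPEATABLE READ transaction that re-checks the
    // revision is not the most recent one for its contact, then nulls out PII fields.
    wipePiiInTransaction(historyRevisionId);
  }

  private void wipePiiInTransaction(Long historyRevisionId) {
    // Placeholder body; the real logic lives in the pipeline's transform.
  }
}
```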
Ben McIlwain
b474e50e87 Update IDN tables with latest approved by ICANN (#1995)
This also adds README files to explain the two different IDN table locations
(which have different purposes). See http://b/278565478 for more information.
2023-04-18 17:23:12 -04:00
sarahcaseybot
6f3d062c32 Change Registry class name to Tld (#1991)
* Change Registry class name to Tld

* Fix merge conflict

* Some capitalization fixes
2023-04-18 12:26:14 -04:00
gbrodman
371d83b4cc Add a command to update Recurrence objects' behavior (#1987)
We want to basically be able to change the renewal behavior, either
setting the behavior type (e.g. NONPREMIUM) or the specified renewal
price.
2023-04-17 11:36:12 -04:00
Lai Jiang
e1f29a8103 Add routing for ReadDnsRefreshRequestsAction (#1990)
It looks like we forgot the crucial part of actually adding the necessary
routing for the new action...

Also fixes a linter warning.
2023-04-12 15:17:21 -04:00
Pavlo Tkach
055a52f67e Trim cloud scheduler config url value before submitting (#1988) 2023-04-10 19:05:32 -04:00
sarahcaseybot
d17678959c Add tool commands to modify TTLs on a TLD (#1985)
* Add tool commands to modify TTLs on a TLD

* Small changes

* Add an example to the parameter description
2023-04-10 14:43:56 -04:00
Lai Jiang
79ba1b94c4 Add SQL-based DNS refresh processing mechanism (#1971) 2023-04-07 17:31:28 -04:00
gbrodman
33a771b13e Add Java code for storing and using IDN tables per-TLD (#1977)
This includes changes to make sure that we use the proper per-TLD IDN
tables as well as setting/updating/removing them via the Create/Update
TLD commands.
2023-04-06 17:33:23 -04:00
gbrodman
bd65c6eee6 Allow a credit of 0 when deleting a domain during a grace period (#1984)
There can be situations (anchor tenants, test tokens, other ways of
getting a domain to cost $0) where we may want to delete a domain during
the add grace period but the credit applied is 0. We should not fail in
those cases.

See b/277115241 for an example.
2023-04-06 15:58:53 -04:00
Ben McIlwain
20c673840e Add a new Unconfusable Latin table (#1981)
This new table has just been approved by ICANN. It is the same as our existing
Extended Latin table, except with the removal of some lesser-used characters
with diacritic marks that are confusable variants.

The filenames for the IDN tables are made explicit to improve code readability.

And this reverses the removal of G with stroke from the existing Extended Latin
table (see PR #1938), so that that table continues to accurately reflect the
state of our previously launched TLDs.

This is the full list of removed characters:

U+00E1                         # LATIN SMALL LETTER A WITH ACUTE
U+0101                         # LATIN SMALL LETTER A WITH MACRON
U+01CE                         # LATIN SMALL LETTER A WITH CARON
U+010B                         # LATIN SMALL LETTER C WITH DOT ABOVE
U+01E7                         # LATIN SMALL LETTER G WITH CARON
U+0123                         # LATIN SMALL LETTER G WITH CEDILLA
U+01E5                         # LATIN SMALL LETTER G WITH STROKE
U+0131                         # LATIN SMALL LETTER DOTLESS I
U+00ED                         # LATIN SMALL LETTER I WITH ACUTE
U+00EF                         # LATIN SMALL LETTER I WITH DIAERESIS
U+01D0                         # LATIN SMALL LETTER I WITH CARON
U+0144                         # LATIN SMALL LETTER N WITH ACUTE
U+014B                         # LATIN SMALL LETTER ENG
U+00F3                         # LATIN SMALL LETTER O WITH ACUTE
U+014D                         # LATIN SMALL LETTER O WITH MACRON
U+01D2                         # LATIN SMALL LETTER O WITH CARON
U+0157                         # LATIN SMALL LETTER R WITH CEDILLA
U+0163                         # LATIN SMALL LETTER T WITH CEDILLA
U+00FA                         # LATIN SMALL LETTER U WITH ACUTE
U+00FC                         # LATIN SMALL LETTER U WITH DIAERESIS
U+01D4                         # LATIN SMALL LETTER U WITH CARON
U+1E83                         # LATIN SMALL LETTER W WITH ACUTE
U+1E81                         # LATIN SMALL LETTER W WITH GRAVE
U+1E85                         # LATIN SMALL LETTER W WITH DIAERESIS
U+1EF3                         # LATIN SMALL LETTER Y WITH GRAVE
U+017C                         # LATIN SMALL LETTER Z WITH DOT ABOVE
2023-04-06 15:49:36 -04:00
Lai Jiang
11c60b8c8f Temporarily disable contact history wipeout (#1982)
Makes the next run occur on the first Monday of December, which should give us
plenty of time to fix the issue with it wiping out PII in the most recent
contact history.

2023-04-06 13:41:51 -04:00
Lai Jiang
e330fd1c66 Remove cron.xml from sandbox (#1979)
It was somehow missed in #1965.
2023-04-06 11:30:07 -04:00
Pavlo Tkach
57c17042b6 Transaction manager to not retry inner transactions (#1974) 2023-04-05 16:46:36 -04:00
sarahcaseybot
8623fce119 Check for default tokens in the renew flow (#1978)
* Check for default tokens in the renew flow

* Remove extra check

* Add allowed action
2023-04-05 12:25:09 -04:00
Lai Jiang
7243575433 Remove unused GAE dependencies from NordnUploadAction (#1980) 2023-04-04 16:53:35 -04:00
sarahcaseybot
8eab43d371 Check allowedEppActions when validating tokens (#1972)
* Check allowedEppActions when validating tokens

* Reflect failed tokens in the fee check

* Add tests for domainCheckFlow

* Add hyphens to fee class name

* Add clarifying comment to catch block

* Add specific exception types
2023-04-04 14:29:50 -04:00
sarahcaseybot
34d329c158 Add tool changes to modify allowedEppActions on allocation tokens (#1970)
* Add tool changes to modify allowedEppActions on allocation tokens

* Change enum value error message

* Remove unnecessary variable

* Prevent UNKNOWN command

* Check command name instead of string
2023-03-31 14:37:19 -04:00
Pavlo Tkach
425ecdcd87 Add disable_runner_v2 to pipeline options (#1976) 2023-03-30 17:10:37 -04:00
gbrodman
77ee124374 Add SQL change for per-TLD IDN tables (#1975) 2023-03-28 17:03:22 -04:00
Lai Jiang
b9742adc0b Delete cron.xml (#1965)
We've successfully migrated to using Cloud Scheduler.
2023-03-23 14:29:06 -04:00
sarahcaseybot
d4cd25c4ae Add pricing logic for allocation tokens in domain renew (#1961)
* Add pricing logic for allocation tokens in domain renew

* Add clarifying comment

* Several fixes

* Add test for renewalPriceBehavior not changing
2023-03-23 14:00:36 -04:00
sarahcaseybot
8b7e938ed6 Add TTL configs to Registry object (#1968)
* Add TTL configs to Registry object

* Change A and AAAA records TTL field name
2023-03-22 13:56:11 -04:00
Pavlo Tkach
c216c874b4 Remove app engine deps from Lock flyway change (#1911) 2023-03-20 12:25:12 -04:00
Pavlo Tkach
0ab9471c8d Make cloud scheduler deployment part of gradle deploy (alpha, qa and crash only) (#1969) 2023-03-20 11:10:00 -04:00
sarahcaseybot
d482754f66 Implement default tokens for the fee extension in domain check flow (#1950)
* Implement default tokens for the fee extension in domain check

* Add test for expired token

* Add test for alloc token and default token

* Fix formatting

* Always check for default tokens

* Change transaction time to passed in DateTime
2023-03-17 15:41:17 -04:00
sarahcaseybot
fe086b43f5 Add TTL columns to the Tld table (#1964)
* Add TTL columns to Tld table

* Change A and AAAA records column name
2023-03-17 11:54:14 -04:00
Lai Jiang
95f1bca3fb Remove Nordn pull queue code (#1966)
The SQL-based flow has been verified to work in production.
2023-03-16 17:37:48 -04:00
sarahcaseybot
178a2323d9 Add allowedEppActions to AllocationToken Java classes (#1958)
* Add allowedEppActions field to AllocationToken Java class and converter

* Add getter and setter
2023-03-16 15:45:34 -04:00
Lai Jiang
a44aa1378f Create a DnsRefreshRequest entity backed by the corresponding table (#1941)
Also adds a DnsUtils class to deal with adding, polling, and removing
DNS refresh requests (only adding is implemented for now). The class
also takes care of choosing which mechanism to use (pull queue vs. SQL)
based on the current time and the database migration schedule map.
2023-03-16 13:02:20 -04:00
Pavlo Tkach
d0f625f70e angular version update 15.1.0 -> 15.2.2 (#1967) 2023-03-16 11:56:38 -04:00
gbrodman
fb59874234 Allow for multiple service accounts in authentication (#1963)
When submitting tasks to Cloud Tasks, we will use the built-in OIDC
authentication, which runs under the default service account (not the
Cloud Scheduler service account). We want either one to work for app-level
auth.
2023-03-15 10:20:58 -04:00
gbrodman
b6083e227f Move CloudTasksUtils to core/ project (#1956)
This does nothing for now, but in the future this will allow us to refer
to the RegistryConfig and/or Service objects from the core project. This
will be necessary when changing CloudTasksUtils to not use the AppEngine
built-in connection (it will need to use a standard HTTP request
instead).
2023-03-14 15:15:05 -04:00
Lai Jiang
5805b6859e Rename process_time column in DnsRefreshRequest (#1962)
Make it explicit that this is the last process time, not a scheduled
future process time.
2023-03-14 14:03:12 -04:00
Pavlo Tkach
3108e8a871 Use builder image as a base for schema-deployer and schema-verifier (#1955) 2023-03-13 15:37:02 -04:00
Pavlo Tkach
ec142caf9c Expand ID Token Auth verifier to catch all exceptions (#1960) 2023-03-13 12:12:47 -04:00
Pavlo Tkach
e60ad58098 Restore resaveAllEppResourcesPipeline as a cloud task (#1953) 2023-03-13 10:44:25 -04:00
sarahcaseybot
83e9e7fb5c Add allowedEppActions field to AllocationToken (#1957) 2023-03-10 14:14:47 -05:00
Pavlo Tkach
438c523fcb Remove app engine deps from Lock (#1910) 2023-03-09 10:47:48 -05:00
Lai Jiang
025a2faff2 Drop the indexes and columns for dns_refresh_request_time (#1949) 2023-03-09 10:29:31 -05:00
gbrodman
fd822dd333 Add create/delete/update commands for User objects (#1936)
This also includes allowing the Java User object to have a null GAIA ID
(when creating user objects, we don't know what the GAIA ID is).
2023-03-07 17:18:48 -05:00
Ben McIlwain
9b93749d43 Double the number of frontend instances from 12 to 24 (#1954)
It seems like we're hitting App Engine capacity issues resulting in actual pages
now (for whatever reason, but likely one customer), and we obviously don't want
that.
2023-03-06 16:04:28 -05:00
Pavlo Tkach
71a8579ece Move App Engine cron jobs to cloud scheduler (#1939) 2023-03-01 13:40:56 -05:00
Lai Jiang
cda51f13dc Remove dnsRefreshRequestTime from EppResources (#1943)
We have decided to use a separate table (#1940) to track DNS refresh requests
due to performance reasons.

See: go/registry-pull-queue-redesign
2023-03-01 13:40:30 -05:00
Lai Jiang
1de5b5dcc1 Add a process time column to DnsRefreshRequest (#1948)
The value of the column is set to START_OF_TIME for new entries.
Every time a row is read, the value is updated to the current time. This
prevents concurrent reads from repeatedly reading the same entry with the
earliest request time, because they only look for rows whose process time
is before (current time - some padding time).

This basically fulfills the same function that LEASE_PADDING gives us
when using a pull queue, where a task is leased for a certain time,
during which it cannot be leased by anyone else.

See: https://cs.opensource.google/nomulus/nomulus/+/master:core/src/main/java/google/registry/dns/ReadDnsQueueAction.java;l=99?q=readdnsqueue&ss=nomulus%2Fnomulus
2023-02-28 16:52:02 -05:00
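A hedged sketch of the selection criterion described in #1948 above, written as JPQL with assumed entity and field names (`DnsRefreshRequest`, `lastProcessTime`, `requestTime`); the codebase's actual query may differ.

```java
import java.util.List;
import javax.persistence.EntityManager;
import org.joda.time.DateTime;
import org.joda.time.Duration;

class DnsRefreshRequestReader {
  /** Reads a batch of requests, skipping rows another reader picked up recently. */
  List<?> readBatch(EntityManager em, DateTime now, Duration padding, int batchSize) {
    return em.createQuery(
            "SELECT r FROM DnsRefreshRequest r "
                + "WHERE r.lastProcessTime < :cutoff "
                + "ORDER BY r.requestTime ASC")
        // Rows touched within the padding window are treated as "leased" elsewhere.
        .setParameter("cutoff", now.minus(padding))
        .setMaxResults(batchSize)
        .getResultList();
  }
}
```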
sarahcaseybot
32279e42e4 Allow incorrect fee extensions on domain creates with default tokens (#1927)
* Modify fee extension to accept larger costs on creates with default tokens

* Add tests

* Add some comments to tests
2023-02-28 14:24:03 -05:00
Lai Jiang
ba0f90bdaf Add support for Nordn upload without using pull queues. (#1925)
This PR adds an alternative method to upload Lordn to the Nordn server without
using an App Engine pull queue. A new database migration stage is added to control
whether a new task is scheduled with the old or new method.
NordnUploadAction is configured to process both kinds of tasks. Once the tasks
scheduled with the old method are all processed, we can start using the
new method exclusively.

See: go/registry-pull-queue-redesign
2023-02-28 12:57:27 -05:00
Lai Jiang
85308eb975 Ignore invalid old CRL when performing update. (#1946)
There is no point comparing the old CRL to the new ones when the old one
is invalid. This could happen when the CA cert rotates, after which the
old CRL stops being valid as it fails signature verification against the
new cert.

This change will allow us to keep updating the CRL after a CA rotation without
having to manually delete the old CRL from the database.

See b/270983553.

2023-02-28 10:00:18 -05:00
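A small sketch of the check described in #1946 above: treat the stored CRL as invalid if its signature no longer verifies against the current CA certificate, and in that case skip the old-vs-new comparison. The class and method names here are illustrative, not the project's actual code.

```java
import java.security.GeneralSecurityException;
import java.security.cert.X509CRL;
import java.security.cert.X509Certificate;

class CrlUpdater {
  /** Returns true if the stored CRL still verifies against the (possibly rotated) CA cert. */
  static boolean isStillValid(X509CRL storedCrl, X509Certificate caCert) {
    try {
      storedCrl.verify(caCert.getPublicKey());
      return true;
    } catch (GeneralSecurityException e) {
      // Signature no longer matches, e.g. after a CA rotation; ignore the old CRL.
      return false;
    }
  }
}
```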
Lai Jiang
ed62f27a4a Update kythe vnames mapping (#1944) 2023-02-27 17:09:57 -05:00
Ben McIlwain
75851399ba Remove "letter G with stroke" from Extended Latin IDN table (#1938)
ICANN doesn't like this character because it's confusable with a normal G (the
stroke tends to get lost in the visual clutter of the descender), and .com's
Extended Latin table doesn't use it either. Best to get rid of it.
2023-02-23 16:27:15 -05:00
Lai Jiang
6d54c8d113 Add allowed license for json (#1942)
For some reason `./gradlew clean build` on master is failing for me on
multiple machines due to a new org.json:json version triggering license
violations, even though the lock files are not changing.

Note that the old versions are still present because if I remove
"The JSON license", which the old versions use, the check also fails...
2023-02-23 11:37:31 -05:00
Lai Jiang
34dfa2760e Add a table to record EPP resources needing DNS refresh (#1940) 2023-02-22 14:18:28 -05:00
Lai Jiang
ff39a4a763 Change default beam job region (#1937)
For reasons that I cannot explain, the same expand recurring billing
event pipeline would fail in us-east1 but succeed in us-central1.

See:

https://pantheon.corp.google.com/dataflow/jobs/us-central1/2023-02-09_14_52_24-162498476138221714;graphView=0?project=domain-registry

https://pantheon.corp.google.com/dataflow/jobs/us-east1/2023-02-09_14_26_07-4564782062878841960;graphView=1?project=domain-registry

Also improved the accuracy of the metrics:

It is observed that both counters are consistently higher for the same
start and end times when running in dry run mode. There is no way to
test for consistency when not running in dry run, for obvious reasons.

I can make the recurrings in scope counter consistent by not updating it
in a side-effect-causing transaction, but there is no way around the
other counter. It can only be trusted when running in dry run mode,
unfortunately.
2023-02-13 15:57:32 -05:00
gbrodman
b1cd8c5a6f Add a frontend endpoint for retrieving a domain in JSON form (#1916)
We might (likely will) modify some of the fiddly bits around this (maybe
the GSON serialization, where we do the actual authorization, etc) but
this should be a decent basic shell structure for endpoints that the new
registrar console can call to retrieve JSON results.
2023-02-09 15:09:42 -05:00
gbrodman
28c7bc3085 Generate and use an IAP-enabled ID token in the proxy (#1926)
This is only generated and used if "iapClientId" is set in the proxy
config. If so, we use code similar to
https://cloud.google.com/iap/docs/authentication-howto#obtaining_an_oidc_token_for_the_default_service_account
to generate an ID token that is valid for IAP. We set the token on the
Proxy-Authorization header so that we can keep using the pre-existing
access token as well -- IAP allows us to use either the
Authorization header or the Proxy-Authorization header.
2023-02-09 14:50:35 -05:00
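A hedged sketch of obtaining an OIDC ID token suitable for IAP via google-auth-library, along the lines of the how-to linked in #1926 above. The audience value is a placeholder, and the proxy's real wiring differs; this only shows the token-minting step.

```java
import com.google.auth.oauth2.GoogleCredentials;
import com.google.auth.oauth2.IdTokenCredentials;
import com.google.auth.oauth2.IdTokenProvider;
import java.io.IOException;

class IapTokenExample {
  /** Returns a "Bearer <id token>" value to put on the Proxy-Authorization header. */
  static String fetchIapProxyAuthorization(String iapClientId) throws IOException {
    GoogleCredentials credentials = GoogleCredentials.getApplicationDefault();
    IdTokenCredentials idTokenCredentials =
        IdTokenCredentials.newBuilder()
            .setIdTokenProvider((IdTokenProvider) credentials)
            .setTargetAudience(iapClientId)
            .build();
    idTokenCredentials.refreshIfExpired();
    // After refresh, the credential's token value is the OIDC ID token (a JWT).
    // Sending it on Proxy-Authorization leaves the Authorization header free for
    // the pre-existing access token.
    return "Bearer " + idTokenCredentials.getAccessToken().getTokenValue();
  }
}
```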
gbrodman
f36d22f4b1 Allow null GAIA IDs for User objects (#1933)
We were under the mistaken impression before that there was a reliable
way to, out-of-band, get a GAIA ID for a particular email address.
Unfortunately, that isn't the case (at least, not in a scalable way or
one that support agents could use). As a result, we have to allow null
GAIA IDs in the database.

When we (or the support team) create new users, we will only specify the
email address and not the GAIA ID. Then, when the user logs in for the
first time, we will have the GAIA ID from the provided ID token, and we
can populate it then.
2023-02-08 16:10:34 -05:00
Lai Jiang
ef3ce79b8a Install procps in schema-deployer image (#1934)
It turns out this one uses pgrep and pkill as well, go figure...

2023-02-08 09:59:47 -05:00
Lai Jiang
85317e3982 Update TMCH root certificate (#1918)
See b/260945047.

Also refactored the corresponding tests, which should make future updates easier.

This change should be deployed at or around 2023-02-15T16:00:00Z.
2023-02-06 22:39:54 -05:00
Lai Jiang
a53b71ecd5 Install procps (#1932)
The schema verifier script needs pgrep and pkill, which do not come with
Debian.
2023-02-06 19:45:04 -05:00
Lai Jiang
fc9446876f Install curl (#1931)
Tested by running "docker build .".
2023-02-06 16:45:52 -05:00
dependabot[bot]
654b165dff Bump http-cache-semantics from 4.1.0 to 4.1.1 in /console-webapp (#1929)
Bumps [http-cache-semantics](https://github.com/kornelski/http-cache-semantics) from 4.1.0 to 4.1.1.
- [Release notes](https://github.com/kornelski/http-cache-semantics/releases)
- [Commits](https://github.com/kornelski/http-cache-semantics/compare/v4.1.0...v4.1.1)

---
updated-dependencies:
- dependency-name: http-cache-semantics
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-06 13:22:50 -05:00
Lai Jiang
14d68d4cb2 Change base image for schema-verifier and schema-deployer (#1930)
Ubuntu 18.04 is entering EOL and the Cloud Build jobs are failing,
seemingly due to connection errors to the 18.04 repos:

https://pantheon.corp.google.com/cloud-build/builds;region=global/126a7c90-4322-41f1-ba1c-a10e38a32dab;step=5?project=domain-registry-dev

We use Debian 10 for the main builder, so it's better to keep everything
on the same schedule:

https://cs.opensource.google/nomulus/nomulus/+/master:release/builder/Dockerfile

Debian 10 is supported till June 2024:

https://wiki.debian.org/LTS
2023-02-06 13:09:37 -05:00
Lai Jiang
bbf405d566 Fix expand recurring billing event pipeline (#1928) 2023-02-06 11:33:57 -05:00
sarahcaseybot
356f7d0099 Modify DomainCreateFlow to check for an applicable defaultPromoToken (#1904)
* Modify DomainCreateFlow to check for an applicable defaultPromoToken

* Add handling for deleted tokens

* Change cache to allocation token cache

* Abstract away cache methods

* Use AllocationToken.getAll in create flow

* Filter out empty tokens
2023-02-01 14:53:51 -05:00
dependabot[bot]
70509cfe46 Bump ua-parser-js from 0.7.31 to 0.7.33 in /console-webapp (#1924)
Bumps [ua-parser-js](https://github.com/faisalman/ua-parser-js) from 0.7.31 to 0.7.33.
- [Release notes](https://github.com/faisalman/ua-parser-js/releases)
- [Changelog](https://github.com/faisalman/ua-parser-js/blob/master/changelog.md)
- [Commits](https://github.com/faisalman/ua-parser-js/compare/0.7.31...0.7.33)

---
updated-dependencies:
- dependency-name: ua-parser-js
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Lai Jiang <jianglai@google.com>
2023-01-31 14:52:26 -05:00
sarahcaseybot
5e081f4692 Prevent ending package tokens with active domains (#1919)
* Prevent ending package tokens with active domains

* Fix bad formatting in comments

* Fix lots of nits
2023-01-30 16:13:23 -05:00
Lai Jiang
07b87bbb4d Remove @IdAllocation annotation from repoId (#1923)
This annotation only works for Long or long fields.
2023-01-30 15:40:40 -05:00
gbrodman
6fabbb62d2 Use the Proxy-Authorization header when using nomulus + IAP (#1921) 2023-01-26 15:16:32 -05:00
Lai Jiang
d8a882daa0 Add fields needed to implement pull queue alternative (#1915) 2023-01-25 15:26:00 -05:00
Pavlo Tkach
de8c6fd316 Add a condition update precaution to validateNewState (#1920) 2023-01-25 14:53:12 -05:00
Weimin Yu
ae68917bdd Upgrade to Gradle 7.3.2 (#1922)
This is an 'easy' upgrade that requires a minor change in
common/build.gradle and the removal of an unnecessary import in buildSrc.

Gradle 7.4 and above have breaking changes that break the latest Nebula lint plugin. We may have to wait a while.
2023-01-25 12:47:35 -05:00
Lai Jiang
0736137a22 Update ExpandRecurringBillingEventsAction to use the beam pipeline (#1907)
Due to the way the Beam pipeline is designed, it will expand a
recurring billing event when its event time, rather than its billing time,
is in scope for expansion. This means that the one-time will be generated
45 days earlier, which negates the need to check whether the expansion is
finished when generating monthly invoices.

We will need to backfill the past 45 days of onetimes before the new
code is deployed. As an illustration, with the old code, a cursor time
of 2023-01-17 means that all auto-renewals whose billing time is before
2023-01-17 were created, which corresponds to an effective cursor time
of 2022-12-03 (45 days before 2023-01-17) for event time. This cursor
will need to be brought to 2023-01-17 to ensure that there is no gap in
generated event times when switching to use the new code.
2023-01-23 19:08:04 -05:00
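A tiny Joda-Time illustration of the cursor arithmetic described in #1907 above: a billing-time cursor of 2023-01-17 corresponds to an event-time cursor 45 days earlier, 2022-12-03. This is just a worked example, not project code.

```java
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;

class CursorArithmeticExample {
  public static void main(String[] args) {
    DateTime billingTimeCursor = new DateTime(2023, 1, 17, 0, 0, DateTimeZone.UTC);
    // The equivalent event-time cursor trails the billing-time cursor by the
    // 45-day autorenew grace period: prints 2022-12-03T00:00:00.000Z.
    DateTime eventTimeCursor = billingTimeCursor.minusDays(45);
    System.out.println(eventTimeCursor);
  }
}
```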
Pavlo Tkach
c4b7929506 Remove not null constraint request_log_id column (#1917) 2023-01-23 09:37:20 -05:00
Lai Jiang
e6974a98bc Add columns needed to implement pull queue alternative (#1914) 2023-01-20 14:17:06 -05:00
Lai Jiang
630ae1f802 Delete TaskQueueUtils (#1908)
For push queues, use CloudTasksUtils. Pull queues for now directly call
the GAE task queue APIs. The usage of pull queues will soon be replaced.
2023-01-19 14:45:18 -05:00
Lai Jiang
925c9ba9e8 Remove datastore related code (#1906) 2023-01-19 14:44:11 -05:00
Lai Jiang
ac14688a4f Do not deploy datastore index file (#1913)
The index was deleted in #1905.

2023-01-18 16:31:35 -05:00
Lai Jiang
7ab572188a Use a fake instance id in metric (#1912)
Currently we synthesize an instance id, which requires the use of the App
Engine Modules API. Given that we only have one version of code running
at one time, and HTTP is stateless, there is no point tracking exactly
which GAE "instance" it is. We do lose information on which service (default,
backend, etc.) is writing the metric, but that does not seem very
important.

Using a constant fake instance ID allows us to get rid of another GAE
dependency.
2023-01-18 16:24:59 -05:00
Lai Jiang
2f438b1d3a Fix flaky tests with TaskQueueExtension (#1909)
The temporary queue.xml file was not deleted in the afterEach() method,
likely causing some of the flaky tests we saw, due to concurrent tests
overwriting the file.
2023-01-18 12:04:47 -05:00
sarahcaseybot
0d3c0f7b76 Only email support for package non-compliance (#1900)
* Only email support for package non-compliance

* Fix import

* Always use longs
2023-01-17 14:22:15 -05:00
Pavlo Tkach
5e4f8495d6 Add tasks and deployment info to console docs (#1901) 2023-01-12 17:54:08 -05:00
Lai Jiang
6042f77d1f Remove AppEngineExtension (#1905)
Most of its usage can be replaced by JpaIntegrationTestExtension. In
places where specific GAE APIs are still needed, namely when the pull queue
or the User service is used, two simplified extensions are used, which
makes them much easier to identify when the APIs are no longer used.
2023-01-12 17:02:44 -05:00
Pavlo Tkach
8d180f535f Angular v14 -> v15 update (#1903) 2023-01-11 14:46:48 -05:00
Lai Jiang
99a31423e0 Always use SQL based ID allocation (#1899)
We've been using it in production for three weeks now. Everything seems
to be working fine. Removing the code related to checking the migration
state and using the override.
2023-01-10 09:22:01 -05:00
Lai Jiang
9dab1e86ec Add a beam pipeline to expand recurring billing event (#1881)
This will replace the ExpandRecurringBillingEventsAction, which has a
couple of issues:

1) The action starts with too many Recurrings that are later filtered out
   because their expanded OneTimes are not actually in scope. This is due
   to the Recurrings not recording their latest expanded event time, and
   therefore many Recurrings that are not yet due for renewal get included
   in the initial query.

2) The action works in sequence, which exacerbated the issue in 1) and
   makes it very slow to run if the window of operation is wider than
   one day, which in turn makes it impossible to run any catch-up
   expansions with any significant gap to fill.

3) The action only expands the recurrence when the billing time becomes
   due, but most of its logic works on event time, which is 45 days
   before billing time, making the code hard to reason about and
   error-prone. This has led to b/258822640 where a premature
   optimization intended to fix 1) caused some autorenewals to not be
   expanded correctly when subsequent manual renews within the autorenew
   grace period closed the original recurrence.

As a result, the new pipeline addresses the above issues in the
following way:

1) Update the recurrenceLastExpansion field on the Recurring when a new
   expansion occurs, and narrow down the Recurrings in scope for
   expansion by only looking for the ones that have not been expanded for
   more than a year.

2) Make it a Beam pipeline so expansions can happen in parallel. The
   Recurrings are grouped into batches in order to not overwhelm the
   database with writes for each expansion.

3) Create new expansions when the event time, as opposed to billing
   time, is within the operation window. This streamlines the logic and
   makes it clearer and easier to reason about. This also aligns with
   how other (cancellable) operations for which there are accompanying
   grace periods are handled, where the corresponding data is always
   speculatively created at event time. Lastly, doing this negates the
   need to check if the expansion has finished running before generating
   the monthly invoices, because the billing events are now created not
   just-in-time, but 45 days in advance.

Note that this PR only adds the pipeline. It does not switch the default
behavior to using the pipeline, which is still done by
ExpandRecurringBillingEventsAction. We will first use this pipeline to
generate missing billing events and domain histories caused by
b/258822640. This also allows us to test it in production, as it
backfills data that will not affect ongoing invoice generation. If
anything goes wrong, we can always delete the generated billing events
and domain histories, based on the unique "reason" in them.

This pipeline can only run after we switch to use SQL sequence based ID
allocation, introduced in #1831.
2023-01-09 17:41:56 -05:00
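A hedged sketch of the narrowed in-scope check from item 1) in #1881 above, assuming a `recurrenceLastExpansion` timestamp on the recurrence; only recurrences not expanded for more than a year are candidates for expansion. The class and field names are assumptions.

```java
import org.joda.time.DateTime;

class RecurrenceExpansionScope {
  /** Returns true if the recurrence may need expansion within the given window. */
  static boolean isInScope(DateTime recurrenceLastExpansion, DateTime windowEnd) {
    // A recurrence renews at most once a year, so anything already expanded within
    // the year preceding the end of the operation window can be skipped up front.
    return recurrenceLastExpansion.isBefore(windowEnd.minusYears(1));
  }
}
```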
dependabot[bot]
60cbebd007 Bump json5 from 2.2.1 to 2.2.3 in /console-webapp (#1896)
Bumps [json5](https://github.com/json5/json5) from 2.2.1 to 2.2.3.
- [Release notes](https://github.com/json5/json5/releases)
- [Changelog](https://github.com/json5/json5/blob/main/CHANGELOG.md)
- [Commits](https://github.com/json5/json5/compare/v2.2.1...v2.2.3)

---
updated-dependencies:
- dependency-name: json5
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-06 15:06:54 -05:00
dependabot[bot]
722bf3fcb8 Bump engine.io from 6.2.0 to 6.2.1 in /console-webapp (#1895)
Bumps [engine.io](https://github.com/socketio/engine.io) from 6.2.0 to 6.2.1.
- [Release notes](https://github.com/socketio/engine.io/releases)
- [Changelog](https://github.com/socketio/engine.io/blob/main/CHANGELOG.md)
- [Commits](https://github.com/socketio/engine.io/compare/6.2.0...6.2.1)

---
updated-dependencies:
- dependency-name: engine.io
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-05 21:47:11 -05:00
Pavlo Tkach
274ae57385 Fix billing pipeline first month scheduling (#1891)
* Fix billing pipeline first month scheduling

* compare to expansion next month

* use yoda date comparison

* update cursor time to be mid of day
2023-01-05 21:45:56 -05:00
dependabot[bot]
ecd1dd81a2 Bump loader-utils from 2.0.2 to 2.0.4 in /console-webapp (#1894)
Bumps [loader-utils](https://github.com/webpack/loader-utils) from 2.0.2 to 2.0.4.
- [Release notes](https://github.com/webpack/loader-utils/releases)
- [Changelog](https://github.com/webpack/loader-utils/blob/v2.0.4/CHANGELOG.md)
- [Commits](https://github.com/webpack/loader-utils/compare/v2.0.2...v2.0.4)

---
updated-dependencies:
- dependency-name: loader-utils
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-05 21:41:03 -05:00
Pavlo Tkach
8f844cb437 Add new console backbone (#1876)
* Create console webapp, add material ui, initialize tlds and home pages

* Add servlet for serving console static files

* Add console tasks to nomulus tasks routine

* Fix for base console GCP base usr

* Add jetty dep and update_dependency.sh

* Update console servlet url

* verified fix for static url handler

* Another deps update

* Add Copyright

* Remove unused variable

* Update titles to Nomulus Console
2023-01-05 16:23:40 -05:00
Weimin Yu
e1864bee4e Disable id preassignment when writing to sql (#1893)
* Disable id preassignment when writing to sql

See b/264416932 for details.
2023-01-05 11:04:38 -05:00
sarahcaseybot
18641327de Add default tokens to TLD using nomulus tool (#1888)
* Add default tokens to TLD using nomulus tool

* add test
2023-01-04 13:25:25 -05:00
gbrodman
db9525903d Add an optional IAP-enabled ID token when using the Nomulus tool (#1887)
We can use the saved refresh token associated with the nomulus tool to
request an ID token with an audience of the IAP client in order to
satisfy IAP when using the Nomulus tool.

Note: this requires that the user of the Nomulus tool, e.g.
"gbrodman@google.com" has a User object stored in SQL.

Tested on QA
2023-01-04 11:43:31 -05:00
880 changed files with 59200 additions and 36373 deletions

View File

@@ -47,6 +47,10 @@ war {
if (project.path == ":services:default") {
war {
from("${rootDir}/console-webapp/dist/console-webapp") {
include "**/*"
into("console")
}
from("${coreResourcesDir}/google/registry/ui") {
include "registrar_bin.js"
if (environment != "production") {
@@ -99,8 +103,10 @@ explodeWar.doLast {
file("${it.explodedAppDirectory}/WEB-INF/lib/tools.jar").setWritable(true)
}
appengineDeployAll.finalizedBy ':deployCloudSchedulerAndQueue'
rootProject.deploy.dependsOn appengineDeployAll
rootProject.stage.dependsOn appengineStage
tasks['war'].dependsOn ':console-webapp:buildConsoleWebappProd'
tasks['war'].dependsOn ':core:compileProdJS'
tasks['war'].dependsOn ':core:processResources'
tasks['war'].dependsOn ':core:jar'

View File

@@ -551,12 +551,34 @@ task coreDev {
dependsOn 'javadoc'
dependsOn 'checkDependenciesDotGradle'
dependsOn 'checkLicense'
dependsOn ':console-webapp:runConsoleWebappUnitTests'
dependsOn ':core:check'
dependsOn 'assemble'
}
javadocDependentTasks.each { tasks.javadoc.dependsOn(it) }
// Runs the script, which deploys cloud scheduler and tasks based on the config
task deployCloudSchedulerAndQueue {
doLast {
def env = environment
if (!prodOrSandboxEnv) {
exec {
commandLine 'go', 'run',
"${rootDir}/release/builder/deployCloudSchedulerAndQueue.go",
"${rootDir}/core/src/main/java/google/registry/env/${env}/default/WEB-INF/cloud-scheduler-tasks.xml",
"domain-registry-${env}"
}
exec {
commandLine 'go', 'run',
"${rootDir}/release/builder/deployCloudSchedulerAndQueue.go",
"${rootDir}/core/src/main/java/google/registry/env/common/default/WEB-INF/cloud-tasks-queue.xml",
"domain-registry-${env}"
}
}
}
}
// disable javadoc in subprojects, these will break because they don't have
// the correct classpath (see above).
gradle.taskGraph.whenReady { graph ->

View File

@@ -9,33 +9,32 @@ com.fasterxml.jackson:jackson-bom:2.14.1=compileClasspath,testCompileClasspath,t
com.github.ben-manes.caffeine:caffeine:2.7.0=annotationProcessor,testAnnotationProcessor
com.github.kevinstern:software-and-algorithms:1.0=annotationProcessor,testAnnotationProcessor
com.google.android:annotations:4.1.1.4=testRuntimeClasspath
com.google.api-client:google-api-client:2.1.1=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:gapic-google-cloud-storage-v2:2.16.0-alpha=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-storage-v2:2.16.0-alpha=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-iam-v1:1.6.22=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-storage-v2:2.16.0-alpha=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-common-protos:2.11.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-iam-v1:1.6.22=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api:api-common:2.2.2=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api:gax-grpc:2.20.1=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api:gax-httpjson:0.105.1=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api:gax:2.20.1=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api-client:google-api-client:2.1.2=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:gapic-google-cloud-storage-v2:2.17.2-alpha=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-storage-v2:2.17.2-alpha=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-storage-v2:2.17.2-alpha=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-common-protos:2.13.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-iam-v1:1.8.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api:api-common:2.5.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api:gax-grpc:2.22.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api:gax-httpjson:0.107.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api:gax:2.22.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.apis:google-api-services-storage:v1-rev20220705-2.0.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.auth:google-auth-library-credentials:1.13.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.auth:google-auth-library-oauth2-http:1.13.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.auth:google-auth-library-credentials:1.14.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.auth:google-auth-library-oauth2-http:1.14.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.auto.value:auto-value-annotations:1.10.1=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.auto.value:auto-value:1.10.1=annotationProcessor
com.google.auto.value:auto-value:1.10.1=annotationProcessor,compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.auto:auto-common:0.10=annotationProcessor,testAnnotationProcessor
com.google.cloud:google-cloud-core-grpc:2.9.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-core-http:2.9.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-core:2.9.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-storage:2.16.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-core-grpc:2.9.4=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-core-http:2.9.4=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-core:2.9.4=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-storage:2.17.2=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.code.findbugs:jFormatString:3.0.0=annotationProcessor,testAnnotationProcessor
com.google.code.findbugs:jsr305:3.0.2=annotationProcessor,checkstyle,compileClasspath,testAnnotationProcessor,testCompileClasspath,testRuntimeClasspath
com.google.code.gson:gson:2.10=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.code.gson:gson:2.10.1=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.common.html.types:types:1.0.6=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.errorprone:error_prone_annotation:2.3.4=annotationProcessor,testAnnotationProcessor
com.google.errorprone:error_prone_annotations:2.16=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.errorprone:error_prone_annotations:2.18.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.errorprone:error_prone_annotations:2.3.4=annotationProcessor,checkstyle,testAnnotationProcessor
com.google.errorprone:error_prone_check_api:2.3.4=annotationProcessor,testAnnotationProcessor
com.google.errorprone:error_prone_core:2.3.4=annotationProcessor,testAnnotationProcessor
@@ -57,8 +56,8 @@ com.google.j2objc:j2objc-annotations:1.1=annotationProcessor,testAnnotationProce
com.google.j2objc:j2objc-annotations:1.3=checkstyle,compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.jsinterop:jsinterop-annotations:1.0.1=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.oauth-client:google-oauth-client:1.34.1=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.protobuf:protobuf-java-util:3.21.10=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.protobuf:protobuf-java:3.21.10=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.protobuf:protobuf-java-util:3.21.12=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.protobuf:protobuf-java:3.21.12=compileClasspath,testCompileClasspath,testRuntimeClasspath
com.google.protobuf:protobuf-java:3.4.0=annotationProcessor,testAnnotationProcessor
com.google.re2j:re2j:1.6=testRuntimeClasspath
com.google.template:soy:2021-02-01=compileClasspath,testCompileClasspath,testRuntimeClasspath
@@ -72,19 +71,19 @@ commons-codec:commons-codec:1.15=compileClasspath,testCompileClasspath,testRunti
commons-collections:commons-collections:3.2.2=checkstyle
commons-logging:commons-logging:1.2=compileClasspath,testCompileClasspath,testRuntimeClasspath
info.picocli:picocli:4.5.2=checkstyle
io.grpc:grpc-alts:1.51.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-api:1.51.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-auth:1.51.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-context:1.51.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-core:1.51.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-googleapis:1.51.0=testRuntimeClasspath
io.grpc:grpc-grpclb:1.51.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-netty-shaded:1.51.0=testRuntimeClasspath
io.grpc:grpc-protobuf-lite:1.51.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-protobuf:1.51.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-services:1.51.0=testRuntimeClasspath
io.grpc:grpc-stub:1.51.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-xds:1.51.0=testRuntimeClasspath
io.grpc:grpc-alts:1.52.1=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-api:1.52.1=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-auth:1.52.1=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-context:1.52.1=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-core:1.52.1=testRuntimeClasspath
io.grpc:grpc-googleapis:1.52.1=testRuntimeClasspath
io.grpc:grpc-grpclb:1.52.1=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-netty-shaded:1.52.1=testRuntimeClasspath
io.grpc:grpc-protobuf-lite:1.52.1=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-protobuf:1.52.1=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-services:1.52.1=testRuntimeClasspath
io.grpc:grpc-stub:1.52.1=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-xds:1.52.1=testRuntimeClasspath
io.opencensus:opencensus-api:0.31.1=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.opencensus:opencensus-contrib-http-util:0.31.1=compileClasspath,testCompileClasspath,testRuntimeClasspath
io.opencensus:opencensus-proto:0.2.0=testRuntimeClasspath
@@ -93,8 +92,8 @@ javax.annotation:javax.annotation-api:1.3.2=compileClasspath,testCompileClasspat
javax.annotation:jsr250-api:1.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
javax.inject:javax.inject:1=compileClasspath,testCompileClasspath,testRuntimeClasspath
junit:junit:4.13.2=testCompileClasspath,testRuntimeClasspath
net.bytebuddy:byte-buddy-agent:1.12.16=testCompileClasspath,testRuntimeClasspath
net.bytebuddy:byte-buddy:1.12.16=testCompileClasspath,testRuntimeClasspath
net.bytebuddy:byte-buddy-agent:1.12.22=testCompileClasspath,testRuntimeClasspath
net.bytebuddy:byte-buddy:1.12.22=testCompileClasspath,testRuntimeClasspath
net.sf.saxon:Saxon-HE:10.3=checkstyle
org.antlr:antlr4-runtime:4.8-1=checkstyle
org.apache.commons:commons-lang3:3.12.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
@@ -104,41 +103,40 @@ org.apache.httpcomponents:httpcore:4.4.15=compileClasspath,testCompileClasspath,
org.apiguardian:apiguardian-api:1.1.2=testCompileClasspath
org.checkerframework:checker-qual:2.11.1=checkstyle
org.checkerframework:checker-qual:3.0.0=annotationProcessor,testAnnotationProcessor
org.checkerframework:checker-qual:3.28.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
org.checkerframework:checker-qual:3.29.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
org.checkerframework:dataflow:3.0.0=annotationProcessor,testAnnotationProcessor
org.checkerframework:javacutil:3.0.0=annotationProcessor,testAnnotationProcessor
org.codehaus.mojo:animal-sniffer-annotations:1.17=annotationProcessor,testAnnotationProcessor
org.codehaus.mojo:animal-sniffer-annotations:1.22=testRuntimeClasspath
org.conscrypt:conscrypt-openjdk-uber:2.5.2=compileClasspath,testCompileClasspath,testRuntimeClasspath
org.hamcrest:hamcrest-core:1.3=testCompileClasspath,testRuntimeClasspath
org.jacoco:org.jacoco.agent:0.8.6=jacocoAgent,jacocoAnt
org.jacoco:org.jacoco.ant:0.8.6=jacocoAnt
org.jacoco:org.jacoco.core:0.8.6=jacocoAnt
org.jacoco:org.jacoco.report:0.8.6=jacocoAnt
org.jacoco:org.jacoco.agent:0.8.7=jacocoAgent,jacocoAnt
org.jacoco:org.jacoco.ant:0.8.7=jacocoAnt
org.jacoco:org.jacoco.core:0.8.7=jacocoAnt
org.jacoco:org.jacoco.report:0.8.7=jacocoAnt
org.javassist:javassist:3.26.0-GA=checkstyle
org.json:json:20160212=compileClasspath,testCompileClasspath,testRuntimeClasspath
org.junit.jupiter:junit-jupiter-api:5.9.1=testCompileClasspath,testRuntimeClasspath
org.junit.jupiter:junit-jupiter-engine:5.9.1=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-commons:1.9.1=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-engine:1.9.1=testCompileClasspath,testRuntimeClasspath
org.junit:junit-bom:5.9.1=testCompileClasspath,testRuntimeClasspath
org.mockito:mockito-core:4.9.0=testCompileClasspath,testRuntimeClasspath
org.junit.jupiter:junit-jupiter-api:5.9.2=testCompileClasspath,testRuntimeClasspath
org.junit.jupiter:junit-jupiter-engine:5.9.2=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-commons:1.9.2=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-engine:1.9.2=testCompileClasspath,testRuntimeClasspath
org.junit:junit-bom:5.9.2=testCompileClasspath,testRuntimeClasspath
org.mockito:mockito-core:5.0.0=testCompileClasspath,testRuntimeClasspath
org.objenesis:objenesis:3.3=testRuntimeClasspath
org.opentest4j:opentest4j:1.2.0=testCompileClasspath,testRuntimeClasspath
org.ow2.asm:asm-analysis:7.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
org.ow2.asm:asm-analysis:8.0.1=jacocoAnt
org.ow2.asm:asm-analysis:9.1=jacocoAnt
org.ow2.asm:asm-commons:7.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
org.ow2.asm:asm-commons:8.0.1=jacocoAnt
org.ow2.asm:asm-commons:9.1=jacocoAnt
org.ow2.asm:asm-tree:7.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
org.ow2.asm:asm-tree:8.0.1=jacocoAnt
org.ow2.asm:asm-tree:9.1=jacocoAnt
org.ow2.asm:asm-util:7.0=compileClasspath,testCompileClasspath,testRuntimeClasspath
org.ow2.asm:asm:7.0=compileClasspath
org.ow2.asm:asm:8.0.1=jacocoAnt
org.ow2.asm:asm:9.1=testCompileClasspath,testRuntimeClasspath
org.ow2.asm:asm:9.1=jacocoAnt,testCompileClasspath,testRuntimeClasspath
org.pcollections:pcollections:2.1.2=annotationProcessor,testAnnotationProcessor
org.plumelib:plume-util:1.0.6=annotationProcessor,testAnnotationProcessor
org.plumelib:reflection-util:0.0.2=annotationProcessor,testAnnotationProcessor
org.plumelib:require-javadoc:0.1.0=annotationProcessor,testAnnotationProcessor
org.reflections:reflections:0.9.12=checkstyle
org.threeten:threetenbp:1.6.4=compileClasspath,testCompileClasspath,testRuntimeClasspath
org.threeten:threetenbp:1.6.5=compileClasspath,testCompileClasspath,testRuntimeClasspath
empty=

View File

@@ -17,7 +17,6 @@ package google.registry.gradle.plugin;
import com.google.auto.value.AutoValue;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import google.registry.gradle.plugin.ProjectData.TaskData;
import java.util.Map;
import java.util.Optional;
import java.util.function.Supplier;

View File

@@ -38,6 +38,7 @@ configurations {
// All testing util classes. Other projects may declare dependency as:
// testImplementation project(path: 'common', configuration: 'testing')
create("testing")
testing.extendsFrom testingCompileOnly
}

View File

@@ -45,22 +45,21 @@ org.checkerframework:dataflow:3.0.0=annotationProcessor,errorprone,testAnnotatio
org.checkerframework:javacutil:3.0.0=annotationProcessor,errorprone,testAnnotationProcessor,testingAnnotationProcessor
org.codehaus.mojo:animal-sniffer-annotations:1.17=annotationProcessor,errorprone,testAnnotationProcessor,testingAnnotationProcessor
org.hamcrest:hamcrest-core:1.3=default,testCompileClasspath,testRuntimeClasspath,testing,testingCompileClasspath
org.jacoco:org.jacoco.agent:0.8.6=jacocoAgent,jacocoAnt
org.jacoco:org.jacoco.ant:0.8.6=jacocoAnt
org.jacoco:org.jacoco.core:0.8.6=jacocoAnt
org.jacoco:org.jacoco.report:0.8.6=jacocoAnt
org.jacoco:org.jacoco.agent:0.8.7=jacocoAgent,jacocoAnt
org.jacoco:org.jacoco.ant:0.8.7=jacocoAnt
org.jacoco:org.jacoco.core:0.8.7=jacocoAnt
org.jacoco:org.jacoco.report:0.8.7=jacocoAnt
org.javassist:javassist:3.26.0-GA=checkstyle
org.junit.jupiter:junit-jupiter-api:5.9.1=testCompileClasspath,testRuntimeClasspath
org.junit.jupiter:junit-jupiter-engine:5.9.1=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-commons:1.9.1=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-engine:1.9.1=testCompileClasspath,testRuntimeClasspath
org.junit:junit-bom:5.9.1=testCompileClasspath,testRuntimeClasspath
org.junit.jupiter:junit-jupiter-api:5.9.2=testCompileClasspath,testRuntimeClasspath
org.junit.jupiter:junit-jupiter-engine:5.9.2=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-commons:1.9.2=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-engine:1.9.2=testCompileClasspath,testRuntimeClasspath
org.junit:junit-bom:5.9.2=testCompileClasspath,testRuntimeClasspath
org.opentest4j:opentest4j:1.2.0=testCompileClasspath,testRuntimeClasspath
org.ow2.asm:asm-analysis:8.0.1=jacocoAnt
org.ow2.asm:asm-commons:8.0.1=jacocoAnt
org.ow2.asm:asm-tree:8.0.1=jacocoAnt
org.ow2.asm:asm:8.0.1=jacocoAnt
org.ow2.asm:asm:9.1=compileClasspath,default,deploy_jar,runtimeClasspath,testCompileClasspath,testRuntimeClasspath,testing,testingCompileClasspath
org.ow2.asm:asm-analysis:9.1=jacocoAnt
org.ow2.asm:asm-commons:9.1=jacocoAnt
org.ow2.asm:asm-tree:9.1=jacocoAnt
org.ow2.asm:asm:9.1=compileClasspath,default,deploy_jar,jacocoAnt,runtimeClasspath,testCompileClasspath,testRuntimeClasspath,testing,testingCompileClasspath
org.pcollections:pcollections:2.1.2=annotationProcessor,errorprone,testAnnotationProcessor,testingAnnotationProcessor
org.plumelib:plume-util:1.0.6=annotationProcessor,errorprone,testAnnotationProcessor,testingAnnotationProcessor
org.plumelib:reflection-util:0.0.2=annotationProcessor,errorprone,testAnnotationProcessor,testingAnnotationProcessor


@@ -35,7 +35,7 @@ public abstract class DateTimeUtils {
*
* <p>This value is (2^63-1)/1000 rounded down. AppEngine stores dates as 64 bit microseconds, but
* Java uses milliseconds, so this is the largest representable date that will survive a
* round-trip through Datastore.
* round-trip through the database.
*/
public static final DateTime END_OF_TIME = new DateTime(Long.MAX_VALUE / 1000, DateTimeZone.UTC);
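As a quick check of the arithmetic above, the bound can be verified with BigInt in a standalone TypeScript/Node snippet; this merely mirrors the Java `long` math and is not part of the change itself:

```typescript
// Sanity check: END_OF_TIME is stored in milliseconds, and converting it back to
// microseconds must still fit in a signed 64-bit value for the round-trip to work.
const LONG_MAX = 2n ** 63n - 1n;          // Java's Long.MAX_VALUE
const endOfTimeMillis = LONG_MAX / 1000n; // (2^63-1)/1000, rounded down
console.log(endOfTimeMillis);                     // 9223372036854775n
console.log(endOfTimeMillis * 1000n <= LONG_MAX); // true
```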


@@ -270,6 +270,10 @@
"moduleLicense": "Public Domain",
"moduleName": "org.tukaani:xz"
},
{
"moduleLicense": "Public Domain",
"moduleName": "org.json:json"
},
{
// "Apache License, Version 2.0".
"moduleLicense": null,


@@ -25,7 +25,7 @@ import textwrap
import re
# We should never analyze any generated files
UNIVERSALLY_SKIPPED_PATTERNS = {"/build/", "cloudbuild-caches", "/out/", ".git/", ".gradle/"}
UNIVERSALLY_SKIPPED_PATTERNS = {"/build/", "cloudbuild-caches", "/out/", ".git/", ".gradle/", "/dist/", "karma.conf.js", "polyfills.ts", "test.ts"}
# We can't rely on CI to have the Enum package installed so we do this instead.
FORBIDDEN = 1
REQUIRED = 2
@@ -86,7 +86,7 @@ PRESUBMITS = {
# License check
PresubmitCheck(
r".*Copyright 20\d{2} The Nomulus Authors\. All Rights Reserved\.",
("java", "js", "soy", "sql", "py", "sh", "gradle"), {
("java", "js", "soy", "sql", "py", "sh", "gradle", "ts"), {
".git", "/build/", "/generated/", "/generated_tests/",
"node_modules/", "LoggerConfig.java", "registrar_bin.",
"registrar_dbg.", "google-java-format-diff.py",
@@ -95,7 +95,7 @@ PRESUBMITS = {
"File did not include the license header.",
# Files must end in a newline
PresubmitCheck(r".*\n$", ("java", "js", "soy", "sql", "py", "sh", "gradle"),
PresubmitCheck(r".*\n$", ("java", "js", "soy", "sql", "py", "sh", "gradle", "ts"),
{"node_modules/"}, REQUIRED):
"Source files must end in a newline.",


@@ -0,0 +1,16 @@
# Editor configuration, see https://editorconfig.org
root = true
[*]
charset = utf-8
indent_style = space
indent_size = 2
insert_final_newline = true
trim_trailing_whitespace = true
[*.ts]
quote_type = single
[*.md]
max_line_length = off
trim_trailing_whitespace = false

console-webapp/.gitignore

@@ -0,0 +1,42 @@
# See http://help.github.com/ignore-files/ for more about ignoring files.
# Compiled output
/dist
/tmp
/out-tsc
/bazel-out
# Node
/node_modules
npm-debug.log
yarn-error.log
# IDEs and editors
.idea/
.project
.classpath
.c9/
*.launch
.settings/
*.sublime-workspace
# Visual Studio Code
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json
.history/*
# Miscellaneous
/.angular/cache
.sass-cache/
/connect.lock
/coverage
/libpeerconnection.log
testem.log
/typings
# System files
.DS_Store
Thumbs.db

console-webapp/README.md

@@ -0,0 +1,52 @@
# ConsoleWebapp
A web application for managing [Nomulus](https://github.com/google/nomulus).
## Status
The console webapp is currently under active development, and some parts of it
are expected to change.
## Deployment
The webapp is deployed to Google App Engine as part of the nomulus default
service war. During the default service war build task, the gradle script
triggers the following:
1) The console webapp build script `buildConsoleWebappProd`, which installs
dependencies and assembles a compiled (ts -> js), minified, and optimized
static artifact (html, css, js).
2) The artifact assembled in step 1 is then copied to the core project's web
artifact location, so that it can be deployed with the rest of the core webapp.
## Development server
Run `npm run start:dev` to start both the webapp dev server and an API server instance.
Navigate to `http://localhost:4200/`. The application will automatically reload
if you change any of the source files.
## Code scaffolding
Run `ng generate component component-name` to generate a new component. You can
also use `ng generate directive|pipe|service|class|guard|interface|enum|module`.
## Build
Run `ng build` to build the project. The build artifacts will be stored in
the `dist/` directory.
## Running unit tests
Run `ng test` to execute the unit tests
via [Karma](https://karma-runner.github.io).
## Running end-to-end tests
Run `ng e2e` to execute the end-to-end tests via a platform of your choice. To
use this command, you need to first add a package that implements end-to-end
testing capabilities.
## Further help
To get more help on the Angular CLI use `ng help` or go check out
the [Angular CLI Overview and Command Reference](https://angular.io/cli) page.

console-webapp/angular.json

@@ -0,0 +1,109 @@
{
"$schema": "./node_modules/@angular/cli/lib/config/schema.json",
"version": 1,
"newProjectRoot": "projects",
"projects": {
"console-webapp": {
"projectType": "application",
"schematics": {
"@schematics/angular:component": {
"style": "less"
}
},
"root": "",
"sourceRoot": "src",
"prefix": "app",
"architect": {
"build": {
"builder": "@angular-devkit/build-angular:browser",
"options": {
"outputPath": "dist/console-webapp",
"index": "src/index.html",
"main": "src/main.ts",
"polyfills": "src/polyfills.ts",
"tsConfig": "tsconfig.app.json",
"inlineStyleLanguage": "less",
"assets": [
"src/favicon.ico",
"src/assets"
],
"styles": [
"./node_modules/@angular/material/prebuilt-themes/indigo-pink.css",
"src/styles.less"
],
"scripts": []
},
"configurations": {
"production": {
"budgets": [
{
"type": "initial",
"maximumWarning": "500kb",
"maximumError": "1mb"
},
{
"type": "anyComponentStyle",
"maximumWarning": "2kb",
"maximumError": "4kb"
}
],
"fileReplacements": [
{
"replace": "src/environments/environment.ts",
"with": "src/environments/environment.prod.ts"
}
],
"outputHashing": "all"
},
"development": {
"buildOptimizer": false,
"optimization": false,
"vendorChunk": true,
"extractLicenses": false,
"sourceMap": true,
"namedChunks": true
}
},
"defaultConfiguration": "production"
},
"serve": {
"builder": "@angular-devkit/build-angular:dev-server",
"configurations": {
"production": {
"browserTarget": "console-webapp:build:production"
},
"development": {
"browserTarget": "console-webapp:build:development"
}
},
"defaultConfiguration": "development"
},
"extract-i18n": {
"builder": "@angular-devkit/build-angular:extract-i18n",
"options": {
"browserTarget": "console-webapp:build"
}
},
"test": {
"builder": "@angular-devkit/build-angular:karma",
"options": {
"main": "src/test.ts",
"polyfills": "src/polyfills.ts",
"tsConfig": "tsconfig.spec.json",
"karmaConfig": "karma.conf.js",
"inlineStyleLanguage": "less",
"assets": [
"src/favicon.ico",
"src/assets"
],
"styles": [
"./node_modules/@angular/material/prebuilt-themes/indigo-pink.css",
"src/styles.less"
],
"scripts": []
}
}
}
}
}
}


@@ -0,0 +1,54 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
def consoleDir = "${rootDir}/console-webapp"
clean {
delete "${consoleDir}/node_modules"
delete "${consoleDir}/dist"
}
task npmInstallDeps(type: Exec) {
workingDir "${consoleDir}/"
executable 'npm'
args 'i', '--no-audit', '--no-fund', '--loglevel=error'
}
task runConsoleWebappLocally(type: Exec) {
workingDir "${consoleDir}/"
executable 'npm'
args 'run', 'start:dev'
}
task runConsoleWebappUnitTests(type: Exec) {
workingDir "${consoleDir}/"
executable 'npm'
args 'run', 'test'
}
task buildConsoleWebappNonProd(type: Exec) {
workingDir "${consoleDir}/"
executable 'npm'
args 'run', 'build'
}
// Keeping this the same as non-prod for now, until we figure out which optimizations we want to include
task buildConsoleWebappProd(type: Exec) {
workingDir "${consoleDir}/"
executable 'npm'
args 'run', 'build'
}
tasks.runConsoleWebappUnitTests.dependsOn(tasks.npmInstallDeps)
tasks.buildConsoleWebappProd.dependsOn(tasks.npmInstallDeps)


@@ -0,0 +1,4 @@
# This is a Gradle generated file for dependency locking.
# Manual edits can break the build and are not advised.
# This file is expected to be part of source control.
empty=classpath


@@ -0,0 +1,7 @@
{
"/registrar":
{
"target": "http://localhost:8080/registrar",
"secure": false
}
}
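For illustration, here is a minimal sketch of how code served by the dev server might call the backend through this proxy during `npm run start:dev`: requests to the relative `/registrar` path are forwarded to `http://localhost:8080`. The `RegistrarService` below is hypothetical (not part of this change) and assumes `HttpClientModule` is imported in the app module.

```typescript
// Hypothetical service illustrating the /registrar proxy route configured above.
// Assumes HttpClientModule has been added to the AppModule imports.
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class RegistrarService {
  constructor(private http: HttpClient) {}

  // The relative path is resolved by the dev-server proxy locally and by the
  // deployed backend in production.
  getRegistrarData(): Observable<unknown> {
    return this.http.get<unknown>('/registrar');
  }
}
```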


@@ -0,0 +1,48 @@
# This is a Gradle generated file for dependency locking.
# Manual edits can break the build and are not advised.
# This file is expected to be part of source control.
antlr:antlr:2.7.7=checkstyle
com.github.ben-manes.caffeine:caffeine:2.7.0=annotationProcessor,errorprone,testAnnotationProcessor
com.github.kevinstern:software-and-algorithms:1.0=annotationProcessor,errorprone,testAnnotationProcessor
com.google.auto:auto-common:0.10=annotationProcessor,errorprone,testAnnotationProcessor
com.google.code.findbugs:jFormatString:3.0.0=annotationProcessor,errorprone,testAnnotationProcessor
com.google.code.findbugs:jsr305:3.0.2=annotationProcessor,checkstyle,errorprone,testAnnotationProcessor
com.google.errorprone:error_prone_annotation:2.3.4=annotationProcessor,errorprone,testAnnotationProcessor
com.google.errorprone:error_prone_annotations:2.3.4=annotationProcessor,checkstyle,errorprone,testAnnotationProcessor
com.google.errorprone:error_prone_check_api:2.3.4=annotationProcessor,errorprone,testAnnotationProcessor
com.google.errorprone:error_prone_core:2.3.4=annotationProcessor,errorprone,testAnnotationProcessor
com.google.errorprone:error_prone_type_annotations:2.3.4=annotationProcessor,errorprone,testAnnotationProcessor
com.google.guava:failureaccess:1.0.1=annotationProcessor,checkstyle,errorprone,testAnnotationProcessor
com.google.guava:guava:27.0.1-jre=annotationProcessor,errorprone,testAnnotationProcessor
com.google.guava:guava:29.0-jre=checkstyle
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava=annotationProcessor,checkstyle,errorprone,testAnnotationProcessor
com.google.j2objc:j2objc-annotations:1.1=annotationProcessor,errorprone,testAnnotationProcessor
com.google.j2objc:j2objc-annotations:1.3=checkstyle
com.google.protobuf:protobuf-java:3.4.0=annotationProcessor,errorprone,testAnnotationProcessor
com.googlecode.java-diff-utils:diffutils:1.3.0=annotationProcessor,errorprone,testAnnotationProcessor
com.puppycrawl.tools:checkstyle:8.37=checkstyle
commons-beanutils:commons-beanutils:1.9.4=checkstyle
commons-collections:commons-collections:3.2.2=checkstyle
info.picocli:picocli:4.5.2=checkstyle
net.sf.saxon:Saxon-HE:10.3=checkstyle
org.antlr:antlr4-runtime:4.8-1=checkstyle
org.checkerframework:checker-qual:2.11.1=checkstyle
org.checkerframework:checker-qual:3.0.0=annotationProcessor,errorprone,testAnnotationProcessor
org.checkerframework:dataflow:3.0.0=annotationProcessor,errorprone,testAnnotationProcessor
org.checkerframework:javacutil:3.0.0=annotationProcessor,errorprone,testAnnotationProcessor
org.codehaus.mojo:animal-sniffer-annotations:1.17=annotationProcessor,errorprone,testAnnotationProcessor
org.jacoco:org.jacoco.agent:0.8.7=jacocoAgent,jacocoAnt
org.jacoco:org.jacoco.ant:0.8.7=jacocoAnt
org.jacoco:org.jacoco.core:0.8.7=jacocoAnt
org.jacoco:org.jacoco.report:0.8.7=jacocoAnt
org.javassist:javassist:3.26.0-GA=checkstyle
org.ow2.asm:asm-analysis:9.1=jacocoAnt
org.ow2.asm:asm-commons:9.1=jacocoAnt
org.ow2.asm:asm-tree:9.1=jacocoAnt
org.ow2.asm:asm:9.1=jacocoAnt
org.pcollections:pcollections:2.1.2=annotationProcessor,errorprone,testAnnotationProcessor
org.plumelib:plume-util:1.0.6=annotationProcessor,errorprone,testAnnotationProcessor
org.plumelib:reflection-util:0.0.2=annotationProcessor,errorprone,testAnnotationProcessor
org.plumelib:require-javadoc:0.1.0=annotationProcessor,errorprone,testAnnotationProcessor
org.reflections:reflections:0.9.12=checkstyle
empty=archives,compileClasspath,default,deploy_jar,errorproneJavac,runtimeClasspath,testCompileClasspath,testRuntimeClasspath


@@ -0,0 +1,44 @@
// Karma configuration file, see link for more information
// https://karma-runner.github.io/1.0/config/configuration-file.html
module.exports = function (config) {
config.set({
basePath: '',
frameworks: ['jasmine', '@angular-devkit/build-angular'],
plugins: [
require('karma-jasmine'),
require('karma-chrome-launcher'),
require('karma-jasmine-html-reporter'),
require('karma-coverage'),
require('@angular-devkit/build-angular/plugins/karma')
],
client: {
jasmine: {
// you can add configuration options for Jasmine here
// the possible options are listed at https://jasmine.github.io/api/edge/Configuration.html
// for example, you can disable the random execution with `random: false`
// or set a specific seed with `seed: 4321`
},
clearContext: false // leave Jasmine Spec Runner output visible in browser
},
jasmineHtmlReporter: {
suppressAll: true // removes the duplicated traces
},
coverageReporter: {
dir: require('path').join(__dirname, './coverage/console-webapp'),
subdir: '.',
reporters: [
{ type: 'html' },
{ type: 'text-summary' }
]
},
reporters: ['progress', 'kjhtml'],
port: 9876,
colors: true,
logLevel: config.LOG_INFO,
autoWatch: true,
browsers: ['Chrome'],
singleRun: false,
restartOnFileChange: true
});
};

console-webapp/package-lock.json (generated)

File diff suppressed because it is too large.


@@ -0,0 +1,45 @@
{
"name": "console-webapp",
"version": "0.0.0",
"scripts": {
"ng": "ng",
"start": "ng serve",
"build": "ng build --base-href=/console/",
"build:local": "ng build --base-href=/default/console/",
"watch": "ng build --watch --configuration development",
"test": "ng test --browsers=ChromeHeadless --watch=false",
"run:dev": "",
"start:dev": "concurrently \"./../gradlew :core:runTestServer\" \"ng serve --proxy-config dev-proxy.config.json\""
},
"private": true,
"dependencies": {
"@angular/animations": "^15.2.2",
"@angular/cdk": "^15.2.2",
"@angular/common": "^15.2.2",
"@angular/compiler": "^15.2.2",
"@angular/core": "^15.2.2",
"@angular/forms": "^15.2.2",
"@angular/material": "^15.2.2",
"@angular/platform-browser": "^15.2.2",
"@angular/platform-browser-dynamic": "^15.2.2",
"@angular/router": "^15.2.2",
"rxjs": "~7.5.0",
"tslib": "^2.3.0",
"zone.js": "~0.11.4"
},
"devDependencies": {
"@angular-devkit/build-angular": "^15.2.4",
"@angular/cli": "~15.2.4",
"@angular/compiler-cli": "^15.2.2",
"@types/jasmine": "~4.0.0",
"@types/node": "^18.11.18",
"concurrently": "^7.6.0",
"jasmine-core": "~4.3.0",
"karma": "~6.4.0",
"karma-chrome-launcher": "~3.1.0",
"karma-coverage": "~2.2.0",
"karma-jasmine": "~5.1.0",
"karma-jasmine-html-reporter": "~2.0.0",
"typescript": "~4.9.4"
}
}


@@ -0,0 +1,29 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';
import {TldsComponent} from './tlds/tlds.component';
import {HomeComponent} from './home/home.component';
const routes: Routes = [
{ path: 'home', component: HomeComponent },
{ path: 'tlds', component: TldsComponent },
];
@NgModule({
imports: [RouterModule.forRoot(routes)],
exports: [RouterModule]
})
export class AppRoutingModule { }


@@ -0,0 +1,14 @@
<div class="toolbar" role="banner">
Nomulus Console
</div>
<div class="content" role="main">
<nav>
<ul>
<li><a routerLink="/home" routerLinkActive="active" ariaCurrentWhenActive="page">Home page</a></li>
<li><a routerLink="/tlds" routerLinkActive="active" ariaCurrentWhenActive="page">TLDs</a></li>
</ul>
</nav>
</div>
<router-outlet></router-outlet>


@@ -0,0 +1,35 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
:host {
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol";
font-size: 14px;
color: #333;
box-sizing: border-box;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
h1,
h2,
h3,
h4,
h5,
h6 {
margin: 8px 0;
}
p {
margin: 0;
}


@@ -0,0 +1,37 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import { TestBed } from '@angular/core/testing';
import { RouterTestingModule } from '@angular/router/testing';
import { AppComponent } from './app.component';
describe('AppComponent', () => {
beforeEach(async () => {
await TestBed.configureTestingModule({
imports: [
RouterTestingModule
],
declarations: [
AppComponent
],
}).compileComponents();
});
it('should create the app', () => {
const fixture = TestBed.createComponent(AppComponent);
const app = fixture.componentInstance;
expect(app).toBeTruthy();
});
});


@@ -0,0 +1,24 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import { Component } from '@angular/core';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.less']
})
export class AppComponent {
}


@@ -0,0 +1,41 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import {MaterialModule} from './material.module';
import { HomeComponent } from './home/home.component';
import { TldsComponent } from './tlds/tlds.component';
@NgModule({
declarations: [
AppComponent,
HomeComponent,
TldsComponent,
],
imports: [
MaterialModule,
BrowserModule,
AppRoutingModule,
BrowserAnimationsModule
],
providers: [],
bootstrap: [AppComponent]
})
export class AppModule { }


@@ -0,0 +1,14 @@
<h3>Recent Activity</h3>
<table mat-table [dataSource]="dataSource" class="mat-elevation-z8 console-home__activity">
<ng-container *ngFor="let column of columns" [matColumnDef]="column.columnDef">
<th mat-header-cell *matHeaderCellDef>
{{column.header}}
</th>
<td mat-cell *matCellDef="let row">
{{column.cell(row)}}
</td>
</ng-container>
<tr mat-header-row *matHeaderRowDef="displayedColumns"></tr>
<tr mat-row *matRowDef="let row; columns: displayedColumns;"></tr>
</table>


@@ -0,0 +1,14 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.


@@ -0,0 +1,39 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import { ComponentFixture, TestBed } from '@angular/core/testing';
import { HomeComponent } from './home.component';
import {MaterialModule} from '../material.module';
describe('HomeComponent', () => {
let component: HomeComponent;
let fixture: ComponentFixture<HomeComponent>;
beforeEach(async () => {
await TestBed.configureTestingModule({
imports: [MaterialModule],
declarations: [ HomeComponent ]
})
.compileComponents();
fixture = TestBed.createComponent(HomeComponent);
component = fixture.componentInstance;
fixture.detectChanges();
});
it('should create', () => {
expect(component).toBeTruthy();
});
});


@@ -0,0 +1,94 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import { Component } from '@angular/core';
export interface ActivityRecord {
eventType: string;
userName: string;
registrarName: string;
timestamp: string;
details: string
}
const MOCK_DATA: ActivityRecord[] = [
{eventType: "Export DUMS", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Update Contact", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Delete Domain", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Export DUMS", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Update Contact", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Delete Domain", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Export DUMS", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Update Contact", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Delete Domain", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Export DUMS", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Update Contact", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Delete Domain", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Export DUMS", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Update Contact", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Delete Domain", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Export DUMS", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Update Contact", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Delete Domain", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Export DUMS", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Update Contact", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Delete Domain", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Export DUMS", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Update Contact", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
{eventType: "Delete Domain", userName:"user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
];
@Component({
selector: 'app-home',
templateUrl: './home.component.html',
styleUrls: ['./home.component.less']
})
export class HomeComponent {
columns = [
{
columnDef: 'eventType',
header: 'Event Type',
cell:(record: ActivityRecord) => `${record.eventType}`,
},
{
columnDef: 'userName',
header: 'User',
cell: (record: ActivityRecord) => `${record.userName}`,
},
{
columnDef: 'registrarName',
header: 'Registrar',
cell: (record: ActivityRecord) => `${record.registrarName}`,
},
{
columnDef: 'timestamp',
header: 'Timestamp',
cell: (record: ActivityRecord) => `${record.timestamp}`,
},
{
columnDef: 'details',
header: 'Details',
cell: (record: ActivityRecord) => `${record.details}`,
},
];
dataSource = MOCK_DATA;
displayedColumns = this.columns.map(c => c.columnDef);
constructor() {
}
}


@@ -0,0 +1,26 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import {NgModule} from '@angular/core';
import {MatCardModule} from '@angular/material/card';
import {MatTableModule} from '@angular/material/table';
const MATERIAL_MODULES = [
MatCardModule,
MatTableModule,
];
@NgModule({imports: MATERIAL_MODULES, exports: MATERIAL_MODULES})
export class MaterialModule {
}


@@ -0,0 +1,24 @@
<div class="console-tlds__cards">
<mat-card class="console-tlds__card">
<mat-card-title>.how</mat-card-title>
<mat-card-subtitle>A place for thinkers, tinkerers, and knowledge seekers</mat-card-subtitle>
<mat-card-actions class="console-tlds__card-links">
<a title="Onboarding Now" href="#" target="_blank" rel="noopener">Onboarding Now</a>
<a title="Marketing Materials" href="#" target="_blank" rel="noopener">Marketing Materials</a>
<a title="Visit get.how for more information" href="#" target="_blank" rel="noopener">Visit get.how for more information</a>
</mat-card-actions>
</mat-card>
</div>
<div class="console-tlds__cards">
<mat-card class="console-tlds__card">
<mat-card-title>.soy</mat-card-title>
<mat-card-subtitle>A place for thinkers, tinkerers, and knowledge seekers</mat-card-subtitle>
<mat-card-actions class="console-tlds__card-links">
<a title="Onboarding Now" href="#" target="_blank" rel="noopener">Onboarding Now</a>
<a title="Marketing Materials" href="#" target="_blank" rel="noopener">Marketing Materials</a>
<a title="Visit get.how for more information" href="#" target="_blank" rel="noopener">Visit iam.soy for more information</a>
</mat-card-actions>
</mat-card>
</div>


@@ -0,0 +1,28 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
.console-tlds {
&__cards {
display: flex;
border-top: 1px solid #ddd;
padding: 1rem;
}
&__card {
max-width: 300px;
}
&__card-links {
display: flex;
flex-direction: column;
}
}


@@ -0,0 +1,39 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import { ComponentFixture, TestBed } from '@angular/core/testing';
import { TldsComponent } from './tlds.component';
import {MaterialModule} from '../material.module';
describe('TldsComponent', () => {
let component: TldsComponent;
let fixture: ComponentFixture<TldsComponent>;
beforeEach(async () => {
await TestBed.configureTestingModule({
imports: [MaterialModule],
declarations: [ TldsComponent ]
})
.compileComponents();
fixture = TestBed.createComponent(TldsComponent);
component = fixture.componentInstance;
fixture.detectChanges();
});
it('should create', () => {
expect(component).toBeTruthy();
});
});


@@ -1,4 +1,4 @@
// Copyright 2017 The Nomulus Authors. All Rights Reserved.
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -12,23 +12,18 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.util;
import { Component, OnInit } from '@angular/core';
import java.io.Serializable;
@Component({
selector: 'app-tlds',
templateUrl: './tlds.component.html',
styleUrls: ['./tlds.component.less']
})
export class TldsComponent implements OnInit {
/** Used to query whether requests are still running. */
public interface RequestStatusChecker extends Serializable {
constructor() { }
/**
* Returns the unique log identifier of the current request.
*
* <p>Multiple calls must return the same value during the same Request.
*/
String getLogId();
ngOnInit(): void {
}
/**
* Returns true if the given request is currently running.
*/
boolean isRunning(String requestLogId);
}


@@ -0,0 +1,17 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
export const environment = {
production: true
};


@@ -0,0 +1,30 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// This file can be replaced during build by using the `fileReplacements` array.
// `ng build` replaces `environment.ts` with `environment.prod.ts`.
// The list of file replacements can be found in `angular.json`.
export const environment = {
production: false
};
/*
* For easier debugging in development mode, you can import the following file
* to ignore zone related error stack frames such as `zone.run`, `zoneDelegate.invokeTask`.
*
* This import should be commented out in production mode because it will have a negative impact
* on performance if an error is thrown.
*/
// import 'zone.js/plugins/zone-error'; // Included with Angular CLI.

Binary file not shown (948 B).


@@ -0,0 +1,16 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Nomulus Console</title>
<base href="/">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="icon" type="image/x-icon" href="favicon.ico">
<link rel="preconnect" href="https://fonts.gstatic.com">
<link href="https://fonts.googleapis.com/css2?family=Roboto:wght@300;400;500&display=swap" rel="stylesheet">
<link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
</head>
<body class="mat-typography">
<app-root></app-root>
</body>
</html>


@@ -0,0 +1,26 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import { enableProdMode } from '@angular/core';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';
import { environment } from './environments/environment';
if (environment.production) {
enableProdMode();
}
platformBrowserDynamic().bootstrapModule(AppModule)
.catch(err => console.error(err));


@@ -0,0 +1,53 @@
/**
* This file includes polyfills needed by Angular and is loaded before the app.
* You can add your own extra polyfills to this file.
*
* This file is divided into 2 sections:
* 1. Browser polyfills. These are applied before loading ZoneJS and are sorted by browsers.
* 2. Application imports. Files imported after ZoneJS that should be loaded before your main
* file.
*
* The current setup is for so-called "evergreen" browsers; the last versions of browsers that
* automatically update themselves. This includes recent versions of Safari, Chrome (including
* Opera), Edge on the desktop, and iOS and Chrome on mobile.
*
* Learn more in https://angular.io/guide/browser-support
*/
/***************************************************************************************************
* BROWSER POLYFILLS
*/
/**
 * By default, zone.js patches all possible macroTasks and DOM events.
 * You can disable parts of the macroTask/DOM-event patching by setting the flags
 * below. Because those flags need to be set before `zone.js` is loaded, and
 * webpack hoists imports to the top of the bundle, you need to create a separate
 * file in this directory (for example: zone-flags.ts), put the desired flags into
 * that file, and then add the following import before importing zone.js:
 * import './zone-flags';
*
* The flags allowed in zone-flags.ts are listed here.
*
* The following flags will work for all browsers.
*
* (window as any).__Zone_disable_requestAnimationFrame = true; // disable patch requestAnimationFrame
* (window as any).__Zone_disable_on_property = true; // disable patch onProperty such as onclick
* (window as any).__zone_symbol__UNPATCHED_EVENTS = ['scroll', 'mousemove']; // disable patch specified eventNames
*
 * In IE/Edge developer tools, addEventListener will also be wrapped by zone.js;
 * with the following flag, it will bypass the `zone.js` patch for IE/Edge:
*
* (window as any).__Zone_enable_cross_context_check = true;
*
*/
/***************************************************************************************************
* Zone JS is required by default for Angular itself.
*/
import 'zone.js'; // Included with Angular CLI.
/***************************************************************************************************
* APPLICATION IMPORTS
*/
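To make the flag mechanism described above concrete, here is a minimal sketch of the optional `zone-flags.ts` file, using one of the flags listed in the comments as an example (creating this file is optional and not part of this change):

```typescript
// zone-flags.ts (hypothetical file in the same directory as polyfills.ts)
// Opt out of the requestAnimationFrame patch only, as described above.
(window as any).__Zone_disable_requestAnimationFrame = true;
```

It would then be imported here with `import './zone-flags';` placed immediately before `import 'zone.js';`.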


@@ -0,0 +1,18 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
html, body { height: 100%; }
body { margin: 0; font-family: Roboto, "Helvetica Neue", sans-serif; }


@@ -0,0 +1,14 @@
// This file is required by karma.conf.js and recursively loads all the .spec and framework files
import 'zone.js/testing';
import { getTestBed } from '@angular/core/testing';
import {
BrowserDynamicTestingModule,
platformBrowserDynamicTesting
} from '@angular/platform-browser-dynamic/testing';
// First, initialize the Angular testing environment.
getTestBed().initTestEnvironment(
BrowserDynamicTestingModule,
platformBrowserDynamicTesting(),
);


@@ -0,0 +1,15 @@
/* To learn more about this file see: https://angular.io/config/tsconfig. */
{
"extends": "./tsconfig.json",
"compilerOptions": {
"outDir": "./out-tsc/app",
"types": []
},
"files": [
"src/main.ts",
"src/polyfills.ts"
],
"include": [
"src/**/*.d.ts"
]
}


@@ -0,0 +1,33 @@
/* To learn more about this file see: https://angular.io/config/tsconfig. */
{
"compileOnSave": false,
"compilerOptions": {
"baseUrl": "./",
"outDir": "./dist/out-tsc",
"forceConsistentCasingInFileNames": true,
"strict": true,
"noImplicitOverride": true,
"noPropertyAccessFromIndexSignature": true,
"noImplicitReturns": true,
"noFallthroughCasesInSwitch": true,
"sourceMap": true,
"declaration": false,
"downlevelIteration": true,
"experimentalDecorators": true,
"moduleResolution": "node",
"importHelpers": true,
"target": "ES2022",
"module": "es2020",
"lib": [
"es2020",
"dom"
],
"useDefineForClassFields": false
},
"angularCompilerOptions": {
"enableI18nLegacyMessageIdFormat": false,
"strictInjectionParameters": true,
"strictInputAccessModifiers": true,
"strictTemplates": true
}
}


@@ -0,0 +1,18 @@
/* To learn more about this file see: https://angular.io/config/tsconfig. */
{
"extends": "./tsconfig.json",
"compilerOptions": {
"outDir": "./out-tsc/spec",
"types": [
"jasmine"
]
},
"files": [
"src/test.ts",
"src/polyfills.ts"
],
"include": [
"src/**/*.spec.ts",
"src/**/*.d.ts"
]
}


@@ -68,8 +68,6 @@ def dockerIncompatibleTestPatterns = [
// Nomulus classes, e.g., threads and objects retained by frameworks.
// TODO(weiminyu): identify cause and fix offending tests.
def fragileTestPatterns = [
// Test Datastore inexplicably aborts transaction.
"google/registry/model/tmch/ClaimsListShardTest.*",
// Changes cache timeouts and for some reason appears to have contention
// with other tests.
"google/registry/whois/WhoisCommandFactoryTest.*",
@@ -168,7 +166,6 @@ dependencies {
implementation deps['com.beust:jcommander']
implementation deps['com.github.ben-manes.caffeine:caffeine']
implementation deps['com.google.api:gax']
implementation deps['com.google.api.grpc:proto-google-cloud-datastore-v1']
implementation deps['com.google.api.grpc:proto-google-common-protos']
implementation deps['com.google.api.grpc:proto-google-cloud-secretmanager-v1']
implementation deps['com.google.api-client:google-api-client']
@@ -192,7 +189,6 @@ dependencies {
implementation deps['com.google.auth:google-auth-library-credentials']
implementation deps['com.google.auth:google-auth-library-oauth2-http']
implementation deps['com.google.cloud.bigdataoss:util']
implementation deps['com.google.cloud.datastore:datastore-v1-proto-client']
implementation deps['com.google.cloud.sql:jdbc-socket-factory-core']
runtimeOnly deps['com.google.cloud.sql:postgres-socket-factory']
implementation deps['com.google.cloud:google-cloud-secretmanager']
@@ -270,6 +266,8 @@ dependencies {
implementation deps['org.jsoup:jsoup']
testImplementation deps['org.mortbay.jetty:jetty']
implementation deps['org.postgresql:postgresql']
implementation "org.eclipse.jetty:jetty-server:9.4.49.v20220914"
implementation "org.eclipse.jetty:jetty-servlet:9.4.49.v20220914"
testImplementation deps['org.seleniumhq.selenium:selenium-api']
testImplementation deps['org.seleniumhq.selenium:selenium-chrome-driver']
testImplementation deps['org.seleniumhq.selenium:selenium-java']
@@ -695,10 +693,6 @@ createToolTask(
'google.registry.tools.DevTool',
sourceSets.nonprod)
createToolTask(
'createSyntheticDomainHistories',
'google.registry.tools.javascrap.CreateSyntheticDomainHistoriesPipeline')
project.tasks.create('generateSqlSchema', JavaExec) {
classpath = sourceSets.nonprod.runtimeClasspath
main = 'google.registry.tools.DevTool'
@@ -750,9 +744,14 @@ if (environment == 'alpha') {
],
invoicing :
[
mainClass: 'google.registry.beam.invoicing.InvoicingPipeline',
mainClass: 'google.registry.beam.billing.InvoicingPipeline',
metaData : 'google/registry/beam/invoicing_pipeline_metadata.json'
],
expandBilling :
[
mainClass: 'google.registry.beam.billing.ExpandBillingRecurrencesPipeline',
metaData : 'google/registry/beam/expand_billing_recurrences_pipeline_metadata.json'
],
rde :
[
mainClass: 'google.registry.beam.rde.RdePipeline',
@@ -763,6 +762,11 @@ if (environment == 'alpha') {
mainClass: 'google.registry.beam.resave.ResaveAllEppResourcesPipeline',
metaData: 'google/registry/beam/resave_all_epp_resources_pipeline_metadata.json'
],
wipeOutContactHistoryPii:
[
mainClass: 'google.registry.beam.wipeout.WipeOutContactHistoryPiiPipeline',
metaData: 'google/registry/beam/wipe_out_contact_history_pii_pipeline_metadata.json'
],
]
project.tasks.create("stageBeamPipelines") {
doLast {
@@ -1025,6 +1029,7 @@ test {
// TODO(weiminyu): Remove dependency on sqlIntegrationTest
}.dependsOn(fragileTest, outcastTest, standardTest, registryToolIntegrationTest, sqlIntegrationTest)
// When we override tests, we also break the cleanTest command.
cleanTest.dependsOn(cleanFragileTest, cleanOutcastTest, cleanStandardTest,
cleanRegistryToolIntegrationTest, cleanSqlIntegrationTest)


@@ -22,13 +22,13 @@ com.github.ben-manes.caffeine:caffeine:2.9.3=compileClasspath,default,deploy_jar
com.github.docker-java:docker-java-api:3.2.13=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.github.docker-java:docker-java-transport-zerodep:3.2.13=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.github.docker-java:docker-java-transport:3.2.13=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.github.jnr:jffi:1.3.9=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.github.jnr:jffi:1.3.10=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.github.jnr:jnr-a64asm:1.0.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.github.jnr:jnr-constants:0.10.3=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.github.jnr:jnr-enxio:0.32.13=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.github.jnr:jnr-ffi:2.2.11=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.github.jnr:jnr-posix:3.1.15=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.github.jnr:jnr-unixsocket:0.38.17=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.github.jnr:jnr-constants:0.10.4=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.github.jnr:jnr-enxio:0.32.14=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.github.jnr:jnr-ffi:2.2.13=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.github.jnr:jnr-posix:3.1.16=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.github.jnr:jnr-unixsocket:0.38.19=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.github.jnr:jnr-x86asm:1.0.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.github.kevinstern:software-and-algorithms:1.0=annotationProcessor,errorprone,nonprodAnnotationProcessor,testAnnotationProcessor
com.google.android:annotations:4.1.1.4=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testRuntimeClasspath
@@ -37,48 +37,47 @@ com.google.api-client:google-api-client-jackson2:1.32.2=compileClasspath,default
com.google.api-client:google-api-client-java6:1.35.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api-client:google-api-client-servlet:1.35.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api-client:google-api-client:1.35.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:gapic-google-cloud-storage-v2:2.16.0-alpha=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-bigquerystorage-v1:2.23.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-bigquerystorage-v1beta1:0.147.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-bigquerystorage-v1beta2:0.147.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:gapic-google-cloud-storage-v2:2.17.2-alpha=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-bigquerystorage-v1:2.25.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-bigquerystorage-v1beta1:0.149.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-bigquerystorage-v1beta2:0.149.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-bigtable-admin-v2:1.27.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-bigtable-v2:2.14.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-pubsub-v1:1.102.20=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-pubsublite-v1:1.7.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-spanner-admin-database-v1:6.31.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-spanner-admin-instance-v1:6.31.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-spanner-v1:6.31.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-storage-v2:2.16.0-alpha=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-common-protos:2.9.6=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-iam-v1:1.6.22=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-bigquerystorage-v1:2.23.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-bigquerystorage-v1beta1:0.147.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-bigquerystorage-v1beta2:0.147.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-bigtable-admin-v2:2.14.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-bigtable-v2:2.14.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-datastore-v1:0.102.5=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-firestore-v1:3.6.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-bigtable-v2:2.16.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-pubsub-v1:1.103.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-pubsublite-v1:1.9.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-spanner-admin-database-v1:6.33.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-spanner-admin-instance-v1:6.33.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-spanner-v1:6.33.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-cloud-storage-v2:2.17.2-alpha=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:grpc-google-common-protos:2.10.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-bigquerystorage-v1:2.25.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-bigquerystorage-v1beta1:0.149.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-bigquerystorage-v1beta2:0.149.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-bigtable-admin-v2:2.16.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-bigtable-v2:2.16.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-datastore-v1:0.103.5=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-firestore-v1:3.7.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-monitoring-v3:1.64.0=compileClasspath,nonprodCompileClasspath,testCompileClasspath
com.google.api.grpc:proto-google-cloud-monitoring-v3:3.4.6=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-pubsub-v1:1.102.20=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-pubsublite-v1:1.7.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-secretmanager-v1:2.6.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-secretmanager-v1beta1:2.6.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-spanner-admin-database-v1:6.31.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-spanner-admin-instance-v1:6.31.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-spanner-v1:6.31.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-storage-v2:2.16.0-alpha=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-tasks-v2:2.6.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-tasks-v2beta2:0.96.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-tasks-v2beta3:0.96.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-common-protos:2.11.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-iam-v1:1.6.22=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api:api-common:2.2.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api:gax-grpc:2.20.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api:gax-httpjson:0.105.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api:gax:2.20.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-monitoring-v3:3.6.0=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-pubsub-v1:1.103.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-pubsublite-v1:1.9.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-secretmanager-v1:2.9.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-secretmanager-v1beta1:2.9.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-spanner-admin-database-v1:6.33.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-spanner-admin-instance-v1:6.33.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-spanner-v1:6.33.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-storage-v2:2.17.2-alpha=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-tasks-v2:2.9.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-tasks-v2beta2:0.99.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-cloud-tasks-v2beta3:0.99.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-common-protos:2.13.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api.grpc:proto-google-iam-v1:1.8.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api:api-common:2.5.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api:gax-grpc:2.22.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api:gax-httpjson:0.107.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.api:gax:2.22.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.apis:google-api-services-admin-directory:directory_v1-rev118-1.25.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.apis:google-api-services-appengine:v1-rev20221205-2.0.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.apis:google-api-services-appengine:v1-rev20230109-2.0.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.apis:google-api-services-bigquery:v2-rev20220924-2.0.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.apis:google-api-services-clouddebugger:v2-rev20220318-2.0.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.apis:google-api-services-cloudresourcemanager:v1-rev20220828-2.0.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
@@ -88,22 +87,21 @@ com.google.apis:google-api-services-drive:v2-rev393-1.25.0=compileClasspath,defa
com.google.apis:google-api-services-groupssettings:v1-rev20210624-2.0.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.apis:google-api-services-healthcare:v1-rev20220818-2.0.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.apis:google-api-services-iamcredentials:v1-rev20210326-1.32.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.apis:google-api-services-monitoring:v3-rev20221205-2.0.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.apis:google-api-services-monitoring:v3-rev20230123-2.0.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.apis:google-api-services-pubsub:v1-rev20220904-2.0.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.apis:google-api-services-sheets:v4-rev20220927-2.0.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.apis:google-api-services-sqladmin:v1beta4-rev20221017-2.0.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.apis:google-api-services-sheets:v4-rev20221216-2.0.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.apis:google-api-services-sqladmin:v1beta4-rev20230111-2.0.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.apis:google-api-services-storage:v1-rev20220705-2.0.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.appengine:appengine-api-1.0-sdk:1.9.86=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath
com.google.appengine:appengine-api-1.0-sdk:2.0.10=testCompileClasspath,testRuntimeClasspath
com.google.appengine:appengine-api-stubs:2.0.10=testCompileClasspath,testRuntimeClasspath
com.google.appengine:appengine-testing:1.9.86=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.auth:google-auth-library-credentials:1.13.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.auth:google-auth-library-oauth2-http:1.13.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.auth:google-auth-library-credentials:1.14.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.auth:google-auth-library-oauth2-http:1.14.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.auto.service:auto-service-annotations:1.0.1=annotationProcessor,compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.auto.service:auto-service:1.0.1=annotationProcessor
com.google.auto.value:auto-value-annotations:1.10.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.auto.value:auto-value:1.10.1=annotationProcessor,testAnnotationProcessor
com.google.auto.value:auto-value:1.9=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.auto.value:auto-value:1.10.1=annotationProcessor,compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testAnnotationProcessor,testCompileClasspath,testRuntimeClasspath
com.google.auto:auto-common:0.10=errorprone,nonprodAnnotationProcessor,testAnnotationProcessor
com.google.auto:auto-common:1.2=annotationProcessor
com.google.closure-stylesheets:closure-stylesheets:1.5.0=css
@@ -112,30 +110,30 @@ com.google.cloud.bigdataoss:util:2.2.6=compileClasspath,default,deploy_jar,nonpr
com.google.cloud.bigtable:bigtable-client-core:1.26.3=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud.bigtable:bigtable-metrics-api:1.26.3=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud.datastore:datastore-v1-proto-client:2.9.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud.sql:jdbc-socket-factory-core:1.7.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud.sql:postgres-socket-factory:1.7.2=default,deploy_jar,runtimeClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-bigquerystorage:2.23.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-bigtable-stats:2.14.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-bigtable:2.14.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-core-grpc:2.9.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-core-http:2.9.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-core:2.9.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-firestore:3.6.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud.sql:jdbc-socket-factory-core:1.9.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud.sql:postgres-socket-factory:1.9.0=default,deploy_jar,runtimeClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-bigquerystorage:2.25.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-bigtable-stats:2.16.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-bigtable:2.16.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-core-grpc:2.9.4=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-core-http:2.9.4=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-core:2.9.4=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-firestore:3.7.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-monitoring:1.82.0=compileClasspath,nonprodCompileClasspath,testCompileClasspath
com.google.cloud:google-cloud-monitoring:3.4.6=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-nio:0.125.0=testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-pubsub:1.120.20=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-pubsublite:1.7.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-secretmanager:2.6.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-spanner:6.31.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-storage:2.16.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-tasks:2.6.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:grpc-gcp:1.2.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:proto-google-cloud-firestore-bundle-v1:3.6.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-monitoring:3.6.0=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-nio:0.126.3=testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-pubsub:1.121.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-pubsublite:1.9.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-secretmanager:2.9.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-spanner:6.33.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-storage:2.17.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:google-cloud-tasks:2.9.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:grpc-gcp:1.3.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.cloud:proto-google-cloud-firestore-bundle-v1:3.7.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.code.findbugs:jFormatString:3.0.0=annotationProcessor,errorprone,nonprodAnnotationProcessor,testAnnotationProcessor
com.google.code.findbugs:jsr305:3.0.1=css
com.google.code.findbugs:jsr305:3.0.2=annotationProcessor,checkstyle,compileClasspath,default,deploy_jar,errorprone,nonprodAnnotationProcessor,nonprodCompileClasspath,nonprodRuntime,nonprodRuntimeClasspath,runtime,runtimeClasspath,soy,testAnnotationProcessor,testCompileClasspath,testRuntimeClasspath
com.google.code.gson:gson:2.10=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.code.gson:gson:2.10.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.code.gson:gson:2.7=css,soy
com.google.common.html.types:types:1.0.6=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,soy,testCompileClasspath,testRuntimeClasspath
com.google.dagger:dagger-compiler:2.44.2=annotationProcessor,testAnnotationProcessor
@@ -144,7 +142,7 @@ com.google.dagger:dagger-spi:2.44.2=annotationProcessor,testAnnotationProcessor
com.google.dagger:dagger:2.44.2=annotationProcessor,compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testAnnotationProcessor,testCompileClasspath,testRuntimeClasspath
com.google.devtools.ksp:symbol-processing-api:1.7.0-1.0.6=annotationProcessor,testAnnotationProcessor
com.google.errorprone:error_prone_annotation:2.3.4=annotationProcessor,errorprone,nonprodAnnotationProcessor,testAnnotationProcessor
com.google.errorprone:error_prone_annotations:2.16=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.errorprone:error_prone_annotations:2.18.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.errorprone:error_prone_annotations:2.3.4=checkstyle,errorprone,nonprodAnnotationProcessor,soy
com.google.errorprone:error_prone_annotations:2.7.1=annotationProcessor,testAnnotationProcessor
com.google.errorprone:error_prone_check_api:2.3.4=annotationProcessor,errorprone,nonprodAnnotationProcessor,testAnnotationProcessor
@@ -191,9 +189,9 @@ com.google.oauth-client:google-oauth-client-java6:1.34.1=compileClasspath,defaul
com.google.oauth-client:google-oauth-client-jetty:1.34.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.oauth-client:google-oauth-client-servlet:1.34.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.oauth-client:google-oauth-client:1.34.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.protobuf:protobuf-java-util:3.21.10=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.protobuf:protobuf-java-util:3.21.12=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.protobuf:protobuf-java:2.5.0=css
com.google.protobuf:protobuf-java:3.21.10=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.protobuf:protobuf-java:3.21.12=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
com.google.protobuf:protobuf-java:3.4.0=annotationProcessor,errorprone,nonprodAnnotationProcessor,testAnnotationProcessor
com.google.protobuf:protobuf-java:4.0.0-rc-2=soy
com.google.re2j:re2j:1.6=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
@@ -242,37 +240,37 @@ io.confluent:kafka-schema-registry-client:5.3.2=compileClasspath,default,deploy_
io.dropwizard.metrics:metrics-core:3.1.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.github.classgraph:classgraph:4.8.104=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.github.java-diff-utils:java-diff-utils:4.12=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-alts:1.51.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-api:1.51.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-auth:1.51.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-census:1.49.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-context:1.51.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-core:1.51.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-googleapis:1.51.0=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testRuntimeClasspath
io.grpc:grpc-grpclb:1.51.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-netty-shaded:1.51.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-netty:1.49.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-protobuf-lite:1.51.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-protobuf:1.51.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-rls:1.49.2=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testRuntimeClasspath
io.grpc:grpc-services:1.49.2=compileClasspath,nonprodCompileClasspath,testCompileClasspath
io.grpc:grpc-services:1.51.0=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testRuntimeClasspath
io.grpc:grpc-stub:1.51.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-xds:1.49.2=compileClasspath,nonprodCompileClasspath,testCompileClasspath
io.grpc:grpc-xds:1.51.0=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testRuntimeClasspath
io.netty:netty-buffer:4.1.77.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.netty:netty-codec-http2:4.1.77.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.netty:netty-codec-http:4.1.77.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.netty:netty-codec-socks:4.1.77.Final=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testRuntimeClasspath
io.netty:netty-codec:4.1.77.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.netty:netty-common:4.1.77.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.netty:netty-handler-proxy:4.1.77.Final=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testRuntimeClasspath
io.netty:netty-handler:4.1.77.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.netty:netty-resolver:4.1.77.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-alts:1.52.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-api:1.52.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-auth:1.52.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-census:1.50.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-context:1.52.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-core:1.52.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-googleapis:1.52.1=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testRuntimeClasspath
io.grpc:grpc-grpclb:1.52.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-netty-shaded:1.52.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-netty:1.50.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-protobuf-lite:1.52.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-protobuf:1.52.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-rls:1.50.2=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testRuntimeClasspath
io.grpc:grpc-services:1.50.2=compileClasspath,nonprodCompileClasspath,testCompileClasspath
io.grpc:grpc-services:1.52.1=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testRuntimeClasspath
io.grpc:grpc-stub:1.52.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.grpc:grpc-xds:1.50.2=compileClasspath,nonprodCompileClasspath,testCompileClasspath
io.grpc:grpc-xds:1.52.1=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testRuntimeClasspath
io.netty:netty-buffer:4.1.79.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.netty:netty-codec-http2:4.1.79.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.netty:netty-codec-http:4.1.79.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.netty:netty-codec-socks:4.1.79.Final=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testRuntimeClasspath
io.netty:netty-codec:4.1.79.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.netty:netty-common:4.1.79.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.netty:netty-handler-proxy:4.1.79.Final=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testRuntimeClasspath
io.netty:netty-handler:4.1.79.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.netty:netty-resolver:4.1.79.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.netty:netty-tcnative-boringssl-static:2.0.52.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.netty:netty-tcnative-classes:2.0.52.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.netty:netty-transport-native-unix-common:4.1.77.Final=default,deploy_jar,nonprodRuntimeClasspath,runtimeClasspath,testRuntimeClasspath
io.netty:netty-transport:4.1.77.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.netty:netty-transport-native-unix-common:4.1.79.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.netty:netty-transport:4.1.79.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.opencensus:opencensus-api:0.31.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.opencensus:opencensus-contrib-exemplar-util:0.31.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
io.opencensus:opencensus-contrib-grpc-metrics:0.31.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
@@ -304,8 +302,9 @@ jline:jline:1.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonp
joda-time:joda-time:2.10.10=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
junit:junit:4.13.2=default,nonprodCompileClasspath,nonprodRuntimeClasspath,testCompileClasspath,testRuntimeClasspath
net.arnx:nashorn-promise:0.1.1=nonprodRuntime,runtime,testRuntimeClasspath
net.bytebuddy:byte-buddy-agent:1.12.16=testCompileClasspath,testRuntimeClasspath
net.bytebuddy:byte-buddy:1.12.18=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
net.bytebuddy:byte-buddy-agent:1.12.22=testCompileClasspath,testRuntimeClasspath
net.bytebuddy:byte-buddy:1.12.18=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath
net.bytebuddy:byte-buddy:1.12.22=testCompileClasspath,testRuntimeClasspath
net.java.dev.javacc:javacc:4.1=css
net.java.dev.jna:jna:5.8.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
net.ltgt.gradle.incap:incap:0.2=annotationProcessor,testAnnotationProcessor
@@ -315,22 +314,22 @@ org.apache.arrow:arrow-format:5.0.0=compileClasspath,default,deploy_jar,nonprodC
org.apache.arrow:arrow-memory-core:5.0.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.arrow:arrow-vector:5.0.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.avro:avro:1.8.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-model-fn-execution:2.43.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-model-job-management:2.43.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-model-pipeline:2.43.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-runners-core-construction-java:2.43.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-runners-core-java:2.43.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-runners-direct-java:2.43.0=testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-runners-google-cloud-dataflow-java:2.43.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-runners-java-fn-execution:2.43.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-sdks-java-core:2.43.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-sdks-java-expansion-service:2.43.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-sdks-java-extensions-arrow:2.43.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-sdks-java-extensions-google-cloud-platform-core:2.43.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-sdks-java-extensions-protobuf:2.43.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-sdks-java-fn-execution:2.43.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-sdks-java-io-google-cloud-platform:2.43.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-sdks-java-io-kafka:2.43.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-model-fn-execution:2.44.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-model-job-management:2.44.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-model-pipeline:2.44.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-runners-core-construction-java:2.44.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-runners-core-java:2.44.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-runners-direct-java:2.44.0=testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-runners-google-cloud-dataflow-java:2.44.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-runners-java-fn-execution:2.44.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-sdks-java-core:2.44.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-sdks-java-expansion-service:2.44.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-sdks-java-extensions-arrow:2.44.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-sdks-java-extensions-google-cloud-platform-core:2.44.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-sdks-java-extensions-protobuf:2.44.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-sdks-java-fn-execution:2.44.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-sdks-java-io-google-cloud-platform:2.44.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-sdks-java-io-kafka:2.44.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-vendor-grpc-1_48_1:0.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.beam:beam-vendor-guava-26_0-jre:0.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.commons:commons-compress:1.22=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
@@ -340,8 +339,8 @@ org.apache.commons:commons-lang3:3.12.0=compileClasspath,default,deploy_jar,nonp
org.apache.commons:commons-text:1.10.0=testCompileClasspath,testRuntimeClasspath
org.apache.ftpserver:ftplet-api:1.2.0=testCompileClasspath,testRuntimeClasspath
org.apache.ftpserver:ftpserver-core:1.2.0=testCompileClasspath,testRuntimeClasspath
org.apache.httpcomponents:httpclient:4.5.13=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.httpcomponents:httpcore:4.4.15=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.httpcomponents:httpclient:4.5.14=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.httpcomponents:httpcore:4.4.16=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.apache.mina:mina-core:2.1.6=testCompileClasspath,testRuntimeClasspath
org.apache.sshd:sshd-core:2.0.0=testCompileClasspath,testRuntimeClasspath
org.apache.sshd:sshd-scp:2.0.0=testCompileClasspath,testRuntimeClasspath
@@ -356,7 +355,7 @@ org.checkerframework:checker-compat-qual:2.5.5=annotationProcessor,compileClassp
org.checkerframework:checker-qual:2.11.1=checkstyle
org.checkerframework:checker-qual:3.0.0=errorprone,nonprodAnnotationProcessor
org.checkerframework:checker-qual:3.12.0=annotationProcessor,testAnnotationProcessor
org.checkerframework:checker-qual:3.28.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.checkerframework:checker-qual:3.29.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.checkerframework:checker-qual:3.5.0=nonprodRuntime,runtime,soy
org.checkerframework:dataflow:3.0.0=annotationProcessor,errorprone,nonprodAnnotationProcessor,testAnnotationProcessor
org.checkerframework:javacutil:3.0.0=annotationProcessor,errorprone,nonprodAnnotationProcessor,testAnnotationProcessor
@@ -367,7 +366,14 @@ org.codehaus.mojo:animal-sniffer-annotations:1.22=default,deploy_jar,nonprodRunt
org.conscrypt:conscrypt-openjdk-uber:2.5.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.easymock:easymock:3.0=css
org.eclipse.angus:angus-activation:1.0.0=nonprodRuntime,runtime
org.flywaydb:flyway-core:9.10.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.eclipse.jetty:jetty-http:9.4.49.v20220914=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.eclipse.jetty:jetty-io:9.4.49.v20220914=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.eclipse.jetty:jetty-security:9.4.49.v20220914=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.eclipse.jetty:jetty-server:9.4.49.v20220914=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.eclipse.jetty:jetty-servlet:9.4.49.v20220914=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.eclipse.jetty:jetty-util-ajax:9.4.49.v20220914=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.eclipse.jetty:jetty-util:9.4.49.v20220914=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.flywaydb:flyway-core:9.12.0=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.glassfish.jaxb:jaxb-core:4.0.1=nonprodRuntime,runtime
org.glassfish.jaxb:jaxb-runtime:2.3.1=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.glassfish.jaxb:jaxb-runtime:4.0.1=nonprodRuntime,runtime
@@ -383,10 +389,10 @@ org.hamcrest:hamcrest:2.2=testCompileClasspath,testRuntimeClasspath
org.hibernate.common:hibernate-commons-annotations:5.1.2.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.hibernate:hibernate-core:5.6.14.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.hibernate:hibernate-hikaricp:5.6.14.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.jacoco:org.jacoco.agent:0.8.6=jacocoAgent,jacocoAnt
org.jacoco:org.jacoco.ant:0.8.6=jacocoAnt
org.jacoco:org.jacoco.core:0.8.6=jacocoAnt
org.jacoco:org.jacoco.report:0.8.6=jacocoAnt
org.jacoco:org.jacoco.agent:0.8.7=jacocoAgent,jacocoAnt
org.jacoco:org.jacoco.ant:0.8.7=jacocoAnt
org.jacoco:org.jacoco.core:0.8.7=jacocoAnt
org.jacoco:org.jacoco.report:0.8.7=jacocoAnt
org.javassist:javassist:3.26.0-GA=checkstyle
org.jboss.logging:jboss-logging:3.4.3.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.jboss.spec.javax.transaction:jboss-transaction-api_1.2_spec:1.1.1.Final=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
@@ -403,40 +409,40 @@ org.joda:joda-money:1.0.3=compileClasspath,default,deploy_jar,nonprodCompileClas
org.json:json:20160212=soy
org.json:json:20200518=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.jsoup:jsoup:1.15.3=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.junit-pioneer:junit-pioneer:1.9.1=testCompileClasspath,testRuntimeClasspath
org.junit.jupiter:junit-jupiter-api:5.9.1=testCompileClasspath,testRuntimeClasspath
org.junit.jupiter:junit-jupiter-engine:5.9.1=testCompileClasspath,testRuntimeClasspath
org.junit.jupiter:junit-jupiter-migrationsupport:5.9.1=testCompileClasspath,testRuntimeClasspath
org.junit.jupiter:junit-jupiter-params:5.9.1=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-commons:1.9.1=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-engine:1.9.1=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-launcher:1.9.1=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-runner:1.9.1=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-suite-api:1.9.1=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-suite-commons:1.9.1=testRuntimeClasspath
org.junit:junit-bom:5.9.1=testCompileClasspath,testRuntimeClasspath
org.junit-pioneer:junit-pioneer:2.0.0-RC1=testCompileClasspath,testRuntimeClasspath
org.junit.jupiter:junit-jupiter-api:5.9.2=testCompileClasspath,testRuntimeClasspath
org.junit.jupiter:junit-jupiter-engine:5.9.2=testCompileClasspath,testRuntimeClasspath
org.junit.jupiter:junit-jupiter-migrationsupport:5.9.2=testCompileClasspath,testRuntimeClasspath
org.junit.jupiter:junit-jupiter-params:5.9.2=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-commons:1.9.2=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-engine:1.9.2=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-launcher:1.9.2=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-runner:1.9.2=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-suite-api:1.9.2=testCompileClasspath,testRuntimeClasspath
org.junit.platform:junit-platform-suite-commons:1.9.2=testRuntimeClasspath
org.junit:junit-bom:5.9.2=testCompileClasspath,testRuntimeClasspath
org.jvnet.staxex:stax-ex:1.8=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.mockito:mockito-core:1.10.19=css
org.mockito:mockito-core:4.9.0=testCompileClasspath,testRuntimeClasspath
org.mockito:mockito-junit-jupiter:4.9.0=testCompileClasspath,testRuntimeClasspath
org.mockito:mockito-core:5.0.0=testCompileClasspath,testRuntimeClasspath
org.mockito:mockito-junit-jupiter:5.0.0=testCompileClasspath,testRuntimeClasspath
org.mortbay.jetty:jetty-util:6.1.26=testCompileClasspath,testRuntimeClasspath
org.mortbay.jetty:jetty:6.1.26=testCompileClasspath,testRuntimeClasspath
org.objenesis:objenesis:2.1=css
org.objenesis:objenesis:3.3=testRuntimeClasspath
org.opentest4j:opentest4j:1.2.0=testCompileClasspath,testRuntimeClasspath
org.ow2.asm:asm-analysis:7.0=soy
org.ow2.asm:asm-analysis:8.0.1=jacocoAnt
org.ow2.asm:asm-analysis:9.1=jacocoAnt
org.ow2.asm:asm-analysis:9.4=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.ow2.asm:asm-commons:7.0=soy
org.ow2.asm:asm-commons:8.0.1=jacocoAnt
org.ow2.asm:asm-commons:9.1=jacocoAnt
org.ow2.asm:asm-commons:9.2=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.ow2.asm:asm-tree:7.0=soy
org.ow2.asm:asm-tree:8.0.1=jacocoAnt
org.ow2.asm:asm-tree:9.1=jacocoAnt
org.ow2.asm:asm-tree:9.4=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.ow2.asm:asm-util:7.0=soy
org.ow2.asm:asm-util:9.4=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.ow2.asm:asm:7.0=soy
org.ow2.asm:asm:8.0.1=jacocoAnt
org.ow2.asm:asm:9.1=jacocoAnt
org.ow2.asm:asm:9.4=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.pcollections:pcollections:2.1.2=annotationProcessor,errorprone,nonprodAnnotationProcessor,testAnnotationProcessor
org.plumelib:plume-util:1.0.6=annotationProcessor,errorprone,nonprodAnnotationProcessor,testAnnotationProcessor
@@ -459,8 +465,8 @@ org.slf4j:jcl-over-slf4j:1.7.30=nonprodRuntime,runtime,testRuntimeClasspath
org.slf4j:jul-to-slf4j:1.7.30=nonprodRuntime,runtime,testRuntimeClasspath
org.slf4j:slf4j-api:1.7.30=nonprodRuntime,runtime
org.slf4j:slf4j-api:1.7.36=compileClasspath,nonprodCompileClasspath,nonprodRuntimeClasspath,testCompileClasspath
org.slf4j:slf4j-api:2.0.5=default,deploy_jar,runtimeClasspath,testRuntimeClasspath
org.slf4j:slf4j-jdk14:2.0.5=default,deploy_jar,runtimeClasspath,testRuntimeClasspath
org.slf4j:slf4j-api:2.0.6=default,deploy_jar,runtimeClasspath,testRuntimeClasspath
org.slf4j:slf4j-jdk14:2.0.6=default,deploy_jar,runtimeClasspath,testRuntimeClasspath
org.springframework:spring-core:5.3.18=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.springframework:spring-expression:5.3.18=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.springframework:spring-jcl:5.3.18=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
@@ -470,7 +476,7 @@ org.testcontainers:junit-jupiter:1.17.6=testCompileClasspath,testRuntimeClasspat
org.testcontainers:postgresql:1.17.6=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.testcontainers:selenium:1.17.6=testCompileClasspath,testRuntimeClasspath
org.testcontainers:testcontainers:1.17.6=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.threeten:threetenbp:1.6.4=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.threeten:threetenbp:1.6.5=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.tukaani:xz:1.5=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.w3c.css:sac:1.3=compileClasspath,default,deploy_jar,nonprodCompileClasspath,nonprodRuntimeClasspath,runtimeClasspath,testCompileClasspath,testRuntimeClasspath
org.webjars.npm:viz.js-graphviz-java:2.1.3=nonprodRuntime,runtime,testRuntimeClasspath


@@ -25,7 +25,6 @@ import com.google.common.flogger.FluentLogger;
import google.registry.model.EppResource;
import google.registry.persistence.VKey;
import google.registry.request.Action.Service;
import google.registry.util.CloudTasksUtils;
import javax.inject.Inject;
import org.joda.time.DateTime;
import org.joda.time.Duration;
@@ -86,6 +85,6 @@ public final class AsyncTaskEnqueuer {
cloudTasksUtils.enqueue(
QUEUE_ASYNC_ACTIONS,
cloudTasksUtils.createPostTaskWithDelay(
ResaveEntityAction.PATH, Service.BACKEND.toString(), params, etaDuration));
ResaveEntityAction.PATH, Service.BACKEND, params, etaDuration));
}
}


@@ -17,7 +17,6 @@ package google.registry.batch;
import static google.registry.batch.AsyncTaskEnqueuer.PARAM_REQUESTED_TIME;
import static google.registry.batch.AsyncTaskEnqueuer.PARAM_RESAVE_TIMES;
import static google.registry.batch.AsyncTaskEnqueuer.PARAM_RESOURCE_KEY;
import static google.registry.batch.CannedScriptExecutionAction.SCRIPT_PARAM;
import static google.registry.request.RequestParameters.extractBooleanParameter;
import static google.registry.request.RequestParameters.extractIntParameter;
import static google.registry.request.RequestParameters.extractLongParameter;
@@ -105,10 +104,27 @@ public class BatchModule {
}
@Provides
@Parameter(ExpandRecurringBillingEventsAction.PARAM_CURSOR_TIME)
static Optional<DateTime> provideCursorTime(HttpServletRequest req) {
return extractOptionalDatetimeParameter(
req, ExpandRecurringBillingEventsAction.PARAM_CURSOR_TIME);
@Parameter(ExpandBillingRecurrencesAction.PARAM_START_TIME)
static Optional<DateTime> provideStartTime(HttpServletRequest req) {
return extractOptionalDatetimeParameter(req, ExpandBillingRecurrencesAction.PARAM_START_TIME);
}
@Provides
@Parameter(ExpandBillingRecurrencesAction.PARAM_END_TIME)
static Optional<DateTime> provideEndTime(HttpServletRequest req) {
return extractOptionalDatetimeParameter(req, ExpandBillingRecurrencesAction.PARAM_END_TIME);
}
@Provides
@Parameter(WipeOutContactHistoryPiiAction.PARAM_CUTOFF_TIME)
static Optional<DateTime> provideCutoffTime(HttpServletRequest req) {
return extractOptionalDatetimeParameter(req, WipeOutContactHistoryPiiAction.PARAM_CUTOFF_TIME);
}
@Provides
@Parameter(ExpandBillingRecurrencesAction.PARAM_ADVANCE_CURSOR)
static boolean provideAdvanceCursor(HttpServletRequest req) {
return extractBooleanParameter(req, ExpandBillingRecurrencesAction.PARAM_ADVANCE_CURSOR);
}
@Provides
@@ -122,11 +138,4 @@ public class BatchModule {
static boolean provideIsDryRun(HttpServletRequest req) {
return extractBooleanParameter(req, PARAM_DRY_RUN);
}
// TODO(b/234424397): remove method after credential changes are rolled out.
@Provides
@Parameter(SCRIPT_PARAM)
static String provideScriptName(HttpServletRequest req) {
return extractRequiredParameter(req, SCRIPT_PARAM);
}
}


@@ -16,26 +16,23 @@ package google.registry.batch;
import static google.registry.request.Action.Method.POST;
import com.google.common.collect.ImmutableMap;
import com.google.common.flogger.FluentLogger;
import google.registry.batch.cannedscript.GroupsApiChecker;
import google.registry.request.Action;
import google.registry.request.Parameter;
import google.registry.request.auth.Auth;
import javax.inject.Inject;
/**
* Action that executes a canned script specified by the caller.
*
* <p>This class is introduced to help the safe rollout of credential changes. The delegated
* credentials in particular, benefit from this: they require manual configuration of the peer
* system in each environment, and may wait hours or even days after deployment until triggered by
* user activities.
* <p>This class provides a hook for invoking hard-coded methods. The main use case is to verify,
* in the Sandbox and Production environments, new features that depend on environment-specific
* configurations. For example, the {@code DelegatedCredential}, which requires correct GWorkspace
* configuration, has been tested this way. Since it is a hassle to add or remove endpoints, we keep
* this class around permanently.
*
* <p>This action can be invoked using the Nomulus CLI command: {@code nomulus -e ${env} curl
* --service BACKEND -X POST -u '/_dr/task/executeCannedScript?script=${script_name}'}
* --service BACKEND -X POST -u '/_dr/task/executeCannedScript'}
*/
// TODO(b/234424397): remove class after credential changes are rolled out.
@Action(
service = Action.Service.BACKEND,
path = "/_dr/task/executeCannedScript",
@@ -45,29 +42,18 @@ import javax.inject.Inject;
public class CannedScriptExecutionAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
static final String SCRIPT_PARAM = "script";
static final ImmutableMap<String, Runnable> SCRIPTS =
ImmutableMap.of("runGroupsApiChecks", GroupsApiChecker::runGroupsApiChecks);
private final String scriptName;
@Inject
CannedScriptExecutionAction(@Parameter(SCRIPT_PARAM) String scriptName) {
logger.atInfo().log("Received request to run script %s", scriptName);
this.scriptName = scriptName;
CannedScriptExecutionAction() {
logger.atInfo().log("Received request to run scripts.");
}
@Override
public void run() {
if (!SCRIPTS.containsKey(scriptName)) {
throw new IllegalArgumentException("Script not found:" + scriptName);
}
try {
SCRIPTS.get(scriptName).run();
logger.atInfo().log("Finished running %s.", scriptName);
// Invoke canned scripts here.
logger.atInfo().log("Finished running scripts.");
} catch (Throwable t) {
logger.atWarning().withCause(t).log("Error executing %s", scriptName);
logger.atWarning().withCause(t).log("Error executing scripts.");
throw new RuntimeException("Execution failed.");
}
}
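
With the script parameter removed, a verification is now wired directly into run() for a rollout and deleted once it has served its purpose. A minimal sketch of such a hook, method body only; the checker class below is a hypothetical placeholder and not part of this change:

  @Override
  public void run() {
    try {
      // Hypothetical environment-specific check: add it before a rollout,
      // delete it once the verification is done.
      MyFeatureChecker.runChecks();
      logger.atInfo().log("Finished running scripts.");
    } catch (Throwable t) {
      logger.atWarning().withCause(t).log("Error executing scripts.");
      throw new RuntimeException("Execution failed.");
    }
  }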


@@ -13,11 +13,11 @@
// limitations under the License.
package google.registry.batch;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.util.DateTimeUtils.END_OF_TIME;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Iterables;
import com.google.common.flogger.FluentLogger;
import com.google.common.primitives.Ints;
@@ -25,7 +25,6 @@ import google.registry.config.RegistryConfig.Config;
import google.registry.model.domain.token.AllocationToken;
import google.registry.model.domain.token.PackagePromotion;
import google.registry.model.registrar.Registrar;
import google.registry.model.registrar.RegistrarPoc;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.auth.Auth;
@@ -88,10 +87,10 @@ public class CheckPackagesComplianceAction implements Runnable {
private void checkPackages() {
ImmutableList<PackagePromotion> packages = tm().loadAllOf(PackagePromotion.class);
ImmutableList.Builder<PackagePromotion> packagesOverCreateLimitBuilder =
new ImmutableList.Builder<>();
ImmutableList.Builder<PackagePromotion> packagesOverActiveDomainsLimitBuilder =
new ImmutableList.Builder<>();
ImmutableMap.Builder<PackagePromotion, Long> packagesOverCreateLimitBuilder =
new ImmutableMap.Builder<>();
ImmutableMap.Builder<PackagePromotion, Long> packagesOverActiveDomainsLimitBuilder =
new ImmutableMap.Builder<>();
for (PackagePromotion packagePromo : packages) {
Long creates =
(Long)
@@ -103,12 +102,12 @@ public class CheckPackagesComplianceAction implements Runnable {
.setParameter("lastBilling", packagePromo.getNextBillingDate().minusYears(1))
.getSingleResult();
if (creates > packagePromo.getMaxCreates()) {
int overage = Ints.saturatedCast(creates) - packagePromo.getMaxCreates();
long overage = creates - packagePromo.getMaxCreates();
logger.atInfo().log(
"Package with package token %s has exceeded their max domain creation limit"
+ " by %d name(s).",
packagePromo.getToken().getKey(), overage);
packagesOverCreateLimitBuilder.add(packagePromo);
packagesOverCreateLimitBuilder.put(packagePromo, creates);
}
Long activeDomains =
@@ -125,20 +124,20 @@ public class CheckPackagesComplianceAction implements Runnable {
"Package with package token %s has exceed their max active domains limit by"
+ " %d name(s).",
packagePromo.getToken().getKey(), overage);
packagesOverActiveDomainsLimitBuilder.add(packagePromo);
packagesOverActiveDomainsLimitBuilder.put(packagePromo, activeDomains);
}
}
handlePackageCreationOverage(packagesOverCreateLimitBuilder.build());
handleActiveDomainOverage(packagesOverActiveDomainsLimitBuilder.build());
}
private void handlePackageCreationOverage(ImmutableList<PackagePromotion> overageList) {
private void handlePackageCreationOverage(ImmutableMap<PackagePromotion, Long> overageList) {
if (overageList.isEmpty()) {
logger.atInfo().log("Found no packages over their create limit.");
return;
}
logger.atInfo().log("Found %d packages over their create limit.", overageList.size());
for (PackagePromotion packagePromotion : overageList) {
for (PackagePromotion packagePromotion : overageList.keySet()) {
AllocationToken packageToken = tm().loadByKey(packagePromotion.getToken());
Optional<Registrar> registrar =
Registrar.loadByRegistrarIdCached(
@@ -147,9 +146,11 @@ public class CheckPackagesComplianceAction implements Runnable {
String body =
String.format(
packageCreateLimitEmailBody,
registrar.get().getRegistrarName(),
packagePromotion.getId(),
packageToken.getToken(),
registrySupportEmail);
registrar.get().getRegistrarName(),
packagePromotion.getMaxCreates(),
overageList.get(packagePromotion));
sendNotification(packageToken, packageCreateLimitEmailSubject, body, registrar.get());
} else {
throw new IllegalStateException(
@@ -158,13 +159,13 @@ public class CheckPackagesComplianceAction implements Runnable {
}
}
private void handleActiveDomainOverage(ImmutableList<PackagePromotion> overageList) {
private void handleActiveDomainOverage(ImmutableMap<PackagePromotion, Long> overageList) {
if (overageList.isEmpty()) {
logger.atInfo().log("Found no packages over their active domains limit.");
return;
}
logger.atInfo().log("Found %d packages over their active domains limit.", overageList.size());
for (PackagePromotion packagePromotion : overageList) {
for (PackagePromotion packagePromotion : overageList.keySet()) {
int daysSinceLastNotification =
packagePromotion
.getLastNotificationSent()
@@ -176,15 +177,18 @@ public class CheckPackagesComplianceAction implements Runnable {
continue;
} else if (daysSinceLastNotification < FORTY_DAYS) {
// Send an upgrade email if last email was between 30 and 40 days ago
sendActiveDomainOverageEmail(/* warning= */ false, packagePromotion);
sendActiveDomainOverageEmail(
/* warning= */ false, packagePromotion, overageList.get(packagePromotion));
} else {
// Send a warning email
sendActiveDomainOverageEmail(/* warning= */ true, packagePromotion);
sendActiveDomainOverageEmail(
/* warning= */ true, packagePromotion, overageList.get(packagePromotion));
}
}
}
private void sendActiveDomainOverageEmail(boolean warning, PackagePromotion packagePromotion) {
private void sendActiveDomainOverageEmail(
boolean warning, PackagePromotion packagePromotion, long activeDomains) {
String emailSubject =
warning ? packageDomainLimitWarningEmailSubject : packageDomainLimitUpgradeEmailSubject;
String emailTemplate =
@@ -197,9 +201,11 @@ public class CheckPackagesComplianceAction implements Runnable {
String body =
String.format(
emailTemplate,
registrar.get().getRegistrarName(),
packagePromotion.getId(),
packageToken.getToken(),
registrySupportEmail);
registrar.get().getRegistrarName(),
packagePromotion.getMaxDomains(),
activeDomains);
sendNotification(packageToken, emailSubject, body, registrar.get());
tm().put(packagePromotion.asBuilder().setLastNotificationSent(clock.nowUtc()).build());
} else {
@@ -212,15 +218,9 @@ public class CheckPackagesComplianceAction implements Runnable {
AllocationToken packageToken, String subject, String body, Registrar registrar) {
logger.atInfo().log(
String.format(
"Compliance email sent to the %s registrar regarding the package with token" + " %s.",
"Compliance email sent to support regarding the %s registrar and the package with token"
+ " %s.",
registrar.getRegistrarName(), packageToken.getToken()));
sendEmailUtils.sendEmail(
subject,
body,
Optional.of(registrySupportEmail),
registrar.getContacts().stream()
.filter(c -> c.getTypes().contains(RegistrarPoc.Type.ADMIN))
.map(RegistrarPoc::getEmailAddress)
.collect(toImmutableList()));
sendEmailUtils.sendEmail(subject, body, ImmutableList.of(registrySupportEmail));
}
}
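
A note on the map-based overage handling above: since each package's current count now travels with it, the keySet() loops could equivalently iterate map entries and avoid the extra get() lookups. A small sketch of that alternative, method-body fragment only, using the names from the diff:

  for (Map.Entry<PackagePromotion, Long> entry : overageList.entrySet()) {
    PackagePromotion packagePromotion = entry.getKey();
    long currentCount = entry.getValue(); // domain creates or active domains, depending on the caller
    // ... load the token and registrar, format currentCount into the email body, and notify
    // support, exactly as in the loops above.
  }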


@@ -12,10 +12,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.util;
package google.registry.batch;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.tools.ServiceConnection.getServer;
import static java.util.concurrent.TimeUnit.SECONDS;
import com.google.api.gax.rpc.ApiException;
@@ -23,6 +24,8 @@ import com.google.cloud.tasks.v2.AppEngineHttpRequest;
import com.google.cloud.tasks.v2.AppEngineRouting;
import com.google.cloud.tasks.v2.CloudTasksClient;
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.cloud.tasks.v2.HttpRequest;
import com.google.cloud.tasks.v2.OidcToken;
import com.google.cloud.tasks.v2.QueueName;
import com.google.cloud.tasks.v2.Task;
import com.google.common.base.Joiner;
@@ -36,12 +39,21 @@ import com.google.common.net.MediaType;
import com.google.common.net.UrlEscapers;
import com.google.protobuf.ByteString;
import com.google.protobuf.util.Timestamps;
import google.registry.config.RegistryConfig.Config;
import google.registry.request.Action.Service;
import google.registry.util.Clock;
import google.registry.util.CollectionUtils;
import google.registry.util.Retrier;
import java.io.Serializable;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Optional;
import java.util.Random;
import java.util.function.BiConsumer;
import java.util.function.Consumer;
import java.util.function.Supplier;
import javax.annotation.Nullable;
import javax.inject.Inject;
import org.joda.time.Duration;
/** Utilities for dealing with Cloud Tasks. */
@@ -55,18 +67,26 @@ public class CloudTasksUtils implements Serializable {
private final Clock clock;
private final String projectId;
private final String locationId;
// defaultServiceAccount and iapClientId are nullable because Optional isn't serializable
@Nullable private final String defaultServiceAccount;
@Nullable private final String iapClientId;
private final SerializableCloudTasksClient client;
@Inject
public CloudTasksUtils(
Retrier retrier,
Clock clock,
String projectId,
String locationId,
@Config("projectId") String projectId,
@Config("locationId") String locationId,
@Config("defaultServiceAccount") Optional<String> defaultServiceAccount,
@Config("iapClientId") Optional<String> iapClientId,
SerializableCloudTasksClient client) {
this.retrier = retrier;
this.clock = clock;
this.projectId = projectId;
this.locationId = locationId;
this.defaultServiceAccount = defaultServiceAccount.orElse(null);
this.iapClientId = iapClientId.orElse(null);
this.client = client;
}
@@ -91,6 +111,74 @@ public class CloudTasksUtils implements Serializable {
return enqueue(queue, Arrays.asList(tasks));
}
/**
* Converts a (possible) set of params into an HTTP request via the appropriate method.
*
* <p>For GET requests we add them onto the URL, and for POST requests we add them in the body of
* the request.
*
* <p>The parameters {@code putHeadersFunction} and {@code setBodyFunction} are used so that this
* method can be called with either an AppEngine HTTP request or a standard non-AppEngine HTTP
* request. The two objects do not have the same methods, but both have ways of setting headers /
* body.
*
* @return the resulting path (unchanged for POST requests, with params added for GET requests)
*/
private String processRequestParameters(
String path,
HttpMethod method,
Multimap<String, String> params,
BiConsumer<String, String> putHeadersFunction,
Consumer<ByteString> setBodyFunction) {
if (CollectionUtils.isNullOrEmpty(params)) {
return path;
}
Escaper escaper = UrlEscapers.urlPathSegmentEscaper();
String encodedParams =
Joiner.on("&")
.join(
params.entries().stream()
.map(
entry ->
String.format(
"%s=%s",
escaper.escape(entry.getKey()), escaper.escape(entry.getValue())))
.collect(toImmutableList()));
if (method.equals(HttpMethod.GET)) {
return String.format("%s?%s", path, encodedParams);
}
putHeadersFunction.accept(HttpHeaders.CONTENT_TYPE, MediaType.FORM_DATA.toString());
setBodyFunction.accept(ByteString.copyFrom(encodedParams, StandardCharsets.UTF_8));
return path;
}
/**
* Creates a {@link Task} that does not use AppEngine for submission.
*
* <p>This uses the standard Cloud Tasks auth format to create and send an OIDC ID token set to
* the default service account. That account must have permission to submit tasks to Cloud Tasks.
*/
private Task createNonAppEngineTask(
String path, HttpMethod method, Service service, Multimap<String, String> params) {
HttpRequest.Builder requestBuilder = HttpRequest.newBuilder().setHttpMethod(method);
path =
processRequestParameters(
path, method, params, requestBuilder::putHeaders, requestBuilder::setBody);
OidcToken.Builder oidcTokenBuilder =
OidcToken.newBuilder().setServiceAccountEmail(defaultServiceAccount);
// If the service is using IAP, add that as the audience for the token so the request can be
// appropriately authed. Otherwise, use the project name.
if (iapClientId != null) {
oidcTokenBuilder.setAudience(iapClientId);
} else {
oidcTokenBuilder.setAudience(projectId);
}
requestBuilder.setOidcToken(oidcTokenBuilder.build());
String totalPath = String.format("%s%s", getServer(service), path);
requestBuilder.setUrl(totalPath);
return Task.newBuilder().setHttpRequest(requestBuilder.build()).build();
}
/**
* Create a {@link Task} to be enqueued.
*
@@ -108,7 +196,7 @@ public class CloudTasksUtils implements Serializable {
* the worker service</a>
*/
private Task createTask(
String path, HttpMethod method, String service, Multimap<String, String> params) {
String path, HttpMethod method, Service service, Multimap<String, String> params) {
checkArgument(
path != null && !path.isEmpty() && path.charAt(0) == '/',
"The path must start with a '/'.");
@@ -116,33 +204,21 @@ public class CloudTasksUtils implements Serializable {
method.equals(HttpMethod.GET) || method.equals(HttpMethod.POST),
"HTTP method %s is used. Only GET and POST are allowed.",
method);
AppEngineHttpRequest.Builder requestBuilder =
AppEngineHttpRequest.newBuilder()
.setHttpMethod(method)
.setAppEngineRouting(AppEngineRouting.newBuilder().setService(service).build());
if (!CollectionUtils.isNullOrEmpty(params)) {
Escaper escaper = UrlEscapers.urlPathSegmentEscaper();
String encodedParams =
Joiner.on("&")
.join(
params.entries().stream()
.map(
entry ->
String.format(
"%s=%s",
escaper.escape(entry.getKey()), escaper.escape(entry.getValue())))
.collect(toImmutableList()));
if (method == HttpMethod.GET) {
path = String.format("%s?%s", path, encodedParams);
} else {
requestBuilder
.putHeaders(HttpHeaders.CONTENT_TYPE, MediaType.FORM_DATA.toString())
.setBody(ByteString.copyFrom(encodedParams, StandardCharsets.UTF_8));
}
// If the default service account is configured, send a standard non-AppEngine HTTP request
if (defaultServiceAccount != null) {
return createNonAppEngineTask(path, method, service, params);
} else {
AppEngineHttpRequest.Builder requestBuilder =
AppEngineHttpRequest.newBuilder()
.setHttpMethod(method)
.setAppEngineRouting(
AppEngineRouting.newBuilder().setService(service.toString()).build());
path =
processRequestParameters(
path, method, params, requestBuilder::putHeaders, requestBuilder::setBody);
requestBuilder.setRelativeUri(path);
return Task.newBuilder().setAppEngineHttpRequest(requestBuilder.build()).build();
}
requestBuilder.setRelativeUri(path);
return Task.newBuilder().setAppEngineHttpRequest(requestBuilder.build()).build();
}
/**
@@ -165,7 +241,7 @@ public class CloudTasksUtils implements Serializable {
private Task createTaskWithJitter(
String path,
HttpMethod method,
String service,
Service service,
Multimap<String, String> params,
Optional<Integer> jitterSeconds) {
if (!jitterSeconds.isPresent() || jitterSeconds.get() <= 0) {
@@ -199,7 +275,7 @@ public class CloudTasksUtils implements Serializable {
private Task createTaskWithDelay(
String path,
HttpMethod method,
String service,
Service service,
Multimap<String, String> params,
Duration delay) {
if (delay.isEqual(Duration.ZERO)) {
@@ -211,11 +287,11 @@ public class CloudTasksUtils implements Serializable {
.build();
}
public Task createPostTask(String path, String service, Multimap<String, String> params) {
public Task createPostTask(String path, Service service, Multimap<String, String> params) {
return createTask(path, HttpMethod.POST, service, params);
}
public Task createGetTask(String path, String service, Multimap<String, String> params) {
public Task createGetTask(String path, Service service, Multimap<String, String> params) {
return createTask(path, HttpMethod.GET, service, params);
}
@@ -224,7 +300,7 @@ public class CloudTasksUtils implements Serializable {
*/
public Task createPostTaskWithJitter(
String path,
String service,
Service service,
Multimap<String, String> params,
Optional<Integer> jitterSeconds) {
return createTaskWithJitter(path, HttpMethod.POST, service, params, jitterSeconds);
@@ -235,7 +311,7 @@ public class CloudTasksUtils implements Serializable {
*/
public Task createGetTaskWithJitter(
String path,
String service,
Service service,
Multimap<String, String> params,
Optional<Integer> jitterSeconds) {
return createTaskWithJitter(path, HttpMethod.GET, service, params, jitterSeconds);
@@ -243,13 +319,13 @@ public class CloudTasksUtils implements Serializable {
/** Create a {@link Task} via HTTP.POST that will be delayed for {@code delay}. */
public Task createPostTaskWithDelay(
String path, String service, Multimap<String, String> params, Duration delay) {
String path, Service service, Multimap<String, String> params, Duration delay) {
return createTaskWithDelay(path, HttpMethod.POST, service, params, delay);
}
/** Create a {@link Task} via HTTP.GET that will be delayed for {@code delay}. */
public Task createGetTaskWithDelay(
String path, String service, Multimap<String, String> params, Duration delay) {
String path, Service service, Multimap<String, String> params, Duration delay) {
return createTaskWithDelay(path, HttpMethod.GET, service, params, delay);
}
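
For callers, the visible API change is that the service argument is now the Service enum rather than a raw string; whether the resulting Task is an App Engine task or a plain HTTP task carrying an OIDC token is decided internally by whether defaultServiceAccount is configured. An illustrative call site under those assumptions (the queue name, path, and parameters below are placeholders, not taken from this change):

  cloudTasksUtils.enqueue(
      "my-queue", // placeholder queue name
      cloudTasksUtils.createGetTaskWithJitter(
          "/_dr/task/myAction", // placeholder endpoint
          Service.BACKEND,
          ImmutableMultimap.of("tld", "example"),
          Optional.of(30)));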


@@ -19,6 +19,7 @@ import static com.google.common.base.Preconditions.checkState;
import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static google.registry.batch.BatchModule.PARAM_DRY_RUN;
import static google.registry.config.RegistryEnvironment.PRODUCTION;
import static google.registry.dns.DnsUtils.requestDomainDnsRefresh;
import static google.registry.model.reporting.HistoryEntry.Type.DOMAIN_DELETE;
import static google.registry.model.tld.Registries.getTldsOfType;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
@@ -32,12 +33,11 @@ import com.google.common.collect.Sets;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryConfig.Config;
import google.registry.config.RegistryEnvironment;
import google.registry.dns.DnsQueue;
import google.registry.model.CreateAutoTimestamp;
import google.registry.model.EppResourceUtils;
import google.registry.model.domain.Domain;
import google.registry.model.domain.DomainHistory;
import google.registry.model.tld.Registry.TldType;
import google.registry.model.tld.Tld.TldType;
import google.registry.request.Action;
import google.registry.request.Parameter;
import google.registry.request.auth.Auth;
@@ -98,8 +98,6 @@ public class DeleteProberDataAction implements Runnable {
/** Number of domains to retrieve and delete per SQL transaction. */
private static final int BATCH_SIZE = 1000;
@Inject DnsQueue dnsQueue;
@Inject
@Parameter(PARAM_DRY_RUN)
boolean isDryRun;
@@ -222,7 +220,7 @@ public class DeleteProberDataAction implements Runnable {
}
}
private void hardDeleteDomainsAndHosts(
private static void hardDeleteDomainsAndHosts(
ImmutableList<String> domainRepoIds, ImmutableList<String> hostNames) {
tm().query("DELETE FROM Host WHERE hostName IN :hostNames")
.setParameter("hostNames", hostNames)
@@ -264,6 +262,6 @@ public class DeleteProberDataAction implements Runnable {
// messages, or auto-renews because those will all be hard-deleted the next time the job runs
// anyway.
tm().putAll(ImmutableList.of(deletedDomain, historyEntry));
dnsQueue.addDomainRefreshTask(deletedDomain.getDomainName());
requestDomainDnsRefresh(deletedDomain.getDomainName());
}
}


@@ -0,0 +1,163 @@
// Copyright 2017 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.batch;
import static com.google.common.base.Preconditions.checkArgument;
import static google.registry.batch.BatchModule.PARAM_DRY_RUN;
import static google.registry.beam.BeamUtils.createJobName;
import static google.registry.model.common.Cursor.CursorType.RECURRING_BILLING;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.util.DateTimeUtils.START_OF_TIME;
import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_OK;
import com.google.api.services.dataflow.Dataflow;
import com.google.api.services.dataflow.model.LaunchFlexTemplateParameter;
import com.google.api.services.dataflow.model.LaunchFlexTemplateRequest;
import com.google.api.services.dataflow.model.LaunchFlexTemplateResponse;
import com.google.common.collect.ImmutableMap;
import com.google.common.flogger.FluentLogger;
import google.registry.beam.billing.ExpandBillingRecurrencesPipeline;
import google.registry.config.RegistryConfig.Config;
import google.registry.config.RegistryEnvironment;
import google.registry.model.billing.BillingEvent;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.common.Cursor;
import google.registry.request.Action;
import google.registry.request.Parameter;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import java.io.IOException;
import java.util.Optional;
import javax.inject.Inject;
import org.joda.time.DateTime;
/**
* An action that kicks off an {@link ExpandBillingRecurrencesPipeline} dataflow job to expand {@link
* BillingRecurrence} billing events into synthetic {@link BillingEvent} events.
*/
@Action(
service = Action.Service.BACKEND,
path = "/_dr/task/expandBillingRecurrences",
auth = Auth.AUTH_INTERNAL_OR_ADMIN)
public class ExpandBillingRecurrencesAction implements Runnable {
public static final String PARAM_START_TIME = "startTime";
public static final String PARAM_END_TIME = "endTime";
public static final String PARAM_ADVANCE_CURSOR = "advanceCursor";
private static final String PIPELINE_NAME = "expand_billing_recurrences_pipeline";
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
@Inject Clock clock;
@Inject
@Parameter(PARAM_DRY_RUN)
boolean isDryRun;
@Inject
@Parameter(PARAM_ADVANCE_CURSOR)
boolean advanceCursor;
@Inject
@Parameter(PARAM_START_TIME)
Optional<DateTime> startTimeParam;
@Inject
@Parameter(PARAM_END_TIME)
Optional<DateTime> endTimeParam;
@Inject
@Config("projectId")
String projectId;
@Inject
@Config("defaultJobRegion")
String jobRegion;
@Inject
@Config("beamStagingBucketUrl")
String stagingBucketUrl;
@Inject Dataflow dataflow;
@Inject Response response;
@Inject
ExpandBillingRecurrencesAction() {}
@Override
public void run() {
checkArgument(!(isDryRun && advanceCursor), "Cannot advance the cursor in a dry run.");
DateTime endTime = endTimeParam.orElse(clock.nowUtc());
checkArgument(
!endTime.isAfter(clock.nowUtc()), "End time (%s) must be at or before now", endTime);
DateTime startTime =
startTimeParam.orElse(
tm().transact(
() ->
tm().loadByKeyIfPresent(Cursor.createGlobalVKey(RECURRING_BILLING))
.orElse(Cursor.createGlobal(RECURRING_BILLING, START_OF_TIME))
.getCursorTime()));
checkArgument(
startTime.isBefore(endTime),
String.format("Start time (%s) must be before end time (%s)", startTime, endTime));
LaunchFlexTemplateParameter launchParameter =
new LaunchFlexTemplateParameter()
.setJobName(
createJobName(
String.format(
"expand-billing-%s", startTime.toString("yyyy-MM-dd't'HH-mm-ss'z'")),
clock))
.setContainerSpecGcsPath(
String.format("%s/%s_metadata.json", stagingBucketUrl, PIPELINE_NAME))
.setParameters(
new ImmutableMap.Builder<String, String>()
.put("registryEnvironment", RegistryEnvironment.get().name())
.put("startTime", startTime.toString("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"))
.put("endTime", endTime.toString("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"))
.put("isDryRun", Boolean.toString(isDryRun))
.put("advanceCursor", Boolean.toString(advanceCursor))
.build());
logger.atInfo().log(
"Launching billing recurrence expansion pipeline for event time range [%s, %s)%s.",
startTime,
endTime,
isDryRun ? " in dry run mode" : advanceCursor ? "" : " without advancing the cursor");
try {
LaunchFlexTemplateResponse launchResponse =
dataflow
.projects()
.locations()
.flexTemplates()
.launch(
projectId,
jobRegion,
new LaunchFlexTemplateRequest().setLaunchParameter(launchParameter))
.execute();
logger.atInfo().log("Got response: %s", launchResponse.getJob().toPrettyString());
response.setStatus(SC_OK);
response.setPayload(
String.format(
"Launched billing recurrence expansion pipeline: %s",
launchResponse.getJob().getId()));
} catch (IOException e) {
logger.atWarning().withCause(e).log("Pipeline Launch failed");
response.setStatus(SC_INTERNAL_SERVER_ERROR);
response.setPayload(String.format("Pipeline launch failed: %s", e.getMessage()));
}
}
}
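
Combined with the parameter bindings added in BatchModule above, a caller could enqueue this action through the updated CloudTasksUtils API roughly as follows; the queue name is a placeholder and the timestamp values are illustrative, assuming the ISO-8601 format accepted by the datetime parameter extractor:

  cloudTasksUtils.enqueue(
      "beam-jobs", // placeholder queue name
      cloudTasksUtils.createPostTask(
          "/_dr/task/expandBillingRecurrences",
          Service.BACKEND,
          ImmutableMultimap.of(
              ExpandBillingRecurrencesAction.PARAM_START_TIME, "2023-05-01T00:00:00.000Z",
              ExpandBillingRecurrencesAction.PARAM_END_TIME, "2023-06-01T00:00:00.000Z")));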


@@ -1,376 +0,0 @@
// Copyright 2017 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.batch;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static com.google.common.collect.Sets.difference;
import static com.google.common.collect.Sets.newHashSet;
import static google.registry.batch.BatchModule.PARAM_DRY_RUN;
import static google.registry.model.common.Cursor.CursorType.RECURRING_BILLING;
import static google.registry.model.domain.Period.Unit.YEARS;
import static google.registry.model.reporting.HistoryEntry.Type.DOMAIN_AUTORENEW;
import static google.registry.persistence.transaction.QueryComposer.Comparator.EQ;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.util.CollectionUtils.union;
import static google.registry.util.DateTimeUtils.START_OF_TIME;
import static google.registry.util.DateTimeUtils.earliestOf;
import static google.registry.util.DomainNameUtils.getTldFromDomainName;
import com.google.auto.value.AutoValue;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Range;
import com.google.common.collect.Streams;
import com.google.common.flogger.FluentLogger;
import google.registry.config.RegistryConfig.Config;
import google.registry.flows.domain.DomainPricingLogic;
import google.registry.model.ImmutableObject;
import google.registry.model.billing.BillingEvent;
import google.registry.model.billing.BillingEvent.Flag;
import google.registry.model.billing.BillingEvent.OneTime;
import google.registry.model.billing.BillingEvent.Recurring;
import google.registry.model.common.Cursor;
import google.registry.model.domain.Domain;
import google.registry.model.domain.DomainHistory;
import google.registry.model.domain.Period;
import google.registry.model.reporting.DomainTransactionRecord;
import google.registry.model.reporting.DomainTransactionRecord.TransactionReportField;
import google.registry.model.tld.Registry;
import google.registry.persistence.VKey;
import google.registry.request.Action;
import google.registry.request.Parameter;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import java.util.List;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import javax.inject.Inject;
import org.joda.time.DateTime;
/**
* An action that expands {@link Recurring} billing events into synthetic {@link OneTime} events.
*
* <p>The cursor used throughout this action (overridden if necessary using the parameter {@code
* cursorTime}) represents the inclusive lower bound on the range of billing times that will be
* expanded as a result of the job (the exclusive upper bound being the execution time of the job).
*/
@Action(
service = Action.Service.BACKEND,
path = "/_dr/task/expandRecurringBillingEvents",
auth = Auth.AUTH_INTERNAL_OR_ADMIN)
public class ExpandRecurringBillingEventsAction implements Runnable {
public static final String PARAM_CURSOR_TIME = "cursorTime";
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
@Inject Clock clock;
@Inject
@Config("jdbcBatchSize")
int batchSize;
@Inject @Parameter(PARAM_DRY_RUN) boolean isDryRun;
@Inject @Parameter(PARAM_CURSOR_TIME) Optional<DateTime> cursorTimeParam;
@Inject DomainPricingLogic domainPricingLogic;
@Inject Response response;
@Inject ExpandRecurringBillingEventsAction() {}
@Override
public void run() {
DateTime executeTime = clock.nowUtc();
DateTime persistedCursorTime =
tm().transact(
() ->
tm().loadByKeyIfPresent(Cursor.createGlobalVKey(RECURRING_BILLING))
.orElse(Cursor.createGlobal(RECURRING_BILLING, START_OF_TIME))
.getCursorTime());
DateTime cursorTime = cursorTimeParam.orElse(persistedCursorTime);
checkArgument(
cursorTime.isBefore(executeTime), "Cursor time must be earlier than execution time.");
logger.atInfo().log(
"Running Recurring billing event expansion for billing time range [%s, %s).",
cursorTime, executeTime);
expandSqlBillingEventsInBatches(executeTime, cursorTime, persistedCursorTime);
}
private void expandSqlBillingEventsInBatches(
DateTime executeTime, DateTime cursorTime, DateTime persistedCursorTime) {
int totalBillingEventsSaved = 0;
long maxProcessedRecurrenceId = 0;
SqlBatchResults sqlBatchResults;
do {
final long prevMaxProcessedRecurrenceId = maxProcessedRecurrenceId;
sqlBatchResults =
tm().transact(
() -> {
Set<String> expandedDomains = newHashSet();
int batchBillingEventsSaved = 0;
long maxRecurrenceId = prevMaxProcessedRecurrenceId;
List<Recurring> recurrings =
tm().query(
"FROM BillingRecurrence "
+ "WHERE eventTime <= :executeTime "
+ "AND eventTime < recurrenceEndTime "
+ "AND id > :maxProcessedRecurrenceId "
+ "AND recurrenceEndTime > :adjustedCursorTime "
+ "ORDER BY id ASC",
Recurring.class)
.setParameter("executeTime", executeTime)
.setParameter("maxProcessedRecurrenceId", prevMaxProcessedRecurrenceId)
.setParameter(
"adjustedCursorTime",
cursorTime.minus(Registry.DEFAULT_AUTO_RENEW_GRACE_PERIOD))
.setMaxResults(batchSize)
.getResultList();
for (Recurring recurring : recurrings) {
if (expandedDomains.contains(recurring.getTargetId())) {
// On the off chance this batch contains multiple recurrences for the same
// domain (which is actually possible if a given domain is quickly renewed
// multiple times in a row), then short-circuit after the first one is
// processed that involves actually expanding a billing event. This is
// necessary because otherwise we get an "Inserted/updated object reloaded"
// error from Hibernate when those billing events would be loaded
// inside a transaction where they were already written. Note, there is no
// actual further work to be done in this case anyway, not unless it has
// somehow been over a year since this action last ran successfully (and if
// that were somehow true, the remaining billing events would still be
// expanded on subsequent runs).
continue;
}
int billingEventsSaved =
expandBillingEvent(
recurring, executeTime, cursorTime, isDryRun, domainPricingLogic);
batchBillingEventsSaved += billingEventsSaved;
if (billingEventsSaved > 0) {
expandedDomains.add(recurring.getTargetId());
}
maxRecurrenceId = Math.max(maxRecurrenceId, recurring.getId());
}
return SqlBatchResults.create(
batchBillingEventsSaved,
maxRecurrenceId,
maxRecurrenceId > prevMaxProcessedRecurrenceId);
});
totalBillingEventsSaved += sqlBatchResults.batchBillingEventsSaved();
maxProcessedRecurrenceId = sqlBatchResults.maxProcessedRecurrenceId();
if (sqlBatchResults.batchBillingEventsSaved() > 0) {
logger.atInfo().log(
"Saved %d billing events in batch (%d total) with max recurrence id %d.",
sqlBatchResults.batchBillingEventsSaved(),
totalBillingEventsSaved,
maxProcessedRecurrenceId);
} else {
// If we're churning through a lot of no-op recurrences that don't need expanding (yet?),
// then only log one no-op every so often as a good balance between letting the user track
// that the action is still running while also not spamming the logs incessantly.
logger.atInfo().atMostEvery(3, TimeUnit.MINUTES).log(
"Processed up to max recurrence id %d (no billing events saved recently).",
maxProcessedRecurrenceId);
}
} while (sqlBatchResults.shouldContinue());
if (!isDryRun) {
logger.atInfo().log("Saved %d total OneTime billing events.", totalBillingEventsSaved);
} else {
logger.atInfo().log(
"Generated %d total OneTime billing events (dry run).", totalBillingEventsSaved);
}
logger.atInfo().log(
"Recurring event expansion %s complete for billing event range [%s, %s).",
isDryRun ? "(dry run) " : "", cursorTime, executeTime);
tm().transact(
() -> {
// Check for the unlikely scenario where the cursor has been altered during the
// expansion.
DateTime currentCursorTime =
tm().loadByKeyIfPresent(Cursor.createGlobalVKey(RECURRING_BILLING))
.orElse(Cursor.createGlobal(RECURRING_BILLING, START_OF_TIME))
.getCursorTime();
if (!currentCursorTime.equals(persistedCursorTime)) {
throw new IllegalStateException(
String.format(
"Current cursor position %s does not match persisted cursor position %s.",
currentCursorTime, persistedCursorTime));
}
if (!isDryRun) {
tm().put(Cursor.createGlobal(RECURRING_BILLING, executeTime));
}
});
}
@AutoValue
abstract static class SqlBatchResults {
abstract int batchBillingEventsSaved();
abstract long maxProcessedRecurrenceId();
abstract boolean shouldContinue();
static SqlBatchResults create(
int batchBillingEventsSaved, long maxProcessedRecurrenceId, boolean shouldContinue) {
return new AutoValue_ExpandRecurringBillingEventsAction_SqlBatchResults(
batchBillingEventsSaved, maxProcessedRecurrenceId, shouldContinue);
}
}
private static int expandBillingEvent(
Recurring recurring,
DateTime executeTime,
DateTime cursorTime,
boolean isDryRun,
DomainPricingLogic domainPricingLogic) {
ImmutableSet.Builder<OneTime> syntheticOneTimesBuilder = new ImmutableSet.Builder<>();
final Registry tld = Registry.get(getTldFromDomainName(recurring.getTargetId()));
// Determine the complete set of times at which this recurring event should
// occur (up to and including the runtime of the action).
Iterable<DateTime> eventTimes =
recurring
.getRecurrenceTimeOfYear()
.getInstancesInRange(
Range.closed(
recurring.getEventTime(),
earliestOf(recurring.getRecurrenceEndTime(), executeTime)));
// Convert these event times to billing times
final ImmutableSet<DateTime> billingTimes =
getBillingTimesInScope(eventTimes, cursorTime, executeTime, tld);
VKey<Domain> domainKey = VKey.create(Domain.class, recurring.getDomainRepoId());
Iterable<OneTime> oneTimesForDomain;
oneTimesForDomain =
tm().createQueryComposer(OneTime.class)
.where("domainRepoId", EQ, recurring.getDomainRepoId())
.list();
// Determine the billing times that already have OneTime events persisted.
ImmutableSet<DateTime> existingBillingTimes =
getExistingBillingTimes(oneTimesForDomain, recurring);
ImmutableSet.Builder<DomainHistory> historyEntriesBuilder = new ImmutableSet.Builder<>();
// Create synthetic OneTime events for all billing times that do not yet have
// an event persisted.
for (DateTime billingTime : difference(billingTimes, existingBillingTimes)) {
// Construct a new HistoryEntry that parents over the OneTime
DomainHistory historyEntry =
new DomainHistory.Builder()
.setBySuperuser(false)
.setRegistrarId(recurring.getRegistrarId())
.setModificationTime(tm().getTransactionTime())
.setDomain(tm().loadByKey(domainKey))
.setPeriod(Period.create(1, YEARS))
.setReason("Domain autorenewal by ExpandRecurringBillingEventsAction")
.setRequestedByRegistrar(false)
.setType(DOMAIN_AUTORENEW)
// Note: the following statement seems to not be entirely correct as manual renewal
// during the autorenew grace period also closes out the existing recurrence, but in
// that instance the autorenew history entry should still have the transaction records
// for obvious reasons. It can be argued the history entry should always have the
// transaction record, regardless of what happens afterward. If the domain is deleted
// later during the autorenew grace period, another history entry for the delete would
// record that mutation separately, but the previous autorenew should not have its
// history entry retroactively altered, or in this case have the transaction records
// omitted when its created belatedly (when billing time is in scope). However, since
// we will be rewriting this action and only want to do the absolute minimum change to
// fix it for now, we will leave the current logic in place to avoid any unnecessary
// complications.
//
// Don't write a domain transaction record if the recurrence was
// ended prior to the billing time (i.e. a domain was deleted
// during the autorenew grace period).
.setDomainTransactionRecords(
recurring.getRecurrenceEndTime().isBefore(billingTime)
? ImmutableSet.of()
: ImmutableSet.of(
DomainTransactionRecord.create(
tld.getTldStr(),
// We report this when the autorenew grace period
// ends
billingTime,
TransactionReportField.netRenewsFieldFromYears(1),
1)))
.build();
historyEntriesBuilder.add(historyEntry);
DateTime eventTime = billingTime.minus(tld.getAutoRenewGracePeriodLength());
syntheticOneTimesBuilder.add(
new OneTime.Builder()
.setBillingTime(billingTime)
.setRegistrarId(recurring.getRegistrarId())
// Determine the cost for a one-year renewal.
.setCost(
domainPricingLogic
.getRenewPrice(tld, recurring.getTargetId(), eventTime, 1, recurring)
.getRenewCost())
.setEventTime(eventTime)
.setFlags(union(recurring.getFlags(), Flag.SYNTHETIC))
.setDomainHistory(historyEntry)
.setPeriodYears(1)
.setReason(recurring.getReason())
.setSyntheticCreationTime(executeTime)
.setCancellationMatchingBillingEvent(recurring)
.setTargetId(recurring.getTargetId())
.build());
}
Set<DomainHistory> historyEntries = historyEntriesBuilder.build();
Set<OneTime> syntheticOneTimes = syntheticOneTimesBuilder.build();
if (!isDryRun) {
ImmutableSet<ImmutableObject> entitiesToSave =
new ImmutableSet.Builder<ImmutableObject>()
.addAll(historyEntries)
.addAll(syntheticOneTimes)
.build();
tm().putAll(entitiesToSave);
}
return syntheticOneTimes.size();
}
/**
* Filters a set of {@link DateTime}s down to event times that are in scope for a particular
* action run, given the cursor time and the action execution time.
*/
protected static ImmutableSet<DateTime> getBillingTimesInScope(
Iterable<DateTime> eventTimes,
DateTime cursorTime,
DateTime executeTime,
final Registry tld) {
return Streams.stream(eventTimes)
.map(eventTime -> eventTime.plus(tld.getAutoRenewGracePeriodLength()))
.filter(Range.closedOpen(cursorTime, executeTime))
.collect(toImmutableSet());
}
/**
* Determines an {@link ImmutableSet} of {@link DateTime}s that have already been persisted for a
* given recurring billing event.
*/
private static ImmutableSet<DateTime> getExistingBillingTimes(
Iterable<BillingEvent.OneTime> oneTimesForDomain,
final BillingEvent.Recurring recurringEvent) {
return Streams.stream(oneTimesForDomain)
.filter(
billingEvent ->
recurringEvent
.createVKey()
.equals(billingEvent.getCancellationMatchingBillingEvent()))
.map(OneTime::getBillingTime)
.collect(toImmutableSet());
}
}


@@ -14,31 +14,39 @@
package google.registry.batch;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static org.apache.http.HttpStatus.SC_INTERNAL_SERVER_ERROR;
import static org.apache.http.HttpStatus.SC_OK;
import static google.registry.batch.BatchModule.PARAM_DRY_RUN;
import static google.registry.beam.BeamUtils.createJobName;
import static javax.servlet.http.HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
import static javax.servlet.http.HttpServletResponse.SC_OK;
import com.google.common.annotations.VisibleForTesting;
import com.google.api.services.dataflow.Dataflow;
import com.google.api.services.dataflow.model.LaunchFlexTemplateParameter;
import com.google.api.services.dataflow.model.LaunchFlexTemplateRequest;
import com.google.api.services.dataflow.model.LaunchFlexTemplateResponse;
import com.google.common.collect.ImmutableMap;
import com.google.common.flogger.FluentLogger;
import com.google.common.net.MediaType;
import google.registry.beam.wipeout.WipeOutContactHistoryPiiPipeline;
import google.registry.config.RegistryConfig.Config;
import google.registry.config.RegistryEnvironment;
import google.registry.model.contact.ContactHistory;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;
import java.io.IOException;
import java.util.Optional;
import javax.inject.Inject;
import org.joda.time.DateTime;
/**
* An action that wipes out Personal Identifiable Information (PII) fields of {@link ContactHistory}
* entities.
* An action that launches {@link WipeOutContactHistoryPiiPipeline} to wipe out Personally
* Identifiable Information (PII) fields of {@link ContactHistory} entities.
*
* <p>ContactHistory entities should be retained in the database for only certain amount of time.
* This periodic wipe out action only applies to SQL.
* <p>{@link ContactHistory} entities should be retained in the database for only a certain amount
* of time.
*/
@Action(
service = Service.BACKEND,
@@ -47,90 +55,89 @@ import org.joda.time.DateTime;
public class WipeOutContactHistoryPiiAction implements Runnable {
public static final String PATH = "/_dr/task/wipeOutContactHistoryPii";
public static final String PARAM_CUTOFF_TIME = "wipeoutTime";
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static final String PIPELINE_NAME = "wipe_out_contact_history_pii_pipeline";
private final Clock clock;
private final Response response;
private final boolean isDryRun;
private final Optional<DateTime> maybeCutoffTime;
private final int minMonthsBeforeWipeOut;
private final int wipeOutQueryBatchSize;
private final String stagingBucketUrl;
private final String projectId;
private final String jobRegion;
private final Dataflow dataflow;
private final Response response;
@Inject
public WipeOutContactHistoryPiiAction(
Clock clock,
@Parameter(PARAM_DRY_RUN) boolean isDryRun,
@Parameter(PARAM_CUTOFF_TIME) Optional<DateTime> maybeCutoffTime,
@Config("minMonthsBeforeWipeOut") int minMonthsBeforeWipeOut,
@Config("wipeOutQueryBatchSize") int wipeOutQueryBatchSize,
@Config("beamStagingBucketUrl") String stagingBucketUrl,
@Config("projectId") String projectId,
@Config("defaultJobRegion") String jobRegion,
Dataflow dataflow,
Response response) {
this.clock = clock;
this.response = response;
this.isDryRun = isDryRun;
this.maybeCutoffTime = maybeCutoffTime;
this.minMonthsBeforeWipeOut = minMonthsBeforeWipeOut;
this.wipeOutQueryBatchSize = wipeOutQueryBatchSize;
this.stagingBucketUrl = stagingBucketUrl;
this.projectId = projectId;
this.jobRegion = jobRegion;
this.dataflow = dataflow;
this.response = response;
}
@Override
public void run() {
response.setContentType(MediaType.PLAIN_TEXT_UTF_8);
DateTime cutoffTime =
maybeCutoffTime.orElse(clock.nowUtc().minusMonths(minMonthsBeforeWipeOut));
LaunchFlexTemplateParameter launchParameter =
new LaunchFlexTemplateParameter()
.setJobName(
createJobName(
String.format(
"contact-history-pii-wipeout-%s",
cutoffTime.toString("yyyy-MM-dd't'HH-mm-ss'z'")),
clock))
.setContainerSpecGcsPath(
String.format("%s/%s_metadata.json", stagingBucketUrl, PIPELINE_NAME))
.setParameters(
ImmutableMap.of(
"registryEnvironment",
RegistryEnvironment.get().name(),
"cutoffTime",
cutoffTime.toString("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"),
"isDryRun",
Boolean.toString(isDryRun)));
logger.atInfo().log(
"Launching Beam pipeline to wipe out all PII of contact history entities prior to %s%s.",
cutoffTime, isDryRun ? " in dry run mode" : "");
try {
int totalNumOfWipedEntities = 0;
DateTime wipeOutTime = clock.nowUtc().minusMonths(minMonthsBeforeWipeOut);
logger.atInfo().log(
"About to wipe out all PII of contact history entities prior to %s.", wipeOutTime);
int numOfWipedEntities = 0;
do {
numOfWipedEntities =
tm().transact(
() ->
wipeOutContactHistoryData(
getNextContactHistoryEntitiesWithPiiBatch(wipeOutTime)));
totalNumOfWipedEntities += numOfWipedEntities;
} while (numOfWipedEntities > 0);
String msg =
String.format(
"Done. Wiped out PII of %d ContactHistory entities in total.",
totalNumOfWipedEntities);
logger.atInfo().log(msg);
response.setPayload(msg);
LaunchFlexTemplateResponse launchResponse =
dataflow
.projects()
.locations()
.flexTemplates()
.launch(
projectId,
jobRegion,
new LaunchFlexTemplateRequest().setLaunchParameter(launchParameter))
.execute();
logger.atInfo().log("Got response: %s", launchResponse.getJob().toPrettyString());
response.setStatus(SC_OK);
} catch (Exception e) {
logger.atSevere().withCause(e).log(
"Exception thrown during the process of wiping out contact history PII.");
response.setStatus(SC_INTERNAL_SERVER_ERROR);
response.setPayload(
String.format(
"Exception thrown during the process of wiping out contact history PII with cause"
+ ": %s",
e));
"Launched contact history PII wipeout pipeline: %s",
launchResponse.getJob().getId()));
} catch (IOException e) {
logger.atWarning().withCause(e).log("Pipeline Launch failed");
response.setStatus(SC_INTERNAL_SERVER_ERROR);
response.setPayload(String.format("Pipeline launch failed: %s", e.getMessage()));
}
}
/**
* Returns a stream of up to {@link #wipeOutQueryBatchSize} {@link ContactHistory} entities
* containing PII that are prior to @param wipeOutTime.
*/
@VisibleForTesting
Stream<ContactHistory> getNextContactHistoryEntitiesWithPiiBatch(DateTime wipeOutTime) {
// email is one of the required fields in EPP, meaning it's initially not null.
// Therefore, checking if it's null is one way to avoid processing contact history entities
// that have been processed previously. Refer to RFC 5733 for more information.
return tm().query(
"FROM ContactHistory WHERE modificationTime < :wipeOutTime " + "AND email IS NOT NULL",
ContactHistory.class)
.setParameter("wipeOutTime", wipeOutTime)
.setMaxResults(wipeOutQueryBatchSize)
.getResultStream();
}
/** Wipes out the PII of each of the {@link ContactHistory} entities in the stream. */
@VisibleForTesting
int wipeOutContactHistoryData(Stream<ContactHistory> contactHistoryEntities) {
AtomicInteger numOfEntities = new AtomicInteger(0);
contactHistoryEntities.forEach(
contactHistoryEntity -> {
tm().update(contactHistoryEntity.asBuilder().wipeOutPii().build());
numOfEntities.incrementAndGet();
});
logger.atInfo().log(
"Wiped out all PII fields of %d ContactHistory entities.", numOfEntities.get());
return numOfEntities.get();
}
}

View File

@@ -1,122 +0,0 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.batch.cannedscript;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.util.RegistrarUtils.normalizeRegistrarId;
import com.google.api.services.admin.directory.Directory;
import com.google.api.services.groupssettings.Groupssettings;
import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;
import com.google.common.base.Throwables;
import com.google.common.collect.Streams;
import com.google.common.flogger.FluentLogger;
import dagger.Component;
import dagger.Module;
import dagger.Provides;
import google.registry.config.CredentialModule;
import google.registry.config.CredentialModule.AdcDelegatedCredential;
import google.registry.config.RegistryConfig.Config;
import google.registry.config.RegistryConfig.ConfigModule;
import google.registry.groups.DirectoryGroupsConnection;
import google.registry.model.registrar.Registrar;
import google.registry.model.registrar.RegistrarPoc;
import google.registry.util.GoogleCredentialsBundle;
import google.registry.util.UtilsModule;
import java.util.List;
import java.util.Set;
import javax.inject.Singleton;
/**
* Verifies that the credential with the {@link AdcDelegatedCredential} annotation can be used to
* access the Google Workspace Groups API.
*/
// TODO(b/234424397): remove class after credential changes are rolled out.
public class GroupsApiChecker {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private static final Supplier<GroupsConnectionComponent> COMPONENT_SUPPLIER =
Suppliers.memoize(DaggerGroupsApiChecker_GroupsConnectionComponent::create);
public static void runGroupsApiChecks() {
GroupsConnectionComponent component = COMPONENT_SUPPLIER.get();
DirectoryGroupsConnection groupsConnection = component.groupsConnection();
List<Registrar> registrars =
Streams.stream(Registrar.loadAllCached())
.filter(registrar -> registrar.isLive() && registrar.getType() == Registrar.Type.REAL)
.collect(toImmutableList());
for (Registrar registrar : registrars) {
for (final RegistrarPoc.Type type : RegistrarPoc.Type.values()) {
String groupKey =
String.format(
"%s-%s-contacts@%s",
normalizeRegistrarId(registrar.getRegistrarId()),
type.getDisplayName(),
component.gSuiteDomainName());
try {
Set<String> currentMembers = groupsConnection.getMembersOfGroup(groupKey);
logger.atInfo().log("Found %s members for %s.", currentMembers.size(), groupKey);
} catch (Exception e) {
Throwables.throwIfUnchecked(e);
throw new RuntimeException(e);
}
}
}
}
@Singleton
@Component(
modules = {
ConfigModule.class,
CredentialModule.class,
GroupsApiModule.class,
UtilsModule.class
})
interface GroupsConnectionComponent {
DirectoryGroupsConnection groupsConnection();
@Config("gSuiteDomainName")
String gSuiteDomainName();
}
@Module
static class GroupsApiModule {
@Provides
static Directory provideDirectory(
@AdcDelegatedCredential GoogleCredentialsBundle credentialsBundle,
@Config("projectId") String projectId) {
return new Directory.Builder(
credentialsBundle.getHttpTransport(),
credentialsBundle.getJsonFactory(),
credentialsBundle.getHttpRequestInitializer())
.setApplicationName(projectId)
.build();
}
@Provides
static Groupssettings provideGroupsSettings(
@AdcDelegatedCredential GoogleCredentialsBundle credentialsBundle,
@Config("projectId") String projectId) {
return new Groupssettings.Builder(
credentialsBundle.getHttpTransport(),
credentialsBundle.getJsonFactory(),
credentialsBundle.getHttpRequestInitializer())
.setApplicationName(projectId)
.build();
}
}
}

View File

@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.invoicing;
package google.registry.beam.billing;
import com.google.auto.value.AutoValue;
import com.google.common.base.Joiner;
@@ -64,7 +64,7 @@ public abstract class BillingEvent implements Serializable {
"amount",
"flags");
/** Returns the unique ID for the {@code OneTime} associated with this event. */
/** Returns the unique ID for the {@code BillingEvent} associated with this event. */
abstract long id();
/** Returns the UTC DateTime this event becomes billable. */
@@ -189,7 +189,7 @@ public abstract class BillingEvent implements Serializable {
.minusDays(1)
.toString(),
billingId(),
String.format("%s - %s", registrarId(), tld()),
registrarId(),
String.format("%s | TLD: %s | TERM: %d-year", action(), tld(), years()),
amount(),
currency(),
@@ -233,7 +233,7 @@ public abstract class BillingEvent implements Serializable {
/** Returns the billing account id, which is the {@code BillingEvent.billingId}. */
abstract String productAccountKey();
/** Returns the invoice grouping key, which is in the format "registrarId - tld". */
/** Returns the invoice grouping key, which is the registrar ID. */
abstract String usageGroupingKey();
/** Returns a description of the item, formatted as "action | TLD: tld | TERM: n-year." */

View File

@@ -0,0 +1,508 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.billing;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.collect.Sets.difference;
import static google.registry.model.common.Cursor.CursorType.RECURRING_BILLING;
import static google.registry.model.domain.Period.Unit.YEARS;
import static google.registry.model.reporting.HistoryEntry.Type.DOMAIN_AUTORENEW;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static google.registry.util.CollectionUtils.union;
import static google.registry.util.DateTimeUtils.START_OF_TIME;
import static google.registry.util.DateTimeUtils.earliestOf;
import static google.registry.util.DateTimeUtils.latestOf;
import static org.apache.beam.sdk.values.TypeDescriptors.voids;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Range;
import dagger.Component;
import google.registry.beam.common.RegistryJpaIO;
import google.registry.config.RegistryConfig.Config;
import google.registry.config.RegistryConfig.ConfigModule;
import google.registry.flows.custom.CustomLogicFactoryModule;
import google.registry.flows.custom.CustomLogicModule;
import google.registry.flows.domain.DomainPricingLogic;
import google.registry.flows.domain.DomainPricingLogic.AllocationTokenInvalidForPremiumNameException;
import google.registry.model.ImmutableObject;
import google.registry.model.billing.BillingBase.Flag;
import google.registry.model.billing.BillingCancellation;
import google.registry.model.billing.BillingEvent;
import google.registry.model.billing.BillingRecurrence;
import google.registry.model.common.Cursor;
import google.registry.model.domain.Domain;
import google.registry.model.domain.DomainHistory;
import google.registry.model.domain.Period;
import google.registry.model.reporting.DomainTransactionRecord;
import google.registry.model.reporting.DomainTransactionRecord.TransactionReportField;
import google.registry.model.tld.Tld;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.util.Clock;
import google.registry.util.SystemClock;
import java.io.Serializable;
import java.math.BigInteger;
import java.util.Optional;
import java.util.Set;
import javax.inject.Singleton;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.coders.KvCoder;
import org.apache.beam.sdk.coders.VarIntCoder;
import org.apache.beam.sdk.coders.VarLongCoder;
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.GroupIntoBatches;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.Wait;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PDone;
import org.joda.time.DateTime;
/**
* Definition of a Dataflow Flex pipeline template, which expands {@link BillingRecurrence} to
* {@link BillingEvent} when an autorenew occurs within the given time frame.
*
* <p>This pipeline works in three stages:
*
* <ul>
* <li>Gather the {@link BillingRecurrence}s that are in scope for expansion. The exact
* conditions under which a {@link BillingRecurrence} is included can be found in {@link
* #getRecurrencesInScope(Pipeline)}.
* <li>Expand the {@link BillingRecurrence}s to {@link BillingEvent} (and corresponding {@link
* DomainHistory}) that fall within the [{@link #startTime}, {@link #endTime}) window,
* excluding those that are already present (to make this pipeline idempotent when running
* with the same parameters multiple times, either in parallel or in sequence). The {@link
* BillingRecurrence} is also updated with the information on when it was last expanded, so it
* would not be in scope for expansion until at least a year later.
* <li>If the cursor for billing events should be advanced, advance it to {@link #endTime} after
* all of the expansions in the previous step are done, and only if it is currently at {@link
* #startTime}.
* </ul>
*
* <p>Note that the creation of new {@link BillingEvent} and {@link DomainHistory} is done
* speculatively as soon as its event time is in scope for expansion (i.e. within the window of
* operation). If a domain is subsequently cancelled during the autorenew grace period, a {@link
* BillingCancellation} would have been created to cancel the {@link BillingEvent} out. Similarly, a
* {@link DomainHistory} for the delete will be created which negates the effect of the
* speculatively created {@link DomainHistory}, specifically for the transaction records. Both the
* {@link BillingEvent} and {@link DomainHistory} will only be used (and cancelled out) when the
* billing time becomes effective, which is after the grace period, when the cancellations would
* have been written, if need be. This is no different from what we do with manual renewals or
* normal creates, where entities are always created for the action regardless of whether their
* effects will be negated later due to subsequent actions within respective grace periods.
*
* <p>To stage this template locally, run {@code ./nom_build :core:sBP --environment=alpha \
* --pipeline=expandBilling}.
*
* <p>Then, you can run the staged template via the API client library, gCloud or a raw REST call.
*
* @see BillingCancellation#forGracePeriod
* @see google.registry.flows.domain.DomainFlowUtils#createCancelingRecords
* @see <a href="https://cloud.google.com/dataflow/docs/guides/templates/using-flex-templates">Using
* Flex Templates</a>
*/
public class ExpandBillingRecurrencesPipeline implements Serializable {
private static final long serialVersionUID = -5827984301386630194L;
private static final DomainPricingLogic domainPricingLogic;
private static final int batchSize;
static {
PipelineComponent pipelineComponent =
DaggerExpandBillingRecurrencesPipeline_PipelineComponent.create();
domainPricingLogic = pipelineComponent.domainPricingLogic();
batchSize = pipelineComponent.batchSize();
}
// Inclusive lower bound of the expansion window.
private final DateTime startTime;
// Exclusive upper bound of the expansion window.
private final DateTime endTime;
private final boolean isDryRun;
private final boolean advanceCursor;
private final Counter recurrencesInScopeCounter =
Metrics.counter("ExpandBilling", "Recurrences in scope for expansion");
// Note that this counter is only accurate when running in dry run mode. Because SQL persistence
// is a side effect and not idempotent, a transaction to save OneTimes could be successful but the
// transform that contains it could still be retried, rolling back the counter increment. The
// same transform, when retried, would skip the already expanded OneTime, causing this counter to
// be lower than it should be when not in dry run mode.
// See: https://beam.apache.org/documentation/programming-guide/#user-code-idempotence
private final Counter oneTimesToExpandCounter =
Metrics.counter("ExpandBilling", "OneTimes that would be expanded");
ExpandBillingRecurrencesPipeline(ExpandBillingRecurrencesPipelineOptions options, Clock clock) {
startTime = DateTime.parse(options.getStartTime());
endTime = DateTime.parse(options.getEndTime());
checkArgument(
!endTime.isAfter(clock.nowUtc()),
String.format("End time %s must be at or before now.", endTime));
checkArgument(
startTime.isBefore(endTime),
String.format("[%s, %s) is not a valid window of operation.", startTime, endTime));
isDryRun = options.getIsDryRun();
advanceCursor = options.getAdvanceCursor();
}
private PipelineResult run(Pipeline pipeline) {
setupPipeline(pipeline);
return pipeline.run();
}
void setupPipeline(Pipeline pipeline) {
PCollection<KV<Integer, Long>> recurrenceIds = getRecurrencesInScope(pipeline);
PCollection<Void> expanded = expandRecurrences(recurrenceIds);
if (!isDryRun && advanceCursor) {
advanceCursor(expanded);
}
}
PCollection<KV<Integer, Long>> getRecurrencesInScope(Pipeline pipeline) {
return pipeline.apply(
"Read all Recurrences in scope",
// Use native query because JPQL does not support timestamp arithmetics.
RegistryJpaIO.read(
"SELECT billing_recurrence_id "
+ "FROM \"BillingRecurrence\" "
// Recurrence should not close before the first event time.
+ "WHERE event_time < recurrence_end_time "
// First event time should be before end time.
+ "AND event_Time < :endTime "
// Recurrence should not close before start time.
+ "AND :startTime < recurrence_end_time "
// Last expansion should happen at least one year before end time.
+ "AND recurrence_last_expansion < :oneYearAgo "
// The recurrence should not close before next expansion time.
+ "AND recurrence_last_expansion + INTERVAL '1 YEAR' < recurrence_end_time",
ImmutableMap.of(
"endTime",
endTime,
"startTime",
startTime,
"oneYearAgo",
endTime.minusYears(1)),
true,
(BigInteger id) -> {
recurrencesInScopeCounter.inc();
// Note that because all elements are mapped to the same dummy key, the next
// batching transform will effectively be serial. This, however, does not matter for
// our use case because the elements were obtained from a SQL read query, whose results
// are returned sequentially already. Therefore, having a sequential step to group
// them does not reduce overall parallelism of the pipeline, and the batches can
// then be distributed to all available workers for further processing, where the
// main benefit of parallelism shows. In benchmarking, tuning the distribution
// of elements in this step resulted in marginal improvements in overall
// performance at best, without a clear indication of why or to what degree. If the
// runtime becomes a concern later on, we could consider fine-tuning the sharding
// of output elements in this step.
//
// See: https://stackoverflow.com/a/44956702/791306
return KV.of(0, id.longValue());
})
.withCoder(KvCoder.of(VarIntCoder.of(), VarLongCoder.of())));
}
private PCollection<Void> expandRecurrences(PCollection<KV<Integer, Long>> recurrenceIds) {
return recurrenceIds
.apply(
"Group into batches",
GroupIntoBatches.<Integer, Long>ofSize(batchSize).withShardedKey())
.apply(
"Expand and save Recurrences into OneTimes and corresponding DomainHistories",
MapElements.into(voids())
.via(
element -> {
Iterable<Long> ids = element.getValue();
tm().transact(
() -> {
ImmutableSet.Builder<ImmutableObject> results =
new ImmutableSet.Builder<>();
ids.forEach(id -> expandOneRecurrence(id, results));
if (!isDryRun) {
tm().putAll(results.build());
}
});
return null;
}));
}
private void expandOneRecurrence(
Long recurrenceId, ImmutableSet.Builder<ImmutableObject> results) {
BillingRecurrence billingRecurrence =
tm().loadByKey(BillingRecurrence.createVKey(recurrenceId));
// Determine the complete set of EventTimes this recurrence event should expand to within
// [max(recurrenceLastExpansion + 1 yr, startTime), min(recurrenceEndTime, endTime)).
//
// This range should always be legal for recurrences that are returned from the query. However,
// it is possible that the recurrence has changed between when the read transformation occurred
// and now. This could be caused by some out-of-process mutations (such as a domain deletion
// closing out a previously open-ended recurrence), or more subtly, Beam could execute the same
// work multiple times due to transient communication issues between workers and the scheduler.
// Such opportunistic retries are OK for pure functional transformations, but can cause
// unexpected behavior when side effects are executed more than once. For example, the
// recurrence_last_expansion field could be updated by a worker after a successful expansion, which
// failed to report the status to the scheduler in time, which in turn scheduled another worker
// to work on the same batch. The second worker would see a new recurrence_last_expansion that
// causes the range to be illegal.
//
// The best way to handle any unexpected behavior is to simply drop the recurrence from
// expansion; if its new state still calls for an expansion, it will be picked up the next time
// the pipeline runs.
ImmutableSet<DateTime> eventTimes;
try {
eventTimes =
ImmutableSet.copyOf(
billingRecurrence
.getRecurrenceTimeOfYear()
.getInstancesInRange(
Range.closedOpen(
latestOf(
billingRecurrence.getRecurrenceLastExpansion().plusYears(1),
startTime),
earliestOf(billingRecurrence.getRecurrenceEndTime(), endTime))));
} catch (IllegalArgumentException e) {
return;
}
Domain domain = tm().loadByKey(Domain.createVKey(billingRecurrence.getDomainRepoId()));
Tld tld = Tld.get(domain.getTld());
// Find the times for which OneTime billing events have already been created, making this expansion
// idempotent. There is no need to match to the domain repo ID as the cancellation matching
// billing event itself can only be for a single domain.
ImmutableSet<DateTime> existingEventTimes =
ImmutableSet.copyOf(
tm().query(
"SELECT eventTime FROM BillingEvent WHERE cancellationMatchingBillingEvent ="
+ " :key",
DateTime.class)
.setParameter("key", billingRecurrence.createVKey())
.getResultList());
Set<DateTime> eventTimesToExpand = difference(eventTimes, existingEventTimes);
if (eventTimesToExpand.isEmpty()) {
return;
}
DateTime recurrenceLastExpansionTime = billingRecurrence.getRecurrenceLastExpansion();
// Create new OneTime and DomainHistory for EventTimes that need to be expanded.
for (DateTime eventTime : eventTimesToExpand) {
recurrenceLastExpansionTime = latestOf(recurrenceLastExpansionTime, eventTime);
oneTimesToExpandCounter.inc();
DateTime billingTime = eventTime.plus(tld.getAutoRenewGracePeriodLength());
// Note that the DomainHistory is created as of transaction time, as opposed to event time.
// This might be counterintuitive because other DomainHistories are created at the time
// mutation events occur, such as in DomainDeleteFlow or DomainRenewFlow. Therefore, it is
// possible to have a DomainHistory for a delete during the autorenew grace period with a
// modification time before that of the DomainHistory for the autorenew itself. This is not
// ideal, but necessary because we save the **current** state of the domain (as of transaction
// time) to the DomainHistory, instead of the state of the domain as of event time (which
// would require loading the domain from DomainHistory at event time).
//
// Even though doing the loading is seemingly possible, it is generally a bad idea to create
// DomainHistories retroactively, and in all instances where we create a HistoryEntry we always
// set the modification time to the transaction time. It would also violate the invariant
// that a DomainHistory with a higher revision ID (which is always allocated with monotonic
// increase) always has a later modification time.
//
// Lastly because the domain entity itself did not change as part of the expansion, we should
// not project it to transaction time before saving it in the history, which would require us
// to save the projected domain as well. Any changes to the domain itself are handled when
// the domain is actually used or explicitly projected and saved. The DomainHistory created
// here does not actually affect anything materially (e.g. RDE). We can understand it in such
// a way that this history represents not when the domain is autorenewed (at event time), but
// when its autorenew billing event is created (at transaction time).
DomainHistory historyEntry =
new DomainHistory.Builder()
.setBySuperuser(false)
.setRegistrarId(billingRecurrence.getRegistrarId())
.setModificationTime(tm().getTransactionTime())
.setDomain(domain)
.setPeriod(Period.create(1, YEARS))
.setReason("Domain autorenewal by ExpandRecurringBillingEventsPipeline")
.setRequestedByRegistrar(false)
.setType(DOMAIN_AUTORENEW)
.setDomainTransactionRecords(
// Don't write a domain transaction record if the domain is deleted before billing
// time (i.e. within the autorenew grace period). We cannot rely on a negating
// DomainHistory created by DomainDeleteFlow because it only cancels transaction
// records already present. In this case the domain was deleted before this
// pipeline runs to expand the OneTime (which should be rare because this pipeline
// should run every day), and no negating transaction records would have been
// created when the deletion occurred. Again, there is no need to project the
// domain, because if it were deleted before this transaction, its updated delete
// time would have already been loaded here.
//
// We don't compare recurrence end time with billing time because the recurrence
// could be closed for other reasons during the grace period, like a manual
// renewal, in which case we still want to write the transaction record. Also,
// the expansion happens when event time is in scope, which means the billing time
// is still 45 days in the future, and the recurrence could have been closed
// between now and then.
//
// A side effect of this logic is that if a transfer occurs within the ARGP, it
// would have recorded both a TRANSFER_SUCCESSFUL and a NET_RENEWS_1_YEAR, even
// though the transfer would have subsumed the autorenew. There is no perfect
// solution for this because even if we expand the recurrence when the billing
// event is in scope (as was the case in the old action), we still cannot use
// recurrence end time < billing time as an indicator for if a transfer had
// occurred during ARGP (see last paragraph, renewals during ARGP also close the
// recurrence), therefore we still cannot always be correct when constructing the
// transaction records that way (either we miss transfers, or we miss renewals
// during ARGP).
//
// See: DomainFlowUtils#createCancellingRecords
domain.getDeletionTime().isBefore(billingTime)
? ImmutableSet.of()
: ImmutableSet.of(
DomainTransactionRecord.create(
tld.getTldStr(),
// We report this when the autorenew grace period ends.
billingTime,
TransactionReportField.netRenewsFieldFromYears(1),
1)))
.build();
results.add(historyEntry);
// It is OK to always create a OneTime, even though the domain might be deleted or transferred
// later during autorenew grace period, as a cancellation will always be written out in those
// instances.
BillingEvent billingEvent = null;
try {
billingEvent =
new BillingEvent.Builder()
.setBillingTime(billingTime)
.setRegistrarId(billingRecurrence.getRegistrarId())
// Determine the cost for a one-year renewal.
.setCost(
domainPricingLogic
.getRenewPrice(
tld,
billingRecurrence.getTargetId(),
eventTime,
1,
billingRecurrence,
Optional.empty())
.getRenewCost())
.setEventTime(eventTime)
.setFlags(union(billingRecurrence.getFlags(), Flag.SYNTHETIC))
.setDomainHistory(historyEntry)
.setPeriodYears(1)
.setReason(billingRecurrence.getReason())
.setSyntheticCreationTime(endTime)
.setCancellationMatchingBillingEvent(billingRecurrence)
.setTargetId(billingRecurrence.getTargetId())
.build();
} catch (AllocationTokenInvalidForPremiumNameException e) {
// This should not be reached since we are not using an allocation token
return;
}
results.add(billingEvent);
}
results.add(
billingRecurrence
.asBuilder()
.setRecurrenceLastExpansion(recurrenceLastExpansionTime)
.build());
}
private PDone advanceCursor(PCollection<Void> persisted) {
return PDone.in(
persisted
.getPipeline()
.apply("Create one dummy element", Create.of((Void) null))
.apply("Wait for all saves to finish", Wait.on(persisted))
// Because only one dummy element is created in the start PCollection, this
// transform is guaranteed to only process one element and therefore only run once.
// Because the previous step waits for all emissions of voids from the expansion step to
// finish, this transform is guaranteed to run only after all expansions are done and
// persisted.
.apply(
"Advance cursor",
ParDo.of(
new DoFn<Void, Void>() {
@ProcessElement
public void processElement() {
tm().transact(
() -> {
DateTime currentCursorTime =
tm().loadByKeyIfPresent(
Cursor.createGlobalVKey(RECURRING_BILLING))
.orElse(
Cursor.createGlobal(RECURRING_BILLING, START_OF_TIME))
.getCursorTime();
if (!currentCursorTime.equals(startTime)) {
throw new IllegalStateException(
String.format(
"Current cursor position %s does not match start time"
+ " %s.",
currentCursorTime, startTime));
}
tm().put(Cursor.createGlobal(RECURRING_BILLING, endTime));
});
}
}))
.getPipeline());
}
public static void main(String[] args) {
PipelineOptionsFactory.register(ExpandBillingRecurrencesPipelineOptions.class);
ExpandBillingRecurrencesPipelineOptions options =
PipelineOptionsFactory.fromArgs(args)
.withValidation()
.as(ExpandBillingRecurrencesPipelineOptions.class);
// Hardcode the transaction isolation level to serializable because we do not want concurrent
// runs of the pipeline for the same window to create duplicate OneTimes. This ensures that the
// set of existing OneTimes does not change by the time new OneTimes are inserted within a
// transaction.
//
// Per PostgreSQL, serializable isolation level does not introduce any blocking beyond that
// present in repeatable read other than some overhead related to monitoring possible
// serializable anomalies. Therefore, in most cases, since each worker of the same job works on
// a different set of recurrences, it is not possible for their execution order to affect
// serialization outcome, and the performance penalty should be minimal when using serializable
// compared to using repeatable read.
//
// We should pay some attention to the runtime of the job and its logs when we run this job daily
// in production to check the actual performance impact of using this isolation level (i.e. check
// the frequency of occurrence of retried transactions due to serialization errors) to assess
// the actual parallelism of the job.
//
// See: https://www.postgresql.org/docs/current/transaction-iso.html
options.setIsolationOverride(TransactionIsolationLevel.TRANSACTION_SERIALIZABLE);
Pipeline pipeline = Pipeline.create(options);
new ExpandBillingRecurrencesPipeline(options, new SystemClock()).run(pipeline);
}
@Singleton
@Component(
modules = {CustomLogicModule.class, CustomLogicFactoryModule.class, ConfigModule.class})
interface PipelineComponent {
DomainPricingLogic domainPricingLogic();
@Config("jdbcBatchSize")
int batchSize();
}
}
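The class Javadoc above notes that the staged template can be run via the API client library, gCloud, or a raw REST call. As a standalone illustration (not part of this change), the following sketch shows an API-client launch, reusing the flex-template launch pattern from WipeOutContactHistoryPiiAction earlier in this diff; the example class name, job name, metadata path convention, and parameter values are assumptions.

// Illustrative sketch only -- not repository code. Launches the staged expandBilling
// flex template through the Dataflow API client, mirroring the pattern used by
// WipeOutContactHistoryPiiAction above. Values passed in are hypothetical.
import com.google.api.services.dataflow.Dataflow;
import com.google.api.services.dataflow.model.LaunchFlexTemplateParameter;
import com.google.api.services.dataflow.model.LaunchFlexTemplateRequest;
import com.google.api.services.dataflow.model.LaunchFlexTemplateResponse;
import com.google.common.collect.ImmutableMap;
import java.io.IOException;

public final class ExpandBillingLaunchExample {

  /** Launches the staged template; the caller supplies an authenticated {@link Dataflow} client. */
  public static String launch(
      Dataflow dataflow, String projectId, String jobRegion, String stagingBucketUrl)
      throws IOException {
    LaunchFlexTemplateParameter parameter =
        new LaunchFlexTemplateParameter()
            .setJobName("expand-billing-recurrences-example")
            // Assumes the "<pipeline>_metadata.json" naming convention used elsewhere in this diff.
            .setContainerSpecGcsPath(
                String.format("%s/%s_metadata.json", stagingBucketUrl, "expandBilling"))
            .setParameters(
                ImmutableMap.of(
                    "startTime", "2023-05-01T00:00:00.000Z", // hypothetical expansion window
                    "endTime", "2023-05-02T00:00:00.000Z",
                    "isDryRun", "false",
                    "advanceCursor", "true"));
    LaunchFlexTemplateResponse response =
        dataflow
            .projects()
            .locations()
            .flexTemplates()
            .launch(
                projectId,
                jobRegion,
                new LaunchFlexTemplateRequest().setLaunchParameter(parameter))
            .execute();
    return response.getJob().getId();
  }

  private ExpandBillingLaunchExample() {}
}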

View File

@@ -0,0 +1,49 @@
// Copyright 2022 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.billing;
import google.registry.beam.common.RegistryPipelineOptions;
import org.apache.beam.sdk.options.Default;
import org.apache.beam.sdk.options.Description;
public interface ExpandBillingRecurrencesPipelineOptions extends RegistryPipelineOptions {
@Description(
"The inclusive lower bound of on the range of event times that will be expanded, in ISO 8601"
+ " format")
String getStartTime();
void setStartTime(String startTime);
@Description(
"The exclusive upper bound of on the range of event times that will be expanded, in ISO 8601"
+ " format")
String getEndTime();
void setEndTime(String endTime);
@Description("If true, the expanded billing events and history entries will not be saved.")
@Default.Boolean(false)
boolean getIsDryRun();
void setIsDryRun(boolean isDryRun);
@Description(
"If true, set the RECURRING_BILLING global cursor to endTime after saving all expanded"
+ " billing events and history entries.")
@Default.Boolean(true)
boolean getAdvanceCursor();
void setAdvanceCursor(boolean advanceCursor);
}
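For illustration only (not part of this change), a minimal sketch of how these options might be supplied as pipeline arguments when invoking the pipeline's main method directly; the launcher class and all argument values are hypothetical.

// Illustrative sketch only; argument values are hypothetical.
public final class ExpandBillingRecurrencesLocalLauncher {
  public static void main(String[] args) {
    String[] pipelineArgs = {
      "--startTime=2023-05-01T00:00:00.000Z", // inclusive lower bound of the expansion window
      "--endTime=2023-05-02T00:00:00.000Z", // exclusive upper bound, must not be in the future
      "--isDryRun=true", // expanded events are computed but not saved
      "--advanceCursor=false" // leave the RECURRING_BILLING cursor untouched
    };
    // Delegates to the pipeline's own entry point, which registers and validates these options.
    google.registry.beam.billing.ExpandBillingRecurrencesPipeline.main(pipelineArgs);
  }
}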

View File

@@ -12,18 +12,18 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.invoicing;
package google.registry.beam.billing;
import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static org.apache.beam.sdk.values.TypeDescriptors.strings;
import com.google.common.flogger.FluentLogger;
import google.registry.beam.billing.BillingEvent.InvoiceGroupingKey;
import google.registry.beam.billing.BillingEvent.InvoiceGroupingKey.InvoiceGroupingKeyCoder;
import google.registry.beam.common.RegistryJpaIO;
import google.registry.beam.common.RegistryJpaIO.Read;
import google.registry.beam.invoicing.BillingEvent.InvoiceGroupingKey;
import google.registry.beam.invoicing.BillingEvent.InvoiceGroupingKey.InvoiceGroupingKeyCoder;
import google.registry.model.billing.BillingEvent.Flag;
import google.registry.model.billing.BillingEvent.OneTime;
import google.registry.model.billing.BillingBase.Flag;
import google.registry.model.billing.BillingEvent;
import google.registry.model.registrar.Registrar;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.reporting.billing.BillingModule;
@@ -86,29 +86,30 @@ public class InvoicingPipeline implements Serializable {
void setupPipeline(Pipeline pipeline) {
options.setIsolationOverride(TransactionIsolationLevel.TRANSACTION_READ_COMMITTED);
PCollection<BillingEvent> billingEvents = readFromCloudSql(options, pipeline);
PCollection<google.registry.beam.billing.BillingEvent> billingEvents =
readFromCloudSql(options, pipeline);
saveInvoiceCsv(billingEvents, options);
saveDetailedCsv(billingEvents, options);
}
static PCollection<BillingEvent> readFromCloudSql(
static PCollection<google.registry.beam.billing.BillingEvent> readFromCloudSql(
InvoicingPipelineOptions options, Pipeline pipeline) {
Read<Object[], BillingEvent> read =
RegistryJpaIO.<Object[], BillingEvent>read(
Read<Object[], google.registry.beam.billing.BillingEvent> read =
RegistryJpaIO.<Object[], google.registry.beam.billing.BillingEvent>read(
makeCloudSqlQuery(options.getYearMonth()), false, row -> parseRow(row).orElse(null))
.withCoder(SerializableCoder.of(BillingEvent.class));
.withCoder(SerializableCoder.of(google.registry.beam.billing.BillingEvent.class));
PCollection<BillingEvent> billingEventsWithNulls =
PCollection<google.registry.beam.billing.BillingEvent> billingEventsWithNulls =
pipeline.apply("Read BillingEvents from Cloud SQL", read);
// Remove null billing events
return billingEventsWithNulls.apply(Filter.by(Objects::nonNull));
}
private static Optional<BillingEvent> parseRow(Object[] row) {
OneTime oneTime = (OneTime) row[0];
private static Optional<google.registry.beam.billing.BillingEvent> parseRow(Object[] row) {
BillingEvent billingEvent = (BillingEvent) row[0];
Registrar registrar = (Registrar) row[1];
CurrencyUnit currency = oneTime.getCost().getCurrencyUnit();
CurrencyUnit currency = billingEvent.getCost().getCurrencyUnit();
if (!registrar.getBillingAccountMap().containsKey(currency)) {
logger.atSevere().log(
"Registrar %s does not have a product account key for the currency unit: %s",
@@ -117,37 +118,40 @@ public class InvoicingPipeline implements Serializable {
}
return Optional.of(
BillingEvent.create(
oneTime.getId(),
oneTime.getBillingTime(),
oneTime.getEventTime(),
google.registry.beam.billing.BillingEvent.create(
billingEvent.getId(),
billingEvent.getBillingTime(),
billingEvent.getEventTime(),
registrar.getRegistrarId(),
registrar.getBillingAccountMap().get(currency),
registrar.getPoNumber().orElse(""),
DomainNameUtils.getTldFromDomainName(oneTime.getTargetId()),
oneTime.getReason().toString(),
oneTime.getTargetId(),
oneTime.getDomainRepoId(),
Optional.ofNullable(oneTime.getPeriodYears()).orElse(0),
oneTime.getCost().getCurrencyUnit().toString(),
oneTime.getCost().getAmount().doubleValue(),
DomainNameUtils.getTldFromDomainName(billingEvent.getTargetId()),
billingEvent.getReason().toString(),
billingEvent.getTargetId(),
billingEvent.getDomainRepoId(),
Optional.ofNullable(billingEvent.getPeriodYears()).orElse(0),
billingEvent.getCost().getCurrencyUnit().toString(),
billingEvent.getCost().getAmount().doubleValue(),
String.join(
" ", oneTime.getFlags().stream().map(Flag::toString).collect(toImmutableSet()))));
" ",
billingEvent.getFlags().stream().map(Flag::toString).collect(toImmutableSet()))));
}
/** Transform that converts a {@code BillingEvent} into an invoice CSV row. */
private static class GenerateInvoiceRows
extends PTransform<PCollection<BillingEvent>, PCollection<String>> {
extends PTransform<
PCollection<google.registry.beam.billing.BillingEvent>, PCollection<String>> {
private static final long serialVersionUID = -8090619008258393728L;
@Override
public PCollection<String> expand(PCollection<BillingEvent> input) {
public PCollection<String> expand(
PCollection<google.registry.beam.billing.BillingEvent> input) {
return input
.apply(
"Map to invoicing key",
MapElements.into(TypeDescriptor.of(InvoiceGroupingKey.class))
.via(BillingEvent::getInvoiceGroupingKey))
.via(google.registry.beam.billing.BillingEvent::getInvoiceGroupingKey))
.apply(
"Filter out free events", Filter.by((InvoiceGroupingKey key) -> key.unitPrice() != 0))
.setCoder(new InvoiceGroupingKeyCoder())
@@ -161,7 +165,8 @@ public class InvoicingPipeline implements Serializable {
/** Saves the billing events to a single overall invoice CSV file. */
static void saveInvoiceCsv(
PCollection<BillingEvent> billingEvents, InvoicingPipelineOptions options) {
PCollection<google.registry.beam.billing.BillingEvent> billingEvents,
InvoicingPipelineOptions options) {
billingEvents
.apply("Generate overall invoice rows", new GenerateInvoiceRows())
.apply(
@@ -182,16 +187,17 @@ public class InvoicingPipeline implements Serializable {
/** Saves the billing events to detailed report CSV files keyed by registrar-tld pairs. */
static void saveDetailedCsv(
PCollection<BillingEvent> billingEvents, InvoicingPipelineOptions options) {
PCollection<google.registry.beam.billing.BillingEvent> billingEvents,
InvoicingPipelineOptions options) {
String yearMonth = options.getYearMonth();
billingEvents.apply(
"Write detailed report for each registrar-tld pair",
FileIO.<String, BillingEvent>writeDynamic()
FileIO.<String, google.registry.beam.billing.BillingEvent>writeDynamic()
.to(
String.format(
"%s/%s/%s",
options.getBillingBucketUrl(), BillingModule.INVOICES_DIRECTORY, yearMonth))
.by(BillingEvent::getDetailedReportGroupingKey)
.by(google.registry.beam.billing.BillingEvent::getDetailedReportGroupingKey)
.withNumShards(1)
.withDestinationCoder(StringUtf8Coder.of())
.withNaming(
@@ -200,8 +206,8 @@ public class InvoicingPipeline implements Serializable {
String.format(
"%s_%s_%s.csv", BillingModule.DETAIL_REPORT_PREFIX, yearMonth, key))
.via(
Contextful.fn(BillingEvent::toCsv),
TextIO.sink().withHeader(BillingEvent.getHeader())));
Contextful.fn(google.registry.beam.billing.BillingEvent::toCsv),
TextIO.sink().withHeader(google.registry.beam.billing.BillingEvent.getHeader())));
}
/** Create the Cloud SQL query for a given yearMonth at runtime. */

View File

@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.invoicing;
package google.registry.beam.billing;
import google.registry.beam.common.RegistryPipelineOptions;
import org.apache.beam.sdk.options.Description;

View File

@@ -71,7 +71,7 @@ public final class RegistryJpaIO {
}
/**
* Returns a {@link Read} connector based on the given {@code jpql} query string.
* Returns a {@link Read} connector based on the given native or {@code jpql} query string.
*
* <p>User should take care to prevent sql-injection attacks.
*/

View File

@@ -15,7 +15,6 @@
package google.registry.beam.common;
import google.registry.config.RegistryEnvironment;
import google.registry.model.annotations.DeleteAfterMigration;
import google.registry.persistence.PersistenceModule.JpaTransactionManagerType;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import java.util.Objects;
@@ -57,17 +56,6 @@ public interface RegistryPipelineOptions extends GcpOptions {
void setSqlWriteBatchSize(int sqlWriteBatchSize);
@DeleteAfterMigration
@Description(
"Whether to use self allocated primary IDs when building entities. This should only be used"
+ " when the IDs are not significant and the resulting entities are not persisted back to"
+ " the database. Use with caution as self allocated IDs are not unique across workers,"
+ " and persisting entities with these IDs can be dangerous.")
@Default.Boolean(false)
boolean getUseSelfAllocatedId();
void setUseSelfAllocatedId(boolean useSelfAllocatedId);
static RegistryPipelineComponent toRegistryPipelineComponent(RegistryPipelineOptions options) {
return DaggerRegistryPipelineComponent.builder()
.isolationOverride(options.getIsolationOverride())

View File

@@ -21,7 +21,6 @@ import com.google.common.flogger.FluentLogger;
import dagger.Lazy;
import google.registry.config.RegistryEnvironment;
import google.registry.config.SystemPropertySetter;
import google.registry.model.IdService;
import google.registry.persistence.transaction.JpaTransactionManager;
import google.registry.persistence.transaction.TransactionManagerFactory;
import org.apache.beam.sdk.harness.JvmInitializer;
@@ -63,15 +62,5 @@ public class RegistryPipelineWorkerInitializer implements JvmInitializer {
}
TransactionManagerFactory.setJpaTmOnBeamWorker(transactionManagerLazy::get);
SystemPropertySetter.PRODUCTION_IMPL.setProperty(PROPERTY, "true");
// Use self-allocated IDs if requested. Note that this inevitably results in duplicate IDs from
// multiple workers, which can also collide with existing IDs in the database. So they cannot be
// depended upon for comparison or anything significant. The resulting entities can never be
// persisted back into the database. This is a stop-gap measure that should only be used when
// you need to create Buildables in Beam, but do not have control over how the IDs are
// allocated, and you don't care about the generated IDs as long
// as you can build the entities.
if (registryOptions.getUseSelfAllocatedId()) {
IdService.setForceUseSelfAllocatedId();
}
}
}

View File

@@ -24,8 +24,10 @@ import java.util.stream.Stream;
import javax.annotation.Nullable;
import javax.persistence.EntityManager;
import javax.persistence.Query;
import javax.persistence.TemporalType;
import javax.persistence.TypedQuery;
import javax.persistence.criteria.CriteriaQuery;
import org.joda.time.DateTime;
/** Interface for query instances used by {@link RegistryJpaIO.Read}. */
public interface RegistryQuery<T> extends Serializable {
@@ -57,7 +59,14 @@ public interface RegistryQuery<T> extends Serializable {
Query query =
nativeQuery ? entityManager.createNativeQuery(sql) : entityManager.createQuery(sql);
if (parameters != null) {
parameters.forEach(query::setParameter);
parameters.forEach(
(key, value) -> {
if (value instanceof DateTime) {
query.setParameter(key, ((DateTime) value).toDate(), TemporalType.TIMESTAMP);
} else {
query.setParameter(key, value);
}
});
}
JpaTransactionManager.setQueryFetchSize(query, QUERY_FETCH_SIZE);
@SuppressWarnings("unchecked")

View File

@@ -26,13 +26,14 @@ import com.google.auto.value.AutoValue;
import com.google.cloud.storage.BlobId;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.flogger.FluentLogger;
import google.registry.batch.CloudTasksUtils;
import google.registry.gcs.GcsUtils;
import google.registry.keyring.api.PgpHelper;
import google.registry.model.common.Cursor;
import google.registry.model.rde.RdeMode;
import google.registry.model.rde.RdeNamingUtils;
import google.registry.model.rde.RdeRevision;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.rde.BrdaCopyAction;
import google.registry.rde.DepositFragment;
import google.registry.rde.Ghostryde;
@@ -46,7 +47,6 @@ import google.registry.rde.RdeUtil;
import google.registry.request.Action.Service;
import google.registry.request.RequestParameters;
import google.registry.tldconfig.idn.IdnTableEnum;
import google.registry.util.CloudTasksUtils;
import google.registry.xjc.rdeheader.XjcRdeHeader;
import google.registry.xjc.rdeheader.XjcRdeHeaderElement;
import google.registry.xml.ValidationMode;
@@ -272,12 +272,12 @@ public class RdeIO {
tm().transact(
() -> {
PendingDeposit key = input.getKey();
Registry registry = Registry.get(key.tld());
Tld tld = Tld.get(key.tld());
Optional<Cursor> cursor =
tm().transact(
() ->
tm().loadByKeyIfPresent(
Cursor.createScopedVKey(key.cursor(), registry)));
Cursor.createScopedVKey(key.cursor(), tld)));
DateTime position = getCursorTimeOrStartOfTime(cursor);
checkState(key.interval() != null, "Interval must be present");
DateTime newPosition = key.watermark().plus(key.interval());
@@ -290,7 +290,7 @@ public class RdeIO {
"Partial ordering of RDE deposits broken: %s %s",
position,
key);
tm().put(Cursor.createScoped(key.cursor(), newPosition, registry));
tm().put(Cursor.createScoped(key.cursor(), newPosition, tld));
logger.atInfo().log(
"Rolled forward %s on %s cursor to %s.", key.cursor(), key.tld(), newPosition);
RdeRevision.saveRevision(key.tld(), key.watermark(), key.mode(), input.getValue());
@@ -306,7 +306,7 @@ public class RdeIO {
RDE_UPLOAD_QUEUE,
cloudTasksUtils.createPostTaskWithDelay(
RdeUploadAction.PATH,
Service.BACKEND.getServiceId(),
Service.BACKEND,
ImmutableMultimap.of(
RequestParameters.PARAM_TLD,
key.tld(),
@@ -318,7 +318,7 @@ public class RdeIO {
BRDA_QUEUE,
cloudTasksUtils.createPostTaskWithDelay(
BrdaCopyAction.PATH,
Service.BACKEND.getServiceId(),
Service.BACKEND,
ImmutableMultimap.of(
RequestParameters.PARAM_TLD,
key.tld(),

View File

@@ -38,6 +38,7 @@ import com.google.common.flogger.FluentLogger;
import com.google.common.io.BaseEncoding;
import dagger.BindsInstance;
import dagger.Component;
import google.registry.batch.CloudTasksUtils;
import google.registry.beam.common.RegistryJpaIO;
import google.registry.beam.common.RegistryPipelineOptions;
import google.registry.config.CloudTasksUtilsModule;
@@ -62,7 +63,6 @@ import google.registry.rde.DepositFragment;
import google.registry.rde.PendingDeposit;
import google.registry.rde.PendingDeposit.PendingDepositCoder;
import google.registry.rde.RdeMarshaller;
import google.registry.util.CloudTasksUtils;
import google.registry.util.UtilsModule;
import google.registry.xml.ValidationMode;
import java.io.ByteArrayInputStream;
@@ -170,7 +170,6 @@ import org.joda.time.DateTime;
* @see <a href="https://cloud.google.com/dataflow/docs/guides/templates/using-flex-templates">Using
* Flex Templates</a>
*/
@SuppressWarnings("ALL")
@Singleton
public class RdePipeline implements Serializable {
@@ -688,13 +687,6 @@ public class RdePipeline implements Serializable {
RdePipelineOptions options =
PipelineOptionsFactory.fromArgs(args).withValidation().as(RdePipelineOptions.class);
// We need to self allocate the IDs because the pipeline creates EPP resources from history
// entries and projects them to watermark. These buildable entities would otherwise request an
// ID from datastore, which Beam does not have access to. The IDs are not included in the
// deposits, nor are these entities persisted back to the database, so it is OK to use a self
// allocated ID to get around the limitations of beam.
options.setUseSelfAllocatedId(true);
RegistryPipelineOptions.validateRegistryPipelineOptions(options);
options.setIsolationOverride(TransactionIsolationLevel.TRANSACTION_READ_COMMITTED);
DaggerRdePipeline_RdePipelineComponent.builder().options(options).build().rdePipeline().run();

View File

@@ -26,7 +26,6 @@ import google.registry.beam.common.RegistryJpaIO;
import google.registry.beam.common.RegistryJpaIO.Read;
import google.registry.beam.spec11.SafeBrowsingTransforms.EvaluateSafeBrowsingFn;
import google.registry.config.RegistryConfig.ConfigModule;
import google.registry.model.IdService;
import google.registry.model.domain.Domain;
import google.registry.model.reporting.Spec11ThreatMatch;
import google.registry.model.reporting.Spec11ThreatMatch.ThreatType;
@@ -175,7 +174,7 @@ public class Spec11Pipeline implements Serializable {
.setDomainName(input.getKey().domainName())
.setDomainRepoId(input.getKey().domainRepoId())
.setRegistrarId(input.getKey().registrarId())
.setId(IdService.allocateId())
// TODO(b/264416932) Assign id to prevent duplicate inserts.
.build();
output.output(spec11ThreatMatch);
}

View File

@@ -0,0 +1,166 @@
// Copyright 2023 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.wipeout;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import static org.apache.beam.sdk.values.TypeDescriptors.voids;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Streams;
import google.registry.beam.common.RegistryJpaIO;
import google.registry.model.contact.ContactHistory;
import google.registry.model.reporting.HistoryEntry.HistoryEntryId;
import google.registry.persistence.PersistenceModule.TransactionIsolationLevel;
import google.registry.persistence.VKey;
import java.io.Serializable;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.coders.KvCoder;
import org.apache.beam.sdk.coders.StringUtf8Coder;
import org.apache.beam.sdk.coders.VarLongCoder;
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.join.CoGroupByKey;
import org.apache.beam.sdk.transforms.join.KeyedPCollectionTuple;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TupleTag;
import org.joda.time.DateTime;
/**
* Definition of a Dataflow Flex pipeline template, which finds {@link ContactHistory} entries
* that are older than a given age (excluding the most recent one, even if it falls within the
* range) and wipes out the PII information in them.
*
* <p>To stage this template locally, run {@code ./nom_build :core:sBP --environment=alpha \
* --pipeline=wipeOutContactHistoryPii}.
*
* <p>Then, you can run the staged template via the API client library, gCloud or a raw REST call.
*/
public class WipeOutContactHistoryPiiPipeline implements Serializable {
private static final long serialVersionUID = -4111052675715913820L;
private static final TupleTag<Long> REVISIONS_TO_WIPE = new TupleTag<>();
private static final TupleTag<Long> MOST_RECENT_REVISION = new TupleTag<>();
private final DateTime cutoffTime;
private final boolean dryRun;
private final Counter contactsInScope =
Metrics.counter("WipeOutContactHistoryPii", "contacts in scope");
private final Counter historiesToWipe =
Metrics.counter("WipeOutContactHistoryPii", "contact histories to wipe PII from");
private final Counter historiesWiped =
Metrics.counter("WipeOutContactHistoryPii", "contact histories actually updated");
WipeOutContactHistoryPiiPipeline(WipeOutContactHistoryPiiPipelineOptions options) {
dryRun = options.getIsDryRun();
cutoffTime = DateTime.parse(options.getCutoffTime());
}
void setup(Pipeline pipeline) {
KeyedPCollectionTuple.of(REVISIONS_TO_WIPE, getHistoryEntriesToWipe(pipeline))
.and(MOST_RECENT_REVISION, getMostRecentHistoryEntries(pipeline))
.apply("Group by contact", CoGroupByKey.create())
.apply(
"Wipe out PII",
MapElements.into(voids())
.via(
kv -> {
String repoId = kv.getKey();
long mostRecentRevision = kv.getValue().getOnly(MOST_RECENT_REVISION);
ImmutableList<Long> revisionsToWipe =
Streams.stream(kv.getValue().getAll(REVISIONS_TO_WIPE))
.filter(e -> e != mostRecentRevision)
.collect(toImmutableList());
if (revisionsToWipe.isEmpty()) {
return null;
}
contactsInScope.inc();
tm().transact(
() -> {
for (long revisionId : revisionsToWipe) {
historiesToWipe.inc();
ContactHistory history =
tm().loadByKey(
VKey.create(
ContactHistory.class,
new HistoryEntryId(repoId, revisionId)));
// In the unlikely case where multiple pipelines run at the
// same time, or where the runner decides to rerun a particular
// transform, we might have a history entry that has already been
// wiped at this point. There's no need to wipe it again.
if (!dryRun
&& history.getContactBase().isPresent()
&& history.getContactBase().get().getEmailAddress() != null) {
historiesWiped.inc();
tm().update(history.asBuilder().wipeOutPii().build());
}
}
});
return null;
}));
}
PCollection<KV<String, Long>> getHistoryEntriesToWipe(Pipeline pipeline) {
return pipeline.apply(
"Find contact histories to wipee",
// Email is one of the required fields in EPP, meaning it's initially not null when it
// is set by EPP flows (even though it is nullable in the SQL schema). Therefore,
// checking if it's null is one way to avoid processing contact history entities that
// have been processed previously. Refer to RFC 5733 for more information.
RegistryJpaIO.read(
"SELECT repoId, revisionId FROM ContactHistory WHERE email IS NOT NULL AND"
+ " modificationTime < :cutoffTime",
ImmutableMap.of("cutoffTime", cutoffTime),
Object[].class,
row -> KV.of((String) row[0], (long) row[1]))
.withCoder(KvCoder.of(StringUtf8Coder.of(), VarLongCoder.of())));
}
PCollection<KV<String, Long>> getMostRecentHistoryEntries(Pipeline pipeline) {
return pipeline.apply(
"Find the most recent historiy entry for each contact",
RegistryJpaIO.read(
"SELECT repoId, revisionId FROM ContactHistory"
+ " WHERE (repoId, modificationTime) IN"
+ " (SELECT repoId, MAX(modificationTime) FROM ContactHistory GROUP BY repoId)",
ImmutableMap.of(),
Object[].class,
row -> KV.of((String) row[0], (long) row[1]))
.withCoder(KvCoder.of(StringUtf8Coder.of(), VarLongCoder.of())));
}
PipelineResult run(Pipeline pipeline) {
setup(pipeline);
return pipeline.run();
}
public static void main(String[] args) {
PipelineOptionsFactory.register(WipeOutContactHistoryPiiPipelineOptions.class);
WipeOutContactHistoryPiiPipelineOptions options =
PipelineOptionsFactory.fromArgs(args)
.withValidation()
.as(WipeOutContactHistoryPiiPipelineOptions.class);
// Repeatable read should be more than enough since we are dealing with old history entries that
// are otherwise immutable.
options.setIsolationOverride(TransactionIsolationLevel.TRANSACTION_REPEATABLE_READ);
Pipeline pipeline = Pipeline.create(options);
new WipeOutContactHistoryPiiPipeline(options).run(pipeline);
}
}

View File

@@ -0,0 +1,37 @@
// Copyright 2023 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.beam.wipeout;
import google.registry.beam.common.RegistryPipelineOptions;
import org.apache.beam.sdk.options.Default;
import org.apache.beam.sdk.options.Description;
public interface WipeOutContactHistoryPiiPipelineOptions extends RegistryPipelineOptions {
@Description(
"A contact history entry with a history modification time before this time will have its PII"
+ " wiped, unless it is the most entry for the contact.")
String getCutoffTime();
void setCutoffTime(String value);
@Description(
"If true, the wiped out billing events will not be saved but the pipeline metrics counter"
+ " will still be updated.")
@Default.Boolean(false)
boolean getIsDryRun();
void setIsDryRun(boolean value);
}
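For illustration only (not part of this change), a small sketch of how these options are parsed and validated with Beam's PipelineOptionsFactory, matching the pattern in the pipeline's main method above; the example class and the cutoff value are hypothetical.

// Illustrative sketch only; the cutoff value is hypothetical.
import google.registry.beam.wipeout.WipeOutContactHistoryPiiPipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public final class WipeOutOptionsParsingExample {
  public static void main(String[] args) {
    PipelineOptionsFactory.register(WipeOutContactHistoryPiiPipelineOptions.class);
    WipeOutContactHistoryPiiPipelineOptions options =
        PipelineOptionsFactory.fromArgs(
                "--cutoffTime=2021-12-31T00:00:00.000Z", "--isDryRun=true")
            .withValidation()
            .as(WipeOutContactHistoryPiiPipelineOptions.class);
    // The pipeline parses cutoffTime with DateTime.parse and skips updates when isDryRun is true.
    System.out.printf("cutoff=%s dryRun=%b%n", options.getCutoffTime(), options.getIsDryRun());
  }

  private WipeOutOptionsParsingExample() {}
}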

View File

@@ -36,7 +36,6 @@ import com.google.api.services.bigquery.model.GetQueryResultsResponse;
import com.google.api.services.bigquery.model.Job;
import com.google.api.services.bigquery.model.JobConfiguration;
import com.google.api.services.bigquery.model.JobConfigurationExtract;
import com.google.api.services.bigquery.model.JobConfigurationLoad;
import com.google.api.services.bigquery.model.JobConfigurationQuery;
import com.google.api.services.bigquery.model.JobReference;
import com.google.api.services.bigquery.model.JobStatistics;
@@ -57,7 +56,6 @@ import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;
import google.registry.bigquery.BigqueryUtils.DestinationFormat;
import google.registry.bigquery.BigqueryUtils.SourceFormat;
import google.registry.bigquery.BigqueryUtils.TableType;
import google.registry.bigquery.BigqueryUtils.WriteDisposition;
import google.registry.util.NonFinalForTesting;
@@ -375,23 +373,6 @@ public class BigqueryConnection implements AutoCloseable {
}
}
/**
* Starts an asynchronous load job to populate the specified destination table with the given
* source URIs and source format. Returns a ListenableFuture that holds the same destination table
* object on success.
*/
public ListenableFuture<DestinationTable> startLoad(
DestinationTable dest, SourceFormat sourceFormat, Iterable<String> sourceUris) {
Job job = new Job()
.setConfiguration(new JobConfiguration()
.setLoad(new JobConfigurationLoad()
.setWriteDisposition(dest.getWriteDisposition().toString())
.setSourceFormat(sourceFormat.toString())
.setSourceUris(ImmutableList.copyOf(sourceUris))
.setDestinationTable(dest.getTableReference())));
return transform(runJobToCompletion(job, dest), this::updateTable, directExecutor());
}
/**
* Starts an asynchronous query job to populate the specified destination table with the results
* of the specified query, or if the table is a view, to update the view to reflect that query.

View File

@@ -20,7 +20,7 @@ import com.google.common.collect.ImmutableList;
import dagger.Module;
import dagger.Provides;
import dagger.multibindings.Multibinds;
import google.registry.config.CredentialModule.DefaultCredential;
import google.registry.config.CredentialModule.ApplicationDefaultCredential;
import google.registry.config.RegistryConfig.Config;
import google.registry.util.GoogleCredentialsBundle;
import java.util.Map;
@@ -34,7 +34,7 @@ public abstract class BigqueryModule {
@Provides
static Bigquery provideBigquery(
@DefaultCredential GoogleCredentialsBundle credentialsBundle,
@ApplicationDefaultCredential GoogleCredentialsBundle credentialsBundle,
@Config("projectId") String projectId) {
return new Bigquery.Builder(
credentialsBundle.getHttpTransport(),

View File

@@ -25,18 +25,6 @@ import org.joda.time.format.ISODateTimeFormat;
/** Utilities related to Bigquery. */
public class BigqueryUtils {
/** Bigquery modes for schema fields. */
public enum FieldMode {
NULLABLE,
REQUIRED,
REPEATED;
/** Return the name of the field mode as it should appear in the Bigquery schema. */
public String schemaName() {
return name();
}
}
/** Bigquery schema field types. */
public enum FieldType {
STRING,
@@ -44,19 +32,7 @@ public class BigqueryUtils {
FLOAT,
TIMESTAMP,
RECORD,
BOOLEAN;
/** Return the name of the field type as it should appear in the Bigquery schema. */
public String schemaName() {
return name();
}
}
/** Source formats for Bigquery load jobs. */
public enum SourceFormat {
CSV,
NEWLINE_DELIMITED_JSON,
DATASTORE_BACKUP
BOOLEAN
}
/** Destination formats for Bigquery extract jobs. */

View File

@@ -19,18 +19,15 @@ import com.google.cloud.tasks.v2.CloudTasksClient;
import com.google.cloud.tasks.v2.CloudTasksSettings;
import dagger.Module;
import dagger.Provides;
import google.registry.config.CredentialModule.DefaultCredential;
import google.registry.batch.CloudTasksUtils;
import google.registry.batch.CloudTasksUtils.GcpCloudTasksClient;
import google.registry.batch.CloudTasksUtils.SerializableCloudTasksClient;
import google.registry.config.CredentialModule.ApplicationDefaultCredential;
import google.registry.config.RegistryConfig.Config;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import google.registry.util.CloudTasksUtils.GcpCloudTasksClient;
import google.registry.util.CloudTasksUtils.SerializableCloudTasksClient;
import google.registry.util.GoogleCredentialsBundle;
import google.registry.util.Retrier;
import java.io.IOException;
import java.io.Serializable;
import java.util.function.Supplier;
import javax.inject.Singleton;
/**
* A {@link Module} that provides {@link CloudTasksUtils}.
@@ -41,21 +38,10 @@ import javax.inject.Singleton;
@Module
public abstract class CloudTasksUtilsModule {
@Singleton
@Provides
public static CloudTasksUtils provideCloudTasksUtils(
@Config("projectId") String projectId,
@Config("locationId") String locationId,
SerializableCloudTasksClient client,
Retrier retrier,
Clock clock) {
return new CloudTasksUtils(retrier, clock, projectId, locationId, client);
}
// Provides a supplier instead of using a Dagger @Provider because the latter is not serializable.
@Provides
public static Supplier<CloudTasksClient> provideCloudTasksClientSupplier(
@DefaultCredential GoogleCredentialsBundle credentials) {
@ApplicationDefaultCredential GoogleCredentialsBundle credentials) {
return (Supplier<CloudTasksClient> & Serializable)
() -> {
CloudTasksClient client;

View File

@@ -66,38 +66,6 @@ public abstract class CredentialModule {
return GoogleCredentialsBundle.create(credential);
}
/**
* Provides the default {@link GoogleCredentialsBundle} from the Google Cloud runtime.
*
* <p>The credential returned depends on the runtime environment:
*
* <ul>
* <li>On AppEngine, returns the service account credential for
* PROJECT_ID@appspot.gserviceaccount.com
* <li>On Compute Engine, returns the service account credential for
* PROJECT_NUMBER-compute@developer.gserviceaccount.com
* <li>On end user host, this returns the credential downloaded by gcloud. Please refer to <a
* href="https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login">Cloud
* SDK documentation</a> for details.
* </ul>
*/
@DefaultCredential
@Provides
@Singleton
public static GoogleCredentialsBundle provideDefaultCredential(
@Config("defaultCredentialOauthScopes") ImmutableList<String> requiredScopes) {
GoogleCredentials credential;
try {
credential = GoogleCredentials.getApplicationDefault();
} catch (IOException e) {
throw new RuntimeException(e);
}
if (credential.createScopedRequired()) {
credential = credential.createScoped(requiredScopes);
}
return GoogleCredentialsBundle.create(credential);
}
/**
* Provides a {@link GoogleCredentialsBundle} for accessing Google Workspace APIs, such as Drive
* and Sheets.
@@ -162,13 +130,6 @@ public abstract class CredentialModule {
@Retention(RetentionPolicy.RUNTIME)
public @interface ApplicationDefaultCredential {}
/** Dagger qualifier for the Application Default Credential. */
@Qualifier
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Deprecated // Switching to @ApplicationDefaultCredential
public @interface DefaultCredential {}
/** Dagger qualifier for the credential for Google Workspace APIs. */
@Qualifier
@Documented

View File

@@ -32,6 +32,8 @@ import com.google.common.collect.ImmutableSet;
import com.google.common.collect.ImmutableSortedMap;
import dagger.Module;
import dagger.Provides;
import google.registry.dns.ReadDnsRefreshRequestsAction;
import google.registry.model.common.DnsRefreshRequest;
import google.registry.persistence.transaction.JpaTransactionManager;
import google.registry.util.YamlUtils;
import java.lang.annotation.Documented;
@@ -60,8 +62,8 @@ import org.joda.time.Duration;
*
* <p>This class does not represent the total configuration of the Nomulus service. It's <b>only
* meant for settings that need to be configured <i>once</i></b>. Settings which may be subject to
* change in the future, should instead be retrieved from Datastore. The {@link
* google.registry.model.tld.Registry Registry} class is one such example of this.
* change in the future, should instead be retrieved from the database. The {@link
* google.registry.model.tld.Tld Tld} class is one such example of this.
*
* <p>Note: Only settings that are actually configurable belong in this file. It's not a catch-all
* for anything widely used throughout the code base.
@@ -118,6 +120,18 @@ public final class RegistryConfig {
return config.gcpProject.locationId;
}
@Provides
@Config("serviceAccountEmails")
public static ImmutableList<String> provideServiceAccountEmails(RegistryConfigSettings config) {
return ImmutableList.copyOf(config.gcpProject.serviceAccountEmails);
}
@Provides
@Config("defaultServiceAccount")
public static Optional<String> provideDefaultServiceAccount(RegistryConfigSettings config) {
return Optional.ofNullable(config.gcpProject.defaultServiceAccount);
}
/**
* The filename of the logo to be displayed in the header of the registrar console.
*
@@ -288,9 +302,9 @@ public final class RegistryConfig {
/**
* The maximum number of domain and host updates to batch together to send to
* PublishDnsUpdatesAction, to avoid exceeding AppEngine's limits.
* PublishDnsUpdatesAction, to avoid exceeding HTTP request timeout limits.
*
* @see google.registry.dns.ReadDnsQueueAction
* @see google.registry.dns.ReadDnsRefreshRequestsAction
*/
@Provides
@Config("dnsTldUpdateBatchSize")
@@ -321,23 +335,30 @@ public final class RegistryConfig {
}
/**
* The requested maximum duration for ReadDnsQueueAction.
* The requested maximum duration for {@link ReadDnsRefreshRequestsAction}.
*
* <p>ReadDnsQueueAction reads update tasks from the dns-pull queue. It will continue reading
* tasks until either the queue is empty, or this duration has passed.
* <p>{@link ReadDnsRefreshRequestsAction} reads refresh requests from {@link DnsRefreshRequest}.
* It will continue reading requests until either no matching requests remain, or this duration
* has passed.
*
* <p>This time is the maximum duration between the first and last attempt to lease tasks from
* the dns-pull queue. The actual running time might be slightly longer, as we process the
* tasks.
* <p>This time is the maximum duration between the first and last attempt to read requests from
* {@link DnsRefreshRequest}. The actual running time might be slightly longer, as we process
* the requests.
*
* <p>This value should be less than the cron-job repeat rate for ReadDnsQueueAction, to make
* sure we don't have multiple ReadDnsActions leasing tasks simultaneously.
* <p>The requests that are read will not be read again by any action until after this period
* has passed, so concurrent runs (or runs that are very close to each other) of {@link
* ReadDnsRefreshRequestsAction} will not keep reading the same requests with the earliest
* request time.
*
* @see google.registry.dns.ReadDnsQueueAction
* <p>Still, this value should ideally be less than the cloud scheduler job repeat rate for
* {@link ReadDnsRefreshRequestsAction}, to not waste resources on multiple actions running at
* the same time.
*
* @see google.registry.dns.ReadDnsRefreshRequestsAction
*/
@Provides
@Config("readDnsQueueActionRuntime")
public static Duration provideReadDnsQueueRuntime() {
@Config("readDnsRefreshRequestsActionRuntime")
public static Duration provideReadDnsRefreshRequestsRuntime() {
return Duration.standardSeconds(45);
}
@@ -575,9 +596,9 @@ public final class RegistryConfig {
/**
* Returns the default job region to run Apache Beam (Cloud Dataflow) jobs in.
*
* @see google.registry.beam.invoicing.InvoicingPipeline
* @see google.registry.beam.billing.InvoicingPipeline
* @see google.registry.beam.spec11.Spec11Pipeline
* @see google.registry.beam.invoicing.InvoicingPipeline
* @see google.registry.beam.billing.InvoicingPipeline
*/
@Provides
@Config("defaultJobRegion")
@@ -655,7 +676,7 @@ public final class RegistryConfig {
/**
* Returns the URL of the GCS bucket we store invoices and detail reports in.
*
* @see google.registry.beam.invoicing.InvoicingPipeline
* @see google.registry.beam.billing.InvoicingPipeline
*/
@Provides
@Config("billingBucketUrl")
@@ -691,7 +712,7 @@ public final class RegistryConfig {
/**
* Returns the file prefix for the invoice CSV file.
*
* @see google.registry.beam.invoicing.InvoicingPipeline
* @see google.registry.beam.billing.InvoicingPipeline
* @see google.registry.reporting.billing.BillingEmailUtils
*/
@Provides
@@ -924,7 +945,7 @@ public final class RegistryConfig {
* <p>Note that this uses {@code @Named} instead of {@code @Config} so that it can be used from
* the low-level util package, which cannot have a dependency on the config package.
*
* @see google.registry.util.CloudTasksUtils
* @see google.registry.batch.CloudTasksUtils
*/
@Provides
@Named("transientFailureRetries")
@@ -1153,6 +1174,12 @@ public final class RegistryConfig {
return ImmutableSet.copyOf(config.oAuth.allowedOauthClientIds);
}
@Provides
@Config("iapClientId")
public static Optional<String> provideIapClientId(RegistryConfigSettings config) {
return Optional.ofNullable(config.oAuth.iapClientId);
}
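// A minimal sketch (hypothetical consumer, not part of this change) of how the optional
// settings provided in this file could be injected via Dagger:
//
//   @Inject
//   SomeTaskHelper(
//       @Config("defaultServiceAccount") Optional<String> defaultServiceAccount,
//       @Config("iapClientId") Optional<String> iapClientId) { ... }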
/**
* Provides the OAuth scopes required for accessing Google APIs using the default credential.
*/
@@ -1304,12 +1331,6 @@ public final class RegistryConfig {
return config.contactHistory.minMonthsBeforeWipeOut;
}
@Provides
@Config("wipeOutQueryBatchSize")
public static int provideWipeOutQueryBatchSize(RegistryConfigSettings config) {
return config.contactHistory.wipeOutQueryBatchSize;
}
@Provides
@Config("jdbcBatchSize")
public static int provideHibernateJdbcBatchSize(RegistryConfigSettings config) {

View File

@@ -54,6 +54,8 @@ public class RegistryConfigSettings {
public String backendServiceUrl;
public String toolsServiceUrl;
public String pubapiServiceUrl;
public List<String> serviceAccountEmails;
public String defaultServiceAccount;
}
/** Configuration options for OAuth settings for authenticating users. */
@@ -61,6 +63,7 @@ public class RegistryConfigSettings {
public List<String> availableOauthScopes;
public List<String> requiredOauthScopes;
public List<String> allowedOauthClientIds;
public String iapClientId;
}
/** Configuration options for accessing Google APIs. */
@@ -241,7 +244,6 @@ public class RegistryConfigSettings {
/** Configuration for contact history. */
public static class ContactHistory {
public int minMonthsBeforeWipeOut;
public int wipeOutQueryBatchSize;
}
/** Configuration for dns update. */

View File

@@ -22,6 +22,14 @@ gcpProject:
backendServiceUrl: https://localhost
toolsServiceUrl: https://localhost
pubapiServiceUrl: https://localhost
# Service accounts eligible for authorization (e.g. default service account,
# account used by Cloud Scheduler) to send authenticated requests.
serviceAccountEmails:
- default-service-account-email@email.com
- cloud-scheduler-email@email.com
# The default service account with which the service is running. For example,
# on GAE this would be {project-id}@appspot.gserviceaccount.com
defaultServiceAccount: null
gSuite:
# Publicly accessible domain name of the running G Suite instance.
@@ -306,6 +314,9 @@ oAuth:
# in this list. Client IDs are typically of the format
# numbers-alphanumerics.apps.googleusercontent.com
allowedOauthClientIds: []
# GCP Identity-Aware Proxy client ID, if set up (note: this requires manual setup
# of User objects in the database for Nomulus tool users)
iapClientId: null
credentialOAuth:
# OAuth scopes required for accessing Google APIs using the default
@@ -424,7 +435,7 @@ misc:
beam:
# The default region to run Apache Beam (Cloud Dataflow) jobs in.
defaultJobRegion: us-east1
defaultJobRegion: us-central1
# The GCE machine type to use when a job is CPU-intensive (e. g. RDE). Be sure
# to check the VM CPU quota for the job region. In a massively parallel
# pipeline this quota can be easily reached and needs to be raised, otherwise
@@ -466,8 +477,6 @@ registryTool:
contactHistory:
# The number of months that a ContactHistory entity should be stored in the database.
minMonthsBeforeWipeOut: 18
# The batch size for querying ContactHistory table in the database.
wipeOutQueryBatchSize: 500
# Configuration options relevant to the DNS update functionality.
dnsUpdate:
@@ -535,52 +544,53 @@ sslCertificateValidation:
# Configuration options for the package compliance monitoring
packageMonitoring:
# Email subject text to notify partners their package has exceeded the limit for domain creates
packageCreateLimitEmailSubject: "NOTICE: Your package is being upgraded"
# Email body text template notify partners their package has exceeded the limit for domain creates
# Email subject text to notify tech support that a package has exceeded the limit for domain creates
packageCreateLimitEmailSubject: "ACTION REQUIRED: Package needs to be upgraded"
# Email body text template to notify support that a package has exceeded the limit for domain creates
packageCreateLimitEmailBody: >
Dear %1$s,
Dear Support,
We are contacting you to inform you that your package with the package token
%2$s has exceeded its limit for annual domain creations.
Your package will now be upgraded to the next tier.
A package has exceeded its max create limit and needs to be upgraded to the
next tier.
If you have any questions or require additional support, please contact us
at %3$s.
Regards,
Example Registry
Package ID: %1$s
Package Token: %2$s
Registrar: %3$s
Current Max Create Limit: %4$s
Creates Completed: %5$s
# Email subject text to notify partners their package has exceeded the limit for current active domains and warn them their package will be upgraded in 30 days
packageDomainLimitWarningEmailSubject: "WARNING: Your package has exceeded the domain limit"
# Email body text template to warn partners their package has exceeded the limit for active domains and will be upgraded in 30 days
# Email subject text to notify support that a package has exceeded the limit
# for current active domains and a warning needs to be sent
packageDomainLimitWarningEmailSubject: "ACTION REQUIRED: Package has exceeded the domain limit - send warning"
# Email body text template to inform support that a package has exceeded the
# limit for active domains and a warning needs to be sent that the package
# will be upgraded in 30 days
packageDomainLimitWarningEmailBody: >
Dear %1$s,
We are contacting you to inform you that your package with the package token
%2$s has exceeded its limit for active domains.
Your package will be upgraded to the next tier in 30 days if the number of active domains does not return below the limit.
If you have any questions or require additional support, please contact us
at %3$s.
Regards,
Example Registry
# Email subject text to notify partners their package has exceeded the limit
# for current active domains for more than 30 days and will be upgraded
packageDomainLimitUpgradeEmailSubject: "NOTICE: Your package is being upgraded"
# Email body text template to warn partners their package has exceeded the
# limit for active domains for more than 30 days and will be upgraded
packageDomainLimitUpgradeEmailBody: >
Dear %1$s,
Dear Support,
We are contacting you to inform you that your package with the package token
%2$s has exceeded its limit for active domains.
Your package will now be upgraded to the next tier.
A package has exceeded its max domain limit. Please send a warning to the
registrar that their package will be upgraded to the next tier in 30 days if
the number of active domains does not return below the limit.
Package ID: %1$s
Package Token: %2$s
Registrar: %3$s
Active Domain Limit: %4$s
Current Active Domains: %5$s
If you have any questions or require additional support, please contact us
at %3$s.
Regards,
Example Registry
# Email subject text to notify support that a package has exceeded the limit
# for current active domains for more than 30 days and needs to be upgraded
packageDomainLimitUpgradeEmailSubject: "ACTION REQUIRED: Package has exceeded the domain limit - upgrade package"
# Email body text template to inform support that a package has exceeded the
# limit for active domains for more than 30 days and needs to be upgraded
packageDomainLimitUpgradeEmailBody: >
Dear Support,
A package has exceeded its max domain limit for over 30 days and needs to be
upgraded to the next tier.
Package ID: %1$s
Package Token: %2$s
Registrar: %3$s
Active Domain Limit: %4$s
Current Active Domains: %5$s

View File

@@ -28,8 +28,8 @@ import static google.registry.cron.CronModule.JITTER_SECONDS_PARAM;
import static google.registry.cron.CronModule.QUEUE_PARAM;
import static google.registry.cron.CronModule.RUN_IN_EMPTY_PARAM;
import static google.registry.model.tld.Registries.getTldsOfType;
import static google.registry.model.tld.Registry.TldType.REAL;
import static google.registry.model.tld.Registry.TldType.TEST;
import static google.registry.model.tld.Tld.TldType.REAL;
import static google.registry.model.tld.Tld.TldType.TEST;
import com.google.cloud.tasks.v2.Task;
import com.google.common.collect.ArrayListMultimap;
@@ -38,6 +38,7 @@ import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Multimap;
import com.google.common.collect.Streams;
import com.google.common.flogger.FluentLogger;
import google.registry.batch.CloudTasksUtils;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
@@ -45,7 +46,6 @@ import google.registry.request.ParameterMap;
import google.registry.request.RequestParameters;
import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.util.CloudTasksUtils;
import java.util.Optional;
import java.util.stream.Stream;
import javax.inject.Inject;
@@ -140,13 +140,25 @@ public final class TldFanoutAction implements Runnable {
for (String tld : tlds) {
Task task = createTask(tld, flowThruParams);
Task createdTask = cloudTasksUtils.enqueue(queue, task);
outputPayload.append(
String.format(
"- Task: '%s', tld: '%s', endpoint: '%s'\n",
createdTask.getName(), tld, createdTask.getAppEngineHttpRequest().getRelativeUri()));
logger.atInfo().log(
"Task: '%s', tld: '%s', endpoint: '%s'.",
createdTask.getName(), tld, createdTask.getAppEngineHttpRequest().getRelativeUri());
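// The created task carries either an App Engine HTTP request or a standard HTTP request,
// depending on how CloudTasksUtils built it; log the corresponding endpoint in either case.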
if (createdTask.hasAppEngineHttpRequest()) {
outputPayload.append(
String.format(
"- Task: '%s', tld: '%s', endpoint: '%s'\n",
createdTask.getName(),
tld,
createdTask.getAppEngineHttpRequest().getRelativeUri()));
logger.atInfo().log(
"Task: '%s', tld: '%s', endpoint: '%s'.",
createdTask.getName(), tld, createdTask.getAppEngineHttpRequest().getRelativeUri());
} else {
outputPayload.append(
String.format(
"- Task: '%s', tld: '%s', endpoint: '%s'\n",
createdTask.getName(), tld, createdTask.getHttpRequest().getUrl()));
logger.atInfo().log(
"Task: '%s', tld: '%s', endpoint: '%s'.",
createdTask.getName(), tld, createdTask.getHttpRequest().getUrl());
}
}
response.setContentType(PLAIN_TEXT_UTF_8);
response.setPayload(outputPayload.toString());
@@ -158,6 +170,6 @@ public final class TldFanoutAction implements Runnable {
params.put(RequestParameters.PARAM_TLD, tld);
}
return cloudTasksUtils.createPostTaskWithJitter(
endpoint, Service.BACKEND.toString(), params, jitterSeconds);
endpoint, Service.BACKEND, params, jitterSeconds);
}
}

View File

@@ -1,38 +0,0 @@
// Copyright 2017 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.dns;
/** Static class for DNS-related constants. */
public class DnsConstants {
private DnsConstants() {}
/** The name of the DNS pull queue. */
public static final String DNS_PULL_QUEUE_NAME = "dns-pull"; // See queue.xml.
/** The name of the DNS publish push queue. */
public static final String DNS_PUBLISH_PUSH_QUEUE_NAME = "dns-publish"; // See queue.xml.
/** The parameter to use for storing the target type ("domain" or "host" or "zone"). */
public static final String DNS_TARGET_TYPE_PARAM = "Target-Type";
/** The parameter to use for storing the target name (domain or host name) with the task. */
public static final String DNS_TARGET_NAME_PARAM = "Target-Name";
/** The parameter to use for storing the creation time with the task. */
public static final String DNS_TARGET_CREATE_TIME_PARAM = "Create-Time";
/** The possible values of the {@code DNS_TARGET_TYPE_PARAM} parameter. */
public enum TargetType { DOMAIN, HOST, ZONE }
}

View File

@@ -14,27 +14,25 @@
package google.registry.dns;
import static google.registry.dns.DnsConstants.DNS_PUBLISH_PUSH_QUEUE_NAME;
import static google.registry.dns.DnsConstants.DNS_PULL_QUEUE_NAME;
import static google.registry.dns.RefreshDnsOnHostRenameAction.PARAM_HOST_KEY;
import static google.registry.request.RequestParameters.extractEnumParameter;
import static google.registry.request.RequestParameters.extractIntParameter;
import static google.registry.request.RequestParameters.extractOptionalIntParameter;
import static google.registry.request.RequestParameters.extractOptionalParameter;
import static google.registry.request.RequestParameters.extractRequiredParameter;
import static google.registry.request.RequestParameters.extractSetOfParameters;
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.common.hash.HashFunction;
import com.google.common.hash.Hashing;
import dagger.Binds;
import dagger.Module;
import dagger.Provides;
import google.registry.dns.DnsConstants.TargetType;
import google.registry.dns.DnsUtils.TargetType;
import google.registry.dns.writer.DnsWriterZone;
import google.registry.request.Parameter;
import google.registry.request.RequestParameters;
import java.util.Optional;
import java.util.Set;
import javax.inject.Named;
import javax.servlet.http.HttpServletRequest;
import org.joda.time.DateTime;
@@ -48,7 +46,10 @@ public abstract class DnsModule {
public static final String PARAM_DOMAINS = "domains";
public static final String PARAM_HOSTS = "hosts";
public static final String PARAM_PUBLISH_TASK_ENQUEUED = "enqueued";
public static final String PARAM_REFRESH_REQUEST_CREATED = "itemsCreated";
public static final String PARAM_REFRESH_REQUEST_TIME = "requestTime";
// This parameter cannot be named "jitterSeconds", because a parameter with that name would be
// consumed by TldFanoutAction itself and not passed down to the fanned-out actions.
public static final String PARAM_DNS_JITTER_SECONDS = "dnsJitterSeconds";
@Binds
@DnsWriterZone
@@ -65,28 +66,19 @@ public abstract class DnsModule {
return Hashing.murmur3_32_fixed();
}
@Provides
@Named(DNS_PULL_QUEUE_NAME)
static Queue provideDnsPullQueue() {
return QueueFactory.getQueue(DNS_PULL_QUEUE_NAME);
}
@Provides
@Named(DNS_PUBLISH_PUSH_QUEUE_NAME)
static Queue provideDnsUpdatePushQueue() {
return QueueFactory.getQueue(DNS_PUBLISH_PUSH_QUEUE_NAME);
}
@Provides
@Parameter(PARAM_PUBLISH_TASK_ENQUEUED)
static DateTime provideCreateTime(HttpServletRequest req) {
return DateTime.parse(extractRequiredParameter(req, PARAM_PUBLISH_TASK_ENQUEUED));
}
// TODO: Retire the old header after DNS pull queue migration.
@Provides
@Parameter(PARAM_REFRESH_REQUEST_CREATED)
@Parameter(PARAM_REFRESH_REQUEST_TIME)
static DateTime provideItemsCreateTime(HttpServletRequest req) {
return DateTime.parse(extractRequiredParameter(req, PARAM_REFRESH_REQUEST_CREATED));
return DateTime.parse(
    extractOptionalParameter(req, "itemsCreated")
        .orElseGet(() -> extractRequiredParameter(req, PARAM_REFRESH_REQUEST_TIME)));
}
@Provides
@@ -125,6 +117,12 @@ public abstract class DnsModule {
return extractRequiredParameter(req, PARAM_HOST_KEY);
}
@Provides
@Parameter(PARAM_DNS_JITTER_SECONDS)
static Optional<Integer> provideJitterSeconds(HttpServletRequest req) {
return extractOptionalIntParameter(req, PARAM_DNS_JITTER_SECONDS);
}
@Provides
@Parameter("domainOrHostName")
static String provideName(HttpServletRequest req) {

View File

@@ -1,170 +0,0 @@
// Copyright 2017 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.dns;
import static com.google.appengine.api.taskqueue.QueueFactory.getQueue;
import static com.google.common.base.Preconditions.checkArgument;
import static google.registry.dns.DnsConstants.DNS_PULL_QUEUE_NAME;
import static google.registry.dns.DnsConstants.DNS_TARGET_CREATE_TIME_PARAM;
import static google.registry.dns.DnsConstants.DNS_TARGET_NAME_PARAM;
import static google.registry.dns.DnsConstants.DNS_TARGET_TYPE_PARAM;
import static google.registry.model.tld.Registries.assertTldExists;
import static google.registry.request.RequestParameters.PARAM_TLD;
import static google.registry.util.DomainNameUtils.getTldFromDomainName;
import static java.util.concurrent.TimeUnit.MILLISECONDS;
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueConstants;
import com.google.appengine.api.taskqueue.TaskHandle;
import com.google.appengine.api.taskqueue.TaskOptions;
import com.google.appengine.api.taskqueue.TaskOptions.Method;
import com.google.appengine.api.taskqueue.TransientFailureException;
import com.google.apphosting.api.DeadlineExceededException;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableList;
import com.google.common.flogger.FluentLogger;
import com.google.common.net.InternetDomainName;
import com.google.common.util.concurrent.RateLimiter;
import google.registry.dns.DnsConstants.TargetType;
import google.registry.model.tld.Registries;
import google.registry.util.Clock;
import google.registry.util.NonFinalForTesting;
import java.util.List;
import java.util.Optional;
import java.util.logging.Level;
import javax.inject.Inject;
import javax.inject.Named;
import org.joda.time.Duration;
/**
* Methods for manipulating the queue used for DNS write tasks.
*
* <p>This includes a {@link RateLimiter} to limit the {@link Queue#leaseTasks} call rate to 9 QPS,
* to stay under the 10 QPS limit for this function.
*
* <p>Note that overlapping calls to {@link ReadDnsQueueAction} (the only place where
* {@link DnsQueue#leaseTasks} is used) will have different rate limiters, so they could exceed the
* allowed rate. This should be rare though - because {@link DnsQueue#leaseTasks} is only used in
* {@link ReadDnsQueueAction}, which is run as a cron job with running time shorter than the cron
* repeat time - meaning there should never be two instances running at once.
*
* @see google.registry.config.RegistryConfig.ConfigModule#provideReadDnsQueueRuntime
*/
public class DnsQueue {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private final Queue queue;
final Clock clock;
// Queue.leaseTasks is limited to 10 requests per second as per
// https://cloud.google.com/appengine/docs/standard/java/javadoc/com/google/appengine/api/taskqueue/Queue.html
// "If you generate more than 10 LeaseTasks requests per second, only the first 10 requests will
// return results. The others will return no results."
private static final RateLimiter rateLimiter = RateLimiter.create(9);
@Inject
public DnsQueue(@Named(DNS_PULL_QUEUE_NAME) Queue queue, Clock clock) {
this.queue = queue;
this.clock = clock;
}
@VisibleForTesting
public static DnsQueue createForTesting(Clock clock) {
return new DnsQueue(getQueue(DNS_PULL_QUEUE_NAME), clock);
}
@NonFinalForTesting
@VisibleForTesting
long leaseTasksBatchSize = QueueConstants.maxLeaseCount();
/** Enqueues the given task type with the given target name to the DNS queue. */
private TaskHandle addToQueue(
TargetType targetType, String targetName, String tld, Duration countdown) {
logger.atInfo().log(
"Adding task type=%s, target=%s, tld=%s to pull queue %s (%d tasks currently on queue).",
targetType, targetName, tld, DNS_PULL_QUEUE_NAME, queue.fetchStatistics().getNumTasks());
return queue.add(
TaskOptions.Builder.withDefaults()
.method(Method.PULL)
.countdownMillis(countdown.getMillis())
.param(DNS_TARGET_TYPE_PARAM, targetType.toString())
.param(DNS_TARGET_NAME_PARAM, targetName)
.param(DNS_TARGET_CREATE_TIME_PARAM, clock.nowUtc().toString())
.param(PARAM_TLD, tld));
}
/** Adds a task to the queue to refresh the DNS information for the specified subordinate host. */
public TaskHandle addHostRefreshTask(String hostName) {
Optional<InternetDomainName> tld = Registries.findTldForName(InternetDomainName.from(hostName));
checkArgument(
tld.isPresent(), String.format("%s is not a subordinate host to a known tld", hostName));
return addToQueue(TargetType.HOST, hostName, tld.get().toString(), Duration.ZERO);
}
/** Enqueues a task to refresh DNS for the specified domain now. */
public TaskHandle addDomainRefreshTask(String domainName) {
return addDomainRefreshTask(domainName, Duration.ZERO);
}
/** Enqueues a task to refresh DNS for the specified domain at some point in the future. */
public TaskHandle addDomainRefreshTask(String domainName, Duration countdown) {
return addToQueue(
TargetType.DOMAIN,
domainName,
assertTldExists(getTldFromDomainName(domainName)),
countdown);
}
/** Adds a task to the queue to refresh the DNS information for the specified zone. */
public TaskHandle addZoneRefreshTask(String zoneName) {
return addToQueue(TargetType.ZONE, zoneName, zoneName, Duration.ZERO);
}
/**
* Returns the maximum number of tasks that can be leased with {@link #leaseTasks}.
*
* <p>If this many tasks are returned, then there might be more tasks still waiting in the queue.
*
* <p>If less than this number of tasks are returned, then there are no more items in the queue.
*/
public long getLeaseTasksBatchSize() {
return leaseTasksBatchSize;
}
/** Returns handles for a batch of tasks, leased for the specified duration. */
public List<TaskHandle> leaseTasks(Duration leaseDuration) {
try {
rateLimiter.acquire();
int numTasks = queue.fetchStatistics().getNumTasks();
logger.at((numTasks >= leaseTasksBatchSize) ? Level.WARNING : Level.INFO).log(
"There are %d tasks in the DNS queue '%s'.", numTasks, DNS_PULL_QUEUE_NAME);
return queue.leaseTasks(leaseDuration.getMillis(), MILLISECONDS, leaseTasksBatchSize);
} catch (TransientFailureException | DeadlineExceededException e) {
logger.atSevere().withCause(e).log("Failed leasing tasks too fast.");
return ImmutableList.of();
}
}
/** Delete a list of tasks, removing them from the queue permanently. */
public void deleteTasks(List<TaskHandle> tasks) {
try {
queue.deleteTask(tasks);
} catch (TransientFailureException | DeadlineExceededException e) {
logger.atSevere().withCause(e).log("Failed deleting tasks too fast.");
}
}
}

View File

@@ -0,0 +1,133 @@
// Copyright 2023 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.dns;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
import com.google.common.collect.ImmutableList;
import com.google.common.net.InternetDomainName;
import google.registry.model.common.DnsRefreshRequest;
import google.registry.model.tld.Registries;
import google.registry.model.tld.Tld;
import java.util.Collection;
import java.util.Optional;
import org.joda.time.DateTime;
import org.joda.time.Duration;
/** Utility class to handle DNS refresh requests. */
public final class DnsUtils {
/** The name of the DNS publish push queue. */
public static final String DNS_PUBLISH_PUSH_QUEUE_NAME = "dns-publish"; // See queue.xml.
private DnsUtils() {}
private static void requestDnsRefresh(String name, TargetType type, Duration delay) {
// Throws an IllegalArgumentException if the name is not under a managed TLD -- we only update
// DNS for names that are under our management.
String tld = Registries.findTldForNameOrThrow(InternetDomainName.from(name)).toString();
tm().transact(
() ->
tm().insert(
new DnsRefreshRequest(
type, name, tld, tm().getTransactionTime().plus(delay))));
}
public static void requestDomainDnsRefresh(String domainName, Duration delay) {
requestDnsRefresh(domainName, TargetType.DOMAIN, delay);
}
public static void requestDomainDnsRefresh(String domainName) {
requestDomainDnsRefresh(domainName, Duration.ZERO);
}
public static void requestHostDnsRefresh(String hostName) {
requestDnsRefresh(hostName, TargetType.HOST, Duration.ZERO);
}
/**
* Returns up to {@code batchSize} pending DNS refresh requests that need further processing, in
* ascending order of their request time, and updates their process time to now.
*
* <p>The criteria to pick the requests to include are:
*
* <ul>
* <li>They are for the given TLD.
* <li>Their request time is not in the future.
* <li>They have not been processed within the cooldown period.
* </ul>
*/
public static ImmutableList<DnsRefreshRequest> readAndUpdateRequestsWithLatestProcessTime(
String tld, Duration cooldown, int batchSize) {
return tm().transact(
() -> {
DateTime transactionTime = tm().getTransactionTime();
ImmutableList<DnsRefreshRequest> requests =
tm().query(
"FROM DnsRefreshRequest WHERE tld = :tld "
+ "AND requestTime <= :now AND lastProcessTime < :cutoffTime "
+ "ORDER BY requestTime ASC, id ASC",
DnsRefreshRequest.class)
.setParameter("tld", tld)
.setParameter("now", transactionTime)
.setParameter("cutoffTime", transactionTime.minus(cooldown))
.setMaxResults(batchSize)
.getResultStream()
// Note that the process time is when the request was last read, batched and
// queued up for publishing, not when it is actually published by the DNS
// writer. This timestamp acts as a cooldown so the same request will not be
// retried too frequently. See DnsRefreshRequest.getLastProcessTime for a
// detailed explanation.
.map(e -> e.updateProcessTime(transactionTime))
.collect(toImmutableList());
tm().updateAll(requests);
return requests;
});
}
/**
* Removes the requests that have been processed.
*
* <p>Note that if a request entity has already been deleted, the method still succeeds without
* error because all we care about is that it no longer exists after the method runs.
*/
public static void deleteRequests(Collection<DnsRefreshRequest> requests) {
tm().transact(
() ->
tm().delete(
requests.stream()
.map(DnsRefreshRequest::createVKey)
.collect(toImmutableList())));
}
public static long getDnsAPlusAAAATtlForHost(String host, Duration dnsDefaultATtl) {
Optional<InternetDomainName> tldName = Registries.findTldForName(InternetDomainName.from(host));
Duration dnsAPlusAaaaTtl = dnsDefaultATtl;
if (tldName.isPresent()) {
Tld tld = Tld.get(tldName.get().toString());
if (tld.getDnsAPlusAaaaTtl().isPresent()) {
dnsAPlusAaaaTtl = tld.getDnsAPlusAaaaTtl().get();
}
}
return dnsAPlusAaaaTtl.getStandardSeconds();
}
/** The possible values of the {@code DNS_TARGET_TYPE_PARAM} parameter. */
public enum TargetType {
DOMAIN,
HOST
}
}
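As a rough usage sketch of the request-based flow above (the domain names, TLD, cooldown, and batch size are illustrative assumptions, not values taken from this change):

// Hypothetical callers of DnsUtils.
DnsUtils.requestDomainDnsRefresh("foo.example"); // refresh as soon as possible
DnsUtils.requestDomainDnsRefresh("bar.example", Duration.standardMinutes(5)); // delayed refresh
ImmutableList<DnsRefreshRequest> batch =
    DnsUtils.readAndUpdateRequestsWithLatestProcessTime(
        "example", Duration.standardHours(1), 100);
// Once the batch has been published, remove the fulfilled requests.
DnsUtils.deleteRequests(batch);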

View File

@@ -19,7 +19,7 @@ import static com.google.common.base.Preconditions.checkState;
import com.google.common.collect.ImmutableMap;
import com.google.common.flogger.FluentLogger;
import google.registry.dns.writer.DnsWriter;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import java.util.Map;
import javax.inject.Inject;
@@ -41,7 +41,7 @@ public final class DnsWriterProxy {
* If the DnsWriter doesn't belong to this TLD, will return null.
*/
public DnsWriter getByClassNameForTld(String className, String tld) {
if (!Registry.get(tld).getDnsWriters().contains(className)) {
if (!Tld.get(tld).getDnsWriters().contains(className)) {
logger.atWarning().log(
"Loaded potentially stale DNS writer %s which is not active on TLD %s.", className, tld);
return null;

View File

@@ -15,14 +15,16 @@
package google.registry.dns;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static google.registry.dns.DnsConstants.DNS_PUBLISH_PUSH_QUEUE_NAME;
import static google.registry.dns.DnsModule.PARAM_DNS_WRITER;
import static google.registry.dns.DnsModule.PARAM_DOMAINS;
import static google.registry.dns.DnsModule.PARAM_HOSTS;
import static google.registry.dns.DnsModule.PARAM_LOCK_INDEX;
import static google.registry.dns.DnsModule.PARAM_NUM_PUBLISH_LOCKS;
import static google.registry.dns.DnsModule.PARAM_PUBLISH_TASK_ENQUEUED;
import static google.registry.dns.DnsModule.PARAM_REFRESH_REQUEST_CREATED;
import static google.registry.dns.DnsModule.PARAM_REFRESH_REQUEST_TIME;
import static google.registry.dns.DnsUtils.DNS_PUBLISH_PUSH_QUEUE_NAME;
import static google.registry.dns.DnsUtils.requestDomainDnsRefresh;
import static google.registry.dns.DnsUtils.requestHostDnsRefresh;
import static google.registry.model.EppResourceUtils.loadByForeignKey;
import static google.registry.request.Action.Method.POST;
import static google.registry.request.RequestParameters.PARAM_TLD;
@@ -35,6 +37,7 @@ import com.google.common.collect.ImmutableMultimap;
import com.google.common.flogger.FluentLogger;
import com.google.common.net.InternetDomainName;
import dagger.Lazy;
import google.registry.batch.CloudTasksUtils;
import google.registry.config.RegistryConfig.Config;
import google.registry.dns.DnsMetrics.ActionStatus;
import google.registry.dns.DnsMetrics.CommitStatus;
@@ -44,7 +47,7 @@ import google.registry.model.domain.Domain;
import google.registry.model.host.Host;
import google.registry.model.registrar.Registrar;
import google.registry.model.registrar.RegistrarPoc;
import google.registry.model.tld.Registry;
import google.registry.model.tld.Tld;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Header;
@@ -54,7 +57,6 @@ import google.registry.request.Response;
import google.registry.request.auth.Auth;
import google.registry.request.lock.LockHandler;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import google.registry.util.DomainNameUtils;
import google.registry.util.EmailMessage;
import google.registry.util.SendEmailService;
@@ -85,7 +87,6 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private final DnsQueue dnsQueue;
private final DnsWriterProxy dnsWriterProxy;
private final DnsMetrics dnsMetrics;
private final Duration timeout;
@@ -94,9 +95,9 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
/**
* The DNS writer to use for this batch.
*
* <p>This comes from the fanout in {@link ReadDnsQueueAction} which dispatches each batch to be
* published by each DNS writer on the TLD. So this field contains the value of one of the DNS
* writers configured in {@link Registry#getDnsWriters()}, as of the time the batch was written
* <p>This comes from the fanout in {@link ReadDnsRefreshRequestsAction} which dispatches each
* batch to be published by each DNS writer on the TLD. So this field contains the value of one of
* the DNS writers configured in {@link Tld#getDnsWriters()}, as of the time the batch was written
* out (and not necessarily currently).
*/
private final String dnsWriter;
@@ -124,7 +125,7 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
public PublishDnsUpdatesAction(
@Parameter(PARAM_DNS_WRITER) String dnsWriter,
@Parameter(PARAM_PUBLISH_TASK_ENQUEUED) DateTime enqueuedTime,
@Parameter(PARAM_REFRESH_REQUEST_CREATED) DateTime itemsCreateTime,
@Parameter(PARAM_REFRESH_REQUEST_TIME) DateTime itemsCreateTime,
@Parameter(PARAM_LOCK_INDEX) int lockIndex,
@Parameter(PARAM_NUM_PUBLISH_LOCKS) int numPublishLocks,
@Parameter(PARAM_DOMAINS) Set<String> domains,
@@ -139,7 +140,6 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
@Config("gSuiteOutgoingEmailAddress") InternetAddress gSuiteOutgoingEmailAddress,
@Header(APP_ENGINE_RETRY_HEADER) Optional<Integer> appEngineRetryCount,
@Header(CLOUD_TASKS_RETRY_HEADER) Optional<Integer> cloudTasksRetryCount,
DnsQueue dnsQueue,
DnsWriterProxy dnsWriterProxy,
DnsMetrics dnsMetrics,
LockHandler lockHandler,
@@ -147,12 +147,11 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
CloudTasksUtils cloudTasksUtils,
SendEmailService sendEmailService,
Response response) {
this.dnsQueue = dnsQueue;
this.dnsWriterProxy = dnsWriterProxy;
this.dnsMetrics = dnsMetrics;
this.timeout = timeout;
this.sendEmailService = sendEmailService;
this.retryCount =
retryCount =
cloudTasksRetryCount.orElse(
appEngineRetryCount.orElseThrow(
() -> new IllegalStateException("Missing a valid retry count header")));
@@ -276,7 +275,7 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
return null;
}
private InternetAddress emailToInternetAddress(String email) {
private static InternetAddress emailToInternetAddress(String email) {
try {
return new InternetAddress(email, true);
} catch (Exception e) {
@@ -306,7 +305,7 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
registrar.get().getContacts().stream()
.filter(c -> c.getTypes().contains(RegistrarPoc.Type.ADMIN))
.map(RegistrarPoc::getEmailAddress)
.map(this::emailToInternetAddress)
.map(PublishDnsUpdatesAction::emailToInternetAddress)
.collect(toImmutableList());
sendEmailService.sendEmail(
@@ -339,14 +338,14 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
DNS_PUBLISH_PUSH_QUEUE_NAME,
cloudTasksUtils.createPostTask(
PATH,
Service.BACKEND.toString(),
Service.BACKEND,
ImmutableMultimap.<String, String>builder()
.put(PARAM_TLD, tld)
.put(PARAM_DNS_WRITER, dnsWriter)
.put(PARAM_LOCK_INDEX, Integer.toString(lockIndex))
.put(PARAM_NUM_PUBLISH_LOCKS, Integer.toString(numPublishLocks))
.put(PARAM_PUBLISH_TASK_ENQUEUED, clock.nowUtc().toString())
.put(PARAM_REFRESH_REQUEST_CREATED, itemsCreateTime.toString())
.put(PARAM_REFRESH_REQUEST_TIME, itemsCreateTime.toString())
.put(PARAM_DOMAINS, Joiner.on(",").join(domains))
.put(PARAM_HOSTS, Joiner.on(",").join(hosts))
.build()));
@@ -356,10 +355,10 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
private void requeueBatch() {
logger.atInfo().log("Requeueing batch for retry.");
for (String domain : nullToEmpty(domains)) {
dnsQueue.addDomainRefreshTask(domain);
requestDomainDnsRefresh(domain);
}
for (String host : nullToEmpty(hosts)) {
dnsQueue.addHostRefreshTask(host);
requestHostDnsRefresh(host);
}
}
@@ -372,11 +371,11 @@ public final class PublishDnsUpdatesAction implements Runnable, Callable<Void> {
return false;
}
// Check if the Registry object's num locks has changed since this task was batched
int registryNumPublishLocks = Registry.get(tld).getNumDnsPublishLocks();
if (registryNumPublishLocks != numPublishLocks) {
int tldNumPublishLocks = Tld.get(tld).getNumDnsPublishLocks();
if (tldNumPublishLocks != numPublishLocks) {
logger.atWarning().log(
"Registry numDnsPublishLocks %d out of sync with parameter %d.",
registryNumPublishLocks, numPublishLocks);
tldNumPublishLocks, numPublishLocks);
return false;
}
return true;

View File

@@ -1,401 +0,0 @@
// Copyright 2017 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.dns;
import static com.google.common.collect.ImmutableSetMultimap.toImmutableSetMultimap;
import static com.google.common.collect.Sets.difference;
import static google.registry.dns.DnsConstants.DNS_PUBLISH_PUSH_QUEUE_NAME;
import static google.registry.dns.DnsConstants.DNS_TARGET_CREATE_TIME_PARAM;
import static google.registry.dns.DnsConstants.DNS_TARGET_NAME_PARAM;
import static google.registry.dns.DnsConstants.DNS_TARGET_TYPE_PARAM;
import static google.registry.dns.DnsModule.PARAM_DNS_WRITER;
import static google.registry.dns.DnsModule.PARAM_DOMAINS;
import static google.registry.dns.DnsModule.PARAM_HOSTS;
import static google.registry.dns.DnsModule.PARAM_LOCK_INDEX;
import static google.registry.dns.DnsModule.PARAM_NUM_PUBLISH_LOCKS;
import static google.registry.dns.DnsModule.PARAM_PUBLISH_TASK_ENQUEUED;
import static google.registry.dns.DnsModule.PARAM_REFRESH_REQUEST_CREATED;
import static google.registry.request.RequestParameters.PARAM_TLD;
import static google.registry.util.DomainNameUtils.getSecondLevelDomain;
import static java.nio.charset.StandardCharsets.UTF_8;
import com.google.appengine.api.taskqueue.TaskHandle;
import com.google.auto.value.AutoValue;
import com.google.cloud.tasks.v2.Task;
import com.google.common.collect.ComparisonChain;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.ImmutableSetMultimap;
import com.google.common.collect.Iterables;
import com.google.common.collect.Ordering;
import com.google.common.flogger.FluentLogger;
import com.google.common.hash.HashFunction;
import com.google.common.hash.Hashing;
import google.registry.config.RegistryConfig.Config;
import google.registry.dns.DnsConstants.TargetType;
import google.registry.model.tld.Registries;
import google.registry.model.tld.Registry;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import google.registry.util.CloudTasksUtils;
import java.io.UnsupportedEncodingException;
import java.util.Collection;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Collectors;
import javax.inject.Inject;
import org.joda.time.DateTime;
import org.joda.time.Duration;
/**
* Action for fanning out DNS refresh tasks by TLD, using data taken from the DNS pull queue.
*
* <h3>Parameters Reference</h3>
*
* <ul>
* <li>{@code jitterSeconds} Randomly delay each task by up to this many seconds.
* </ul>
*/
@Action(
service = Action.Service.BACKEND,
path = "/_dr/cron/readDnsQueue",
automaticallyPrintOk = true,
auth = Auth.AUTH_INTERNAL_OR_ADMIN)
public final class ReadDnsQueueAction implements Runnable {
private static final String PARAM_JITTER_SECONDS = "jitterSeconds";
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
/**
* Buffer time since the end of this action until retriable tasks are available again.
*
* <p>We read batches of tasks from the queue in a loop. Any task that will need to be retried has
* to be kept out of the queue for the duration of this action - otherwise we will lease it again
* in a subsequent loop.
*
* <p>The 'requestedMaximumDuration' value is the maximum delay between the first and last calls
* to lease tasks, hence we want the lease duration to be (slightly) longer than that.
* LEASE_PADDING is the value we add to {@link #requestedMaximumDuration} to make sure the lease
* duration is indeed longer.
*/
private static final Duration LEASE_PADDING = Duration.standardMinutes(1);
private final int tldUpdateBatchSize;
private final Duration requestedMaximumDuration;
private final Optional<Integer> jitterSeconds;
private final Clock clock;
private final DnsQueue dnsQueue;
private final HashFunction hashFunction;
private final CloudTasksUtils cloudTasksUtils;
@Inject
ReadDnsQueueAction(
@Config("dnsTldUpdateBatchSize") int tldUpdateBatchSize,
@Config("readDnsQueueActionRuntime") Duration requestedMaximumDuration,
@Parameter(PARAM_JITTER_SECONDS) Optional<Integer> jitterSeconds,
Clock clock,
DnsQueue dnsQueue,
HashFunction hashFunction,
CloudTasksUtils cloudTasksUtils) {
this.tldUpdateBatchSize = tldUpdateBatchSize;
this.requestedMaximumDuration = requestedMaximumDuration;
this.jitterSeconds = jitterSeconds;
this.clock = clock;
this.dnsQueue = dnsQueue;
this.hashFunction = hashFunction;
this.cloudTasksUtils = cloudTasksUtils;
}
/** Container for items we pull out of the DNS pull queue and process for fanout. */
@AutoValue
abstract static class RefreshItem implements Comparable<RefreshItem> {
static RefreshItem create(TargetType type, String name, DateTime creationTime) {
return new AutoValue_ReadDnsQueueAction_RefreshItem(type, name, creationTime);
}
abstract TargetType type();
abstract String name();
abstract DateTime creationTime();
@Override
public int compareTo(RefreshItem other) {
return ComparisonChain.start()
.compare(this.type(), other.type())
.compare(this.name(), other.name())
.compare(this.creationTime(), other.creationTime())
.result();
}
}
/** Leases all tasks from the pull queue and creates per-tld update actions for them. */
@Override
public void run() {
DateTime requestedEndTime = clock.nowUtc().plus(requestedMaximumDuration);
ImmutableSet<String> tlds = Registries.getTlds();
while (requestedEndTime.isAfterNow()) {
List<TaskHandle> tasks = dnsQueue.leaseTasks(requestedMaximumDuration.plus(LEASE_PADDING));
logger.atInfo().log("Leased %d DNS update tasks.", tasks.size());
if (!tasks.isEmpty()) {
dispatchTasks(ImmutableSet.copyOf(tasks), tlds);
}
if (tasks.size() < dnsQueue.getLeaseTasksBatchSize()) {
return;
}
}
}
/** A set of tasks grouped based on the action to take on them. */
@AutoValue
abstract static class ClassifiedTasks {
/**
* List of tasks we want to keep in the queue (want to retry in the future).
*
* <p>Normally, any task we lease from the queue will be deleted - either because we are going
* to process it now (these tasks are part of refreshItemsByTld), or because these tasks are
* "corrupt" in some way (don't parse, don't have the required parameters etc.).
*
* <p>Some tasks however are valid, but can't be processed at the moment. These tasks will be
* kept in the queue for future processing.
*
* <p>This includes tasks belonging to paused TLDs (which we want to process once the TLD is
* unpaused) and tasks belonging to (currently) unknown TLDs.
*/
abstract ImmutableSet<TaskHandle> tasksToKeep();
/** The paused TLDs for which we found at least one refresh request. */
abstract ImmutableSet<String> pausedTlds();
/**
* The unknown TLDs for which we found at least one refresh request.
*
* <p>Unknown TLDs might have valid requests because the list of TLDs is heavily cached. Hence,
* when we add a new TLD - it is possible that some instances will not have that TLD in their
* list yet. We don't want to discard these tasks, so we wait a bit and retry them again.
*
* <p>This is less likely for production TLDs but is quite likely for test TLDs where we might
* create a TLD and then use it within seconds.
*/
abstract ImmutableSet<String> unknownTlds();
/**
* All the refresh items we need to actually fulfill, grouped by TLD.
*
* <p>By default, the multimap is ordered - this can be changed with the builder.
*
* <p>The items for each TLD will be grouped together, and domains and hosts will be grouped
* within a TLD.
*
* <p>The grouping and ordering of domains and hosts is not technically necessary, but a
* predictable ordering makes it possible to write detailed tests.
*/
abstract ImmutableSetMultimap<String, RefreshItem> refreshItemsByTld();
static Builder builder() {
Builder builder = new AutoValue_ReadDnsQueueAction_ClassifiedTasks.Builder();
builder
.refreshItemsByTldBuilder()
.orderKeysBy(Ordering.natural())
.orderValuesBy(Ordering.natural());
return builder;
}
@AutoValue.Builder
abstract static class Builder {
abstract ImmutableSet.Builder<TaskHandle> tasksToKeepBuilder();
abstract ImmutableSet.Builder<String> pausedTldsBuilder();
abstract ImmutableSet.Builder<String> unknownTldsBuilder();
abstract ImmutableSetMultimap.Builder<String, RefreshItem> refreshItemsByTldBuilder();
abstract ClassifiedTasks build();
}
}
/**
* Creates per-tld update actions for tasks leased from the pull queue.
*
* <p>Will return "irrelevant" tasks to the queue for future processing. "Irrelevant" tasks are
* tasks for paused TLDs or tasks for TLDs not part of {@link Registries#getTlds()}.
*/
private void dispatchTasks(ImmutableSet<TaskHandle> tasks, ImmutableSet<String> tlds) {
ClassifiedTasks classifiedTasks = classifyTasks(tasks, tlds);
if (!classifiedTasks.pausedTlds().isEmpty()) {
logger.atInfo().log(
"The dns-pull queue is paused for TLDs: %s.", classifiedTasks.pausedTlds());
}
if (!classifiedTasks.unknownTlds().isEmpty()) {
logger.atWarning().log(
"The dns-pull queue has unknown TLDs: %s.", classifiedTasks.unknownTlds());
}
bucketRefreshItems(classifiedTasks.refreshItemsByTld());
if (!classifiedTasks.tasksToKeep().isEmpty()) {
logger.atWarning().log(
"Keeping %d DNS update tasks in the queue.", classifiedTasks.tasksToKeep().size());
}
// Delete the tasks we don't want to see again from the queue.
//
// tasksToDelete includes both the tasks that we already fulfilled in this call (were part of
// refreshItemsByTld) and "corrupt" tasks we weren't able to parse correctly (this shouldn't
// happen, and we logged a "severe" error)
//
// We let the lease on the rest of the tasks (the tasks we want to keep) expire on its own. The
// reason we don't release these tasks back to the queue immediately is that we are going to
// immediately read another batch of tasks from the queue - and we don't want to get the same
// tasks again.
ImmutableSet<TaskHandle> tasksToDelete =
difference(tasks, classifiedTasks.tasksToKeep()).immutableCopy();
logger.atInfo().log("Removing %d DNS update tasks from the queue.", tasksToDelete.size());
dnsQueue.deleteTasks(tasksToDelete.asList());
logger.atInfo().log("Done processing DNS tasks.");
}
/**
* Classifies the given tasks based on what action we need to take on them.
*
* <p>Note that some tasks might appear in multiple categories (if multiple actions are to be
* taken on them) or in no category (if no action is to be taken on them).
*/
private static ClassifiedTasks classifyTasks(
ImmutableSet<TaskHandle> tasks, ImmutableSet<String> tlds) {
ClassifiedTasks.Builder classifiedTasksBuilder = ClassifiedTasks.builder();
// Read all tasks on the DNS pull queue and load them into the refresh item multimap.
for (TaskHandle task : tasks) {
try {
Map<String, String> params = ImmutableMap.copyOf(task.extractParams());
DateTime creationTime = DateTime.parse(params.get(DNS_TARGET_CREATE_TIME_PARAM));
String tld = params.get(PARAM_TLD);
if (tld == null) {
logger.atSevere().log(
"Discarding invalid DNS refresh request %s; no TLD specified.", task);
} else if (!tlds.contains(tld)) {
classifiedTasksBuilder.tasksToKeepBuilder().add(task);
classifiedTasksBuilder.unknownTldsBuilder().add(tld);
} else if (Registry.get(tld).getDnsPaused()) {
classifiedTasksBuilder.tasksToKeepBuilder().add(task);
classifiedTasksBuilder.pausedTldsBuilder().add(tld);
} else {
String typeString = params.get(DNS_TARGET_TYPE_PARAM);
String name = params.get(DNS_TARGET_NAME_PARAM);
TargetType type = TargetType.valueOf(typeString);
switch (type) {
case DOMAIN:
case HOST:
classifiedTasksBuilder
.refreshItemsByTldBuilder()
.put(tld, RefreshItem.create(type, name, creationTime));
break;
default:
logger.atSevere().log(
"Discarding DNS refresh request %s of type %s.", task, typeString);
break;
}
}
} catch (RuntimeException | UnsupportedEncodingException e) {
logger.atSevere().withCause(e).log("Discarding invalid DNS refresh request %s.", task);
}
}
return classifiedTasksBuilder.build();
}
/**
* Subdivides the tld to {@link RefreshItem} multimap into buckets by lock index, if applicable.
*
* <p>If the tld has numDnsPublishLocks <= 1, we enqueue all updates on the default lock 1 of 1.
*/
private void bucketRefreshItems(ImmutableSetMultimap<String, RefreshItem> refreshItemsByTld) {
// Loop through the multimap by TLD and generate refresh tasks for the hosts and domains for
// each configured DNS writer.
for (Map.Entry<String, Collection<RefreshItem>> tldRefreshItemsEntry
: refreshItemsByTld.asMap().entrySet()) {
String tld = tldRefreshItemsEntry.getKey();
int numPublishLocks = Registry.get(tld).getNumDnsPublishLocks();
// 1 lock or less implies no TLD-wide locks, simply enqueue everything under lock 1 of 1
if (numPublishLocks <= 1) {
enqueueUpdates(tld, 1, 1, tldRefreshItemsEntry.getValue());
} else {
tldRefreshItemsEntry.getValue().stream()
.collect(
toImmutableSetMultimap(
refreshItem -> getLockIndex(tld, numPublishLocks, refreshItem),
refreshItem -> refreshItem))
.asMap()
.forEach((key, value) -> enqueueUpdates(tld, key, numPublishLocks, value));
}
}
}
/**
* Returns the lock index for a given refreshItem.
*
* <p>We hash the second level domain for all records, to group in-bailiwick hosts (the only ones
* we refresh DNS for) with their superordinate domains. We use consistent hashing to determine
* the lock index because it gives us [0,N) bucketing properties out of the box, then add 1 to
* make indexes within [1,N].
*/
private int getLockIndex(String tld, int numPublishLocks, RefreshItem refreshItem) {
String domain = getSecondLevelDomain(refreshItem.name(), tld);
return Hashing.consistentHash(hashFunction.hashString(domain, UTF_8), numPublishLocks) + 1;
}
/**
* Creates DNS refresh tasks for all writers for the tld within a lock index and batches large
* updates into smaller chunks.
*/
private void enqueueUpdates(
String tld, int lockIndex, int numPublishLocks, Collection<RefreshItem> items) {
for (List<RefreshItem> chunk : Iterables.partition(items, tldUpdateBatchSize)) {
DateTime earliestCreateTime =
chunk.stream().map(RefreshItem::creationTime).min(Comparator.naturalOrder()).get();
for (String dnsWriter : Registry.get(tld).getDnsWriters()) {
Task task =
cloudTasksUtils.createPostTaskWithJitter(
PublishDnsUpdatesAction.PATH,
Service.BACKEND.toString(),
ImmutableMultimap.<String, String>builder()
.put(PARAM_TLD, tld)
.put(PARAM_DNS_WRITER, dnsWriter)
.put(PARAM_LOCK_INDEX, Integer.toString(lockIndex))
.put(PARAM_NUM_PUBLISH_LOCKS, Integer.toString(numPublishLocks))
.put(PARAM_PUBLISH_TASK_ENQUEUED, clock.nowUtc().toString())
.put(PARAM_REFRESH_REQUEST_CREATED, earliestCreateTime.toString())
.put(
PARAM_DOMAINS,
chunk.stream()
.filter(item -> item.type() == TargetType.DOMAIN)
.map(RefreshItem::name)
.collect(Collectors.joining(",")))
.put(
PARAM_HOSTS,
chunk.stream()
.filter(item -> item.type() == TargetType.HOST)
.map(RefreshItem::name)
.collect(Collectors.joining(",")))
.build(),
jitterSeconds);
cloudTasksUtils.enqueue(DNS_PUBLISH_PUSH_QUEUE_NAME, task);
}
}
}
}
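
The Javadoc for getLockIndex() above describes the lock-index scheme only in prose. The following is a minimal, hypothetical sketch (the LockIndexSketch class and the sha256 hash choice are illustrative, not the injected hash function this action actually uses) showing how Guava's consistentHash maps a second-level domain into [1, numPublishLocks], so an in-bailiwick host and its superordinate domain always share a bucket.

import static java.nio.charset.StandardCharsets.UTF_8;

import com.google.common.hash.HashFunction;
import com.google.common.hash.Hashing;

/** Hypothetical standalone sketch of the lock-index bucketing described above. */
public final class LockIndexSketch {

  private static final HashFunction HASH = Hashing.sha256();

  /** Maps a second-level domain to a lock index in [1, numPublishLocks]. */
  static int lockIndex(String secondLevelDomain, int numPublishLocks) {
    // consistentHash returns a bucket in [0, N); adding 1 shifts it into [1, N].
    return Hashing.consistentHash(HASH.hashString(secondLevelDomain, UTF_8), numPublishLocks) + 1;
  }

  public static void main(String[] args) {
    int locks = 4;
    // A domain and its in-bailiwick host both reduce to the same second-level domain,
    // so they always map to the same lock index (somewhere in [1, 4] here).
    String sldForDomain = "example.tld"; // from "example.tld"
    String sldForHost = "example.tld";   // from "ns1.example.tld"
    System.out.println(lockIndex(sldForDomain, locks) == lockIndex(sldForHost, locks)); // true
  }
}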

View File

@@ -0,0 +1,205 @@
// Copyright 2023 The Nomulus Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.dns;
import static com.google.common.collect.ImmutableSetMultimap.toImmutableSetMultimap;
import static google.registry.dns.DnsModule.PARAM_DNS_JITTER_SECONDS;
import static google.registry.dns.DnsModule.PARAM_DNS_WRITER;
import static google.registry.dns.DnsModule.PARAM_DOMAINS;
import static google.registry.dns.DnsModule.PARAM_HOSTS;
import static google.registry.dns.DnsModule.PARAM_LOCK_INDEX;
import static google.registry.dns.DnsModule.PARAM_NUM_PUBLISH_LOCKS;
import static google.registry.dns.DnsModule.PARAM_PUBLISH_TASK_ENQUEUED;
import static google.registry.dns.DnsModule.PARAM_REFRESH_REQUEST_TIME;
import static google.registry.dns.DnsUtils.DNS_PUBLISH_PUSH_QUEUE_NAME;
import static google.registry.dns.DnsUtils.deleteRequests;
import static google.registry.dns.DnsUtils.readAndUpdateRequestsWithLatestProcessTime;
import static google.registry.request.Action.Method.POST;
import static google.registry.request.RequestParameters.PARAM_TLD;
import static google.registry.util.DateTimeUtils.END_OF_TIME;
import static google.registry.util.DomainNameUtils.getSecondLevelDomain;
import static java.nio.charset.StandardCharsets.UTF_8;
import com.google.cloud.tasks.v2.Task;
import com.google.common.base.Joiner;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMultimap;
import com.google.common.flogger.FluentLogger;
import com.google.common.hash.HashFunction;
import com.google.common.hash.Hashing;
import google.registry.batch.CloudTasksUtils;
import google.registry.config.RegistryConfig.Config;
import google.registry.dns.DnsUtils.TargetType;
import google.registry.model.common.DnsRefreshRequest;
import google.registry.model.tld.Tld;
import google.registry.request.Action;
import google.registry.request.Action.Service;
import google.registry.request.Parameter;
import google.registry.request.auth.Auth;
import google.registry.util.Clock;
import java.util.Collection;
import java.util.Optional;
import javax.inject.Inject;
import org.joda.time.DateTime;
import org.joda.time.Duration;
/**
* Action for fanning out DNS refresh tasks by TLD, using data taken from the {@link
* DnsRefreshRequest} table.
*/
@Action(
service = Service.BACKEND,
path = "/_dr/task/readDnsRefreshRequests",
automaticallyPrintOk = true,
method = POST,
auth = Auth.AUTH_INTERNAL_OR_ADMIN)
public final class ReadDnsRefreshRequestsAction implements Runnable {
private static final FluentLogger logger = FluentLogger.forEnclosingClass();
private final int tldUpdateBatchSize;
private final Duration requestedMaximumDuration;
private final Optional<Integer> jitterSeconds;
private final String tld;
private final Clock clock;
private final HashFunction hashFunction;
private final CloudTasksUtils cloudTasksUtils;
@Inject
ReadDnsRefreshRequestsAction(
@Config("dnsTldUpdateBatchSize") int tldUpdateBatchSize,
@Config("readDnsRefreshRequestsActionRuntime") Duration requestedMaximumDuration,
@Parameter(PARAM_DNS_JITTER_SECONDS) Optional<Integer> jitterSeconds,
@Parameter(PARAM_TLD) String tld,
Clock clock,
HashFunction hashFunction,
CloudTasksUtils cloudTasksUtils) {
this.tldUpdateBatchSize = tldUpdateBatchSize;
this.requestedMaximumDuration = requestedMaximumDuration;
this.jitterSeconds = jitterSeconds;
this.tld = tld;
this.clock = clock;
this.hashFunction = hashFunction;
this.cloudTasksUtils = cloudTasksUtils;
}
/**
* Reads requests up to the maximum requested runtime, and enqueues update batches from these
* requests.
*/
@Override
public void run() {
if (Tld.get(tld).getDnsPaused()) {
logger.atInfo().log("The queue updated is paused for TLD: %s.", tld);
return;
}
DateTime requestedEndTime = clock.nowUtc().plus(requestedMaximumDuration);
// See getLockIndex(): requests are distributed evenly over [1, numDnsPublishLocks], so each
// bucket is roughly of size tldUpdateBatchSize.
int processBatchSize = tldUpdateBatchSize * Tld.get(tld).getNumDnsPublishLocks();
while (requestedEndTime.isAfter(clock.nowUtc())) {
ImmutableList<DnsRefreshRequest> requests =
readAndUpdateRequestsWithLatestProcessTime(
tld, requestedMaximumDuration, processBatchSize);
logger.atInfo().log("Read %d DNS update requests for TLD %s.", requests.size(), tld);
if (!requests.isEmpty()) {
processRequests(requests);
}
if (requests.size() < processBatchSize) {
return;
}
}
}
/**
* Subdivides the {@link DnsRefreshRequest}s into buckets by lock index, enqueues a Cloud Tasks
* task per bucket, and then deletes the requests in each bucket.
*/
void processRequests(Collection<DnsRefreshRequest> requests) {
int numPublishLocks = Tld.get(tld).getNumDnsPublishLocks();
requests.stream()
.collect(
toImmutableSetMultimap(
request -> getLockIndex(numPublishLocks, request), request -> request))
.asMap()
.forEach(
(lockIndex, bucketedRequests) -> {
try {
enqueueUpdates(lockIndex, numPublishLocks, bucketedRequests);
deleteRequests(bucketedRequests);
logger.atInfo().log(
"Processed %d DNS update requests for TLD %s.", bucketedRequests.size(), tld);
} catch (Exception e) {
// Log but continue to process the next bucket. The failed tasks will NOT be
// deleted and will be retried after the cooldown period has passed.
logger.atSevere().withCause(e).log(
"Error processing DNS update requests: %s", bucketedRequests);
}
});
}
/**
* Returns the lock index for a given {@link DnsRefreshRequest}.
*
* <p>We hash the second level domain for all records, to group in-bailiwick hosts (the only ones
* we refresh DNS for) with their superordinate domains. We use consistent hashing to determine
* the lock index because it gives us [0,N) bucketing properties out of the box, then add 1 to
* make indexes within [1,N].
*/
int getLockIndex(int numPublishLocks, DnsRefreshRequest request) {
String domain = getSecondLevelDomain(request.getName(), tld);
return Hashing.consistentHash(hashFunction.hashString(domain, UTF_8), numPublishLocks) + 1;
}
/** Creates DNS refresh tasks for all writers for the tld within a lock index. */
void enqueueUpdates(int lockIndex, int numPublishLocks, Collection<DnsRefreshRequest> requests) {
ImmutableList.Builder<String> domainsBuilder = new ImmutableList.Builder<>();
ImmutableList.Builder<String> hostsBuilder = new ImmutableList.Builder<>();
DateTime earliestRequestTime = END_OF_TIME;
for (DnsRefreshRequest request : requests) {
if (request.getRequestTime().isBefore(earliestRequestTime)) {
earliestRequestTime = request.getRequestTime();
}
String name = request.getName();
if (request.getType().equals(TargetType.DOMAIN)) {
domainsBuilder.add(name);
} else {
hostsBuilder.add(name);
}
}
ImmutableList<String> domains = domainsBuilder.build();
ImmutableList<String> hosts = hostsBuilder.build();
for (String dnsWriter : Tld.get(tld).getDnsWriters()) {
Task task =
cloudTasksUtils.createPostTaskWithJitter(
PublishDnsUpdatesAction.PATH,
Service.BACKEND,
ImmutableMultimap.<String, String>builder()
.put(PARAM_TLD, tld)
.put(PARAM_DNS_WRITER, dnsWriter)
.put(PARAM_LOCK_INDEX, Integer.toString(lockIndex))
.put(PARAM_NUM_PUBLISH_LOCKS, Integer.toString(numPublishLocks))
.put(PARAM_PUBLISH_TASK_ENQUEUED, clock.nowUtc().toString())
.put(PARAM_REFRESH_REQUEST_TIME, earliestRequestTime.toString())
.put(PARAM_DOMAINS, Joiner.on(',').join(domains))
.put(PARAM_HOSTS, Joiner.on(',').join(hosts))
.build(),
jitterSeconds);
cloudTasksUtils.enqueue(DNS_PUBLISH_PUSH_QUEUE_NAME, task);
logger.atInfo().log(
"Enqueued DNS update request for (TLD %s, lock %d) with %d domains and %d hosts.",
tld, lockIndex, domains.size(), hosts.size());
}
}
}
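
To illustrate the processRequests() flow above, here is a minimal, hypothetical sketch (plain strings and a toy modulo hash stand in for DnsRefreshRequest and getLockIndex) of grouping items by lock index with toImmutableSetMultimap and handling each bucket independently, so a failure in one bucket does not block the others.

import static com.google.common.collect.ImmutableSetMultimap.toImmutableSetMultimap;

import com.google.common.collect.ImmutableList;

/** Hypothetical sketch of bucketed, fault-isolated processing by lock index. */
public final class BucketedProcessingSketch {

  public static void main(String[] args) {
    int numPublishLocks = 2;
    ImmutableList<String> domains =
        ImmutableList.of("foo.tld", "bar.tld", "baz.tld", "qux.tld");

    domains.stream()
        .collect(
            toImmutableSetMultimap(
                domain -> (Math.abs(domain.hashCode()) % numPublishLocks) + 1, domain -> domain))
        .asMap()
        .forEach(
            (lockIndex, bucket) -> {
              try {
                // Stand-in for enqueueUpdates(...) followed by deleteRequests(...).
                System.out.printf("Lock %d: publishing %d domains%n", lockIndex, bucket.size());
              } catch (RuntimeException e) {
                // Log and move on; the failed bucket's requests would be retried later.
                System.err.println("Failed to process bucket " + lockIndex + ": " + e);
              }
            });
  }
}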

View File

@@ -14,9 +14,11 @@
package google.registry.dns;
import static google.registry.dns.DnsUtils.requestDomainDnsRefresh;
import static google.registry.dns.DnsUtils.requestHostDnsRefresh;
import static google.registry.model.EppResourceUtils.loadByForeignKey;
import google.registry.dns.DnsConstants.TargetType;
import google.registry.dns.DnsUtils.TargetType;
import google.registry.model.EppResource;
import google.registry.model.EppResource.ForeignKeyedEppResource;
import google.registry.model.annotations.ExternalMessagingName;
@@ -39,7 +41,6 @@ import javax.inject.Inject;
public final class RefreshDnsAction implements Runnable {
private final Clock clock;
private final DnsQueue dnsQueue;
private final String domainOrHostName;
private final TargetType type;
@@ -47,12 +48,10 @@ public final class RefreshDnsAction implements Runnable {
RefreshDnsAction(
@Parameter("domainOrHostName") String domainOrHostName,
@Parameter("type") TargetType type,
Clock clock,
DnsQueue dnsQueue) {
Clock clock) {
this.domainOrHostName = domainOrHostName;
this.type = type;
this.clock = clock;
this.dnsQueue = dnsQueue;
}
@Override
@@ -63,11 +62,11 @@ public final class RefreshDnsAction implements Runnable {
switch (type) {
case DOMAIN:
loadAndVerifyExistence(Domain.class, domainOrHostName);
dnsQueue.addDomainRefreshTask(domainOrHostName);
requestDomainDnsRefresh(domainOrHostName);
break;
case HOST:
verifyHostIsSubordinate(loadAndVerifyExistence(Host.class, domainOrHostName));
dnsQueue.addHostRefreshTask(domainOrHostName);
requestHostDnsRefresh(domainOrHostName);
break;
default:
throw new BadRequestException("Unsupported type: " + type);

View File

@@ -14,6 +14,7 @@
package google.registry.dns;
import static google.registry.dns.DnsUtils.requestDomainDnsRefresh;
import static google.registry.dns.RefreshDnsOnHostRenameAction.PATH;
import static google.registry.model.EppResourceUtils.getLinkedDomainKeys;
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;
@@ -45,14 +46,11 @@ public class RefreshDnsOnHostRenameAction implements Runnable {
private final VKey<Host> hostKey;
private final Response response;
private final DnsQueue dnsQueue;
@Inject
RefreshDnsOnHostRenameAction(
@Parameter(PARAM_HOST_KEY) String hostKey, Response response, DnsQueue dnsQueue) {
RefreshDnsOnHostRenameAction(@Parameter(PARAM_HOST_KEY) String hostKey, Response response) {
this.hostKey = VKey.createEppVKeyFromString(hostKey);
this.response = response;
this.dnsQueue = dnsQueue;
}
@Override
@@ -76,7 +74,7 @@ public class RefreshDnsOnHostRenameAction implements Runnable {
.stream()
.map(domainKey -> tm().loadByKey(domainKey))
.filter(Domain::shouldPublishToDns)
.forEach(domain -> dnsQueue.addDomainRefreshTask(domain.getDomainName()));
.forEach(domain -> requestDomainDnsRefresh(domain.getDomainName()));
}
if (!hostValid) {

View File

@@ -16,6 +16,7 @@ package google.registry.dns.writer.clouddns;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.collect.ImmutableSet.toImmutableSet;
import static google.registry.dns.DnsUtils.getDnsAPlusAAAATtlForHost;
import static google.registry.model.EppResourceUtils.loadByForeignKey;
import static google.registry.util.DomainNameUtils.getSecondLevelDomain;
@@ -40,6 +41,7 @@ import google.registry.model.domain.Domain;
import google.registry.model.domain.secdns.DomainDsData;
import google.registry.model.host.Host;
import google.registry.model.tld.Registries;
import google.registry.model.tld.Tld;
import google.registry.util.Clock;
import google.registry.util.Concurrent;
import google.registry.util.Retrier;
@@ -131,6 +133,7 @@ public class CloudDnsWriter extends BaseDnsWriter {
return;
}
Tld tld = Tld.get(domain.get().getTld());
ImmutableSet.Builder<ResourceRecordSet> domainRecords = new ImmutableSet.Builder<>();
// Construct DS records (if any).
@@ -145,7 +148,7 @@ public class CloudDnsWriter extends BaseDnsWriter {
domainRecords.add(
new ResourceRecordSet()
.setName(absoluteDomainName)
.setTtl((int) defaultDsTtl.getStandardSeconds())
.setTtl((int) tld.getDnsDsTtl().orElse(defaultDsTtl).getStandardSeconds())
.setType("DS")
.setKind("dns#resourceRecordSet")
.setRrdatas(ImmutableList.copyOf(dsRrData)));
@@ -170,7 +173,7 @@ public class CloudDnsWriter extends BaseDnsWriter {
domainRecords.add(
new ResourceRecordSet()
.setName(absoluteDomainName)
.setTtl((int) defaultNsTtl.getStandardSeconds())
.setTtl((int) tld.getDnsNsTtl().orElse(defaultNsTtl).getStandardSeconds())
.setType("NS")
.setKind("dns#resourceRecordSet")
.setRrdatas(ImmutableList.copyOf(nsRrData)));
@@ -216,7 +219,7 @@ public class CloudDnsWriter extends BaseDnsWriter {
domainRecords.add(
new ResourceRecordSet()
.setName(absoluteHostName)
.setTtl((int) defaultATtl.getStandardSeconds())
.setTtl((int) getDnsAPlusAAAATtlForHost(hostName, defaultATtl))
.setType("A")
.setKind("dns#resourceRecordSet")
.setRrdatas(ImmutableList.copyOf(aRrData)));
@@ -226,7 +229,7 @@ public class CloudDnsWriter extends BaseDnsWriter {
domainRecords.add(
new ResourceRecordSet()
.setName(absoluteHostName)
.setTtl((int) defaultATtl.getStandardSeconds())
.setTtl((int) getDnsAPlusAAAATtlForHost(hostName, defaultATtl))
.setType("AAAA")
.setKind("dns#resourceRecordSet")
.setRrdatas(ImmutableList.copyOf(aaaaRrData)));
@@ -276,11 +279,12 @@ public class CloudDnsWriter extends BaseDnsWriter {
}
/** Returns the glue records for in-bailiwick nameservers for the given domain+records. */
private Stream<String> filterGlueRecords(String domainName, Stream<ResourceRecordSet> records) {
private static Stream<String> filterGlueRecords(
String domainName, Stream<ResourceRecordSet> records) {
return records
.filter(record -> record.getType().equals("NS"))
.filter(record -> "NS".equals(record.getType()))
.flatMap(record -> record.getRrdatas().stream())
.filter(hostName -> hostName.endsWith("." + domainName) && !hostName.equals(domainName));
.filter(hostName -> hostName.endsWith('.' + domainName) && !hostName.equals(domainName));
}
/** Mutate the zone with the provided map of hostnames to desired DNS records. */
@@ -363,8 +367,8 @@ public class CloudDnsWriter extends BaseDnsWriter {
* <p>This call should be used in conjunction with {@link #getResourceRecordsForDomains} in a
* get-and-set retry loop.
*
* <p>See {@link "https://cloud.google.com/dns/troubleshooting"} for a list of errors produced by
* the Google Cloud DNS API.
* <p>See {@link "<a href="https://cloud.google.com/dns/troubleshooting">Troubleshoot Cloud
* DNS</a>"} for a list of errors produced by the Google Cloud DNS API.
*
* @throws ZoneStateException if the operation could not be completely successfully because the
* records to delete do not exist, already exist or have been modified with different
@@ -417,12 +421,12 @@ public class CloudDnsWriter extends BaseDnsWriter {
* @param hostName the fully qualified hostname
*/
private static String getAbsoluteHostName(String hostName) {
return hostName.endsWith(".") ? hostName : hostName + ".";
return hostName.endsWith(".") ? hostName : hostName + '.';
}
/** Zone state on Cloud DNS does not match the expected state. */
static class ZoneStateException extends RuntimeException {
public ZoneStateException(String reason) {
ZoneStateException(String reason) {
super("Zone state on Cloud DNS does not match the expected state: " + reason);
}
}
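
The TTL changes in this file all follow one pattern: use the per-TLD TTL when one is configured, otherwise fall back to the global default. A minimal, hypothetical sketch of that fallback (the class name and durations here are made up for illustration):

import java.util.Optional;
import org.joda.time.Duration;

/** Hypothetical sketch of the per-TLD TTL override with a global default fallback. */
public final class TtlFallbackSketch {

  static int effectiveTtlSeconds(Optional<Duration> perTldTtl, Duration defaultTtl) {
    // Use the TLD-specific TTL when present; otherwise the configured default.
    return (int) perTldTtl.orElse(defaultTtl).getStandardSeconds();
  }

  public static void main(String[] args) {
    Duration defaultNsTtl = Duration.standardHours(3);
    System.out.println(effectiveTtlSeconds(Optional.empty(), defaultNsTtl)); // 10800
    System.out.println(
        effectiveTtlSeconds(Optional.of(Duration.standardMinutes(5)), defaultNsTtl)); // 300
  }
}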

View File

@@ -22,7 +22,7 @@ import dagger.Provides;
import dagger.multibindings.IntoMap;
import dagger.multibindings.IntoSet;
import dagger.multibindings.StringKey;
import google.registry.config.CredentialModule.DefaultCredential;
import google.registry.config.CredentialModule.ApplicationDefaultCredential;
import google.registry.config.RegistryConfig.Config;
import google.registry.dns.writer.DnsWriter;
import google.registry.util.GoogleCredentialsBundle;
@@ -35,7 +35,7 @@ public abstract class CloudDnsWriterModule {
@Provides
static Dns provideDns(
@DefaultCredential GoogleCredentialsBundle credentialsBundle,
@ApplicationDefaultCredential GoogleCredentialsBundle credentialsBundle,
@Config("projectId") String projectId,
@Config("cloudDnsRootUrl") Optional<String> rootUrl,
@Config("cloudDnsServicePath") Optional<String> servicePath) {

View File

@@ -18,6 +18,7 @@ import static com.google.common.base.Preconditions.checkState;
import static com.google.common.base.Verify.verify;
import static com.google.common.collect.Sets.intersection;
import static com.google.common.collect.Sets.union;
import static google.registry.dns.DnsUtils.getDnsAPlusAAAATtlForHost;
import static google.registry.model.EppResourceUtils.loadByForeignKey;
import com.google.common.base.Joiner;
@@ -31,6 +32,7 @@ import google.registry.model.domain.Domain;
import google.registry.model.domain.secdns.DomainDsData;
import google.registry.model.host.Host;
import google.registry.model.tld.Registries;
import google.registry.model.tld.Tld;
import google.registry.util.Clock;
import java.io.IOException;
import java.net.Inet4Address;
@@ -185,12 +187,13 @@ public class DnsUpdateWriter extends BaseDnsWriter {
private RRset makeDelegationSignerSet(Domain domain) {
RRset signerSet = new RRset();
Tld tld = Tld.get(domain.getTld());
for (DomainDsData signerData : domain.getDsData()) {
DSRecord dsRecord =
new DSRecord(
toAbsoluteName(domain.getDomainName()),
DClass.IN,
dnsDefaultDsTtl.getStandardSeconds(),
tld.getDnsDsTtl().orElse(dnsDefaultDsTtl).getStandardSeconds(),
signerData.getKeyTag(),
signerData.getAlgorithm(),
signerData.getDigestType(),
@@ -224,12 +227,13 @@ public class DnsUpdateWriter extends BaseDnsWriter {
private RRset makeNameServerSet(Domain domain) {
RRset nameServerSet = new RRset();
Tld tld = Tld.get(domain.getTld());
for (String hostName : domain.loadNameserverHostNames()) {
NSRecord record =
new NSRecord(
toAbsoluteName(domain.getDomainName()),
DClass.IN,
dnsDefaultNsTtl.getStandardSeconds(),
tld.getDnsNsTtl().orElse(dnsDefaultNsTtl).getStandardSeconds(),
toAbsoluteName(hostName));
nameServerSet.addRR(record);
}
@@ -244,7 +248,7 @@ public class DnsUpdateWriter extends BaseDnsWriter {
new ARecord(
toAbsoluteName(host.getHostName()),
DClass.IN,
dnsDefaultATtl.getStandardSeconds(),
getDnsAPlusAAAATtlForHost(host.getHostName(), dnsDefaultATtl),
address);
addressSet.addRR(record);
}
@@ -260,7 +264,7 @@ public class DnsUpdateWriter extends BaseDnsWriter {
new AAAARecord(
toAbsoluteName(host.getHostName()),
DClass.IN,
dnsDefaultATtl.getStandardSeconds(),
getDnsAPlusAAAATtlForHost(host.getHostName(), dnsDefaultATtl),
address);
addressSet.addRR(record);
}

View File

@@ -1,159 +1,141 @@
<?xml version="1.0" encoding="UTF-8"?>
<cronentries>
<!--
/cron/fanout params:
queue=<QUEUE_NAME>
endpoint=<ENDPOINT_NAME> // URL Path of servlet, which may contain placeholders:
runInEmpty // Run once, with no tld parameter
forEachRealTld // Run for tlds with getTldType() == TldType.REAL
forEachTestTld // Run for tlds with getTldType() == TldType.TEST
exclude=TLD1[,TLD2] // exclude something otherwise included
-->
<cron>
<entries>
<task>
<url>/_dr/task/rdeStaging</url>
<name>rdeStaging</name>
<description>
This job generates a full RDE escrow deposit as a single gigantic XML document
and streams it to cloud storage. When this job has finished successfully, it'll
launch a separate task that uploads the deposit file to Iron Mountain via SFTP.
</description>
<schedule>every day 00:07</schedule>
<target>backend</target>
</cron>
<schedule>7 0 * * *</schedule>
</task>
<cron>
<task>
<url><![CDATA[/_dr/cron/fanout?queue=rde-upload&endpoint=/_dr/task/rdeUpload&forEachRealTld]]></url>
<name>rdeUpload</name>
<description>
This job is a no-op unless RdeUploadCursor falls behind for some reason.
</description>
<schedule>every 4 hours synchronized</schedule>
<target>backend</target>
</cron>
<schedule>0 */4 * * *</schedule>
</task>
<cron>
<task>
<url><![CDATA[/_dr/cron/fanout?queue=marksdb&endpoint=/_dr/task/tmchDnl&runInEmpty]]></url>
<name>tmchDnl</name>
<description>
This job downloads the latest DNL from MarksDB and inserts it into the database.
(See: TmchDnlServlet, ClaimsList)
(See: TmchDnlAction, ClaimsList)
</description>
<schedule>every 12 hours synchronized</schedule>
<target>backend</target>
</cron>
<schedule>0 */12 * * *</schedule>
</task>
<cron>
<task>
<url><![CDATA[/_dr/cron/fanout?queue=marksdb&endpoint=/_dr/task/tmchSmdrl&runInEmpty]]></url>
<name>tmchSmdrl</name>
<description>
This job downloads the latest SMDRL from MarksDB and inserts it into the database.
(See: TmchSmdrlServlet, SignedMarkRevocationList)
(See: TmchSmdrlAction, SignedMarkRevocationList)
</description>
<schedule>every 12 hours from 00:15 to 12:15</schedule>
<target>backend</target>
</cron>
<schedule>15 */12 * * *</schedule>
</task>
<cron>
<task>
<url><![CDATA[/_dr/cron/fanout?queue=marksdb&endpoint=/_dr/task/tmchCrl&runInEmpty]]></url>
<name>tmchCrl</name>
<description>
This job downloads the latest CRL from MarksDB and inserts it into the database.
(See: TmchCrlServlet)
(See: TmchCrlAction)
</description>
<schedule>every 12 hours synchronized</schedule>
<target>backend</target>
</cron>
<schedule>0 */12 * * *</schedule>
</task>
<cron>
<task>
<url><![CDATA[/_dr/cron/fanout?queue=retryable-cron-tasks&endpoint=/_dr/task/syncGroupMembers&runInEmpty]]></url>
<name>syncGroupMembers</name>
<description>
Syncs RegistrarContact changes in the past hour to Google Groups.
</description>
<schedule>every 1 hours synchronized</schedule>
<target>backend</target>
</cron>
<schedule>0 */1 * * *</schedule>
</task>
<cron>
<task>
<url><![CDATA[/_dr/cron/fanout?queue=sheet&endpoint=/_dr/task/syncRegistrarsSheet&runInEmpty]]></url>
<name>syncRegistrarsSheet</name>
<description>
Synchronize Registrar entities to Google Spreadsheets.
</description>
<schedule>every 1 hours synchronized</schedule>
<target>backend</target>
</cron>
<schedule>0 */1 * * *</schedule>
</task>
<cron>
<task>
<url><![CDATA[/_dr/task/resaveAllEppResourcesPipeline?fast=true]]></url>
<name>resaveAllEppResourcesPipeline</name>
<description>
This job resaves all our resources, projected in time to "now".
</description>
<schedule>1st monday of month 09:00</schedule>
<target>backend</target>
</cron>
<!-- Deviation from the original cron schedule: "1st monday of month 09:00" is replaced
with the 1st of the month at 09:00. -->
<schedule>0 9 1 * *</schedule>
</task>
<cron>
<url><![CDATA[/_dr/task/expandRecurringBillingEvents]]></url>
<task>
<url><![CDATA[/_dr/task/expandBillingRecurrences?advanceCursor]]></url>
<name>expandBillingRecurrences</name>
<description>
This job runs an action that creates synthetic OneTime billing events from Recurring billing
events. Events are created for all instances of Recurring billing events that should exist
between the RECURRING_BILLING cursor's time and the execution time of the action.
This job runs an action that creates synthetic one-time billing events
from billing recurrences. Events are created for all recurrences that
should exist between the RECURRING_BILLING cursor's time and the execution
time of the action.
</description>
<schedule>every day 03:00</schedule>
<target>backend</target>
</cron>
<schedule>0 3 * * *</schedule>
</task>
<cron>
<task>
<url><![CDATA[/_dr/task/deleteExpiredDomains]]></url>
<name>deleteExpiredDomains</name>
<description>
This job runs an action that deletes domains that are past their
autorenew end date.
</description>
<schedule>every day 03:07</schedule>
<target>backend</target>
</cron>
<schedule>7 3 * * *</schedule>
</task>
<cron>
<task>
<url><![CDATA[/_dr/cron/fanout?queue=retryable-cron-tasks&endpoint=/_dr/task/deleteProberData&runInEmpty]]></url>
<name>deleteProberData</name>
<description>
This job clears out data from probers and runs once a week.
</description>
<schedule>every monday 14:00</schedule>
<timezone>UTC</timezone>
<target>backend</target>
</cron>
<schedule>0 14 * * 1</schedule>
</task>
<!-- TODO: Add borgmon job to check that these files are created and updated successfully. -->
<cron>
<task>
<url><![CDATA[/_dr/cron/fanout?queue=retryable-cron-tasks&endpoint=/_dr/task/exportReservedTerms&forEachRealTld]]></url>
<name>exportReservedTerms</name>
<description>
Reserved terms export to Google Drive job for creating once-daily exports.
</description>
<schedule>every day 05:30</schedule>
<target>backend</target>
</cron>
<schedule>30 5 * * *</schedule>
</task>
<cron>
<task>
<url><![CDATA[/_dr/cron/fanout?queue=retryable-cron-tasks&endpoint=/_dr/task/exportPremiumTerms&forEachRealTld]]></url>
<name>exportPremiumTerms</name>
<description>
Premium terms export to Google Drive job for creating once-daily exports.
</description>
<schedule>every day 05:00</schedule>
<target>backend</target>
</cron>
<schedule>0 5 * * *</schedule>
</task>
<cron>
<url><![CDATA[/_dr/cron/readDnsQueue?jitterSeconds=45]]></url>
<task>
<url>
<![CDATA[/_dr/cron/fanout?queue=dns-refresh&forEachRealTld&forEachTestTld&endpoint=/_dr/task/readDnsRefreshRequests&dnsJitterSeconds=45]]></url>
<name>readDnsRefreshRequests</name>
<description>
Lease all tasks from the dns-pull queue, group by TLD, and invoke PublishDnsUpdates for each
group.
Enqueues a ReadDnsRefreshRequestsAction task for each TLD.
</description>
<schedule>every 1 minutes synchronized</schedule>
<target>backend</target>
</cron>
<cron>
<url><![CDATA[/_ah/sessioncleanup?clear]]></url>
<description>
Delete up to 100 expired _ah_SESSION entities from Datastore.
</description>
<schedule>every 15 minutes</schedule>
<target>backend</target>
</cron>
</cronentries>
<schedule>*/1 * * * *</schedule>
</task>
</entries>

View File

@@ -151,10 +151,10 @@
<url-pattern>/_dr/task/nordnVerify</url-pattern>
</servlet-mapping>
<!-- Reads the DNS push and pull queues and kick off the appropriate tasks to update zone. -->
<!-- Reads the DNS refresh requests and kicks off the appropriate tasks to update the zone. -->
<servlet-mapping>
<servlet-name>backend-servlet</servlet-name>
<url-pattern>/_dr/cron/readDnsQueue</url-pattern>
<url-pattern>/_dr/task/readDnsRefreshRequests</url-pattern>
</servlet-mapping>
<!-- Publishes DNS updates. -->
@@ -240,10 +240,10 @@
<url-pattern>/_dr/task/refreshDnsOnHostRename</url-pattern>
</servlet-mapping>
<!-- Action to expand recurring billing events into OneTimes. -->
<!-- Action to expand BillingRecurrences into BillingEvents. -->
<servlet-mapping>
<servlet-name>backend-servlet</servlet-name>
<url-pattern>/_dr/task/expandRecurringBillingEvents</url-pattern>
<url-pattern>/_dr/task/expandBillingRecurrences</url-pattern>
</servlet-mapping>
<!-- Background action to delete domains past end of autorenewal. -->

View File

@@ -0,0 +1,113 @@
<?xml version="1.0" encoding="UTF-8"?>
<entries>
<!-- Queue template with all supported params -->
<!-- More information - https://cloud.google.com/sdk/gcloud/reference/tasks/queues/create -->
<!--
<queue>
<name></name>
<max-attempts></max-attempts>
<max-backoff></max-backoff>
<max-concurrent-dispatches></max-concurrent-dispatches>
<max-dispatches-per-second></max-dispatches-per-second>
<max-doublings></max-doublings>
<max-retry-duration></max-retry-duration>
<min-backoff></min-backoff>
</queue>
-->
<!-- Queue for reading DNS update requests and batching them off to the dns-publish queue. -->
<queue>
<name>dns-refresh</name>
<max-dispatches-per-second>100</max-dispatches-per-second>
</queue>
<!-- Queue for publishing DNS updates in batches. -->
<queue>
<name>dns-publish</name>
<max-dispatches-per-second>100</max-dispatches-per-second>
<!-- 30 sec backoff increasing linearly up to 30 minutes. -->
<min-backoff>30s</min-backoff>
<max-backoff>1800s</max-backoff>
<max-doublings>0</max-doublings>
</queue>
<!-- Queue for uploading RDE deposits to the escrow provider. -->
<queue>
<name>rde-upload</name>
<max-dispatches-per-second>0.166666667</max-dispatches-per-second>
<max-concurrent-dispatches>5</max-concurrent-dispatches>
<max-retry-duration>14400s</max-retry-duration>
</queue>
<!-- Queue for uploading RDE reports to ICANN. -->
<queue>
<name>rde-report</name>
<max-dispatches-per-second>1</max-dispatches-per-second>
<max-concurrent-dispatches>1</max-concurrent-dispatches>
<max-retry-duration>14400s</max-retry-duration>
</queue>
<!-- Queue for copying BRDA deposits to GCS. -->
<queue>
<name>brda</name>
<max-dispatches-per-second>0.016666667</max-dispatches-per-second>
<max-concurrent-dispatches>10</max-concurrent-dispatches>
<max-retry-duration>82800s</max-retry-duration>
</queue>
<!-- Queue for tasks that trigger domain DNS update upon host rename. -->
<queue>
<name>async-host-rename</name>
<max-dispatches-per-second>1</max-dispatches-per-second>
</queue>
<!-- Queue for tasks that wait for a Beam pipeline to complete (i.e. Spec11 and invoicing). -->
<queue>
<name>beam-reporting</name>
<max-dispatches-per-second>0.016666667</max-dispatches-per-second>
<max-concurrent-dispatches>1</max-concurrent-dispatches>
<max-attempts>5</max-attempts>
<min-backoff>180s</min-backoff>
<max-backoff>180s</max-backoff>
</queue>
<!-- Queue for tasks that communicate with TMCH MarksDB webserver. -->
<queue>
<name>marksdb</name>
<max-dispatches-per-second>0.016666667</max-dispatches-per-second>
<max-concurrent-dispatches>1</max-concurrent-dispatches>
<max-retry-duration>39600s</max-retry-duration> <!-- cron interval minus hour -->
</queue>
<!-- Queue for tasks to produce LORDN CSV reports, populated by a Cloud Scheduler fanout job. -->
<queue>
<name>nordn</name>
<max-dispatches-per-second>1</max-dispatches-per-second>
<max-concurrent-dispatches>10</max-concurrent-dispatches>
<max-retry-duration>39600s</max-retry-duration> <!-- cron interval minus hour -->
</queue>
<!-- Queue for tasks that sync data to Google Spreadsheets. -->
<queue>
<name>sheet</name>
<max-dispatches-per-second>1</max-dispatches-per-second>
<!-- max-concurrent-dispatches is intentionally omitted. -->
<max-retry-duration>3600s</max-retry-duration>
</queue>
<!-- Queue for infrequent cron tasks (i.e. hourly or less often) that should retry three times on failure. -->
<queue>
<name>retryable-cron-tasks</name>
<max-dispatches-per-second>1</max-dispatches-per-second>
<max-attempts>3</max-attempts>
</queue>
<!-- Queue for async actions that should be run at some point in the future. -->
<queue>
<name>async-actions</name>
<max-dispatches-per-second>1</max-dispatches-per-second>
<max-concurrent-dispatches>5</max-concurrent-dispatches>
</queue>
</entries>
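
The dns-publish retry comment above ("30 sec backoff increasing linearly up to 30 minutes") assumes Cloud Tasks' backoff behavior with max-doublings set to 0: the delay grows by min-backoff on each attempt until it is capped at max-backoff. A hypothetical sketch of the resulting delay sequence, under that assumption:

/** Hypothetical sketch of the dns-publish retry delays: linear growth, capped at max-backoff. */
public final class LinearBackoffSketch {

  static long delaySeconds(int attempt, long minBackoffSec, long maxBackoffSec) {
    // With max-doublings = 0 there is no exponential phase; the delay grows by
    // minBackoffSec per attempt and is capped at maxBackoffSec.
    return Math.min((long) attempt * minBackoffSec, maxBackoffSec);
  }

  public static void main(String[] args) {
    for (int attempt = 1; attempt <= 65; attempt += 8) {
      // Prints 30, 270, 510, ... and eventually saturates at 1800 (30 minutes).
      System.out.printf("attempt %d -> %ds%n", attempt, delaySeconds(attempt, 30, 1800));
    }
  }
}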

View File

@@ -1,98 +0,0 @@
<datastore-indexes autoGenerate="false">
<!-- For finding contact resources by registrar. -->
<datastore-index kind="Contact" ancestor="false" source="manual">
<property name="currentSponsorClientId" direction="asc"/>
<property name="deletionTime" direction="asc"/>
<property name="searchName" direction="asc"/>
</datastore-index>
<!-- For finding domain resources by registrar. -->
<datastore-index kind="Domain" ancestor="false" source="manual">
<property name="currentSponsorClientId" direction="asc"/>
<property name="deletionTime" direction="asc"/>
</datastore-index>
<!-- For finding domain resources by TLD. -->
<datastore-index kind="Domain" ancestor="false" source="manual">
<property name="tld" direction="asc"/>
<property name="deletionTime" direction="asc"/>
</datastore-index>
<!-- For finding domain resources by registrar. -->
<datastore-index kind="Domain" ancestor="false" source="manual">
<property name="currentSponsorClientId" direction="asc"/>
<property name="deletionTime" direction="asc"/>
</datastore-index>
<!-- For finding the most recently created domain resources. -->
<datastore-index kind="Domain" ancestor="false" source="manual">
<property name="tld" direction="asc"/>
<property name="creationTime" direction="desc"/>
</datastore-index>
<!-- For finding host resources by registrar. -->
<datastore-index kind="Host" ancestor="false" source="manual">
<property name="currentSponsorClientId" direction="asc"/>
<property name="deletionTime" direction="asc"/>
<property name="fullyQualifiedHostName" direction="asc"/>
</datastore-index>
<!-- For determining the active domains linked to a given contact. -->
<datastore-index kind="Domain" ancestor="false" source="manual">
<property name="allContacts.contact" direction="asc"/>
<property name="deletionTime" direction="asc"/>
</datastore-index>
<!-- For determining the active domains linked to a given host. -->
<datastore-index kind="Domain" ancestor="false" source="manual">
<property name="nsHosts" direction="asc"/>
<property name="deletionTime" direction="asc"/>
</datastore-index>
<!-- For deleting expired not-previously-deleted domains. -->
<datastore-index kind="Domain" ancestor="false" source="manual">
<property name="deletionTime" direction="asc"/>
<property name="autorenewEndTime" direction="asc"/>
</datastore-index>
<!-- For RDAP searches by linked nameserver. -->
<datastore-index kind="Domain" ancestor="false" source="manual">
<property name="nsHosts" direction="asc"/>
<property name="deletionTime" direction="asc"/>
</datastore-index>
<!-- For WHOIS IP address lookup -->
<datastore-index kind="Host" ancestor="false" source="manual">
<property name="inetAddresses" direction="asc"/>
<property name="deletionTime" direction="asc"/>
</datastore-index>
<!-- For Poll -->
<datastore-index kind="PollMessage" ancestor="false" source="manual">
<property name="clientId" direction="asc"/>
<property name="eventTime" direction="asc"/>
</datastore-index>
<datastore-index kind="PollMessage" ancestor="true" source="manual">
<property name="clientId" direction="asc"/>
<property name="eventTime" direction="asc"/>
</datastore-index>
<!-- For querying HistoryEntries. -->
<datastore-index kind="HistoryEntry" ancestor="true" source="manual">
<property name="modificationTime" direction="asc"/>
</datastore-index>
<datastore-index kind="HistoryEntry" ancestor="false" source="manual">
<property name="clientId" direction="asc"/>
<property name="modificationTime" direction="asc"/>
</datastore-index>
<!-- For RDAP. -->
<datastore-index kind="Domain" ancestor="false" source="manual">
<property name="currentSponsorClientId" direction="asc"/>
<property name="fullyQualifiedDomainName" direction="asc"/>
</datastore-index>
<datastore-index kind="Domain" ancestor="false" source="manual">
<property name="currentSponsorClientId" direction="asc"/>
<property name="tld" direction="asc"/>
<property name="fullyQualifiedDomainName" direction="asc"/>
</datastore-index>
<datastore-index kind="Domain" ancestor="false" source="manual">
<property name="tld" direction="asc"/>
<property name="fullyQualifiedDomainName" direction="asc"/>
</datastore-index>
<datastore-index kind="Host" ancestor="false" source="manual">
<property name="deletionTime" direction="asc"/>
<property name="fullyQualifiedHostName" direction="asc"/>
</datastore-index>
<datastore-index kind="Contact" ancestor="false" source="manual">
<property name="deletionTime" direction="asc"/>
<property name="searchName" direction="asc"/>
</datastore-index>
</datastore-indexes>

View File

@@ -1,11 +1,14 @@
<?xml version="1.0" encoding="UTF-8"?>
<!-- TODO: @ptkach - Delete once the Cloud API deployer is up and running -->
<queue-entries>
<!-- Queue for reading DNS update requests and batching them off to the dns-publish queue. -->
<queue>
<name>dns-pull</name>
<mode>pull</mode>
<name>dns-refresh</name>
<rate>100/s</rate>
</queue>
<!-- Queue for publishing DNS updates in batches. -->
<queue>
<name>dns-publish</name>
<rate>100/s</rate>
@@ -18,6 +21,7 @@
</retry-parameters>
</queue>
<!-- Queue for uploading RDE deposits to the escrow provider. -->
<queue>
<name>rde-upload</name>
<rate>10/m</rate>
@@ -28,6 +32,7 @@
</retry-parameters>
</queue>
<!-- Queue for uploading RDE reports to ICANN. -->
<queue>
<name>rde-report</name>
<rate>1/s</rate>
@@ -37,6 +42,7 @@
</retry-parameters>
</queue>
<!-- Queue for copying BRDA deposits to GCS. -->
<queue>
<name>brda</name>
<rate>1/m</rate>
@@ -65,7 +71,6 @@
</queue>
<!-- Queue for tasks that communicate with TMCH MarksDB webserver. -->
<!-- TODO(b/17623181): Delete this once the queue implementation is live and working. -->
<queue>
<name>marksdb</name>
<rate>1/m</rate>
@@ -75,7 +80,7 @@
</retry-parameters>
</queue>
<!-- Queue for tasks to produce LORDN CSV reports, either by the query or queue method. -->
<!-- Queue for tasks to produce LORDN CSV reports, populated by a Cloud Scheduler fanout job. -->
<queue>
<name>nordn</name>
<rate>1/s</rate>
@@ -85,18 +90,6 @@
</retry-parameters>
</queue>
<!-- Queue for LORDN Claims CSV rows to be periodically queried and then uploaded in batches. -->
<queue>
<name>lordn-claims</name>
<mode>pull</mode>
</queue>
<!-- Queue for LORDN Sunrise CSV rows to be periodically queried and then uploaded in batches. -->
<queue>
<name>lordn-sunrise</name>
<mode>pull</mode>
</queue>
<!-- Queue for tasks that sync data to Google Spreadsheets. -->
<queue>
<name>sheet</name>

Some files were not shown because too many files have changed in this diff.