When RdeReportAction is invoked without a prefix parameter (as in the
case when it is kicked off by cron jobs for potential catch-ups), we
need to use the same heuristics that are employed in RdeUploadAction to
find the most recent prefix for the given watermark, otherwise the job
will not find any deposits to upload.
Also renamed RdeUtil to RdeUtils, to be consistent with our naming
conventions.
IAP and regular OIDC auth mechanisms are unified under a base class that
produces either APP or USER level AuthResult based on the principal email
found in the OIDC token.
Also moved some enum classes to better organize code structure.
This encompasses most of the basic information that is viewable in the
existing console, basically, just viewing the base info of the Registrar
object.
Use the ApplicationDefaultCredential annotation instead.
The new annotation has been verified in sandbox and production using the
'executeCannedScript' endpoint. The verification code is removed in this
PR too.
This adds a possible configuration point "defaultServiceAccount" (which
in GAE will be the standard GAE service account). If this is configured,
CloudTasksUtils can create tasks with standard HTTP requests with an
OIDC token corresponding to that service account, as opposed to using
the AppEngine-specific request methods.
This also works with IAP, in that if IAP is on and we specify the IAP
client ID in the config, CloudTasksUtils will use the IAP client ID as
the token audience and the request will successfully be passed through
the IAP layer.
Tested in QA.
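For illustration, here is a minimal sketch (using the google-cloud-tasks v2 client; not the actual CloudTasksUtils code) of creating a task that carries an OIDC token, with the IAP client ID as the audience when IAP is on:
```
import com.google.cloud.tasks.v2.CloudTasksClient;
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.cloud.tasks.v2.HttpRequest;
import com.google.cloud.tasks.v2.OidcToken;
import com.google.cloud.tasks.v2.QueueName;
import com.google.cloud.tasks.v2.Task;

public class OidcTaskExample {
  static Task enqueue(
      String project, String location, String queue,
      String url, String serviceAccountEmail, String iapClientId) throws Exception {
    OidcToken oidcToken =
        OidcToken.newBuilder()
            .setServiceAccountEmail(serviceAccountEmail) // the configured defaultServiceAccount
            .setAudience(iapClientId) // the IAP client ID when IAP is on
            .build();
    HttpRequest request =
        HttpRequest.newBuilder()
            .setUrl(url)
            .setHttpMethod(HttpMethod.POST)
            .setOidcToken(oidcToken)
            .build();
    try (CloudTasksClient client = CloudTasksClient.create()) {
      // Standard HTTP task, as opposed to an AppEngine-specific request.
      return client.createTask(
          QueueName.of(project, location, queue),
          Task.newBuilder().setHttpRequest(request).build());
    }
  }
}
```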
This column is used by the billing team to create invoices. Registrars
have asked that a single invoice be created for a given registrar,
instead of one per registrar-tld pair. This should have no other effect
on the billing pipeline as the invoice grouping key has a description
field that also contains the TLD, so the granularity as a whole does not
change.
We have been using it as a poor man's timed flag that triggers a system
behavior change after a certain time. We have no foreseeable future use
for it now that the DNS pull queue related code is deleted. If in the
future a need for such a flag arises, we are better off implementing a
proper flag system than hijacking this class anyway.
* Prepare switch of credential annotation
Prepare the switch from DefaultCredential to ApplicationCredential.
In nomulus tools, start using the new annotation. This is tested by
successfully using the nomulus curl command, which actually needs a
valid credential to work.
For remaining use cases of the old annotation in Nomulus server, add
some code that relies on the new credential to work. Once this code
is tested in sandbox and production, we will switch the annotations.
If the user passes, e.g. `--allowed_nameservers=` (or the contact-id equivalent), that
shouldn't mean a list consisting solely of the empty string.
Using this parameter / converter allows us to ensure that lists of
strings look reasonable.
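A minimal sketch of such a JCommander list converter (the class name and exact wiring are illustrative, not the real parameter class):
```
import com.beust.jcommander.IStringConverter;
import com.google.common.base.Splitter;
import com.google.common.collect.ImmutableList;
import java.util.List;

// Used via e.g. @Parameter(names = "--allowed_nameservers", listConverter = StringListParameter.class)
public class StringListParameter implements IStringConverter<List<String>> {
  @Override
  public List<String> convert(String value) {
    // omitEmptyStrings() drops the lone empty string produced by an empty argument,
    // so `--allowed_nameservers=` yields an empty list rather than [""].
    return ImmutableList.copyOf(
        Splitter.on(',').trimResults().omitEmptyStrings().split(value));
  }
}
```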
This includes renaming the billing classes to match the SQL table names,
as well as splitting them out into their own separate top-level classes.
The rest of the changes are mostly renaming variables and comments etc.
We now use `BillingBase` as the name of the common billing superclass,
because one-time events are called BillingEvents.
* Allow rotation when updating registrar cert
When updating a registrar's primary cert, add a flag to activate
rotation of previous primary cert to failover.
This functionality is part of the prober ssl cert renewal automation.
The only method that is called from this class is setNumInstances. However, we
don't currently use `nomulus set_num_instances` anywhere. If we need to change
the number of instances, it is either done by updating appengine-web.xml, which
is deployed by Spinnaker, or doing it manually as a break-glass fix via gcloud
or on Pantheon.
Because we need to check if a contact history is the most recent for its
underlying contact resource, the query-wipe out-repeat loop no longer works
ideally due to the added overhead with the query.
Instead, we refactor the logic into a Beam pipeline where the query only
needs to be performed once and history entries eligible for wipe out are
handled individually in their own transforms. Because history entries
are otherwise immutable, we can run the pipeline in relatively relaxed
repeatable read isolation level. We also do not worry about batching for
performance, as we do not anticipate this operation to put a lot of
strain on the particular table.
This also adds README files to explain the two different IDN table locations
(which have different purposes). See http://b/278565478 for more information.
This includes changes to make sure that we use the proper per-TLD IDN
tables as well as setting/updating/removing them via the Create/Update
TLD commands.
There can be situations (anchor tenants, test tokens, other ways of
getting a domain to cost $0) where we may want to delete a domain during
the add grace period but the credit applied is 0. We should not fail on
those cases.
See b/277115241 for an example.
This new table has just been approved by ICANN. It is the same as our existing
Extended Latin table, except with the removal of some lesser-used characters
with diacritic marks that are confusable variants.
The filenames for the IDN tables are made explicit to improve code readability.
And this reverses the removal of G with stroke from the existing Extended Latin
table (see PR #1938), so that that table continues to accurately reflect the
state of our previously launched TLDs.
This is the full list of removed characters:
U+00E1 # LATIN SMALL LETTER A WITH ACUTE
U+0101 # LATIN SMALL LETTER A WITH MACRON
U+01CE # LATIN SMALL LETTER A WITH CARON
U+010B # LATIN SMALL LETTER C WITH DOT ABOVE
U+01E7 # LATIN SMALL LETTER G WITH CARON
U+0123 # LATIN SMALL LETTER G WITH CEDILLA
U+01E5 # LATIN SMALL LETTER G WITH STROKE
U+0131 # LATIN SMALL LETTER DOTLESS I
U+00ED # LATIN SMALL LETTER I WITH ACUTE
U+00EF # LATIN SMALL LETTER I WITH DIAERESIS
U+01D0 # LATIN SMALL LETTER I WITH CARON
U+0144 # LATIN SMALL LETTER N WITH ACUTE
U+014B # LATIN SMALL LETTER ENG
U+00F3 # LATIN SMALL LETTER O WITH ACUTE
U+014D # LATIN SMALL LETTER O WITH MACRON
U+01D2 # LATIN SMALL LETTER O WITH CARON
U+0157 # LATIN SMALL LETTER R WITH CEDILLA
U+0163 # LATIN SMALL LETTER T WITH CEDILLA
U+00FA # LATIN SMALL LETTER U WITH ACUTE
U+00FC # LATIN SMALL LETTER U WITH DIAERESIS
U+01D4 # LATIN SMALL LETTER U WITH CARON
U+1E83 # LATIN SMALL LETTER W WITH ACUTE
U+1E81 # LATIN SMALL LETTER W WITH GRAVE
U+1E85 # LATIN SMALL LETTER W WITH DIAERESIS
U+1EF3 # LATIN SMALL LETTER Y WITH GRAVE
U+017C # LATIN SMALL LETTER Z WITH DOT ABOVE
Makes the next run at the first Monday of December, which should give us
plenty of time to fix the issue with it wiping out PII in the most recent
contact history.
* Check allowedEppActions when validating tokens
* Reflect failed tokens in the fee check
* Add tests for domainCheckFlow
* Add hyphens to fee class name
* Add clarifying comment to catch block
* Add specific exception types
* Implement default tokens for the fee extension in domain check
* Add test for expired token
* Add test for alloc token and default token
* Fix formatting
* Always check for default tokens
* Change transaction time to passed in DateTime
Also adds a DnsUtils class to deal with adding, polling, and removing
DNS refresh requests (only adding is implemented for now). The class
also takes care of choosing which mechanism to use (pull queue vs. SQL)
based on the current time and the database migration schedule map.
When submitting tasks to Cloud Tasks, we will use the built-in OIDC
authentication which runs under the default service account (not the
cloud scheduler service account). We want either to work for app-level
auth.
This does nothing for now, but in the future this will allow us to refer
to the RegistryConfig and/or Service objects from the core project. This
will be necessary when changing CloudTasksUtils to not use the AppEngine
built-in connection (it will need to use a standard HTTP request
instead).
It seems like we're hitting App Engine capacity issues resulting in actual pages
now (for whatever reason, but likely one customer), and we obviously don't want
that.
The value of the column would be set to START_OF_TIME for new entries.
Every time a row is read, the value is updated to the current time. This
allows concurrent reads to not repeatedly read the same entry that has the
earliest request time, because they would only look for rows that have a value
of process time that is before current time - some padding time.
This basically fulfills the same function that LEASE_PADDING gives us
when using a pull queue, whereby a task is leased for a certain
time, during which it cannot be leased by anyone else.
See: https://cs.opensource.google/nomulus/nomulus/+/master:core/src/main/java/google/registry/dns/ReadDnsQueueAction.java;l=99?q=readdnsqueue&ss=nomulus%2Fnomulus
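For illustration only, a rough raw-JDBC sketch of this claim pattern (table and column names are hypothetical, not the actual schema):
```
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class DnsRefreshClaimer {
  // Plays the role LEASE_PADDING played for the pull queue.
  private static final Duration PADDING = Duration.ofMinutes(30);

  static List<Long> claimBatch(Connection conn, int batchSize) throws SQLException {
    List<Long> claimed = new ArrayList<>();
    Instant cutoff = Instant.now().minus(PADDING);
    // Pick the oldest requests whose process_time is outside the padding window and
    // bump their process_time to now(), so concurrent readers skip them. SKIP LOCKED
    // guards against two claimers grabbing the same rows at the same instant.
    String sql =
        "UPDATE \"DnsRefreshRequest\" SET process_time = now() WHERE id IN ("
            + "SELECT id FROM \"DnsRefreshRequest\" WHERE process_time < ? "
            + "ORDER BY request_time LIMIT ? FOR UPDATE SKIP LOCKED) RETURNING id";
    try (PreparedStatement stmt = conn.prepareStatement(sql)) {
      stmt.setTimestamp(1, Timestamp.from(cutoff));
      stmt.setInt(2, batchSize);
      try (ResultSet rs = stmt.executeQuery()) {
        while (rs.next()) {
          claimed.add(rs.getLong(1));
        }
      }
    }
    return claimed;
  }
}
```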
This PR adds an alternative method to upload Lordn to Nordn server without
using App Engine pull queue. A new database migration stage is added to control
whether a new task is scheduled with the old or new method. The
NordnUploadAction is configured to process both kinds of tasks. Once the tasks
scheduled with the old method are all processed, we can start using the
new method exclusively.
See: go/registry-pull-queue-redesign
There is no point comparing the old CRL to the new ones when the old one
is invalid. This could happen when the CA cert rotates, after which the
old CRL stops being valid as it fails signature verification against the
new cert.
This change will allow us to keep updating the CRL after a CA rotation without
having to manually delete the old CRL from the database.
See b/270983553.
ICANN doesn't like this character because it's confusable with a normal G (the
stroke tends to get lost in the visual clutter of the descender), and .com's
Extended Latin table doesn't use it either. Best to get rid of it.
For some reason `./gradlew clean build` on master is failing for me on
multiple machines due to a new org.json:json version triggering license
violations, even though the lock files are not changing.
Note that the old versions are still present because if I remove
"The JSON license", which the old versions use, the check also fails...
We might (likely will) modify some of the fiddly bits around this (maybe
the GSON serialization, where we do the actual authorization, etc) but
this should be a decent basic shell structure for endpoints that the new
registrar console can call to retrieve JSON results.
This is only generated and used if "iapClientId" is set in the proxy
config. If so, we use code similar to
https://cloud.google.com/iap/docs/authentication-howto#obtaining_an_oidc_token_for_the_default_service_account
to generate an ID token that is valid for IAP. We set the token on the
Proxy-Authorization header so that we can keep using the pre-existing
access token as well -- IAP allows for us to use either the
Authorization header or the Proxy-Authorization header.
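A minimal sketch, following the linked IAP docs, of minting such a token and building the Proxy-Authorization header value (the method name is illustrative):
```
import com.google.auth.oauth2.GoogleCredentials;
import com.google.auth.oauth2.IdTokenCredentials;
import com.google.auth.oauth2.IdTokenProvider;
import java.io.IOException;

public class IapTokenExample {
  static String proxyAuthorizationHeader(String iapClientId) throws IOException {
    GoogleCredentials adc = GoogleCredentials.getApplicationDefault();
    // The IAP client ID is the target audience of the OIDC ID token.
    IdTokenCredentials idTokenCredentials =
        IdTokenCredentials.newBuilder()
            .setIdTokenProvider((IdTokenProvider) adc)
            .setTargetAudience(iapClientId)
            .build();
    idTokenCredentials.refreshIfExpired();
    // Sent as: Proxy-Authorization: Bearer <token>, leaving the Authorization
    // header free for the existing access token.
    return "Bearer " + idTokenCredentials.getIdToken().getTokenValue();
  }
}
```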
We were under the mistaken impression before that there was a reliable
way to, out-of-band, get a GAIA ID for a particular email address.
Unfortunately, that isn't the case (at least, not in a scalable way or
one that support agents could use). As a result, we have to allow null
GAIA IDs in the database.
When we (or the support team) create new users, we will only specify the
email address and not the GAIA ID. Then, when the user logs in for the
first time, we will have the GAIA ID from the provided ID token, and we
can populate it then.
See b/260945047.
Also refactored the corresponding tests, which should make future updates easier.
This change should be deployed at or around 2023-02-15T16:00:00Z.
* Modify DomainCreateFlow to check for an applicable defaultPromoToken
* Add handling for deleted tokens
* Change cache to allocation token cache
* Abstract away cache methods
* Use AllocationToken.getAll in create flow
* Filter out empty tokens
This is an 'easy' upgrade that requires a minor change in
common/build.gradle and the removal of an unnecessary import in buildSrc.
Gradle 7.4 and above has breaking changes that break the latest nebula lint plugin. We may have to wait a while.
Due to the way the Beam pipeline is designed, it will expand a
recurring billing event when its event time is in scope for expansion,
instead of billing time. This means that the one-time will be generated
45 days earlier. This would negate the need to check if the expansion is
finished when generating monthly invoices.
We will need to backfill the past 45 days of onetimes before the new
code is deployed. As an illustration, with the old code, a cursor time
of 2023-01-17 means that all auto-renewals whose billing time is before
2023-01-17 were created, which corresponds to an effective cursor time
of 2022-12-03 (45 days before 2023-01-17) for event time. This cursor
will need to be brought to 2023-01-17 to ensure that there is no gap in
generated event times when switching to use the new code.
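A tiny Joda-Time sketch of the 45-day offset arithmetic described above:
```
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;

public class CursorOffsetExample {
  public static void main(String[] args) {
    // Old code: cursor is in billing time; the effective event-time cursor is 45 days earlier.
    DateTime billingTimeCursor = new DateTime(2023, 1, 17, 0, 0, DateTimeZone.UTC);
    DateTime effectiveEventTimeCursor = billingTimeCursor.minusDays(45);
    System.out.println(effectiveEventTimeCursor); // 2022-12-03T00:00:00.000Z
  }
}
```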
Currently we synthesize an instance id, which requires the use of the App
Engine Module API. Given that we only have one version of code running
at one time, and HTTP is stateless, there is no point tracking exactly
which GAE "instance" it is. We do lose information on which service (default,
backend, etc) is writing the metric, but that does not seem very
important.
Using a constant fake instance ID allows us to get rid of another GAE
dependency.
The temporary queue.xml file is not deleted in the afterEach() method,
likely causing some flaky tests that we saw due to overwriting of the
file by concurrent tests.
Most of its usage can be replaced by JpaIntegrationTestExtension. In
places where specific GAE APIs are still needed, namely when pull queue
or the User service is used, two simplified extensions are used, which
makes them much easier to identify when the APIs are no longer used.
We've been using it in production for three weeks now. Everything seems
to be working fine. Removing the code related to checking the migration
state and using the override.
This will replace the ExpandRecurringBillingEventsAction, which has a
couple of issues:
1) The action starts with too many Recurrings that are later filtered out
because their expanded OneTimes are not actually in scope. This is due
to the Recurrings not recording their latest expanded event time, and
therefore many Recurrings that are not yet due for renewal get included
in the initial query.
2) The action works in sequence, which exacerbated the issue in 1) and
makes it very slow to run if the window of operation is wider than
one day, which in turn makes it impossible to run any catch-up
expansions with any significant gap to fill.
3) The action only expands the recurrence when the billing time becomes
due, but most of its logic works on event time, which is 45 days
before billing time, making the code hard to reason about and
error-prone. This has led to b/258822640, where a premature
optimization intended to fix 1) caused some autorenewals to not be
expanded correctly when subsequent manual renews within the autorenew
grace period closed the original recurrence.
As a result, the new pipeline addresses the above issues in the
following way:
1) Update the recurrenceLastExpansion field on the Recurring when a new
expansion occurs, and narrow down the Recurrings in scope for
expansion by only looking for the ones that have not been expanded for
more than a year.
2) Make it a Beam pipeline so expansions can happen in parallel. The
Recurrings are grouped into batches in order to not overwhelm the
database with writes for each expansion.
3) Create new expansions when the event time, as opposed to billing
time, is within the operation window. This streamlines the logic and
makes it clearer and easier to reason about. This also aligns with
how other (cancellable) operations for which there are accompanying
grace periods are handled, when the corresponding data is always
speculatively created at event time. Lastly, doing this negates the
need to check if the expansion has finished running before generating
the monthly invoices, because the billing events are now created not
just-in-time, but 45 days in advance.
Note that this PR only adds the pipeline. It does not switch the default
behavior to using the pipeline, which is still done by
ExpandRecurringBillingEventsAction. We will first use this pipeline to
generate missing billing events and domain histories caused by
b/258822640. This also allows us to test it in production, as it
backfills data that will not affect ongoing invoice generation. If
anything goes wrong, we can always delete the generated billing events
and domain histories, based on the unique "reason" in them.
This pipeline can only run after we switch to use SQL sequence based ID
allocation, introduced in #1831.
We can use the saved refresh token associated with the nomulus tool to
request an ID token with an audience of the IAP client in order to
satisfy IAP when using the Nomulus tool.
Note: this requires that the user of the Nomulus tool, e.g.
"gbrodman@google.com" has a User object stored in SQL.
Tested on QA
This is similar to where we store the SQL files for beam pipelines, and
frankly makes more sense. Also streamlined the use of the API to read
SQL files from a jar.
The AppEngine thread factory is only useful if we can't create our own
(this is no longer the case) or if we need access to AppEngine APIs
(this is no longer the case).
The Concurrent class is only used by the DNS writer and the
CreateGroupsAction.
This parameter is misleading and does not do what it purports to do.
Namely, it does not impact the level of parallelism. Given the input n for this
parameter, and m for the batch size, the elements are divided (keyed) into n
groups, each of which is then spread evenly across all threads, and the
elements are eventually batched into batches of size m:
https://github.com/apache/beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/GroupIntoBatches.java#L227
This is also evident in the implementation itself, where the ShardedKey
is determined by the unique number for a worker/thread combo and the
original key:
https://github.com/apache/beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/GroupIntoBatches.java#L268
Using a more concrete example, suppose we have 100 elements and 10
worker threads, with a target batch size of 5. If the "shard" number is set to
1, we first spread the 100 elements across 10 threads, resulting in 10
elements per thread, each thread then batches the elements into 2
batches of size 5.
If the "shard" number is set to 2, the 100 elements are first divided into 2
"shards" of 50 each. Each "shard" is then distributed within the 10
threads, resulting in 5 elements per "shard" per thread. They then get
turned into 1 batch per "shard" per thread. In the end, each thread
still processes 2 batches, even though they are from 2 different "shards".
Therefore this "shard" number does not perform horizontal partitioning
that one normally associates with sharding, and provides no
performance benefits but rather confuses the user.
It is also suggested that using withShardedKey() alone is already
sufficient to achieve auto-sharding within the keyed group. There is no
need to manually divide the input by keying them differently based on
the "shard" number specified:
https://youtu.be/jses0W4Zalc?t=967
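A small Beam sketch of that recommendation (types and values simplified for illustration): withShardedKey() alone lets the runner auto-shard within each key, with no manual "shard" keying needed.
```
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.GroupIntoBatches;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.ShardedKey;

public class BatchingExample {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
    // A single logical key is fine; the runner shards it automatically.
    PCollection<KV<String, Long>> keyed =
        p.apply(Create.of(KV.of("all", 1L), KV.of("all", 2L), KV.of("all", 3L)));
    PCollection<KV<ShardedKey<String>, Iterable<Long>>> batches =
        keyed.apply(GroupIntoBatches.<String, Long>ofSize(2).withShardedKey());
    p.run().waitUntilFinish();
  }
}
```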
Some retriers are no longer needed because transactions are
automatically retried by the JPA transaction manager when there's a
transient exception.
We no longer use App Engine Remote API as of #1858.
The pipeline endpoint is only for GAE mapreduce, which we stopped using
a while ago.
TESTED=deployed to alpha and used nomulus tool built from master to
connect to the tools service.
* Add defaultPromoTokens to Registry
* Remove flyway files from this PR
* Fix merge conflicts
* Add back flyway file
* Add more info to error messages
* Change to a list
* Fix javadoc
* Change error message
* Add note to field declaration
These were always properly reflected in the actual creations, but when
running a check flow it would still show a non-zero cost even when using
an ANCHOR_TENANT allocation token. This changes it so that we accurately
show the $0.00 cost.
This is only used for contacting Datastore. With the removal of:
1. All standard usages of Datastore
2. Usage of Datastore for allocation of object IDs
3. Usage of Datastore for GAE user IDs
we can remove the remote API without affecting functionality.
This also allows us to just use SQL for every command since it's lazily
supplied. This simplifies the SQL setup and means that we remove a
possible situation where we forget the SQL setup.
So long, farewell, adios, ciao, sayonara, 再见!
TESTED=deployed to alpha and used `nomulus list_tlds` to confirm that the web app can receive and serve requests.
With newer versions of Java 11, javadoc fails to build due to unknown
tags in package-info.java files. These files are not important so we
exclude them.
Switch to using the login email address instead of GAE user ID to
identify console users. The primary use cases are:
1) When the user logs in to the registrar console, we need to figure out
which registrars they have access to (in
AuthenticatedRegistrarAccessor).
2) When a user tries to apply a registry lock, we need to know if they
can (in RegistryLockGetAction).
Both cases are tested in alpha with a personal email address to ensure
it does not get the permission due to being a GAE admin account.
Also verified that the soy templates include the hidden login email
form field instead of the GAE user ID when registrars are displayed on the
console; consequently, when a contact update is posted to the server,
the login email is part of the JSON payload, even though it does not
look like it is used in any way by RegistrarSettingsAction, which
receives the POST request. Like the GAE user ID, the field is hidden and
cannot be changed by the user from the console; it is also not used to
identify the RegistrarPoc entity, whose composite key is the contact
email and the registrar ID associated with it.
The login email address is backfilled for all RegistrarPocs that have a
non-null GAE user ID. The backfilled addresses convert to the same IDs
as those stored in the database.
It doesn't actually use any App Engine libraries or code -- it's just a
generic connection with authentication to a service. This also involves
changing that block of config to be "gcpProject" instead of "appEngine"
since it's more generic.
Note: this will require an internal PR as well to change the
corresponding private config block.
* Remove package token on manual transfer approval
* remove extra variables
* Add back white space
* Don't overwrite existingDomain
* Format fixes, use available helper variables
* Use PACKAGE allocation tokens in tests
* Refactor
* Fix merge conflicts
* Don't overwrite existingRecurring
* Remove names from packages on automatic transfers
* Add more tests
* Remove unnecessary local variable
* Eliminate unnecessary api call
* Reformat if blocks
* Don't overwrite existingRecurring
This PR fixes the issue where the onetime billing event for an autorenew
is not correctly created if the recurrence of the autorenew is closed
during the autorenew grace period, such as the case if a manual renew
happens during the same grace period.
The detailed analysis of the issue is captured in b/258822640. Note that
this is a quick and dirty fix to make ongoing billing event expansion work
correctly in the future. It does not fix the missing events in the past,
nor can it be used to reconstruct the missing ones (by providing a
different cursor time), due to timeout when triggering the action from
nomulus curl.
Per Weimin, the recurrences that fit the new condition alone, based on
the current cursor, would increase from 382k to 430k, a 12% increase. I
checked last night's cron job run, which started at 22:00 EST and seemed
to finish at 22:15 EST (when the last log for this request was
recorded), so it should definitely still finish in time for the nightly
runs with the new condition.
This is not a complete removal of ofy, as we still have a dependency on it
(GaeUserIdConverter), but this PR removes it from a lot of places where
it's no longer needed.
* Fix Gradle dependency version pinning
In Gradle 7, version labels require '!!' at the end to be free from
any forced upgrade.
Hibernate min version needs to be advanced past 5.6.12, which is buggy.
Upgraded most dependencies to the latest version.
Therefore this PR also removed several classes and related tests that
support the setup and verification of Ofy entities.
In addition, support for creating a VKey from a string is limited to
VKey<? extends EppResource> only because it is the only use case (to
pass a key to an EPP resource in a web safe way to facilitate resave),
and we do not want to keep an extra simple name to class mapping, in
addition to what persistence.xml contains. I looked into using
PersistenceXmlUtility to obtain the mapping, but the xml file contains
classes with the same simple name (namely OneTime from both PollMessage
and BillingEvent). It doesn't seem like a worthwhile investment to write
more code to deal with that, when the fact is that we only need to
consider EppResource.
* Remove Cloud KMS from Nomulus Server
Removed Cloud KMS from the Nomulus (:core) since it is no longer used.
Renamed remaining classes to reflect their use of the SecretManager.
Updated the config instructions to use a new codename for the keyring:
KMS to CSM. This PR works with both codenames. Will drop 'KMS' after
the internal repo is updated.
Add an implementation of delegated credentials that does not use a downloaded private key.
This is a stop-gap implementation while waiting for a solution from the Java auth library.
Also added a verifier action to test the new credential in production. Testing is helpful because:
Configuration is per-environment; therefore, success in alpha does not fully validate prod.
The relevant use case is triggered by low-frequency activities. Problems may not surface for hours or longer.
This PR removes all Ofy related cruft around `HistoryEntry` and its three subclasses in order to support dual-write to datastore and SQL. The class structure was refactored to take advantage of inheritance to reduce code duplication and improve clarity.
Note that for the embedded EPP resources, either their columns are all empty (for pre-3.0 entities imported into SQL), including their unique foreign key (domain name, host name, contact id) and the update timestamp; or they are filled as expected (for entities that were written since dual writing was implemented).
Therefore the check for foreign key column nullness in the various `@PostLoad` methods in the original code is a no-op, as the EPP resource would have been loaded as null. In other words, there is no case where the update timestamp is null but other columns are not.
See the following query for the most recent entries in each table where the foreign key column or the update timestamp are null -- they are the same.
```
[I]postgres=> select MAX(history_modification_time) from "DomainHistory" where update_timestamp is null;
max
----------------------------
2021-09-27 15:56:52.502+00
(1 row)
[I]postgres=> select MAX(history_modification_time) from "DomainHistory" where domain_name is null;
max
----------------------------
2021-09-27 15:56:52.502+00
(1 row)
[I]postgres=> select MAX(history_modification_time) from "ContactHistory" where update_timestamp is null;
max
----------------------------
2021-09-27 15:56:04.311+00
(1 row)
[I]postgres=> select MAX(history_modification_time) from "ContactHistory" where contact_id is null;
max
----------------------------
2021-09-27 15:56:04.311+00
(1 row)
[I]postgres=> select MAX(history_modification_time) from "HostHistory" where update_timestamp is null;
max
----------------------------
2021-09-27 15:52:16.517+00
(1 row)
[I]postgres=> select MAX(history_modification_time) from "HostHistory" where host_name is null;
max
----------------------------
2021-09-27 15:52:16.517+00
(1 row)
```
These alternative ORMs are introduced as a way to make querying large numbers of
domains and domain histories more efficient through bulk loading from several
to-be-joined tables separately, then in-memory re-assembly of the final entity,
bypassing the need to query multiple tables each time an entity is queried.
Their primary use case is loading these entities for comparison between
datastore and SQL during the migration, which has been completed. The
code remains unused as of now, and their existence makes refactoring and
general maintenance more complicated than necessary due to the need to keep
them up to date.
Therefore we remove the related code.
Also removed the ability to disable update timestamp auto update as it
was only needed during the migration.
Lastly, rectified the use of raw Coder in RegistryJpaIO.
Passing in an already-existing instance is an antipattern because it can
lead to race conditions where something else modified the object in
between when the pipeline loaded it and when you're saving it. The Write
action should only be writing new entities.
We cannot check IDs for the objects (some IDs are not autogenerated so
they might exist already). We also cannot call `insert` on the objects
because the underlying JPA `persist` call adds the input object to the
persistence context, meaning that any modifications (e.g.
updateTimestamp) are reflected in the input object. Beam doesn't allow
modification of input objects.
* Does not self allocate IDs in Beam by default.
Per b/250948425, it is dangerous to implicitly allow all Beam pipelines
to create buildables by self allocating the IDs. This change makes it so
that one has to explicitly request self allocation in Beam.
A boolean is added to the pipeline option so that it can be passed to
the beam worker initializer that controls the behavior of the JVM on
each worker. Note that we did not add the option in the metadata.json file
because we did not want people to use the override at run time when launching
a pipeline, due to the risk. As shown in RdePipeline.java, we instead
explicitly hard-code the option in the pipeline. There is nothing that
stops one from supplying that option when launching the pipeline, but it's
not advised.
Tested=deployed the pipeline to alpha and ran it.
* Properly create and use default credential
This PR consists of the following changes:
- Stopped adding scopes to the default credential when using it to access other
non-workspace GCP APIs. Scopes are not needed here.
- Started applying scopes to the default credential when using it to access
Drive and Sheets APIs.
- Upgraded Drive access from the deprecated credential lib to the
up-to-date one
- Switched Sheet access from the exported json credential to the
scoped default credential.
This PR requires that the affected files be writable by the default
service account (project-name@appspot.gserviceaccount.com) of the
project.
- This is already the case for exported files (premium terms, reserved
terms, and domain list).
- The registrar sync sheets in alpha, sandbox, and production have been
updated with the new permissions.
All impacted operations have been tested in alpha.
* Properly create and use default credential
This PR consists of the following changes:
- Added a new method to generate scope-less default credential when using it to
access other non-workspace GCP APIs. Scopes are not needed here.
- Started to use the new credential in the SecretManager.
- Will migrate other usages to this new credential gradually.
- Marked the old DefaultCredential as deprecated.
- Started applying scopes to the default credential when using it to access Drive
and Sheets APIs.
- Upgraded Drive access from the deprecated credentials lib
- Switched Sheet access from the exported json credential to the scoped
default credential.
This PR requires that the affected files be writable by the default service
account (project-name@appspot.gserviceaccount.com) of the project.
- This is already the case for exported files (premium terms, reserved terms,
and domain list).
- The registrar sync sheets in alpha, sandbox, and production have been
updated with the new permissions.
All impacted operations have been tested in alpha.
This reverts commit 1ab077d267.
Apparently the new version of Spinnaker that is compatible with this doesn't
work for our release, so we need to roll this back for now. (Again!)
We started getting failures because some of the tests used October. In
general we should freeze the clock for testing as much as possible.
Same thing with the Get*Commands
Basically, any time we're loading a bunch of linked objects that might
change, we want to have REPEATABLE_READ so that another transaction
doesn't come along and smush whatever we think we're loading.
The following instances of READ_COMMITTED haven't changed:
- RdePipeline (it only loads immutable objects like histories)
- Invoicing pipeline (only immutable objects like BillingEvents)
- Spec11 (doesn't use any linked info from Domain)
This also changes the PersistenceModule to use REPEATABLE_READ by
default on the replica JPA TM, for the standard reasoning.
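A generic JDBC illustration of the isolation level in question; the actual change is made in the PersistenceModule and transaction manager, not via raw JDBC:
```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class IsolationExample {
  static void loadLinkedObjects(String jdbcUrl) throws SQLException {
    try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
      conn.setAutoCommit(false);
      // REPEATABLE_READ guarantees that rows read earlier in the transaction cannot
      // change underneath us via a concurrent commit, unlike READ_COMMITTED.
      conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
      // ... load the domain and all of its linked contacts/hosts here ...
      conn.commit();
    }
  }
}
```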
This runs over all domains that weren't deleted as of September 5. This
will fix most of b/248112997, which is itself caused by b/245940594 --
creating synthetic history objects means that the RDE pipeline will look
at those instead of the potentially-no-longer-valid data in the old
history objects.
Also fixed a bug introduced in #1785 where identity checks were performed instead of equality checks. This resulted in two sets containing the same elements not being regarded as equal, and subsequent DNS updates being unnecessarily enqueued.
The old class is modeled after datastore with some logic jammed in for it to work with SQL as well. As of #1777, the ofy related logic is deleted, however the general structure of the class remained datastore oriented.
This PR refactors the existing class into a ForeignKeyUtils helper class that does away with the index subclasses and provides static helper methods to do the same, in a SQL-idiomatic fashion.
Some minor changes are made to the EPP resource classes to make it possible to create them in a SQL only environment in tests.
`TransferData` is currently a generic class with a complicated type parameter that designates the `Builder` class of its concrete subclass, in order to facilitate returning said `Builder` from an instance loosely typed to the superclass (`TransferData`) itself.
While this works, in almost all places where a `TransferData` is used, the raw, non-generic type is declared, resulting in a lot of warnings, not to mention the fact that type safety is not actually checked when the raw type is used.
In this PR, we make it so that the concrete `Builder` is returned through a protected abstract method that is implemented by the subclasses. The type information therefore no longer needs to be embedded in the superclass type signature, and reflection is not necessary to create the `Builder` either. Overall, it makes `TransferData` a much cleaner class without the messiness of generics.
This uses the GoogleIdTokenVerifier to verify ID tokens passed in
(presumably from a front end) via cookies. This isn't used anywhere yet
but it will be used for front-end API calls for the new console.
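A minimal sketch of the verification step; the audience (OAuth client ID) and the surrounding wiring are assumptions for illustration:
```
import com.google.api.client.googleapis.auth.oauth2.GoogleIdToken;
import com.google.api.client.googleapis.auth.oauth2.GoogleIdTokenVerifier;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.gson.GsonFactory;
import java.util.Collections;
import java.util.Optional;

public class IdTokenCookieVerifier {
  private final GoogleIdTokenVerifier verifier;

  IdTokenCookieVerifier(String oauthClientId) {
    this.verifier =
        new GoogleIdTokenVerifier.Builder(new NetHttpTransport(), new GsonFactory())
            .setAudience(Collections.singletonList(oauthClientId))
            .build();
  }

  /** Returns the verified email address, or empty if the token is invalid. */
  Optional<String> verify(String idTokenFromCookie) {
    try {
      GoogleIdToken idToken = verifier.verify(idTokenFromCookie);
      return idToken == null
          ? Optional.empty()
          : Optional.of(idToken.getPayload().getEmail());
    } catch (Exception e) {
      return Optional.empty();
    }
  }
}
```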
First, we create a sequence of User IDs in Postgres and assign it to the
User ID field, meaning that Hibernate can autogenerate IDs.
Next, add an update timestamp.
Next, add a constraint that we can't have multiple Users with the same
email address.
Finally, create a DAO since we'll usually want to query by that email
address (at least for now).
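A minimal JPA sketch (not the actual entity) of this pattern: a Postgres sequence backs the ID so Hibernate can autogenerate it, plus a unique constraint on the email address. Names are illustrative.
```
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;
import javax.persistence.Table;
import javax.persistence.UniqueConstraint;

@Entity
@Table(
    name = "\"User\"", // quoted because "user" is a reserved word in Postgres
    uniqueConstraints = @UniqueConstraint(columnNames = "emailAddress"))
public class ConsoleUser {
  @Id
  @SequenceGenerator(name = "user_id_seq", sequenceName = "user_id_seq", allocationSize = 1)
  @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "user_id_seq")
  private Long id;

  @Column(nullable = false)
  private String emailAddress;
}
```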
FKI used to be persisted in datastore to help speed up loading by foreign key.
Now it is just a helper class to do the same thing in SQL because
indexing is natively supported in SQL.
- Create a sequence to generate IDs for the user (this allows us to have
Long ID types so that Hibernate can autogenerate IDs)
- Add an update timestamp column so we can extend BackupGroupRoot
- Add a restriction that there can't be multiple users with the same
email address
* Rename ContactResource -> Contact
This is a follow-up to PR #1725 and #1733. Now all EPP resource entity class
names have been rationalized to match with their SQL table names.
This means that LegacyAuthenticationMechanism (or a to-be-created
OAuth2AuthenticationMechanism) can return a UserAuthInfo object that
contains either the GAE User or the console User as appropriate. The
goal is that the non-auth flows shouldn't have to know about which user
type it is. Note: the registry lock flow (for now) needs to know about
the separate types of auth because it is a separate level of auth from
the standard AuthenticatedRegistrarAccessor.
The AuthenticatedRegistrarAccessor code is a bit odd because the new
role system doesn't quite fit neatly into the old registrar ->
OWNER,ADMIN system but this is a fine approximation. Basically, any
new registrar role will map to the old OWNER role.
* Add the PackagePromotion table
* Add long id
* Add NOT NULL
* fix formatting
* make package price non null
* Add not nulls to java file
* Fix broken tests from merge conflicts
Also deletes the autorenew poll message history revision id field in
Domain, which is only needed to recreate the ofy key for the poll
message. The column already contains null values in it, making it
impossible to depend on it. The column itself will be deleted from the
schema after this PR is deployed.
The logic to update autorenew recurrence end time is changed
accordingly: When a poll message already exists, we simply update the
end time, but when it no longer exists, i.e. when it's deleted
speculatively after a transfer request, we recreate one using the
history entry id that resulted in its creation (e.g. cancelled or rejected
transfer).
This should fix b/240984498. Though the exact reason for that bug is
still unclear to me. Namely, it throws an NPE at this line during an
explicit domain transfer approval:
https://cs.opensource.google/nomulus/nomulus/+/master:core/src/main/java/google/registry/flows/domain/DomainFlowUtils.java;l=603;bpv=1;bpt=0;drc=ede919d7dcdb7f209b074563b3d449ebee19118a
The domain in question has a null autorenewPollMessageHistoryId, but
that in itself should not have caused an NPE because we are not
operating on the null pointer. On that line the only possible way to
throw an NPE is for the domain itself to be null, but if that were the
case, the NPE would have been thrown at line 599 where we called a
method on the domain object.
Regardless of the cause, with this PR we are using an explicitly
provided history id and checking for its nullness before using it. If a
similar issue arises again, we should have a better idea why.
Lastly, the way poll message id is constructed is largely simplified in
PollMessageExternalKeyConverter as a result of the removal of ofy parent
keys in PollMessage. This does present a possibility of failure when,
immediately before deployment, a registrar requests a poll message and
receives the old id, but by the time the registrar acks the id, the new
version is deployed and therefore does not recognize the old key. The
likelihood of this happening should be slim, and we could have prevented
it by letting the converter recognize both the old and the new key.
However, we would like to eventually phase out the old key, and in
theory a registrar could ack a poll message at any time after it was
requested. So, there is not a safe time by which all the old ids are
acked, lest we develop some elaborate scheme to keep track of which
messages were sent with an old id when requested and which of these old
ids are acked. Only then can we be truly safe to phase out the old id.
The benefit does not seem to warrant the effort. If a registrar does
encounter a situation like this, they could open a support bug to have
us manually ack the poll message for them.
Basically, what's happening here is that some flow tests are adding
things to the claims list cache which is stored statically, meaning that
some other tests can pick those up when they shouldn't. By adding the
extension in RFTC, it'll clear out the caches after each test.
* Allow anchor tenant creation via allocation token behavior
This also enforces that non-superusers cannot create registrations on
trademarked names prior to the sunrise period, even if they have an
allocation token with ANCHOR_TENANT behavior.
* Create a registry lock permission and corresponding account manager role
This allows us to distinguish between standard account managers and
users that might have the registry lock permission. This will make the
registry lock password-setting flow easier (user can reset their
password iff they have the REGISTRY_LOCK permission, instead of having a
separate boolean) and allows us to easily determine whether or not a
user should have access to registry lock views in the UI.
We cache the ClaimsList Java object for six hours (we don't expect it to
change frequently, and the cron job to update it only runs every twelve
hours). Subsequent calls to ClaimsListDao::get will return the cached
value.
Within the ClaimsList Java object itself, we cache any labels that we
have retrieved. While we already have a form of a cache here in the
"labelsToKeys" map, that only handles situations where we've loaded the
entire map from the database. We want to have a non-guaranteed cache in
order to make repeated calls to getClaimKey fast.
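A minimal sketch of the six-hour cache (not the real ClaimsListDao); the ClaimsList stub and loader here are illustrative only:
```
import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.TimeUnit;

public class ClaimsListCacheExample {

  /** Stand-in for the real ClaimsList entity, mapping labels to claim keys. */
  static class ClaimsList {
    private final Map<String, String> labelsToKeys;

    ClaimsList(Map<String, String> labelsToKeys) {
      this.labelsToKeys = labelsToKeys;
    }

    Optional<String> getClaimKey(String label) {
      return Optional.ofNullable(labelsToKeys.get(label));
    }
  }

  // Reloaded from SQL at most once every six hours; all callers share the cached copy.
  private static final Supplier<ClaimsList> CACHED_LIST =
      Suppliers.memoizeWithExpiration(ClaimsListCacheExample::loadFromDatabase, 6, TimeUnit.HOURS);

  static Optional<String> getClaimKey(String label) {
    return CACHED_LIST.get().getClaimKey(label);
  }

  private static ClaimsList loadFromDatabase() {
    return new ClaimsList(Map.of()); // placeholder for the real query
  }
}
```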
* Use the new renewal price logic in transfer flow
* Fix build
* Add renewal handling on all transfer flows
* Merge branch 'master' into transfer-retain-renewal-price
* Merge branch 'master' into transfer-retain-renewal-price
* Add more tests
* Add base object classes for new user/role permissioning model
- Adds the permissions themselves
- Adds the six roles that a user may have -- three global, three
per-registrar
- Adds the mapping from role -> set of permissions
- Adds a UserRoles object to encapsulate the answer to the question of
"does this user have this permission?"
- Adds a User class as a base to show how we will use the new UserRoles
object (see the sketch after this list)
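A rough, illustrative sketch (not the actual classes) of the role-to-permission mapping and the "does this user have this permission?" check; the specific role and permission names are assumptions:
```
import com.google.common.collect.ImmutableSetMultimap;

public class UserRolesSketch {
  enum ConsolePermission { VIEW_REGISTRAR, EDIT_REGISTRAR, REGISTRY_LOCK }

  enum RegistrarRole { ACCOUNT_MANAGER, PRIMARY_CONTACT, TECH_CONTACT }

  // Mapping from role to the set of permissions that role grants.
  private static final ImmutableSetMultimap<RegistrarRole, ConsolePermission> ROLE_PERMISSIONS =
      ImmutableSetMultimap.<RegistrarRole, ConsolePermission>builder()
          .putAll(RegistrarRole.ACCOUNT_MANAGER, ConsolePermission.VIEW_REGISTRAR)
          .putAll(
              RegistrarRole.PRIMARY_CONTACT,
              ConsolePermission.VIEW_REGISTRAR,
              ConsolePermission.EDIT_REGISTRAR)
          .build();

  static boolean hasPermission(RegistrarRole role, ConsolePermission permission) {
    return ROLE_PERMISSIONS.containsEntry(role, permission);
  }
}
```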
core.jar is no longer included in the generated WAR, even though the deploy_jar
configuration is specified as a dependency.
I could not figure out a way to tweak the configuration dependency to
have core.jar pulled into the .war, so I decided to just explicitly pick
it from its known location.
TESTED=deployed to alpha and verified that the instances can start.
* Rename DomainContent -> DomainBase
This is a follow-up to PR #1725 which renamed DomainBase to Domain. Now, the
class naming hierarchy has the same structure as ContactBase/HostBase.
* Rename DomainBase -> Domain
This was a long time coming, but we couldn't do it until we left Datastore, as
the Java class name has to match the Datastore entity name.
Subsequent PRs will rename ContactResource to Contact and HostResource to Host,
so that everything matches the SQL table names (and is shorter!).
* Merge branch 'master' into rename-domainbase
The fix is based on b/240627423. I tested locally and was able to build
with the -PenableCrossReferencing=true flag successfully.
TESTED=run the kythe GCB pipeline locally.
This PR turns out to be more massive than I would have liked but it
deals with all billing event related stuff, which is more or less all
intertwined:
* Remove all billing events as Ofy entities.
* Add a temporary annotation to allow BillingEvent's ID to be
auto-allocated by ofy while not lacking the Ofy @Id annotation.
* Remove Modification, which is only used in ofy.
* Remove BillingVKey, as we do not need to store the ofy key parent
information anymore. The VKey for a billing event now just contains
its primary key, and can be converted by VKeyConverter.
* Remove BigQuery related code in the billing pipeline.
Note that after BillingVKey is removed, several columns in
BillingCancellation are no longer needed. The change to database schema
will be handled in https://github.com/google/nomulus/pull/1721 after
this PR is deployed to production.
* Revert "Upgrade App Engine Standard to Java 17 w/ bundled APIs (#1714)"
This partially reverts commit d8e77e2ab2 (it keeps
intact unrelated version upgrades).
We need to temporarily revert this because Spinnaker isn't quite yet playing
nice with the new <app-engine-apis> configuration option in appengine-web.xml
(it seems like this was added recently and Spinnaker is still stuck on App
Engine SDK version 1.9.82 which predates it). Hopefully we can get that
dependency updated in Spinnaker soon and then we can re-upgrade to Java 17.
* Flyway files for adding allocationToken column to Domain table
* Rename column to current_package_token
* Update er diagram
* Add foreign key to DomainHistory
This shouldn't matter for billing or anything like that because the
actual actions performed that month are still correct, but before this
PR we're including all domains ever created in the total_domains number,
including deleted domains
* Upgrade App Engine Standard to Java 17 w/ bundled APIs
Note that this doesn't yet upgrade our actual Gradle scripts to use a more
recent version of Java (that will happen separately); this solely affects the GAE
instances.
I followed the instructions here:
https://cloud.google.com/appengine/docs/standard/java-gen2/services/access
And note that I removed threadsafe true from appengine's XML config because
that doesn't do anything anymore and was just throwing errors (the new
instances handle multiple requests in parallel by default, no configuration
necessary).
* Convert to gradle 7.
* More fixes, regenerated lockfiles.
* Update lockfiles for dependency update.
* Fix show_upgrade_diff for new lockfile format
* Add property for allowInsecureProtocol
Allow us to override the restriction against use of plain HTTP for
communication to dependency repositories. We need this to be able to use a
local proxy for dependency gathering.
* Checking in missing gradle.lockfile
* Split failing dns update batches and kill after 10 retries
* format fixes
* Add another test
* Switch to CloudTasks
* Change back to app engine header
* Change to immutableList and other changes
* Change to optional header
* Add bug ID to todo
* Switch to constructor injection
* Remove old queue
* Set response status
* Change to Optional<Integer>
* Rename action status
* Switched to use CloudTasksHelper
* Remove spy in test
This is, as of now, unused but we can use it for b/237683906 and
b/237800445 in the future to allow for special behavior dictated by
allocation tokens rather than having to reserve specific domains.
Note that we enforce a tied domain for ANCHOR_TENANT tokens (because
they should be matched to a domain) but not for BYPASS_TLD_STATE tokens.
This deletes LastSqlTransaction and ReplayGap in Datastore, as well as
SqlReplayCheckpoint and TransactionEntity in SQL. These are all related
to replay and are no longer used.
* Remove columns for unused ofy key reconstitution
Remove the columns in Domain, DomainHistory, GracePeriod and
GracePeriodHistory that were only used for Ofy key reconstitution.
All uses of these were removed in #1660.
* Add forgotten flyway file.
* Re-add database migration state commands
These were removed in PR #1661, but we do still need them for the time being
until we complete the ID migration as well.
This uses the Apache Commons CSV parser instead of rolling our own.
Annoyingly, the results that we're given aren't exactly proper CSVs
since they have a non-standard line of data at the top, and the header
is actually the second line.
Fortunately this no longer requires a log-in, we can just send a GET
request and receive a CSV result in return.
This also adds the apache-commons CSV parser to the dependencies
See https://b.corp.google.com/issues/237784559 for more details
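A small sketch of the parsing approach described above: the download's first line is non-standard metadata, so it is dropped and the second line is treated as the CSV header. Column access shown here is illustrative.
```
import java.io.IOException;
import java.io.StringReader;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;

public class TmchCsvExample {
  static void parse(String rawCsv) throws IOException {
    // Strip the non-standard first line so the real header becomes line one.
    String withoutPreamble = rawCsv.substring(rawCsv.indexOf('\n') + 1);
    try (CSVParser parser =
        CSVParser.parse(
            new StringReader(withoutPreamble), CSVFormat.DEFAULT.withFirstRecordAsHeader())) {
      for (CSVRecord record : parser) {
        System.out.println(record.get(0)); // access fields by index or by header name
      }
    }
  }
}
```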
* Resolve conflict
* Fix setup for existing test cases in info and check flow
* Revise info flow test cases
* Fix lint
* Merge branch 'master' into handlefeerequest-renew
* Address code review comments myself
* Merge branch 'master' into handlefeerequest-renew
* Get test passing
* Add check flow tests
* Format, consolidate test helpers
* Don't unnecessarily specify XML name
Now that ofy keys aren't necessarily being restored, it seemed prudent to
audit existing uses to verify that we aren't relying on any keys that may now
be null.
This fixes one case that appeared to be potentially problematic (in
ResourceFlowUtils), removes a few methods that call getOfyKey() but are no
longer used, and adds comments to one use of the key that appears to be safe
after visual inspection.
This includes:
- deletion of helper DB methods in tests
- deletion of various old Datastore-only classes and removal of any
endpoints
- removal of the dual-database test concept
- removal of 'ofy' from the AppEngineExtension
This removes the code that converts between ofy fields and SQL fields in DomainContent and a number of related core classes (basically anything that also needed modification to support the removal from DomainContent).
Cursor was originally envisioned to support arbitrary ImmutableObject
scopes. However, in practice only the Registry scope is used. The SQL
representation of Cursor assumes that and the schema uses a composite ID
with a string column for the primary key of the scope object. Without a
schema migration to persist the VKey of the scope, we cannot support any
ImmutableObject other than those with a primitive string primary key.
Given the complexity involved and the limited use case, the scope is now
explicitly limited to Registry only.
Also removed mapreduces that depend on Ofy keys of Cursors, and made
some code quality improvement based on IntelliJ suggestions on modified
files.
These tests use Ofy exclusively and should not run anymore, as any class
they test also uses Ofy and should be deleted.
More importantly, running tests in Ofy mode makes it hard to remove Ofy
from entities, especially Registrar and RegistrarContact, as some of
them are created as canned data when tests are initiated, and the
creation would fail if they are not registered as Ofy entities.
It is therefore a prerequisite to disable these tests before we can
remove Ofy from those entities. We could have deleted them, but I think
that should be done when the corresponding classes tested by them are
deleted.
GPG1 is deprecated and stuck in v1.4 from 2018. GPG2 is recommended. We
only use the GPG binary in tests and when the host system has both
versions it causes problems because we hardcode the GPG import command
in GpgSystemCommandExtension to use the binary named "gpg", which could
be linked to either GPG1 or GPG2, causing the other test to fail when
the version of GPG that runs in tests is incompatible with the version of GPG
that imports the keys.
With this PR we only support GPG2 from now on.
Added methods that exist in AppEngineExtension that preload some canned
data. This data is loaded by default and a lot of tests rely on them. As
we migrate away from App Engine, it is helpful to have them in the JPA
test extension which will replace the app engine extension that is
used to set up the database in dual database tests.
I'm not 100% sure that this is strictly necessary, but for now we can
replicate the ability to generate zonefiles for any point in time in the
recent past.
* Migrate ReadDnsQueueAction to use CloudTasksUtils
Also marked TaskQueueUtils as deprecated and fixed a few linter errors.
Note that DNS pull queue still requires the use of the GAE Task Queue API.
* Fix a test failure
* Remove TaskQueueUtils from VKeyTest
* Remove the @error exception that was inadvertently pulled in
One of the more significant changes introduced in this PR is that we use
SQL as the backing database in all tests unless otherwise specified,
e.g. by using the TmOverrideExtension. We change various ofy-related
tests to use this.
This includes various changes:
- Deletion of SqlEntity/DatastoreEntity and related classes. Includes
any necessary changes because of that (e.g. getting a nice SQL key on
error in RegistryJpaIO).
- Deletion of classes that used libraries from the init-sql code
(RefreshDnsOnHostRenameAction)
- Removal of the JpaTransactionManager's backup implementation
- Modification of RegistryJpaWriteTest to not use init-sql code
- Removal of the Transaction class and related classes, however it does
not remove the TransactionEntity class as that would require DB
changes
- Removal of anything related to the actual usage of the database
migration schedule or read-only phases
- Various test changes and fixes to account for the differences in SQL
(like how foreign keys need to exist)
This deliberately doesn't do anything to alter the objects actually
stored in the DB yet, just how we use them
This is the only user of the ofy code that will stick around at least
until we move to the new registrar console. By removing references to
the transaction manager, we will be able to delete all the tm code
without interfering with this.
* Remove version pin for java-diff-utils dependency
Latest version of the lib introduces a small behavior change/bug fix.
It no longer ignores empty lines. This actually makes sense.
Update the test data to reflect this change.
The main purpose of this PR is to help debug b/234189023, where a
registrar reported that in sandbox they observed seemingly successful EPP
update responses to delete NS records, which are not actually deleted after
the commands executed.
To actually load the persisted domain resource after an update would
require us to execute another transaction immediately after the update
transaction and that can only be achieved outside the flow (i. e. in
FlowRunner or EppController) and we need to test for the type of flows
before logging, which seems unnecessarily complex.
For now we are just adding logs inside the update transaction itself to
validate that:
1. The NS records to delete are as expected.
2. The Current NS records are as expected.
3. The new NS records to persist are as expected.
The EPP success reply is the default reply when no errors are thrown in
a transaction. If we see a success reply (which means that the
transaction finished successfully) and the expected logs from the transaction,
the only explanation could be that somewhere in the ORM layer the Java
representation of the entity differs from what is being
presented to the database. I think that would signal a much bigger and
more fundamental problem, which is quite unlikely given how isolated the
issue under consideration is.
In any case we would like to add the logging functionality in sandbox and ask
the registrar to report again when they see similar issues.
Also made some typo and linting fixes.
* Fix some small transactional issues in SQL mode
These weren't caught until I switched the default database type in tests
to SQL (separate PR). Fortunately they don't seem to be catastrophic.
This includes:
- removing the actions that do the replay
- removing the tests for the replay
- removing the ReplayExtension and adjusting the various tests that used
it appropriately
- removing functionality relating to "things that happen during replay",
e.g. beforeSqlSaveOnReplay
This does not include:
- removing the InitSqlPipeline or similar tasks
- removing e.g. SqlEntity (it's used in other places)
- removing Transforms/RegistryJpaIO and other SQL-pipeline-creation code
* Remove bracket around varname in CloudBuild script
This is due to a Spinnaker restriction: it cannot handle variable references
where the variable name is wrapped in brackets.
Added the Spinnaker error message to the comments.
This included removing ofy-specific code from various tests. Also, some
of the other tests (e.g. RdapDomainActionTest) had to be configured to
use only SQL -- otherwise, as it currently stands, they were trying to
use ofy.
We also delete the CreateSyntheticHistoryEntriesAction and pipeline
because they're no longer relevant and impossible to test (the goal of
the actions was to create objects in ofy, which doesn't happen any
more).
We're running into issues pulling 2.1.3 from Maven, possibly due to
vulnerabilities in dependencies, so this updates it to the most recent
version, 2.2.6.
* Add a new recurrenceLastExpansion column to the BillingRecurrence table
This will be used to determine which recurrences are in scope for
expansion. Anything that has already been expanded within the past year is
automatically out of scope.
Note that the default value is set to just over a year ago, so that, initially,
everything is in scope for expansion, and then will gradually be winnowed down
over time so that most recurrences at any given point are out of scope. Newly
created recurrings (after the subsequent code change goes in) will have their
last expansion time set to the event time at which the recurring is
written, so that they'll first be considered in-scope precisely one year
later.
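To make the in-scope rule concrete, here is a minimal sketch of the comparison (names are illustrative, not the actual action's code):

```java
import org.joda.time.DateTime;
import org.joda.time.Duration;

/** Sketch only: a recurrence is in scope if it hasn't been expanded within the past year. */
class RecurrenceExpansionScopeSketch {

  private static final Duration ONE_YEAR = Duration.standardDays(365);

  static boolean isInScopeForExpansion(DateTime recurrenceLastExpansion, DateTime now) {
    // Anything expanded more than a year ago (including the "just over a year ago"
    // default value) is eligible for expansion again.
    return recurrenceLastExpansion.isBefore(now.minus(ONE_YEAR));
  }
}
```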
* Summarize schema related tests
Document existing schema-related tests including presubmit tests and
the schema-verify predeployment test newly added to Spinnaker.
* Inject a DomainPricingLogic into ExpandRecurringBillingEventsAction
This will be used in other PRs to set the renewal price correctly based on the
renewal price behavior of the BillingRecurrence event.
Note that, in order for this to work, a not-null constraint has been lifted on
the EPP flow state when the DomainPricingCustomLogic is being constructed, as
the pricing here will occur in a backend action outside the context of any EPP
flow.
* Slightly improve performance of ExpandRecurringBillingEventsAction
We don't need to log every single no-op batch of 50 Recurrences that are
processed (considering we have 1.5M total in our system), and we also don't need
to process Recurrences that already ended prior to the Cursor time (this gets us
down to 420k from 1.5M).
* Disable Ofy tests.
This change just turns off the Ofy tests at the root, by removing processing
for dual tests and disassociating the TestOfyOnly annotation from test
annotations.
This is far less comprehensive than #1631, but it's probably worth entering as
a stopgap solution just because it should speed up our test runs and unblock a
lot of other cleanup work.
* Fix DualDatabaseTestInvocationContextProviderTest
* Optimize RDAP entity event query
For each EPP entity, directly load the latest HistoryEntry per event type
instead of loading all events through the HistoryEntryDao.
Although most entities have a small number of history entries, there are
a few entities with many entries, enough to cause an OutOfMemory error.
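As a rough sketch of the query shape, where the JPQL, entity, and field names are assumptions rather than the actual DAO code:

```java
import java.util.List;
import javax.persistence.EntityManager;

/** Illustrative per-event-type query; entity and field names here are assumptions. */
class LatestHistoryEntrySketch {

  static List<?> loadMostRecent(EntityManager em, String repoId, String type) {
    return em.createQuery(
            "SELECT h FROM HistoryEntry h"
                + " WHERE h.repoId = :repoId AND h.type = :type"
                + " ORDER BY h.modificationTime DESC")
        .setParameter("repoId", repoId)
        .setParameter("type", type)
        // Only the latest entry per event type is needed, not the entity's full history.
        .setMaxResults(1)
        .getResultList();
  }
}
```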
We have backend max-instances set to 100, which apparently exceeds the default
quota for GAE. Add info to the configuration doc on updating the quota or
changing this parameter.
* Add batching to ExpandRecurringBillingEventsAction
It's OOMing on trying to load every single BillingRecurrence that needs to be
expanded simultaneously (which is to be expected). So this processes them in
transactional batches of 50.
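A minimal sketch of that batching pattern, with illustrative names rather than the action's actual code:

```java
import com.google.common.collect.Iterables;
import java.util.List;

/** Sketch of batched expansion; names are illustrative, not the action's actual code. */
class BatchedExpansionSketch {

  private static final int BATCH_SIZE = 50;

  void expandAll(List<Long> recurrenceIds) {
    // Iterables.partition bounds memory by the batch size rather than the full result set.
    for (List<Long> batch : Iterables.partition(recurrenceIds, BATCH_SIZE)) {
      runInTransaction(() -> expandBatch(batch));
    }
  }

  void runInTransaction(Runnable work) {
    // Stand-in for the JPA transaction manager's transact() call.
    work.run();
  }

  void expandBatch(List<Long> batch) {
    // Load the recurrences in this batch and create their one-time billing events.
  }
}
```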
* Cleanup gpg-agent instances and home directories
The GpgSystemCommandExtension leaks home directories, but more importantly it
leaks gpg-agent instances. This can cause problems with inotify limits, since
the agent seems to make use of inotify. Do a proper cleanup in afterEach().
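A hedged sketch of what such an afterEach() cleanup can look like (the commands and helper names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

/** Illustrative cleanup of a temporary GnuPG home directory and its agent. */
class GpgCleanupSketch {

  void cleanUp(Path gpgHomeDir) throws IOException, InterruptedException {
    // Ask the gpg-agent for this home directory to shut down; otherwise each test
    // run leaks an agent process (and its inotify watches).
    new ProcessBuilder("gpgconf", "--homedir", gpgHomeDir.toString(), "--kill", "gpg-agent")
        .start()
        .waitFor();
    // Then remove the temporary home directory itself, deepest entries first.
    try (Stream<Path> paths = Files.walk(gpgHomeDir)) {
      paths.sorted(Comparator.reverseOrder()).forEach(p -> p.toFile().delete());
    }
  }
}
```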
* Don't fail if we can't kill the agent
We'll delete the associated code soon enough too, but it's safer to delete the
cron jobs first and run in that state for a week, so we can still run them
manually if need be.
* Reduce the number of manually scaled instances for default/pubapi
This is in the spirit of "not always running significantly over-provisioned",
which helps to save costs and also expose potential scaling issues when they are
still small rather than all at once when they're a big problem.
This can always be reverted if necessary, and can be instantaneously adjusted by
running the `nomulus set_num_instances` command.
* Don't enforce billing account map check on TEST TLDs
This was affecting monitoring (i.e. prober TLDs). Note that test TLDs are
already excluded from the billing account map check in the Registrar builder()
method (see PR #1601), but we forgot to make that same test TLD exclusion in the
EPP flows check (see PR #1605).
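The flow-side exclusion is roughly of this shape; the types and method below are stand-ins, not the real flow code:

```java
import java.util.Map;

/** Illustrative check only; the real flow code and type names differ. */
class BillingAccountCheckSketch {

  enum TldType { REAL, TEST }

  /** Requires a billing account for the TLD's currency, except on TEST TLDs. */
  static void verifyBillingAccount(
      Map<String, String> billingAccountMap, TldType tldType, String currency) {
    if (tldType == TldType.TEST) {
      // Prober/test TLDs are exempt, matching the existing Registrar builder() behavior.
      return;
    }
    if (!billingAccountMap.containsKey(currency)) {
      throw new IllegalArgumentException("Missing billing account for currency " + currency);
    }
  }
}
```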
* Add test for Java 8 Compatibility
Add a test to check for Java 8 compatibility of jars deployed to
AppEngine.
It is not enough to run existing tests with a Java 8 VM, since many API
jars are not exercised by tests, for example those for GCP services
like the SecretManager.
We take the conservative approach and verify that every class in every
jar is compiled for Java 8.
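Concretely, Java 8 compatibility can be checked by reading each class file's major version and requiring it to be at most 52. A self-contained sketch of that idea (not the test's actual code):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

/** Verifies that every class in a jar targets Java 8 (class file major version <= 52). */
class Java8CompatibilityCheckSketch {

  static void assertJarIsJava8Compatible(String jarPath) throws IOException {
    try (JarFile jar = new JarFile(jarPath)) {
      Enumeration<JarEntry> entries = jar.entries();
      while (entries.hasMoreElements()) {
        JarEntry entry = entries.nextElement();
        if (!entry.getName().endsWith(".class")) {
          continue;
        }
        try (InputStream in = jar.getInputStream(entry);
            DataInputStream data = new DataInputStream(in)) {
          int magic = data.readInt(); // Class files start with 0xCAFEBABE.
          data.readUnsignedShort(); // Minor version (ignored).
          int major = data.readUnsignedShort(); // 52 corresponds to Java 8.
          if (magic != 0xCAFEBABE || major > 52) {
            throw new AssertionError(
                entry.getName() + " is not Java 8 compatible (major version " + major + ")");
          }
        }
      }
    }
  }
}
```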
We would like to re-use the build cache when building RCs for different
environments. There's not much practical use in doing a "clean" for
every build when Gradle should be able to figure out which artifacts
need to be rebuilt. It also does not make sense to build each
environment in a separate step, which introduces redundancy because
not all artifacts are cached across steps. The build cache is enabled by
default.
Lastly, the cache needs to be inside the /workspace folder, which is the
default persisted storage location.
TESTED=tried to build the RCs on alpha and saved about 10 min.
* Add missing transaction for whois lookups
Nameserver whois lookups are failing under SQL for hosts with superordinate
domains because the query in this case is not done in a transaction. We
missed this during testing because a) we didn't have a test for lookups of
hosts with superordinate domains and b) we missed converting
NameserverWhoisResponseTest to a DualDatabaseTest.
This PR fixes the problem and adds the requisite testing.
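The fix boils down to running the host/superordinate-domain query inside a transaction; a sketch of the pattern with placeholder types (not the actual WHOIS code):

```java
import java.util.Optional;
import java.util.function.Supplier;

/** Illustrative only: under JPA, the nameserver lookup must run inside a transaction. */
class WhoisLookupSketch {

  <T> T transact(Supplier<T> work) {
    // Stand-in for the JPA transaction manager's transact(); the real code uses that.
    return work.get();
  }

  Optional<String> lookUpSuperordinateDomain(String hostName) {
    // Wrapping the query in transact() is the fix described above; previously the
    // superordinate-domain query ran outside any transaction and failed under SQL.
    return transact(() -> queryForSuperordinateDomain(hostName));
  }

  Optional<String> queryForSuperordinateDomain(String hostName) {
    return Optional.empty(); // Placeholder for the actual JPA query.
  }
}
```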
* Use a single transaction to get host registrars
* Replace streaming with Maps.toMap()
* Downgrade dependencies that no longer support Java8
Downgrade two dependencies whose latest versions no longer support
Java 8.
A follow-up PR will add a Java 8 compatibility check to the presubmit tests.
We have recently started to routinely breach the 1h timeout. Increasing
this value to 2h. We should also look into reusing the artifacts when
building RCs for different environments.
* Use Gradle dependency dynamic versioning
Use dynamic versioning for Gradle dependencies when possible.
Please refer to go/dr-dependency-upgrade for more information about the
automation plan.
This PR calls out all dependencies that must be pinned to specific
versions for various reasons. The remaining ones are converted to
open-ended version ranges ("[version_str,)").
* Check PAK on domain create
* Add unit test
* Update docs
* Remove unnecessary setup
* Fix blank line
* Add check and test to all relevant flows
* Change error message
* Ignore version UIDs during txn deserialization
When deserializing transactions for replay to datastore, ignore class version
UIDs that don't match those of the local classes and just use the local class
descriptors instead. This is a simple solution for the problem of persisted
VKeys containing references to classes where the class has been updated and
the serial version UID has changed.
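One standard way to achieve this is to override ObjectInputStream#readClassDescriptor and substitute the local class descriptor; a sketch of that technique (not necessarily the exact replay code):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.io.ObjectStreamClass;

/** ObjectInputStream that ignores serialVersionUID mismatches by preferring local descriptors. */
class LenientObjectInputStream extends ObjectInputStream {

  LenientObjectInputStream(InputStream in) throws IOException {
    super(in);
  }

  @Override
  protected ObjectStreamClass readClassDescriptor() throws IOException, ClassNotFoundException {
    ObjectStreamClass streamDescriptor = super.readClassDescriptor();
    Class<?> localClass;
    try {
      localClass = Class.forName(streamDescriptor.getName());
    } catch (ClassNotFoundException e) {
      // No local version of the class; fall back to the stream's descriptor.
      return streamDescriptor;
    }
    ObjectStreamClass localDescriptor = ObjectStreamClass.lookup(localClass);
    // When the serialVersionUIDs differ, trust the local class so deserialization proceeds.
    return localDescriptor != null ? localDescriptor : streamDescriptor;
  }
}
```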
Also add a "replay_txns" command that replays the transactions from a given
start point so we can verify all transactions are deserializable.
TESTED:
Ran replay_txns against all transactions on sandbox beginning with
transaction id 1828385, which includes Recurring billing events containing
both the old and the new serial version UIDs.
* Change check for root directory during rollback
`rollback_tool` tries to infer the root of the nomulus tree by checking for a
directory named "nomulus". This is potentially problematic (and, indeed, was
for me) since there is no guarantee what that directory will be named.
There are a number of features that characterize the root directory. Check
for the presence of the `rollback_tool` wrapper script, as this is both at
root level and tightly coupled to the python code, so hopefully we won't
move it without testing that the script still works.
This will require edits to a substantial number of registrars on sandbox (nearly
all of them) because almost all of them have access to at least one TLD, but
almost none of them have any billing accounts set. Until this is set, any updates
to the existing registrars that aren't adding the billing accounts will cause
failures.
Unfortunately, there wasn't any less invasive foolproof way to implement this
change, and we already had one attempt to implement it on create registrar
command that wasn't working (because allowed TLDs tend not to be added on
initial registrar creation, but rather, afterwards as an update).
* Downgrade Caffeine to 2.9.3
Apparently Caffeine >=3.* requires Java 11, and we're still stuck on Java 8
because of App Engine Standard. Fortunately this doesn't affect the exposed
interface we're using, so we can simply go back to the newest Caffeine version
once Registry 3.0 Phase 3 (GKE migration) is completed.
Now that SQL is the default, we do not need this side job to run
alongside the main one. Its purpose was to validate the BEAM pipeline
while Datastore was primary.
* Create a Dataflow pipeline to resave EPP resources
This has two modes.
If `fast` is false, then we will just load all EPP resources, project them to the current time, and save them.
If `fast` is true, we will attempt to intelligently load and save only resources that we expect to have changes applied when we project them to the current time. This means resources with pending transfers that have expired, domains with expired grace periods, and non-deleted domains that have expired (we expect that they autorenewed).
For some inexplicable reason, the RDE beam pipeline in both sandbox and
production has been broken for the past week or so. Our investigations
revealed that during the CoGroupByKey stage, some repo ID -> revision ID
pairs were duplicated. This may be a problem with the Dataflow runtime,
which somehow introduced the duplicates during reshuffling.
This PR attempts to fix the symptom only by deduping the revision IDs. We
will do some more investigation and possibly follow up with the Dataflow
team if we determine it is an upstream issue.
TESTED=deployed the pipeline and successfully ran sandbox RDE with it.
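The dedup itself is simple; a sketch of the idea (names assumed, not the pipeline's exact code):

```java
import com.google.common.collect.ImmutableSortedSet;

/** Illustrative dedup of revision IDs grouped under one repo ID; not the pipeline's exact code. */
class RevisionIdDedupSketch {
  static ImmutableSortedSet<Long> dedupe(Iterable<Long> groupedRevisionIds) {
    // After CoGroupByKey, the grouped iterable may contain duplicate revision IDs; copying
    // into a set ensures each (repoId, revisionId) pair is processed exactly once downstream.
    return ImmutableSortedSet.copyOf(groupedRevisionIds);
  }
}
```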
* Begin migration from Guava Cache to Caffeine
Caffeine is apparently strictly superior to the older Guava Cache (and is even
recommended in lieu of Guava Cache in Guava Cache's own documentation).
This adds the relevant dependencies and switches over just a single call site to
use the new Caffeine cache. It also implements a new pattern, asynchronously
refreshing the cache value starting from half of our configured cache duration. For
frequently accessed entities this will allow us to NEVER block on a load, as it
will be asynchronously refreshed in the background long before it ever expires
synchronously during a read operation.
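A sketch of that refresh-at-half-expiry pattern with Caffeine; the durations and loader below are placeholders, not our actual configuration:

```java
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;
import java.time.Duration;

/** Sketch of refreshing at half the expiry so hot entries reload in the background. */
class CaffeineRefreshSketch {

  LoadingCache<String, String> buildCache(Duration cacheExpiry) {
    return Caffeine.newBuilder()
        // Kick off a background refresh halfway through the configured lifetime, so hot
        // entries are reloaded before they expire and reads keep returning the cached value.
        .refreshAfterWrite(cacheExpiry.dividedBy(2))
        // Only after the full lifetime does a read actually block on a fresh load.
        .expireAfterWrite(cacheExpiry)
        .build(this::loadFromDatabase);
  }

  String loadFromDatabase(String key) {
    return "value-for-" + key; // Placeholder loader.
  }
}
```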
* Add new columns to BillingEvent.java
* Improve PR and modify JodaMoneyType to handle null currency in override
* Add test cases for edge cases of nullSafeGet in JodaMoneyType
* Improve assertions
* Remove dos.xml from the configs
We don't have dos config right now, and applying dos from "gcloud app
deploy" is deprecated and has started causing problems.
If we add dos configs, it should be using "gcloud app firewall-rules".
* Build Java8-compatible release
Use the new options.release Gradle property to make sure builds are
compatible with Java 8, which is the runtime on Appengine.
This new property replaces sourceCompatibility, targetCompatibility, and
bootclasspath (the latter wasn't previously set, which is why we
couldn't detect Java 9 API usage when building).
* Ignore read-only when saving commit logs
Ignore read-only when saving commit logs and commit log mutations so that we
can safely replicate in read-only mode. This should be safe, as we only ever
get to the point of saving commit logs and mutations when something has
already actually been modified in a transaction, meaning that we should have hit
the "read only" sentinel already.
This also introduces the ability to set the Clock in the
TransactionManagerFactory so that we can test this functionality.
* Changes per review
* Fix issues affecting tests
- Restore clobbered async phase in testNoInMigrationState_doesNothing
- Restore system clock to TransactionManagerFactory to avoid affecting other
tests.
* Change billingIdentifier to BillingAccountMap in invoicing pipeline
* Add a default for billing account map
* Throw error on missing PAK
* Add unit test
Check for a PSQLException referencing a failed connection to "google:5433",
which likely indicates that there is another nomulus tool instance running.
It's worth giving this hint because in cases like this it's not at all obvious
that the other instance of nomulus is problematic.
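A sketch of the kind of check described (the exact message text and helper names are assumptions):

```java
import com.google.common.base.Throwables;

/** Illustrative check for the "another nomulus instance already holds the proxy port" case. */
class SqlProxyHintSketch {

  /** Returns true if the failure looks like a refused connection to the Cloud SQL proxy socket. */
  static boolean looksLikeProxyPortConflict(Throwable t) {
    return Throwables.getCausalChain(t).stream()
        .anyMatch(
            cause ->
                cause.getClass().getSimpleName().equals("PSQLException")
                    && cause.getMessage() != null
                    && cause.getMessage().contains("google:5433"));
  }
}
```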
* Add a no-async actions DB migration phase
This needs to be set several hours prior to entering the READONLY stage. This is
not a read-only stage; all synchronous actions under Datastore (such as domain
creates) will continue to succeed. The only things that will fail are host
deletes, host renames, and contact deletes, as these three actions require a
mapreduce to run before they are complete, and we don't want mapreduces hanging
around and executing during what is supposed to be a short-duration READONLY
period.
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
steps:
  - name: Checkout repository
    uses: actions/checkout@v3
  # Initializes the CodeQL tools for scanning.
  - name: Initialize CodeQL
    uses: github/codeql-action/init@v2
    with:
      languages: ${{ matrix.language }}
      # If you wish to specify custom queries, you can do so here or in a config file.
      # By default, queries listed here will override any specified in a config file.
      # Prefix the list here with "+" to use these queries and those in the config file.
      # For details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
      queries: security-and-quality
  # Autobuild attempts to build any compiled languages (C/C++, C#, Go, or Java).
  # If this step fails, then you should remove it and run the build manually (see below).
  - name: Autobuild
    uses: github/codeql-action/autobuild@v2
  # ℹ️ Command-line programs to run using the OS shell.
  # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
  # If the Autobuild fails above, remove it and uncomment the following three lines,
  # modifying them (or adding more) to build your code for your project.
<property name="message" value='Your Javadocs appear to use invalid <a> tag syntax in @see tags. Please use the correct syntax: @see <a href="http(s)://your_url">url_description</a>'/>
</module>
<!-- Checks that our Ofy wrapper is used instead of the "real" ofy(). -->
import { Component } from '@angular/core';

export interface ActivityRecord {
  eventType: string;
  userName: string;
  registrarName: string;
  timestamp: string;
  details: string;
}
const MOCK_DATA: ActivityRecord[] = [
  { eventType: "Export DUMS", userName: "user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
  { eventType: "Update Contact", userName: "user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
  { eventType: "Delete Domain", userName: "user3", registrarName: "registrar1", timestamp: "2022-03-15T19:46:39.007", details: "All Domains under management exported as .csv file" },
];