* Add retrier to retry exceptions we've seen so far
* Check for nested exceptions
* fix formatting
* check for nested transactions
* add JDBCConnectionException to isFailedQueryRetriable
* Add retrier to methods with suppliers
- Reuse DS record format processing from the create/update domain commands
(BIND format, commonly used in URS requests)
- Remove the CLIENT_HOLD status from domains that have it (this blocks us from
serving the new nameservers and DS record)
* Replace jpaTm with a JpaSupplierFactory
* Style
* Style
* Pipeline takes in a SerializableSupplier instead
* Change the ordering of imports
* Test a good domain in addition to a bad one
* Rename and check good domain for Transact Answer
* Use standard Mockito verify
* Verify transact call and no more interactions
* Remove Answer comment
* Naming changes
* Deploy Spec 11 pipeline correctly
* Fix formatting of deploy file
* Use a file to persist state across Cloud Build steps
Co-authored-by: Gus Brodman <gbrodman@google.com>
* Refactor DomainBase into DomainContent and create DomainHistory
This is similar to #587 and #634, but for domains.
One caveat is that we refactor some of the Domain* instance methods to
be static so that they can be called either on DomainBase or
DomainContent, returning the appropriate type each time.
Note that we set DomainHistory to use the same revision ID sequence as
HostHistory and ContactHistory.
In addition, we refactor the tests of the History objects a bit to
reduce duplicate code and because we cannot guarantee yet that the
SQL-stored VKeys are symmetrical -- the ofy keys are not persisted at
the moment.
In addition, we rename the DomainHost table to the default Domain_nsHosts so that two separate nsHosts tables are automatically created for us: one foreign-keyed on the domain repo ID, and one foreign-keyed on the history revision ID.
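As a side note on the shared revision ID sequence mentioned above, a minimal JPA sketch (class, field, and sequence names here are illustrative, not the actual Nomulus schema) of pointing several history entities at one database sequence looks roughly like this:

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

// Each history entity declares its own generator, but both generators point at the same
// underlying database sequence, so revision IDs are unique across all history tables.
@Entity
class DomainHistorySketch {
  @Id
  @SequenceGenerator(
      name = "DomainHistoryRevisionIdGenerator",
      sequenceName = "history_id_sequence",
      allocationSize = 1)
  @GeneratedValue(
      strategy = GenerationType.SEQUENCE,
      generator = "DomainHistoryRevisionIdGenerator")
  Long revisionId;
}

@Entity
class ContactHistorySketch {
  @Id
  @SequenceGenerator(
      name = "ContactHistoryRevisionIdGenerator",
      sequenceName = "history_id_sequence", // same database sequence as above
      allocationSize = 1)
  @GeneratedValue(
      strategy = GenerationType.SEQUENCE,
      generator = "ContactHistoryRevisionIdGenerator")
  Long revisionId;
}
```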
* Use access hackery to allow manual names for nsHosts tables
* Clean up post merge artifacts
* Add unused setters that Hibernate requires
* Fix the tests and semantic merge conflicts
* Change ns_hosts to ns_host everywhere
* Rename ns_host to host_repo_id
* V42 -> V44
* Add use of interval data type
* Add support for Millis
* Use Java-object type
* Change column type for relock_duration
* add years and months
* Add tests for hours, minutes, and seconds
* Add javadoc describing how joda duration is stored
* Add test for lots of days
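For illustration, here is a rough sketch (assuming the PostgreSQL JDBC driver's PGInterval type; this is not the actual converter code) of how a Joda Duration's exact milliseconds can be decomposed into the day/hour/minute/second fields of an interval value. Handling intervals that also carry years and months, as the commits above mention, would need additional logic on top of this:

```java
import org.joda.time.Duration;
import org.postgresql.util.PGInterval;

/** Hypothetical helper showing one way to map a Joda Duration onto interval fields. */
public final class DurationToIntervalSketch {

  /** Decomposes an exact-millis Duration into days/hours/minutes/seconds. */
  public static PGInterval toInterval(Duration duration) {
    long millis = duration.getMillis();
    long days = millis / (24L * 60 * 60 * 1000);
    millis -= days * 24L * 60 * 60 * 1000;
    long hours = millis / (60L * 60 * 1000);
    millis -= hours * 60L * 60 * 1000;
    long minutes = millis / (60L * 1000);
    millis -= minutes * 60L * 1000;
    double seconds = millis / 1000.0;
    // A Joda Duration is an exact length of time, so years and months are left at zero
    // here; intervals that carry years/months need separate handling when reading back.
    return new PGInterval(0, 0, (int) days, (int) hours, (int) minutes, seconds);
  }

  private DurationToIntervalSketch() {}
}
```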
1. The Gradle apt plugin is no longer needed to process annotations.
2. Without the apt plugin, Gradle puts the source files generated by
annotation processors in build/generated/sources/annotationProcessor.
3. Change the location of custom generated files to be consistent.
4. Fix a javadoc formatting error.
* Run InitSqlPipeline
Added the main() method to InitSqlPipeline.
Added a Gradle task to run InitSqlPipeline from command line. This
task is meant for testing and experiments.
Corrected the file name prefix of Datastore export files: it should
be 'output-' but was defined as 'input-'.
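A minimal sketch (generic Beam boilerplate, not the actual InitSqlPipeline code) of the kind of main() entry point that lets a pipeline be launched from a Gradle task or the command line:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class RunPipelineSketch {
  public static void main(String[] args) {
    // Parse --runner, --project, etc. from the command line (or the Gradle task's args).
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().create();
    Pipeline pipeline = Pipeline.create(options);
    // ... the pipeline's transforms would be attached here ...
    pipeline.run().waitUntilFinish();
  }
}
```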
Apparently, in domain check responses, `avail=false, reason=Allocation token
required` was not sufficiently understood by all registrars. This changes it to
`avail=false, reason=Reserved; alloc. token required` to hopefully make it
crystal clear that the domain in question is reserved, i.e. if you were supposed
to be able to register this domain you'd already know it because we'd have
already given you the requisite allocation token.
This makes it easier to later migrate the package to Java 11. If we move
and migrate in a single PR, then because of how much of the content has
changed, git will have trouble recognizing that some files are
renamed *and* modified and will treat them as distinct files, making code
review difficult.
* Integrate transaction persistence into JpaTM
Store the serialized transaction whenever we commit from the JPA transaction
manager. This change also adds:
- The Transaction table.
- The TransactionEntity which is stored in it.
- Changes to the test infrastructure to register the TransactionEntity for
tests where we don't load the nomulus schema.
- A new configuration variable to allow us to turn the transaction
persistence functionality on and off (default is "off").
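For context on the TransactionEntity mentioned above, a minimal sketch (field and table names are assumptions, not the actual schema) of an entity that stores a serialized transaction blob keyed by an auto-generated ID might look like this:

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "Transaction")
public class TransactionEntitySketch {

  @Id
  @GeneratedValue(strategy = GenerationType.IDENTITY)
  private Long id;

  /** The serialized form of the mutations committed in this transaction. */
  private byte[] contents;

  /** Required by JPA. */
  protected TransactionEntitySketch() {}

  public TransactionEntitySketch(byte[] contents) {
    this.contents = contents;
  }

  public Long getId() {
    return id;
  }

  public byte[] getContents() {
    return contents;
  }
}
```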
* Changes for review.
* Incremented sequence number of flyway file
* Upgrade App Engine and webserver tests from JUnit 4 to 5
* Fix most errors
* Merge branch 'master' into junit5ification
* Fix test server by extracting non-test setup/tear-down
* Merge branch 'master' into junit5ification
* Fix backup tests
* Don't createFile(); asCharSink does it
* Increase the timeout for all WebDriver tests to 60s (helps w/ flakiness)
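As an illustration of the mechanical part of the upgrade (not taken from the Nomulus tests), the typical annotation changes when moving a test from JUnit 4 to JUnit 5 look like this:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class ExampleUpgradedTest {

  private StringBuilder builder;

  @BeforeEach // replaces JUnit 4's @Before
  void setUp() {
    builder = new StringBuilder();
  }

  @Test // org.junit.jupiter.api.Test instead of org.junit.Test; methods need not be public
  void appendsText() {
    builder.append("abc");
    assertEquals("abc", builder.toString());
  }
}
```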
* Set up deployment of the Spec11 pipeline with JPA TM
* Remove unnecessary pipeline options setting
* Use environment name in BeamJpaModuleTest
* Fix checkstyle error
* Run the (Un)lockDomainCommand in an outer JPA txn
There are a couple of things going on in this commit.
First, we add an external JPA transaction in the
LockOrUnlockDomainCommand class. This doesn't appear to do much, but it
avoids a situation similar to deadlock if an error occurs in Datastore
when saving the domain object. Specifically, DomainLockUtils relies on
the fact that any error in Datastore will be re-thrown in the JPA
transaction, meaning that any Datastore error will back out of the SQL
transaction as well. However, this is no longer true if we are already
in a Datastore transaction when calling DomainLockUtils (unless, again,
we are also in a JPA transaction). Basically, we require that the outer
transaction is the JPA one.
* Secondly, this allows for more break-glass operations in the lock or
unlock domain commands -- when things go haywire, we want admins to be
able to ensure with certainty that a domain is locked or unlocked.
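A minimal sketch of the transaction nesting described above; the type and method names are hypothetical stand-ins rather than the real DomainLockUtils or transaction manager API, and the point is only that the JPA transaction is the outer one:

```java
public final class OuterJpaTxnSketch {

  /** Simplified stand-in for a transaction manager; the real ones have richer APIs. */
  interface TransactionManager {
    void transact(Runnable work);
  }

  private final TransactionManager jpaTm; // SQL-side transaction manager
  private final TransactionManager ofyTm; // Datastore-side transaction manager

  OuterJpaTxnSketch(TransactionManager jpaTm, TransactionManager ofyTm) {
    this.jpaTm = jpaTm;
    this.ofyTm = ofyTm;
  }

  void lockDomain() {
    jpaTm.transact(
        () -> {
          saveRegistryLockRow(); // SQL write inside the outer JPA transaction
          // Datastore work is nested inside; if it throws, the exception escapes the
          // outer JPA transaction too, backing out the SQL write above.
          ofyTm.transact(this::saveDomainWithServerStatuses);
        });
  }

  private void saveRegistryLockRow() {}

  private void saveDomainWithServerStatuses() {}
}
```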
* Add more robustness and tests for admins locking locked domains
* Fix expected exception message in tests
* End-to-end Datastore to SQL pipeline
Defined InitSqlPipeline that performs end-to-end migration from
a Datastore backup to a SQL database.
Also fixed/refined multiple tests related to this migration.
* Fix some SQL credential issues identified when deploying Beam pipelines
There are two issues fixed here.
1. Without calling `FileSystems.setDefaultPipelineOptions(PipelineOptionsFactory.create())` (see the sketch after this list), the Nomulus tool doesn't know how to handle gs:// scheme files. Thus, if you try to deploy (for instance) the Spec11 pipeline using a GCS credential file, it fails.
2. There was a misunderstanding before about what the credential file
actually refers to -- there is a credential file in JSON format that is
used for gcloud authorization, and there is a space-delimited SQL access
info file that has the instance name, username, and password. These are
separate options and should have separate command-line params.
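A minimal sketch of the fix for the first issue above (plain Beam API usage, not the actual Nomulus tool code):

```java
import org.apache.beam.sdk.io.FileSystems;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class GcsFileSystemRegistrationSketch {
  public static void main(String[] args) {
    // Without this call, only the local filesystem scheme is registered, so resolving
    // a gs://bucket/path resource from the tool fails.
    FileSystems.setDefaultPipelineOptions(PipelineOptionsFactory.create());
    // ... gs:// paths can now be matched/read via the FileSystems registry ...
  }
}
```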
* Actually we don't need this for remote deployment
* Add a 'Host' parameter to the relock action enqueuer
I believe this is why we are seeing 404s currently -- we should be
specifying the backend host as the target like we do for the
resave-entity async action.
* Add lastUpdateTime column to epp resources
Property was inadvertently left out.
Renamed getter and setter to match the property name.
Added a test helper to compare EppResources while ignoring
lastUpdateTime, which changes every time an instance is persisted.
* Write one PCollection to SQL
Defined a transform that writes a PCollection of entities to SQL using
JPA. Allows configuring parallelism level and batch size.
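A rough sketch (not the actual Nomulus transform) of how such a write step can be structured as a DoFn that buffers elements and flushes each batch through JPA; the batch size is a parameter, and parallelism is governed by how the input is sharded before this step:

```java
import java.util.ArrayList;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import org.apache.beam.sdk.transforms.DoFn;

public class JpaBatchWriteFn<T> extends DoFn<T, Void> {

  private final int batchSize;
  private transient EntityManagerFactory emf;
  private transient List<T> buffer;

  public JpaBatchWriteFn(int batchSize) {
    this.batchSize = batchSize;
  }

  @Setup
  public void setup() {
    // In the real pipeline this would be built from the injected Cloud SQL credentials.
    emf = createEntityManagerFactory();
  }

  @StartBundle
  public void startBundle() {
    buffer = new ArrayList<>();
  }

  @ProcessElement
  public void processElement(ProcessContext context) {
    buffer.add(context.element());
    if (buffer.size() >= batchSize) {
      flush();
    }
  }

  @FinishBundle
  public void finishBundle() {
    flush();
  }

  @Teardown
  public void teardown() {
    if (emf != null) {
      emf.close();
    }
  }

  private void flush() {
    if (buffer.isEmpty()) {
      return;
    }
    EntityManager em = emf.createEntityManager();
    try {
      em.getTransaction().begin();
      buffer.forEach(em::merge);
      em.getTransaction().commit();
      buffer.clear();
    } finally {
      em.close();
    }
  }

  private static EntityManagerFactory createEntityManagerFactory() {
    throw new UnsupportedOperationException("wire in the real SQL connection settings here");
  }
}
```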
* Change Subdomain class to contain domainRepoId
* Remove jpaTm from Spec11PipelineTest and change clientId -> registrarId
* Remove 'client' from a comment
* Include changes to Spec11Pipeline
* add SafeBrowsingTransforms
* Run style
* Double the # of pubapi instances to better handle traffic spikes
We may also consider switching to an automatic scaling mode soon, in the hope
that it works better than the last time we tried it (it would help to keep
resource costs down, at least).
* Create ContactHistory class + table
This is similar to #587, but with contacts instead of hosts.
This also includes a couple of cleanups for HostHistoryTest and RegistryLockDaoTest, just making the code more proper (we shouldn't be referencing constant revision IDs when using a sequence shared by multiple classes, and RegistryLockDaoTest can extend EntityTest).
Note as well that we set ContactHistory to use the same revision ID sequence as HostHistory.
* Move ContactResource -> ContactBase
* Alter ContactBase and ContactResource
* Verify that the RegistryLock input has the correct registrar ID
We already verify (correctly) that the user has access to the registrar
they specify, but nowhere did we verify that the registrar ID they used
is actually the current sponsor ID for the domain in question. This is
an oversight caused by the fact that our testing framework only uses
admin accounts, which by the nature of things have access to all
registrars and domains.
In addition, rename "clientId" to "registrarId" in the RLPA object
* Change the wording on the incorrect-registrar message
* Output PO number in detailed report
The PO number header was added during the beam migration but we forgot
to print the actual data in the corresponding column. This resulted in a
misalignment of columns in the detailed report.
This PR fixes it. Note that we cannot drop the PO number from the header (even
though it is not useful in the detailed report) because the header represents
all fields that are to be parsed from the SQL query results, and the PO number
*is* needed when generating the invoice itself. Because the header is
dual-purposed (both as the list of required fields for the parser and as the
first line of the detailed report), we have to include the value of the PO
number in the detailed report CSV as well.
* Disambiguate injected Cloud SQL parameter names
This allows us to also inject the BeamJpaModule into RegistryTool, which
allows us to use the SocketJpaTransactionManager in Beam pipelines.
Some side effects of this include:
- duplication of KMS connections -- one standard, one Beam
- duplication of the creation of the partial Hibernate SQL configs
- removal of ambiguity between credentialFileName, credentialFilename,
and credentialFilePath -- we now use the last of these.
- Performing the credential null check when instantiating the SQL
connection rather than when instantiating the module object. See the code
comments for more details on this.
I verified that this compiles and the tests run successfully when
injecting a @SocketFactoryJpaTm into a Beam pipeline.
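For illustration, a hypothetical Dagger sketch of disambiguating similarly named SQL credential parameters with explicit qualifiers; the qualifier names and paths here are made up, and note that the follow-up commits below switch from @Named to the project's own @Config qualifier:

```java
import dagger.Module;
import dagger.Provides;
import javax.inject.Named;

@Module
public final class SqlCredentialModuleSketch {

  /** Path to the JSON credential file used for gcloud authorization. */
  @Provides
  @Named("cloudSqlCredentialFilePath")
  static String provideCredentialFilePath() {
    return "/secrets/cloud_sql_credential.json";
  }

  /** Path to the separate access-info file holding instance name, username, and password. */
  @Provides
  @Named("cloudSqlAccessInfoPath")
  static String provideAccessInfoPath() {
    return "/secrets/cloud_sql_access_info.txt";
  }
}
```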
* Remove two unnecessary config points and change the name of two params
* Use @Config instead of @Named and change the pool size
* Replace non-visible link with code
* Load Datastore snapshot from backup files
Defined a composite transform that loads from a Datastore export and
concurrent CommitLog files, identifies entities that still exist at the
end of the time window, and resolves their latest states in the window.
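A rough sketch (using simplified KV types rather than the real Nomulus entities) of the "resolve the latest state" step: key every record by its entity key, carry a commit timestamp alongside the payload, and keep only the newest record per key:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.coders.KvCoder;
import org.apache.beam.sdk.coders.StringUtf8Coder;
import org.apache.beam.sdk.coders.VarLongCoder;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Combine;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

public class LatestStatePerKeySketch {
  public static void main(String[] args) {
    Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Entity key -> (commit timestamp millis, serialized entity payload). In the real
    // pipeline these would come from the Datastore export and the CommitLog files.
    PCollection<KV<String, KV<Long, String>>> records =
        pipeline.apply(
            Create.of(
                    KV.of("Domain/foo", KV.of(1000L, "state-from-export")),
                    KV.of("Domain/foo", KV.of(2000L, "state-from-commit-log")))
                .withCoder(
                    KvCoder.of(
                        StringUtf8Coder.of(),
                        KvCoder.of(VarLongCoder.of(), StringUtf8Coder.of()))));

    // Keep only the newest record per entity key; in the real pipeline this result
    // would feed the SQL-writing step.
    PCollection<KV<String, KV<Long, String>>> latest =
        records.apply(
            "KeepNewestPerKey",
            Combine.<String, KV<Long, String>>perKey(
                values -> {
                  KV<Long, String> newest = null;
                  for (KV<Long, String> value : values) {
                    if (newest == null || value.getKey() > newest.getKey()) {
                      newest = value;
                    }
                  }
                  return newest;
                }));

    pipeline.run().waitUntilFinish();
  }
}
```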