* Add MosAPI client to interact with ICANN's monitoring system
This change introduces a comprehensive client to interact with ICANN's Monitoring System API (MoSAPI). This provides direct, automated access to critical registry health and compliance data, moving Nomulus towards a more proactive monitoring posture.
The core component is a stateless MosApiClient that manages communication and authentication with the MoSAPI service using TLS client certificates.
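As a rough illustration of what a stateless client surface of this kind could look like (the method names and the session-token shape below are assumptions made for the sketch, not the actual Nomulus interface):

```java
import java.io.IOException;

/**
 * Illustrative sketch only: a stateless client holds no session state of its own, so every call
 * either authenticates or takes a caller-supplied session token. Method names and the token shape
 * are assumptions, not the real Nomulus API.
 */
public interface MosApiClientSketch {

  /** Authenticates against MoSAPI (e.g. using the configured TLS client certificate). */
  String login(String tld) throws IOException;

  /** Fetches the monitoring state document for the given TLD within an existing session. */
  String getMonitoringState(String tld, String sessionToken) throws IOException;

  /** Ends the MoSAPI session identified by the token. */
  void logout(String sessionToken) throws IOException;
}
```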
* Resolve review feedback & upgrade to OkHttp3 client
This commit addresses and resolves all outstanding review comments, primarily through a switch to OkHttp3, security configuration cleanup, and general code improvements.
* **Review:** Addressed and resolved all pending review comments.
* **Feature:** Switched the underlying HTTP client implementation to [OkHttp3](https://square.github.io/okhttp/) (see the TLS client certificate sketch after this list).
* **Configuration:** Consolidated TLS certificate-related configuration into the dedicated configuration area.
* **Cleanup:** Removed unused components (`HttpUtils` and `HttpModule`) and performed general code cleanup.
* **Quality:** Improved exception handling logic for better robustness.
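For reference, a hedged sketch of how an OkHttp3 client can present a TLS client certificate; the keystore path, password, and URL are placeholders, and the real wiring in Nomulus comes from its configuration classes rather than hardcoded values:

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public final class MosApiTlsSketch {

  public static void main(String[] args) throws Exception {
    // Load the client certificate and private key from a PKCS#12 keystore (placeholder path/password).
    KeyStore clientStore = KeyStore.getInstance("PKCS12");
    char[] password = "changeit".toCharArray();
    try (FileInputStream in = new FileInputStream("/path/to/mosapi-client.p12")) {
      clientStore.load(in, password);
    }
    KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
    kmf.init(clientStore, password);

    // Use the JVM's default trust store to validate the server certificate.
    TrustManagerFactory tmf =
        TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
    tmf.init((KeyStore) null);
    X509TrustManager trustManager = (X509TrustManager) tmf.getTrustManagers()[0];

    SSLContext sslContext = SSLContext.getInstance("TLS");
    sslContext.init(kmf.getKeyManagers(), new TrustManager[] {trustManager}, null);

    // OkHttp3 requires the trust manager to be passed alongside the socket factory.
    OkHttpClient client =
        new OkHttpClient.Builder()
            .sslSocketFactory(sslContext.getSocketFactory(), trustManager)
            .build();

    // Placeholder URL; the real MoSAPI endpoint would come from configuration.
    Request request =
        new Request.Builder().url("https://mosapi.example/v1/monitoring/state").build();
    try (Response response = client.newCall(request).execute()) {
      System.out.println("MoSAPI responded with HTTP " + response.code());
    }
  }
}
```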
* Refactor and fix MosAPI exception handling
Addresses code review feedback and resulting test failures.
- Flattens package structure by moving MosApiException and its test.
- Corrects exception handling so that MosApiAuthorizationException propagates
correctly, ahead of the general exception handler (see the sketch after this list).
- Adds a default case to the MosApiException factory for robustness.
- Uses lowercase for placeholder TLDs in default-config.yaml.
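A compilable sketch of both fixes; the factory method, the status-code mapping, and the nested exception classes are stand-ins written for this illustration, not the real Nomulus code:

```java
// Illustrative sketch: catch the specific authorization exception before the general
// handler, and give the status-code factory a default case.
public final class MosApiExceptionHandlingSketch {

  /** Hypothetical factory that maps HTTP status codes to exceptions, with a default case. */
  static MosApiException fromStatusCode(int statusCode, String message) {
    switch (statusCode) {
      case 401:
      case 403:
        return new MosApiAuthorizationException(message);
      default:
        // The default case keeps the factory robust against unexpected status codes.
        return new MosApiException(message);
    }
  }

  /** Catches the specific authorization exception before the general handler so it propagates. */
  static String fetchState(String url) throws MosApiException {
    try {
      return doHttpGet(url);
    } catch (MosApiAuthorizationException e) {
      // Rethrow unchanged so callers can react to authorization failures specifically.
      throw e;
    } catch (Exception e) {
      throw new MosApiException("MoSAPI request failed: " + e.getMessage());
    }
  }

  private static String doHttpGet(String url) throws MosApiException {
    // Placeholder for the real HTTP call.
    throw fromStatusCode(401, "unauthorized for " + url);
  }

  // Stand-ins so the sketch compiles on its own.
  static class MosApiException extends Exception {
    MosApiException(String message) {
      super(message);
    }
  }

  static class MosApiAuthorizationException extends MosApiException {
    MosApiAuthorizationException(String message) {
      super(message);
    }
  }
}
```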
* Refactor and improve MosAPI client implementation
Simplifies URL validation with Guava Preconditions and refines exception handling
to use `Throwables`.
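A small sketch of that pattern under assumed method names (`validateBaseUrl`, `rethrowCause`); the specific checks are illustrative rather than the exact ones in the client:

```java
import static com.google.common.base.Preconditions.checkArgument;

import com.google.common.base.Throwables;
import java.io.IOException;
import java.net.URI;

public final class MosApiValidationSketch {

  /** Validates a MoSAPI base URL with Guava Preconditions; the checks here are illustrative. */
  static URI validateBaseUrl(String url) {
    checkArgument(!url.isEmpty(), "MoSAPI base URL must not be empty");
    URI uri = URI.create(url);
    checkArgument("https".equals(uri.getScheme()), "MoSAPI base URL must use https: %s", url);
    return uri;
  }

  /** Rethrows a wrapped cause in a typed way using Guava's Throwables helpers. */
  static void rethrowCause(Throwable cause) throws IOException {
    Throwables.throwIfInstanceOf(cause, IOException.class);
    Throwables.throwIfUnchecked(cause);
    throw new RuntimeException(cause);
  }
}
```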
* Refactor precondition checks to use a project-specific utility
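For illustration, the kind of check-and-return helper this implies; the helper below is a stand-in written for this sketch, and the actual utility in the codebase may differ in name and behavior:

```java
/** Sketch of a check-and-return precondition helper; the real Nomulus utility may differ. */
public final class ProjectPreconditionSketch {

  /** Checks that the argument is non-null and returns it, so the check can be inlined. */
  static <T> T checkArgumentNotNull(T arg, String name) {
    if (arg == null) {
      throw new IllegalArgumentException(name + " must not be null");
    }
    return arg;
  }

  private final String baseUrl;

  ProjectPreconditionSketch(String baseUrl) {
    // Check and assign in one expression instead of a separate Preconditions.checkArgument call.
    this.baseUrl = checkArgumentNotNull(baseUrl, "baseUrl");
  }
}
```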
These are obsolete now that we've migrated to GKE. Note that this
doesn't update anything in the docs/ folder, as that's a much larger
project that should be done on its own.
This increases the Hikari fetch size from 20 to 40 in order to decrease the
number of round trips to the database.
This also lowers the CPU allocation, as we seem to have overshot CPU consumption.
This also sets min replicas to 8 and max replicas to 16 for EPP, as we've been
running on 8-10 pods for the last week.
This reverts commit 5cef2dd8b5.
We faced a CPU quota issue with the standard machine type, so we are rolling back
to c4 for now to monitor server performance and decide whether we want to try
downgrading again in the future.
nodeSelector can limit k8s scheduling flexibility, which leads to delays in assigning new workloads. Since we do not require any particular machine for execution, it can be removed.
All deployments received an update to the averageUtilization CPU target. This should allow us to stay ahead of the traffic curve and create instances before CPU reaches its limit.
The frontend CPU allocation has caused a "noisy neighbor" problem, with pods assigned to nodes where there's not enough bursting capacity, so I increased it.
Adjusted the rest of the deployments according to their utilization.
Now that we have effective global sessions thanks to #2734, there is no
longer a need to keep the number of pods on the EPP service static.
We are also no longer vulnerable to random pod restarts. K8s never guarantees
perpetual pod lifetime anyway, and not having to be at its mercy is
certainly a relief.
This allows us to easily tell which tag was deployed.
Also set the gateway to use named addresses so they are stable, and so
that we can attach an IPv6 record to them. Auto-provisioned addresses are
IPv4 only.
We are required to respond to HTTP(S) requests on port 80/443 on the
same domain where we serve port 43 WHOIS requests. The proxy already
does this by redirecting to the web WHOIS lookup page on the marketing
website.
This PR makes it so that requests to port 80/443 can be routed to the
proxy for redirect.
TESTED=tested on crash and the redirect works.
k8s does not have a way to expose a global load balancer with TCP
endpoints, and setting up NodePort-based routing is a chore, even with
Terraform (which is what we did with the standalone proxy).
We will use Cloud DNS's geolocation routing policy to ensure that
clients connect to the endpoint closest to them.