When we append an entry to a list with the same user-defined
timestamp, the behaviour is actually undefined. If the append
is processed by the same coordinator as the one that accepted
the existing entry, it gets the same timeuuid as the list key,
and potentially replaces the existing list value. Otherwise it
gets a timeuuid which may be either larger or smaller than the existing
key's timeuuid, and thus turns into either an append or a prepend.
The part of the timestamp responsible for the result is the shard
id's spoof node address, implemented in the scope of fixing Scylla's
timeuuid uniqueness. When the test was implemented, all spoof node ids
were 0 on all shards and all coordinators. Later the difference
in behaviour stayed dormant because cql_repl would always execute
the append on the same shard.
We could fix Scylla to use a zero spoof node address when a user
timestamp is supplied, but the purpose of this is unclear; it
may actually run contrary to the user's intent.
Before this patch, approval tests (test/cql/*) were using a C++
application called cql_repl, a seastar app running
Scylla, reading commands from the standard input and producing
results in JSON format on the standard output. The rationale for
this was to avoid running a standalone Scylla, which could leak
more resources such as open sockets.
Now that other suites already start and stop Scylla servers, it
makes more sense to run CQL commands in approval tests against an
existing running server. It saves us from building one more
binary and allows us to format the output better. Specifically, we
would like to see Scylla output in tabular format in approval
tests, which is difficult to do when C++ formatting libraries
are used.
Implement a pytest which runs CQL commands against
a Scylla server and pretty-prints the server output.
Will be used in existing Approval tests in subsequent patches.
Manage scylla servers for rest_api and cql-pytest suites
using PythonTestSuite. The pool size determines the max
number of servers test.py would run concurrently per
suite. For tiny suites (rest_api) the cost of starting
the servers outweighs the cost of running the tests, so keep
it at a minimum. cql-pytest has dozens of tests, so run them
in 4 parallel tracks.
Track running tests in the suite.
Cleanup after each suite (after all tests
in the suite end).
Cleanup all artifacts before exit. Don't drop server logs if
there is at least one failed test.
Allow starting clusters of Scylla servers. Chain up the next
server start to the end of the previous one, and set the next
server's seed to the previous server.
As a workaround for a race between token dissemination through
gossip and streaming, change schema version to force a gossip
round and make sure all tokens end up at the joining node in time.
Make sure scylla start is not race prone.
auth::standard_role_manager creates the "cassandra" role in an async
loop, auth::do_after_system_ready(), which retries role creation with an
exponential back-off. In other words, even after the CQL port is up, Scylla
may still be initializing.
This race condition could lead to spurious errors during cluster
bootstrap or during a test under CI.
When the role is ready, queries begin to work, so rely on this "side
effect".
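The readiness probe idea can be sketched like this (a minimal sketch; `wait_for_cql_ready` and `probe` are hypothetical names, not the actual test-harness code):

```python
import time

def wait_for_cql_ready(probe, timeout=30.0, interval=0.1):
    # Keep retrying a query until it succeeds, because the CQL port may
    # accept connections before auth initialization has created the
    # "cassandra" role.
    deadline = time.monotonic() + timeout
    while True:
        try:
            return probe()  # e.g. a SELECT that needs the role to exist
        except Exception:
            if time.monotonic() > deadline:
                raise
            time.sleep(interval)
```

In the real harness the probe would be an actual CQL query against the freshly started server.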
To start or stop servers, use a new class, ScyllaCluster,
which encapsulates multiple servers united into a cluster.
In it, validate that a test case cleans up after itself.
Additionally, swallow startup errors and rethrow them when
the cluster is actually used.
When entry loading fails and there is another request blocked on the
same page, an attempt to erase the failed entry will abort because that
would violate entry_ptr's guarantees - entry_ptr is supposed to keep the
entry alive.
The fix in 92727ac36c was incomplete. It
only helped for the case of a single loader. This patch takes a more
general approach by relaxing the assert.
The assert manifested like this:
scylla: ./sstables/partition_index_cache.hh:71: sstables::partition_index_cache::entry::~entry(): Assertion `!is_referenced()' failed.
Fixes #10617
Closes #10653
The docs test dislikes the gdbinit link because it refers outside
the source tree. Unconfuse the tests by removing the link. It's
sad, but the file is more easily used by referring to it than
by viewing it, so give a hint about that too.
Closes #10650
msg_proc_guard is a guard that makes sure _msg_processing is always
decreased. We can use regular defer() to achieve the same.
Message-Id: <YoZTQPbTMWAdCObs@scylladb.com>
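The guard-to-defer replacement can be sketched in Python terms (seastar's defer() is C++; contextlib.ExitStack.callback plays the same role here, and the names are illustrative):

```python
import contextlib

msg_processing = 0

def _decrement():
    global msg_processing
    msg_processing -= 1

def process_message():
    global msg_processing
    msg_processing += 1
    with contextlib.ExitStack() as stack:
        # The deferred decrement: registered once, it runs on every exit
        # path (normal return or exception), replacing a hand-written
        # guard class like msg_proc_guard.
        stack.callback(_decrement)
        assert msg_processing == 1  # ... message handling would go here ...
```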
Extend the reconfiguration nemesis to send `modify_config` requests as
well as `reconfigure` requests. It chooses one or the other with
probability 1/2.
Fix a bunch of problems that surfaced during testing.
Closes#10544
* github.com:scylladb/scylla:
test: raft: randomized_nemesis_test: send `modify_config` requests in reconfiguration nemesis
test: raft: randomized_nemesis_test: fix `rpc` reply ID generation
test: raft: randomized_nemesis_test: during bouncing call, allow a leader to reroute to itself
test: raft: randomized_nemesis_test: handle timed_out_error from modify_config
service: raft: rpc: don't call `execute...` functions after `abort()`
raft: server: fix bad_variant_access in `modify_config`
Extend the reconfiguration nemesis to send `modify_config` requests as
well as `reconfigure` requests. It chooses one or the other with
probability 1/2.
When `rpc` wants to perform a two-way RPC call it sends a message
containing a `reply_id`. The other side will send the `reply_id` back
when answering, so the original side can match the response to the promise
corresponding to the future being waited on by the RPC caller.
Previously each instance of `rpc` generated reply IDs independently as
increasing integers starting from 0. The network delivers messages
based on Raft server IDs. A response message may thus be delivered not
to the original instance which invoked the RPC, but to a new instance
which uses the same Raft server ID (after we simulated a server
crash/stop and restart, creating a new server with the same ID that
reuses the previous instance's `persistence` instance but has a new `rpc`).
The new instance could have started a new RPC call using the same
`reply_id` as one currently being in-flight that was started by the
previous instance. The new instance could then receive and handle a
response that was intended for the previous instance, leading to weird
bugs.
Fix this by replacing the local reply ID counters by a global counter so
that every two-way RPC call gets a unique reply ID.
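The fix can be sketched like this (names are illustrative, not the test's actual identifiers): one process-wide counter instead of a per-instance counter, so reply IDs stay unique even across a simulated crash/restart that creates a new `rpc` instance with the same Raft server ID.

```python
import itertools

# A single global counter shared by all rpc instances.
_next_reply_id = itertools.count()

class Rpc:
    def new_reply_id(self):
        return next(_next_reply_id)

old_instance = Rpc()
new_instance = Rpc()  # "restarted" server reusing the same Raft ID
ids = [old_instance.new_reply_id(), new_instance.new_reply_id(),
       old_instance.new_reply_id()]
assert len(set(ids)) == len(ids)  # no collision between instances
```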
A server executing a `modify_config` call, even if it initially was a
leader and accepted the request, may end up throwing a `not_a_leader`
error, rerouting the caller to a new leader - but this new leader may be
that same server. This happens because `execute_modify_config`
translates certain errors that it considers transient (such as
`conf_change_in_progress`) into `not_a_leader{last_known_leader}`,
in an attempt to notify the caller that they should retry the request; but
when this translation happens, the `last_known_leader` may be that same
server (it could have even lost leadership and then regained it back
while the request was being handled).
This is not strictly an error, and it should be safe for the client to
retry the request by sending it to the same server. The nemesis test
assumed that a server never returns `not_a_leader{itself}`; this commit
drops the assumption.
An alternative solution would be to extend the error types that are now
translated to `not_a_leader` so they include information about the last
known leader. This way the client does not lose information about the
original error and still gets a potential contact point for retry.
The functions are called from RPC when a follower forwards a request to
a leader (`add_entry`, `modify_config`, `read_barrier`). The call may be
attempted during shutdown. The Raft shutdown code cleans up data structures
created by those requests. Make sure that they are not updated
concurrently with shutdown; such concurrent updates can lead to problems
like using the server object after it was aborted, or even after it was
destroyed.
After this change, the RPC implementation may wait for an `execute_modify_config`
call to finish before finishing abort. That call in turn may be stuck on
`wait_for_entry`. Thus the waiter may prevent RPC from aborting. Fix
this by moving the wait on the future returned from `_rpc->abort()` in
`server::abort()` until after the waiters have been destroyed.
`modify_config` would call `execute_modify_config` or
`_rpc->send_modify_config`, which returned a reply of type
`add_entry_reply`. This is a variant of 3 options: `entry_id`,
`not_a_leader`, or `commit_status_unknown`. The code would check
for the `entry_id` option and otherwise assume that it was `not_a_leader`.
During nemesis testing however, the reply was sometimes
`commit_status_unknown`, which caused a `bad_variant_access` exception
during `std::get` call. Fix this.
There is a similar piece of code in `add_entry`, but there it should be
impossible to obtain `commit_status_unknown` even though the types don't
enforce it. Make it more explicit with a comment and an assertion.
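The dispatch fix can be sketched like this (Python stand-in classes for the C++ `std::variant` alternatives; the real types live in the Raft code):

```python
# Hypothetical stand-ins for the three add_entry_reply alternatives.
class EntryId:
    pass

class NotALeader:
    pass

class CommitStatusUnknown:
    pass

def handle_reply(reply):
    # Check every alternative explicitly instead of assuming that
    # "not an entry_id" implies not_a_leader - the assumption that
    # previously blew up (bad_variant_access) on commit_status_unknown.
    if isinstance(reply, EntryId):
        return "committed"
    if isinstance(reply, NotALeader):
        return "retry elsewhere"
    if isinstance(reply, CommitStatusUnknown):
        return "status unknown"
    raise TypeError(f"unexpected reply: {reply!r}")
```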
Scylla has a long-standing bug (issue #7620) where having many
tombstones in the schema table significantly slows down further
schema operations.
Many cql-pytest tests use new_test_table() to create a temporary test
table with a specific schema. Before this patch, each temporary table
was created with a random name, and deleted after the test. When
running many tests on the same Scylla server, this results in a lot
of tombstones in the schema tables, and really slow schema operations.
For example, look at how much time it takes to run the same test file
N times:
$ test/cql-pytest/run --count N test_filtering.py
N=25 - 16 seconds (total time for the N repetitions)
N=50 - 41 seconds
N=100 - 122 seconds
Notice how each repetition becomes progressively slower - the
total test time should have been linear in N, but it isn't!
In this patch, we keep a cache of already-deleted table names (not the
tables, just their names!) so as to reuse the same name when we can
instead of inventing a new random name. With this patch, the performance
improvement after some repetitions is amazing (compare to the table above):
N=25 - 14 seconds
N=50 - 29 seconds
N=100 - 46 seconds
Note how the testing time is now more-or-less linear in the number of
repetitions, as expected.
The table-name recycling trick is the same trick I already used in the
past for the translated Cassandra tests (test/cql-pytest/cassandra_tests).
The problem was even more obvious there because those tests create a
lot of different tables. But the same problem also exists in cql-pytest
in general, so let's solve it here too.
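The name-recycling idea can be sketched like this (a simplified sketch, not the exact cql-pytest helpers; the function names are illustrative):

```python
import uuid

# Names of dropped test tables, available for reuse.
_recycled_names = []

def new_table_name():
    # Reuse a previously dropped name when we can, so the schema tables
    # stop accumulating tombstones for ever-new names.
    if _recycled_names:
        return _recycled_names.pop()
    return "cql_test_" + uuid.uuid4().hex

def table_dropped(name):
    # Called after DROP TABLE: the name (not the table!) becomes reusable.
    _recycled_names.append(name)
```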
Refs #7620
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes #10635
Consider:
- n1 and n2 in the cluster
- n3 bootstraps to join
- n1 does not hear gossip update from n3 due to network issue
- n1 removes n3 from gossip and pending node list
- stream between n1 and n3 fails
- n1 and n3 network issue is fixed
- n3 retries the stream with n1
- n3 finishes the stream with n1
- n3 advertises normal to join the cluster
The problem is that n1 will no longer treat n3 as a pending node, so
writes will not be routed to n3 once n1 removes it.
Another problem is that when n1 gets the normal gossip status update
from n3, the gossip listener will fail because n1 has removed n3 and so
cannot find the host id for n3. This will cause n1 to abort.
To fix, disable the retry logic in range_streamer so that once a stream
with an existing node fails, the bootstrap fails.
The downside is that we lose the ability to restream after a temporary
network issue, but since we have repair-based node operations, we can
use them to resume the previously failed node operation.
Fixes: #9805
Closes #9806
Currently we support queries like:
```cql
SELECT * FROM ks.tab WHERE p IN (1, 2, null, 4);
```
Nothing can be equal to null so this is equivalent to:
```cql
SELECT * FROM ks.tab WHERE p IN (1, 2, 4);
```
Cassandra doesn't support it at all.
```cql
> SELECT * FROM ks.tab WHERE p IN (1, 2, null, 4)
Error: DbError(Invalid, "Invalid null value in condition for column p")
> SELECT * FROM ks.tab WHERE p IN (1, 2, ?, 4) # ? is NULL
Error: DbError(Invalid, "Invalid null value in condition for column p")
> SELECT * FROM ks.tab WHERE p IN ? # ? is (1, 2, null, 4)
Error: DbError(Invalid, "Invalid null value in condition for column p")
```
It makes little sense to send a null inside a list of IN values, and supporting it is a bit cumbersome.
Supporting it causes trouble because internally the values are represented as a list, not a tuple, and lists can't contain nulls.
Because of that, the code needs special-case handling, since this is the only place where a null can appear inside a collection.
This PR starts treating a list of IN values the same as any other list, and as a result nulls are forbidden inside it.
In case of a null the message is the same as any other collection:
```
null is not supported inside collections
```
I'm not entirely happy about it - someone could be confused to receive this message after a query that didn't involve any collections.
The problem with making a prettier error message is that once again we would have to give `evaluate` additional information that it's now evaluating a list of IN values, and we would end up back with `evaluate_IN_list`.
I think we could consider adding some kind of generic context to `evaluate`. The context would contain the whole expression and a mark on the part that is currently being evaluated. In case of an error we could then use this context to create a more helpful message, e.g. point to the part of the expression where the problem occurred. But that's outside the scope of this PR.
Fixes #10579
Closes #10620
* github.com:scylladb/scylla:
cql: Add test for null in IN list
cql: Forbid null in lists of IN values
We used to allow nulls in lists of IN values,
i.e. a query like this would be valid:
SELECT * FROM tab WHERE pk IN (1, null, 2);
This is an old feature that isn't really used
and is already forbidden in Cassandra.
Additionally the current implementation
doesn't allow for nulls inside the list
if it's sent as a bound value.
So something like:
SELECT * FROM tab WHERE pk IN ?;
would throw an error if ? was (1, null, 2).
This is inconsistent.
Allowing it made writing code cumbersome because
this was the only case where having a null
inside of a collection was allowed.
Because of it, there needed to be
separate code paths to handle regular lists
and lists containing NULL values.
Forbidding it makes the code nicer and consistent
at the cost of a feature that isn't really
important.
Signed-off-by: Jan Ciolek <jan.ciolek@scylladb.com>
This patch set adds two commits to allow off-strategy compaction to trigger early for node operations.
*) repair: Repair table by table internally
This patch changes the way a repair job walks through tables and ranges
if multiple tables and ranges are requested by users.
Before:
```
for range in ranges
for table in tables
repair(range, table)
```
After:
```
for table in tables
for range in ranges
repair(range, table)
```
The motivation for this change is to allow off-strategy compaction to trigger
early, as soon as a table is finished. This reduces the number of
temporary sstables on disk. For example, if there are 50 tables and 256 ranges
to repair, each range will generate one sstable. Before this change, there will
be 50 * 256 sstables on disk before off-strategy compaction triggers. After this
change, once a table is finished, off-strategy compaction can compact the 256
sstables. As a result, this would reduce the number of sstables by 50X.
This is very useful for repair based node operations since multiple ranges and
tables can be requested in a single repair job.
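The peak sstable counts under the example numbers above work out as follows:

```python
tables, ranges = 50, 256  # example from the commit message

# Before: every (table, range) repair leaves one temporary sstable, and
# off-strategy compaction only runs once the whole job finishes.
peak_before = tables * ranges

# After: off-strategy compaction runs as soon as a table finishes, so at
# most one table's worth of temporary sstables is pending at a time.
peak_after = ranges

assert peak_before == 12800
assert peak_before // peak_after == 50  # the 50x reduction
```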
Refs: #10462
*) repair: Trigger off strategy compaction after all ranges of a table are repaired
When the repair reason is not a user-requested repair, i.e. the reason is
a node operation (bootstrap, replace and so on), a single repair job contains all
the ranges of a table that need to be repaired.
To trigger off strategy compaction early and reduce the number of
temporary sstable files on disk, we can trigger the compaction as soon
as a table is finished.
Refs: #10462
Closes #10551
* github.com:scylladb/scylla:
repair: Trigger off strategy compaction after all ranges of a table are repaired
repair: Repair table by table internally
"
There are several issues with it:
- it's scattered between main() and storage_service methods
- yet another incarnation of it also sits in the cql-test-env
- the prepare_to_join() and join_token_ring() names are lying to readers,
  as sometimes the node joins the ring in the prepare- stage
- storage service has to carry several private fields to keep the state
  between the prepare- and join- parts
- some storage service dependencies are only needed to satisfy joining,
  but since they cannot start early enough, they are pushed to storage
  service uninitialized "in the hope" that it won't use them until join
This patch puts the joining steps in one place and relieves storage service
of carrying unneeded dependencies/state onboard. It also eliminates one more
usage of the global proxy instance while at it.
branch: https://github.com/xemul/scylla/tree/br-merge-init-server-and-join-cluster
tests: https://jenkins.scylladb.com/job/releng/job/Scylla-CI/466/
refs: #2795
"
* 'br-merge-init-server-and-join-cluster' of https://github.com/xemul/scylla:
storage_service: Remove global proxy call
storage_service: Remove sys_dist_ks from storage_service dependencies
storage_service: Remove cdc_gen_service from storage_service dependencies
storage_service: Make _cdc_gen_id local variable
storage_service: Make _bootstrap_tokens local variable
storage_service: Merge prepare- and join- private members
storage_service: Move some code up the file
storage_service: Coroutinize join_token_ring
storage_service: Fix indentation after previous patch
storage_service: Execute its .bootstrap() into async()
storage_service: Dont assume async context in mark_existing_views_as_built
storage_service: Merge init-server and join-cluster
main, storage_service: Move wait for gossip to settle
main, storage_service: Move passive announce subscription
main, storage_service: Move early group0 join call
An overload of storage_proxy::query_mutations_locally was declared in
a35136533d which takes a vector of
partition ranges as an argument, but it was never defined. This commit
removes the unused overload declaration.
Closes #10610
Since 9b49d27a8 ("cql3: expr: Remove shape_type from bind_variable"),
bind variables no longer remember their context (e.g. whether they are
in a scalar or vector comparison, or whether they are in an IN or
other relation). Exploit that by merging all of the productions that
generate a bind variable (and that are now exactly equal) into a single
marker production.
Closes #10624
Storage service needs it to calculate schema version on join. The proxy
at this point can be passed as an argument to the joining helper.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The service in question is only needed at join_cluster time, so there is
no need to keep it in the dependencies list. This also solves the dependency
trouble -- the distributed keyspace is sharded::start-ed after it's
passed to storage_service initialization.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
This service is only needed at join time, so it's better to pass it as an
argument to join_cluster(). This solves the current reversed dependency
issue -- the cdc_gen_svc is now started after it's passed to storage
service initialization.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Same as with _bootstrap_tokens -- this variable is only needed
throughout a single function invocation, so it doesn't have to be a
class member.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
It's currently a member of storage_service, but only to carry the
set of tokens between two subsequent calls. Now that all the joining
happens in one function, the set can become a local variable.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
These two are the real code that does preparation and joining. They are
called in async() context by public storage_service methods that had
been merged recently, so this patch merges the internals.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
No logic change, this is to keep join_token_ring next to
prepare_to_join so that the patch merging them becomes clean and small.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Next patch will merge this method with prepare_to_join() which is
already coroutinized. To make it happen -- coroutinize it in advance.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Next patches will coroutinize join_cluster(), so the .bootstrap() method
should return a future. It's worth coroutinizing it as well, but that's
a huge change, so for now -- keep it in its own explicit async().
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Now they always follow one another both in main and cql-test-env.
Also, despite the name, init-server does join the cluster when it's
just a normal node restarting, so join-cluster is called when the
cluster is already joined. This merge makes the function named for
what it really does.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
And configure cql-test-env to skip it, so as not to slow down tests in
vain. Another side effect is that cql-test-env now triggers features
enabling at this point, but that's OK, they are enabled anyway.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Storage service already has a vector of random subscription scope
holders, and this becomes yet another one. This partially reverts
e4f35e2139, which is a half-step backwards, but so far I have no better
ideas where to track that scope guard.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
It happens right after the prepare-to-join, so moving it to the end of
the latter call doesn't change the code logic. A side effect -- this
removes a silly join_group0() one-line helper.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
When a user runs a script and presses control-C, a SIGINT (signal 2)
gets sent to every process in the script's "process group". By default,
every subprocess started by a script joins the parent's process group.
Our test/*/run test-runner scripts typically start two processes: scylla
and pytest. If we keep them in the same process group, a control-C
would kill them in a random order and that is ugly - if Scylla is
killed before pytest, we'll see a few test failures before pytest
is finally killed. So the existing code put Scylla in its own process
group, and killed it on exit after killing pytest.
But there were a few inconsistencies in our implementation, leading
to some annoying behaviors:
1. Doing "kill -2" to the runner's process (not a control-C which sends
a signal to the process group) caused scylla and pytest to be killed
on exit. So far so good. But we should kill their entire process
groups, not just the one process. This is important when pytest starts
its own subprocesses (as happens in cql-pytest/test_tools.py),
otherwise they just remain running.
We need to call pgkill() instead of kill(), but also we forgot
to start a new process group for the pytest run - so this patch
fixes it.
2. Our exit handler - which kills the subprocesses - only gets called
on signals which Python catches, and this is only SIGINT. Killing
the test runner with SIGTERM or SIGHUP before this patch caused
the subprocesses to be left running. In this patch we also catch
SIGTERM and SIGHUP, so our exit handler is also run in that case.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes #10629
This small series includes a few more CQL tests in the cql-pytest
framework.
The main patch is a translation of a unit test from Cassandra that
checks the behavior of restrictions (WHERE expressions, filtering) in
different cases. It turns out that Cassandra didn't implement some
cases - for example filtering on unfrozen UDTs - but Scylla does
implement them. So in the translated test, the
checks-that-these-features-generate-an-error from Cassandra are
commented out, and this series also includes separate tests for these
Scylla-unique features to check that they actually work correctly and
not just that they exist.
Closes #10611
* github.com:scylladb/scylla:
cql-pytest: translate Cassandra's tests for relations
test/cql-pytest: add test for filtering UDTs
test/cql-pytest: tests for IN restrictions and filtering
test/cql-pytest: test more cases of overlapping restrictions
* abseil f70eadad...9e408e05 (109):
> Cord: workaround a GCC 12.1 bug that triggers a spurious warning
> Change workaround for MSVC bug regarding compile-time initialization to trigger from MSC_VER 1910 to 1930. 1929 is the last _MSC_VER for Visual Studio 2019.
> Don't default to the unscaled cycle clock on any Apple targets.
> Use SSE instructions for prefetch when __builtin_prefetch is unavailable
> Replace direct uses of __builtin_prefetch from SwissTable with the wrapper functions.
> Cast away an unused variable to play nice with -Wunused-but-set-variable.
> Use NullSafeStringView for const char* args to absl::StrCat, treating null pointers as "" Fixes#1167
> raw_logging: Extract the inlined no-hook-registered behavior for LogPrefixHook to a default implementation.
> absl: fix use-after-free in Mutex/CondVar
> absl: fix live-lock in CondVar
> Add a stress test for base_internal::ThreadIdentity reuse.
> Improve compiler errors for mismatched ParsedFormat inputs.
> Internal change
> Fix an msan warning in cord_ringbuffer_test
> Fix spelling error "charachter"
> Document that Consume(Prefix|Suffix)() don't modify the input on failure
> Fixes for C++20 support when not using std::optional.
> raw_logging: Document that AbortHook's buffers live for as long as the process remains alive.
> raw_logging: Rename SafeWriteToStderr to indicate what about it is safe (answer: it's async-signal-safe).
> Correct the comment about the probe sequence. It's (i/2 + i)/2 not (i/2 - i)/2.
> Improve analysis of the number of extra `==` operations, which was overly complicated, slightly incorrect.
> In btree, move rightmost_ into the CompressedTuple instead of root_.
> raw_logging: Rename LogPrefixHook to reflect the other half of it's job (filtering by severity).
> Don't construct/destroy object twice
> Rename function_ref_benchmark.cc into more generic function_type_benchmark.cc, add missing includes
> Fixed typo in `try_emplace` comment.
> Fix a typo in a comment.
> Adds ABSL_CONST_INIT to initializing declarations where it is missing
> Automated visibility attribute cleanup.
> Fix typo in absl/time/time.h
> Fix typo: "a the condition" -> "a condition".
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Fix build with uclibc-ng (#1145)
> Export of internal Abseil changes
> Export of internal Abseil changes
> Replace the implementation of the Mix function in arm64 back to 128bit multiplication (#1094)
> Support for QNX (#1147)
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Exclude unsupported x64 intrinsics from ARM64EC (#1135)
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Add NetBSD support (#1121)
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Some trivial OpenBSD-related fixes (#1113)
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Add support of loongarch64 (#1110)
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Disable ABSL_INTERNAL_ENABLE_FORMAT_CHECKER under VsCode/Intellisense (#1097)
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> macos: support Apple Universal 2 builds (#1086)
> cmake: make `random_mocking_bit_gen` library public. (#1084)
> cmake: use target aliases from local Google Test checkout. (#1083)
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> cmake: add ABSL_BUILD_TESTING option (#1057)
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Fix googletest URL in CMakeLists.txt (#1062)
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Export of internal Abseil changes
> Fix Randen and PCG on Big Endian platforms (#1031)
> Export of internal Abseil changes
Closes #10630