Most likely, 817fdad uncovered the fact that our choice of primary replica was correlated with tablet allocation: within a scope we kept picking the same replica as primary instead of rotating primaryship among all replicas in the scope.
This created situations where, for instance, restoring into a 9-node cluster with primary_replica_only=true would put all of the data onto 3 nodes, leaving the other 6 unused, and the dataset only got balanced by the subsequent repair step.
This PR fixes that by changing the formula used to pick the primary replica out of the set of eligible replicas within the passed scope.
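A minimal sketch of the idea (in Python for illustration; the actual change is in the locator code, and the helper name, scope filter and use of the tablet id as the rotation key below are assumptions, not the real implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Replica:
    host_id: str
    dc: str

def pick_primary(tablet_id: int, replicas: list[Replica],
                 scope_dc: str | None = None) -> Replica:
    """Pick the primary replica for one tablet from the replicas that are
    eligible under the given scope (here modeled as an optional DC filter)."""
    eligible = [r for r in replicas if scope_dc is None or r.dc == scope_dc]
    if not eligible:
        raise ValueError("no eligible replicas in scope")
    # Rotate primaryship by tablet id instead of always taking the same slot,
    # so consecutive tablets spread their primaries over all eligible replicas
    # rather than piling onto the same few nodes.
    return eligible[tablet_id % len(eligible)]
```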
The PR also extends the testing scenarios in `test_backup.py` so that restore is exercised for a set of topologies, for all combinations of scope, primary_replica_only and min_tablet_counts.
Most of the work was done by @bhalevy [here](https://github.com/scylladb/scylladb/compare/master...bhalevy:scylla:load-balance-primary-replica); this PR just split it up and added touch-ups here and there.
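For illustration, such a combination matrix could be parametrized roughly like this (the parameter values and test name are placeholders, not the actual contents of `test_backup.py`):

```python
import itertools
import pytest

SCOPES = ["all", "dc", "rack", "node"]   # placeholder scope values
PRIMARY_REPLICA_ONLY = [False, True]
MIN_TABLET_COUNTS = [None, 32]           # placeholder tablet counts

@pytest.mark.parametrize(
    "scope,primary_replica_only,min_tablet_count",
    list(itertools.product(SCOPES, PRIMARY_REPLICA_ONLY, MIN_TABLET_COUNTS)),
)
def test_restore_with_streaming_scopes(scope, primary_replica_only, min_tablet_count):
    # Placeholder body: create a dataset, back it up, restore it with the
    # given combination, then check how mutations are spread over replicas.
    ...
```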
Fixes #27281
Closes scylladb/scylladb#27397
* github.com:scylladb/scylladb:
test: reduce dataset and number of test cases for debug builds
test: bump repair timeout up, it's sometimes not enough in CI
test: refactor test_refresh.py to match test_restore_with_streaming_scopes.
test: extend test_restore_with_streaming_scopes
test: Adjust test_restore_primary_replica_different_dc_scope_all
test: Refactor restoring code in test_backup to match SM pattern
test: add check_mutation_replicas calls after fresh creation of dataset
test: extend create_dataset to accept consistency_level
test: refactor check_mutation_replicas so it's more readable
test: make create_dataset async and refactor so it's configurable
test: use defaultdict in collect_mutations
test: add log marks to facilitate reusing server for restore
locator: tablets: Distribute data evenly among primary replicas during restore