Compare commits

8 commits, comparing `debug_form`...`copilot/fi`

| Author | SHA1 | Date |
|---|---|---|
|  | 309f1994db |  |
|  | 70e83c7687 |  |
|  | 0f0d30cc34 |  |
|  | 5faee478c4 |  |
|  | 88fb5130b3 |  |
|  | e5db893125 |  |
|  | 829b85ee46 |  |
|  | 009ab048f1 |  |
**.github/CODEOWNERS** (vendored; 9 changes)

```diff
@@ -1,5 +1,5 @@
 # AUTH
-auth/* @nuivall
+auth/* @nuivall @ptrsmrn

 # CACHE
 row_cache* @tgrabiec
@@ -25,11 +25,11 @@ compaction/* @raphaelsc
 transport/*

 # CQL QUERY LANGUAGE
-cql3/* @tgrabiec @nuivall
+cql3/* @tgrabiec @nuivall @ptrsmrn

 # COUNTERS
-counters* @nuivall
-tests/counter_test* @nuivall
+counters* @nuivall @ptrsmrn
+tests/counter_test* @nuivall @ptrsmrn

 # DOCS
 docs/* @annastuchlik @tzach
@@ -57,6 +57,7 @@ repair/* @tgrabiec @asias

 # SCHEMA MANAGEMENT
 db/schema_tables* @tgrabiec
 db/legacy_schema_migrator* @tgrabiec
 service/migration* @tgrabiec
 schema* @tgrabiec
```
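As a rough illustration of how patterns like those above route review requests, here is a minimal sketch. The helper, the owner mapping, and the use of `fnmatch` are my own illustration: real CODEOWNERS matching uses gitignore-style rules with last-match-wins semantics, which `fnmatch` only approximates.

```python
from fnmatch import fnmatch

# Hypothetical subset of the ownership table shown in the diff above.
owners = {
    "auth/*": ["@nuivall", "@ptrsmrn"],
    "cql3/*": ["@tgrabiec", "@nuivall", "@ptrsmrn"],
    "docs/*": ["@annastuchlik", "@tzach"],
}

def reviewers_for(path: str) -> list[str]:
    # Last matching pattern wins in real CODEOWNERS; iterating in
    # insertion order and keeping the last hit approximates that.
    result: list[str] = []
    for pattern, names in owners.items():
        if fnmatch(path, pattern):
            result = names
    return result

print(reviewers_for("auth/service.cc"))  # -> ['@nuivall', '@ptrsmrn']
```

A path matching no pattern simply gets no reviewers, which mirrors GitHub's behavior of requesting nobody.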
**.github/copilot-instructions.md** (vendored; deleted, 101 lines removed)

````markdown
# ScyllaDB Development Instructions

## Project Context
High-performance distributed NoSQL database. Core values: performance, correctness, readability.

## Build System

### Modern Build (configure.py + ninja)
```bash
# Configure (run once per mode, or when switching modes)
./configure.py --mode=<mode>   # mode: dev, debug, release, sanitize

# Build everything
ninja <mode>-build             # e.g., ninja dev-build

# Build Scylla binary only (sufficient for Python integration tests)
ninja build/<mode>/scylla

# Build specific test
ninja build/<mode>/test/boost/<test_name>
```

## Running Tests

### C++ Unit Tests
```bash
# Run all tests in a file
./test.py --mode=<mode> test/<suite>/<test_name>.cc

# Run a single test case from a file
./test.py --mode=<mode> test/<suite>/<test_name>.cc::<test_case_name>

# Examples
./test.py --mode=dev test/boost/memtable_test.cc
./test.py --mode=dev test/raft/raft_server_test.cc::test_check_abort_on_client_api
```

**Important:**
- Use the full path with the `.cc` extension (e.g., `test/boost/test_name.cc`, not `boost/test_name`)
- To run a single test case, append `::<test_case_name>` to the file path
- If you encounter permission issues with cgroup metric gathering, add the `--no-gather-metrics` flag

**Rebuilding Tests:**
- test.py does NOT automatically rebuild when test source files are modified
- Many tests are part of composite binaries (e.g., `combined_tests` in test/boost contains multiple test files)
- To find which binary contains a test, check `configure.py` in the repository root (primary source) or `test/<suite>/CMakeLists.txt`
- To rebuild a specific test binary: `ninja build/<mode>/test/<suite>/<binary_name>`
- Examples:
  - `ninja build/dev/test/boost/combined_tests` (contains group0_voter_calculator_test.cc and others)
  - `ninja build/dev/test/raft/replication_test` (standalone Raft test)

### Python Integration Tests
```bash
# Only requires the Scylla binary (a full build is usually not needed)
ninja build/<mode>/scylla

# Run all tests in a file
./test.py --mode=<mode> test/<suite>/<test_name>.py

# Run a single test case from a file
./test.py --mode=<mode> test/<suite>/<test_name>.py::<test_function_name>

# Run all tests in a directory
./test.py --mode=<mode> test/<suite>/

# Examples
./test.py --mode=dev test/alternator/
./test.py --mode=dev test/cluster/test_raft_voters.py::test_raft_limited_voters_retain_coordinator
./test.py --mode=dev test/cqlpy/test_json.py

# Optional flags
./test.py --mode=dev test/cluster/test_raft_no_quorum.py -v          # Verbose output
./test.py --mode=dev test/cluster/test_raft_no_quorum.py --repeat 5  # Repeat test 5 times
```

**Important:**
- Use the full path with the `.py` extension (e.g., `test/cluster/test_raft_no_quorum.py`, not `cluster/test_raft_no_quorum`)
- To run a single test case, append `::<test_function_name>` to the file path
- Add `-v` for verbose output
- Add `--repeat <num>` to repeat a test multiple times
- After modifying C++ source files, rebuild only the Scylla binary for Python tests; building the entire repository is unnecessary

## Code Philosophy
- Performance matters in hot paths (data read/write, inner loops)
- Self-documenting code through clear naming
- Comments explain "why", not "what"
- Prefer the standard library over custom implementations
- Strive for simplicity and clarity; add complexity only when clearly justified
- Question requests: don't blindly implement them. Evaluate trade-offs, identify issues, and suggest better alternatives when appropriate
- Consider different approaches, weigh pros and cons, and recommend the best fit for the specific context

## Test Philosophy
- Performance matters. Tests should run as quickly as possible. Sleeps in the code are highly discouraged and should be avoided, to reduce run time and flakiness.
- Stability matters. Tests should be stable. New tests should be executed at least 100 times to ensure they pass 100 out of 100 (use `--repeat 100 --max-failures 1` when running them).
- Unit tests should ideally test one thing and one thing only.
- Tests for bug fixes should be run before the fix (showing the failure) and after it (showing they now pass).
- Tests for bug fixes should state in a comment which bug (GitHub or Jira issue) they cover.
- Debug builds are always slower, so if needed, reduce the number of iterations, rows, data, cycles, etc. in debug mode.
- Tests should strive to be repeatable, and not use random input that makes their results unpredictable.
- Tests should consume as few resources as possible. Prefer running tests on a single node if that is sufficient, for example.
````
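The "100 out of 100" rule in the deleted instructions above can be motivated with a quick back-of-the-envelope calculation (my own illustration, not from the source): even a mildly flaky test has a real chance of surviving a single run, so only many repeats expose it.

```python
# Probability that a test with per-run failure rate p passes n
# consecutive runs: each run passes independently with chance (1 - p).
def pass_all(p: float, n: int = 100) -> float:
    return (1 - p) ** n

# A 1%-flaky test still sneaks through 100 repeats about 37% of the
# time, while a 5%-flaky test almost never does.
print(f"{pass_all(0.01):.3f}")
print(f"{pass_all(0.05):.3f}")
```

This is why a single green run of a new test proves little; `--repeat 100 --max-failures 1` makes modest flakiness rates very likely to surface.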
**.github/dependabot.yml** (vendored; 2 changes)

```diff
@@ -1,6 +1,6 @@
 version: 2
 updates:
-  - package-ecosystem: "uv"
+  - package-ecosystem: "pip"
     directory: "/docs"
     schedule:
       interval: "daily"
```
**.github/instructions/cpp.instructions.md** (vendored; deleted, 115 lines removed)

````markdown
---
applyTo: "**/*.{cc,hh}"
---

# C++ Guidelines

**Important:** Always match the style and conventions of existing code in the file and directory.

## Memory Management
- Prefer stack allocation whenever possible
- Use `std::unique_ptr` by default for dynamic allocations
- `new`/`delete` are forbidden (use RAII)
- Use `seastar::lw_shared_ptr` or `seastar::shared_ptr` for shared ownership within the same shard
- Use `seastar::foreign_ptr` for cross-shard sharing
- Avoid `std::shared_ptr` except when interfacing with external C++ APIs
- Avoid raw pointers except for non-owning references or C API interop

## Seastar Asynchronous Programming
- Use `seastar::future<T>` for all async operations
- Prefer coroutines (`co_await`, `co_return`) over `.then()` chains for readability
- Coroutines are preferred over `seastar::do_with()` for managing temporary state
- In hot paths where futures are ready, continuations may be more efficient than coroutines
- Chain futures with `.then()`; don't block with `.get()` (unless in a `seastar::thread` context)
- All I/O must be asynchronous (no blocking calls)
- Use `seastar::gate` for shutdown coordination
- Use `seastar::semaphore` for resource limiting (not `std::mutex`)
- Break long loops with `maybe_yield()` to avoid reactor stalls

## Coroutines
```cpp
seastar::future<T> func() {
    auto result = co_await async_operation();
    co_return result;
}
```

## Error Handling
- Throw exceptions for errors (futures propagate them automatically)
- In the data path: avoid exceptions; use `std::expected` (or `boost::outcome`) instead
- Use standard exceptions (`std::runtime_error`, `std::invalid_argument`)
- Database-specific: throw appropriate schema/query exceptions

## Performance
- Pass large objects by `const&` or `&&` (move semantics)
- Use `std::string_view` for non-owning string references
- Avoid copies: prefer move semantics
- Use `utils::chunked_vector` instead of `std::vector` for large allocations (>128KB)
- Minimize dynamic allocations in hot paths

## Database-Specific Types
- Use `schema_ptr` for schema references
- Use `mutation` and `mutation_partition` for data modifications
- Use `partition_key` and `clustering_key` for keys
- Use `api::timestamp_type` for database timestamps
- Use `gc_clock` for garbage collection timing

## Style
- C++23 standard (prefer modern features, especially coroutines)
- Use `auto` when the type is obvious from the RHS
- Avoid `auto` when it obscures the type
- Use range-based for loops: `for (const auto& item : container)`
- Use standard algorithms when they clearly simplify code (e.g., replacing 10-line loops)
- Avoid chaining multiple algorithms if a straightforward loop is clearer
- Mark functions and variables `const` whenever possible
- Use scoped enums: `enum class` (not unscoped `enum`)

## Headers
- Use `#pragma once`
- Include order: own header, C++ std, Seastar, Boost, project headers
- Forward declare when possible
- Never `using namespace` in headers (exception: `using namespace seastar` is globally available via `seastarx.hh`)

## Documentation
- Public APIs require clear documentation
- Implementation details should be self-evident from code
- Use `///` or Doxygen `/** */` for public documentation, `//` for implementation notes; follow the existing style

## Naming
- `snake_case` for most identifiers (classes, functions, variables, namespaces)
- Template parameters: `CamelCase` (e.g., `template<typename ValueType>`)
- Member variables: prefix with `_` (e.g., `int _count;`)
- Structs (value-only): no `_` prefix on members
- Constants and `constexpr`: `snake_case` (e.g., `static constexpr int max_size = 100;`)
- Files: `.hh` for headers, `.cc` for source

## Formatting
- 4 spaces indentation, never tabs
- Opening braces on the same line as the control structure (except namespaces)
- Space after keywords: `if (`, `while (`, `return `
- Whitespace around operators matches precedence: `*a + *b`, not `* a+* b`
- Line length: keep reasonable (<160 chars); use continuation lines with double indent if needed
- Brace all nested scopes, even single statements
- Minimal patches: only format code you modify, never reformat entire files

## Logging
- Use structured logging with appropriate levels: DEBUG, INFO, WARN, ERROR
- Include context in log messages (e.g., request IDs)
- Never log sensitive data (credentials, PII)

## Forbidden
- `malloc`/`free`
- `printf` family (use logging or fmt)
- Raw pointers for ownership
- `using namespace` in headers
- Blocking operations: `std::sleep`, `std::read`, `std::mutex` (use Seastar equivalents)
- `std::atomic` (reserved for very special circumstances only)
- Macros (use `inline`, `constexpr`, or templates instead)

## Testing
When modifying existing code, follow TDD: create/update the test first, then implement.
- Examine existing tests for style and structure
- Use the Boost.Test framework
- Use `SEASTAR_THREAD_TEST_CASE` for Seastar asynchronous tests
- Aim for high code coverage, especially for new features and bug fixes
- Maintain bisectability: all tests must pass in every commit. Mark failing tests with `BOOST_FAIL()` or similar, then fix in a subsequent commit
````
**.github/instructions/python.instructions.md** (vendored; deleted, 51 lines removed)

````markdown
---
applyTo: "**/*.py"
---

# Python Guidelines

**Important:** Match existing code style. Some directories (like `test/cqlpy` and `test/alternator`) prefer simplicity over type hints and docstrings.

## Style
- Follow PEP 8
- Use type hints for function signatures (unless the directory style omits them)
- Use f-strings for formatting
- Line length: 160 characters max
- 4 spaces for indentation

## Imports
Order: standard library, third-party, local imports
```python
import os
import sys

import pytest
from cassandra.cluster import Cluster

from test.utils import setup_keyspace
```

Never use `from module import *`

## Documentation
All public functions/classes need docstrings (unless the current directory's conventions omit them):
```python
def my_function(arg1: str, arg2: int) -> bool:
    """
    Brief summary of function purpose.

    Args:
        arg1: Description of first argument.
        arg2: Description of second argument.

    Returns:
        Description of return value.
    """
    pass
```

## Testing Best Practices
- Maintain bisectability: all tests must pass in every commit
- Mark currently-failing tests with `@pytest.mark.xfail`; unmark when fixed
- Use descriptive names that convey intent
- Docstrings/comments should explain what the test verifies and why, whether it reproduces a specific issue, and how it fits into the larger test suite
````
**.github/scripts/auto-backport.py** (vendored; 37 changes)

```diff
@@ -62,7 +62,7 @@ def create_pull_request(repo, new_branch_name, base_branch_name, pr, backport_pr
     if is_draft:
         labels_to_add.append("conflicts")
         pr_comment = f"@{pr.user.login} - This PR was marked as draft because it has conflicts\n"
-        pr_comment += "Please resolve them and remove the 'conflicts' label. The PR will be made ready for review automatically."
+        pr_comment += "Please resolve them and mark this PR as ready for review"
         backport_pr.create_issue_comment(pr_comment)

     # Apply all labels at once if we have any
@@ -142,31 +142,20 @@ def backport(repo, pr, version, commits, backport_base_branch, is_collaborator):


 def with_github_keyword_prefix(repo, pr):
-    pattern = rf"(?:fix(?:|es|ed))\s*:?\s*(?:(?:(?:{repo.full_name})?#)|https://github\.com/{repo.full_name}/issues/)(\d+)"
-    match = re.findall(pattern, pr.body, re.IGNORECASE)
-    if not match:
-        for commit in pr.get_commits():
-            match = re.findall(pattern, commit.commit.message, re.IGNORECASE)
-            if match:
-                print(f'{pr.number} has a valid close reference in commit message {commit.sha}')
-                break
-        if not match:
-            print(f'No valid close reference for {pr.number}')
-            return False
-    else:
-        return True
+    # GitHub issue pattern: #123, scylladb/scylladb#123, or full GitHub URLs
+    github_pattern = rf"(?:fix(?:|es|ed))\s*:?\s*(?:(?:(?:{repo.full_name})?#)|https://github\.com/{repo.full_name}/issues/)(\d+)"
+    # JIRA issue pattern: PKG-92 or https://scylladb.atlassian.net/browse/PKG-92
+    jira_pattern = r"(?:fix(?:|es|ed))\s*:?\s*(?:(?:https://scylladb\.atlassian\.net/browse/)?([A-Z]+-\d+))"
+    # Check PR body for GitHub issues
+    github_match = re.findall(github_pattern, pr.body, re.IGNORECASE)
+    # Check PR body for JIRA issues
+    jira_match = re.findall(jira_pattern, pr.body, re.IGNORECASE)
+    match = github_match or jira_match
+    if match:
+        return True
+    for commit in pr.get_commits():
+        github_match = re.findall(github_pattern, commit.commit.message, re.IGNORECASE)
+        jira_match = re.findall(jira_pattern, commit.commit.message, re.IGNORECASE)
+        if github_match or jira_match:
+            print(f'{pr.number} has a valid close reference in commit message {commit.sha}')
+            return True
+    print(f'No valid close reference for {pr.number}')
+    return False


 def main():
     args = parse_args()
```
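The two patterns in `with_github_keyword_prefix()` above can be exercised in isolation. This is a minimal sketch assuming `repo.full_name` is `"scylladb/scylladb"` (a stand-in for illustration; the script fills it in from the PyGithub repo object):

```python
import re

# Patterns copied from the diff above, with repo.full_name filled in
# as "scylladb/scylladb" for illustration only.
repo_full_name = "scylladb/scylladb"
github_pattern = rf"(?:fix(?:|es|ed))\s*:?\s*(?:(?:(?:{repo_full_name})?#)|https://github\.com/{repo_full_name}/issues/)(\d+)"
jira_pattern = r"(?:fix(?:|es|ed))\s*:?\s*(?:(?:https://scylladb\.atlassian\.net/browse/)?([A-Z]+-\d+))"

# "Fixes", "fixed", etc. followed by a GitHub or JIRA reference match;
# a plain mention without a fix keyword does not.
print(re.findall(github_pattern, "Fixes: #123", re.IGNORECASE))
print(re.findall(github_pattern, "fixed scylladb/scylladb#42", re.IGNORECASE))
print(re.findall(jira_pattern, "Fixes https://scylladb.atlassian.net/browse/PKG-92", re.IGNORECASE))
print(re.findall(github_pattern, "Refs #123", re.IGNORECASE))
```

Note that `re.IGNORECASE` applies to both patterns, so the fix keyword matches in any case; the captured group is the issue number or JIRA key.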
*(workflow file name not captured; hunks from the "check-fixes-prefix" job)*

```diff
@@ -8,9 +8,6 @@ on:
 jobs:
   check-fixes-prefix:
     runs-on: ubuntu-latest
-    permissions:
-      contents: read
-      issues: write
     steps:
       - name: Check PR body for "Fixes" prefix patterns
         uses: actions/github-script@v7
@@ -21,7 +18,7 @@ jobs:

           // Regular expression pattern to check for "Fixes" prefix
           // Adjusted to dynamically insert the repository full name
-          const pattern = `Fixes:? ((?:#|${repo.replace('/', '\\/')}#|https://github\\.com/${repo.replace('/', '\\/')}/issues/)(\\d+)|(?:https://scylladb\\.atlassian\\.net/browse/)?([A-Z]+-\\d+))`;
+          const pattern = `Fixes:? (?:#|${repo.replace('/', '\\/')}#|https://github\\.com/${repo.replace('/', '\\/')}/issues/)(\\d+)`;
           const regex = new RegExp(pattern);

           if (!regex.test(body)) {
```
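The pattern change in the hunk above narrows the accepted "Fixes" references to GitHub issues only. Transliterated into Python for illustration (the workflow itself runs JavaScript via actions/github-script, and `"scylladb/scylladb"` here is a stand-in repo name):

```python
import re

repo = "scylladb/scylladb"
# Mirrors repo.replace('/', '\\/') in the workflow's template string.
escaped = repo.replace("/", r"\/")
pattern = rf"Fixes:? (?:#|{escaped}#|https://github\.com/{escaped}/issues/)(\d+)"

# GitHub-style references still match...
print(bool(re.search(pattern, "Fixes: #123")))
print(bool(re.search(pattern, "Fixes scylladb/scylladb#456")))
# ...but JIRA references, accepted by the longer pattern, no longer do.
print(bool(re.search(pattern, "Fixes https://scylladb.atlassian.net/browse/PKG-92")))
```

Unlike the Python script's patterns, this one is case-sensitive and anchored to the literal word `Fixes`, matching how the workflow constructs its `RegExp` without flags.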
**.github/workflows/call_backport_with_jira.yaml** (vendored; deleted, 53 lines removed)

```yaml
name: Backport with Jira Integration

on:
  push:
    branches:
      - master
      - next-*.*
      - branch-*.*
  pull_request_target:
    types: [labeled, closed]
    branches:
      - master
      - next
      - next-*.*
      - branch-*.*

jobs:
  backport-on-push:
    if: github.event_name == 'push'
    uses: scylladb/github-automation/.github/workflows/backport-with-jira.yaml@main
    with:
      event_type: 'push'
      base_branch: ${{ github.ref }}
      commits: ${{ github.event.before }}..${{ github.sha }}
    secrets:
      gh_token: ${{ secrets.AUTO_BACKPORT_TOKEN }}
      jira_auth: ${{ secrets.USER_AND_KEY_FOR_JIRA_AUTOMATION }}

  backport-on-label:
    if: github.event_name == 'pull_request_target' && github.event.action == 'labeled'
    uses: scylladb/github-automation/.github/workflows/backport-with-jira.yaml@main
    with:
      event_type: 'labeled'
      base_branch: refs/heads/${{ github.event.pull_request.base.ref }}
      pull_request_number: ${{ github.event.pull_request.number }}
      head_commit: ${{ github.event.pull_request.base.sha }}
      label_name: ${{ github.event.label.name }}
      pr_state: ${{ github.event.pull_request.state }}
    secrets:
      gh_token: ${{ secrets.AUTO_BACKPORT_TOKEN }}
      jira_auth: ${{ secrets.USER_AND_KEY_FOR_JIRA_AUTOMATION }}

  backport-chain:
    if: github.event_name == 'pull_request_target' && github.event.action == 'closed' && github.event.pull_request.merged == true
    uses: scylladb/github-automation/.github/workflows/backport-with-jira.yaml@main
    with:
      event_type: 'chain'
      base_branch: refs/heads/${{ github.event.pull_request.base.ref }}
      pull_request_number: ${{ github.event.pull_request.number }}
      pr_body: ${{ github.event.pull_request.body }}
    secrets:
      gh_token: ${{ secrets.AUTO_BACKPORT_TOKEN }}
      jira_auth: ${{ secrets.USER_AND_KEY_FOR_JIRA_AUTOMATION }}
```
**.github/workflows/call_jira_status_in_progress.yml** (vendored; new file, 12 lines)

```yaml
name: Call Jira Status In Progress

on:
  pull_request_target:
    types: [opened]

jobs:
  call-jira-status-in-progress:
    uses: scylladb/github-automation/.github/workflows/main_update_jira_status_to_in_progress.yml@main
    secrets:
      caller_jira_auth: ${{ secrets.USER_AND_KEY_FOR_JIRA_AUTOMATION }}
```
**.github/workflows/call_jira_status_in_review.yml** (vendored; new file, 12 lines)

```yaml
name: Call Jira Status In Review

on:
  pull_request_target:
    types: [ready_for_review, review_requested]

jobs:
  call-jira-status-in-review:
    uses: scylladb/github-automation/.github/workflows/main_update_jira_status_to_in_review.yml@main
    secrets:
      caller_jira_auth: ${{ secrets.USER_AND_KEY_FOR_JIRA_AUTOMATION }}
```
**.github/workflows/call_jira_status_ready_for_merge.yml** (vendored; new file, 12 lines)

```yaml
name: Call Jira Status Ready For Merge

on:
  pull_request_target:
    types: [labeled]

jobs:
  call-jira-status-update:
    uses: scylladb/github-automation/.github/workflows/main_update_jira_status_to_ready_for_merge.yml@main
    secrets:
      caller_jira_auth: ${{ secrets.USER_AND_KEY_FOR_JIRA_AUTOMATION }}
```
**.github/workflows/call_jira_sync.yml** (vendored; deleted, 18 lines removed)

```yaml
name: Sync Jira Based on PR Events

on:
  pull_request_target:
    types: [opened, edited, ready_for_review, review_requested, labeled, unlabeled, closed]

permissions:
  contents: read
  pull-requests: write
  issues: write

jobs:
  jira-sync:
    uses: scylladb/github-automation/.github/workflows/main_pr_events_jira_sync.yml@main
    with:
      caller_action: ${{ github.event.action }}
    secrets:
      caller_jira_auth: ${{ secrets.USER_AND_KEY_FOR_JIRA_AUTOMATION }}
```
*(file name not captured; deleted, 22 lines removed)*

```yaml
name: Sync Jira Based on PR Milestone Events

on:
  pull_request_target:
    types: [milestoned, demilestoned]

permissions:
  contents: read
  pull-requests: read

jobs:
  jira-sync-milestone-set:
    if: github.event.action == 'milestoned'
    uses: scylladb/github-automation/.github/workflows/main_jira_sync_pr_milestone_set.yml@main
    secrets:
      caller_jira_auth: ${{ secrets.USER_AND_KEY_FOR_JIRA_AUTOMATION }}

  jira-sync-milestone-removed:
    if: github.event.action == 'demilestoned'
    uses: scylladb/github-automation/.github/workflows/main_jira_sync_pr_milestone_removed.yml@main
    secrets:
      caller_jira_auth: ${{ secrets.USER_AND_KEY_FOR_JIRA_AUTOMATION }}
```
*(file name not captured; deleted, 14 lines removed)*

```yaml
name: Call Jira release creation for new milestone

on:
  milestone:
    types: [created, closed]

jobs:
  sync-milestone-to-jira:
    uses: scylladb/github-automation/.github/workflows/main_sync_milestone_to_jira_release.yml@main
    with:
      # Comma-separated list of Jira project keys
      jira_project_keys: "SCYLLADB,CUSTOMER,SMI,RELENG,VECTOR"
    secrets:
      caller_jira_auth: ${{ secrets.USER_AND_KEY_FOR_JIRA_AUTOMATION }}
```
*(file name not captured; deleted, 13 lines removed)*

```yaml
name: validate_pr_author_email

on:
  pull_request_target:
    types:
      - opened
      - synchronize
      - reopened

jobs:
  validate_pr_author_email:
    uses: scylladb/github-automation/.github/workflows/validate_pr_author_email.yml@main
```
*(file name not captured; deleted, 62 lines removed)*

```yaml
name: Close issues created by Scylla associates

on:
  issues:
    types: [opened, reopened]

permissions:
  issues: write

jobs:
  comment-and-close:
    runs-on: ubuntu-latest

    steps:
      - name: Comment and close if author email is scylladb.com
        uses: actions/github-script@v7
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            const issue = context.payload.issue;
            const actor = context.actor;

            // Get user data (only public email is available)
            const { data: user } = await github.rest.users.getByUsername({
              username: actor,
            });

            const email = user.email || "";
            console.log(`Actor: ${actor}, public email: ${email || "<none>"}`);

            // Only continue if email exists and ends with @scylladb.com
            if (!email || !email.toLowerCase().endsWith("@scylladb.com")) {
              console.log("User is not a scylladb.com email (or email not public); skipping.");
              return;
            }

            const owner = context.repo.owner;
            const repo = context.repo.repo;
            const issue_number = issue.number;

            const body = "Issues in this repository are closed automatically. Scylla associates should use Jira to manage issues.\nPlease move this issue to Jira https://scylladb.atlassian.net/jira/software/c/projects/SCYLLADB/list";

            // Add the comment
            await github.rest.issues.createComment({
              owner,
              repo,
              issue_number,
              body,
            });

            console.log(`Comment added to #${issue_number}`);

            // Close the issue
            await github.rest.issues.update({
              owner,
              repo,
              issue_number,
              state: "closed",
              state_reason: "not_planned"
            });

            console.log(`Issue #${issue_number} closed.`);
```
**.github/workflows/codespell.yaml** (vendored; 2 changes)

```diff
@@ -13,5 +13,5 @@ jobs:
       - uses: codespell-project/actions-codespell@master
         with:
           only_warn: 1
-          ignore_words_list: "ans,datas,fo,ser,ue,crate,nd,reenable,strat,stap,te,raison,iif,tread"
+          ignore_words_list: "ans,datas,fo,ser,ue,crate,nd,reenable,strat,stap,te,raison"
          skip: "./.git,./build,./tools,*.js,*.lock,./test,./licenses,./redis/lolwut.cc,*.svg"
```
**.github/workflows/docs-pages.yaml** (vendored; 8 changes)

```diff
@@ -18,10 +18,6 @@ on:
 jobs:
   release:
-    permissions:
-      pages: write
-      id-token: write
-      contents: write
     runs-on: ubuntu-latest
     steps:
       - name: Checkout
@@ -33,9 +29,7 @@ jobs:
       - name: Set up Python
         uses: actions/setup-python@v5
         with:
-          python-version: "3.12"
-      - name: Install uv
-        uses: astral-sh/setup-uv@v6
+          python-version: "3.10"
       - name: Set up env
         run: make -C docs FLAG="${{ env.FLAG }}" setupenv
       - name: Build docs
```
**.github/workflows/docs-pr.yaml** (vendored; 7 changes)

```diff
@@ -2,9 +2,6 @@ name: "Docs / Build PR"
 # For more information,
 # see https://sphinx-theme.scylladb.com/stable/deployment/production.html#available-workflows

-permissions:
-  contents: read

 env:
   FLAG: ${{ github.repository == 'scylladb/scylla-enterprise' && 'enterprise' || 'opensource' }}

@@ -29,9 +26,7 @@ jobs:
       - name: Set up Python
         uses: actions/setup-python@v5
         with:
-          python-version: "3.12"
-      - name: Install uv
-        uses: astral-sh/setup-uv@v6
+          python-version: "3.10"
       - name: Set up env
         run: make -C docs FLAG="${{ env.FLAG }}" setupenv
       - name: Build docs
```
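The `FLAG` expression above uses the GitHub Actions `&&`/`||` ternary idiom, which is easy to misread. In Python terms (the function name is my own, for illustration):

```python
def docs_flag(repository: str) -> str:
    # Python rendering of the Actions expression:
    #   ${{ github.repository == 'scylladb/scylla-enterprise'
    #       && 'enterprise' || 'opensource' }}
    # In Actions, A && B || C behaves like "B if A else C" as long as B
    # is truthy, which 'enterprise' always is.
    return "enterprise" if repository == "scylladb/scylla-enterprise" else "opensource"

print(docs_flag("scylladb/scylla-enterprise"))  # enterprise
print(docs_flag("scylladb/scylladb"))           # opensource
```

The truthiness caveat matters in general: if the "then" value could be empty or falsy, the `&&`/`||` chain would fall through to the "else" value.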
**.github/workflows/docs-validate-metrics.yml** (vendored; deleted, 37 lines removed)

```yaml
name: Docs / Validate metrics

permissions:
  contents: read

on:
  pull_request:
    branches:
      - master
      - enterprise
    paths:
      - '**/*.cc'
      - 'scripts/metrics-config.yml'
      - 'scripts/get_description.py'
      - 'docs/_ext/scylladb_metrics.py'

jobs:
  validate-metrics:
    runs-on: ubuntu-latest
    name: Check metrics documentation coverage

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          submodules: true

      - name: Set up Python
        uses: actions/setup-python@v6
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install PyYAML

      - name: Validate metrics
        run: python3 scripts/get_description.py --validate -c scripts/metrics-config.yml
```
**.github/workflows/iwyu.yaml** (vendored; 5 changes)

```diff
@@ -14,8 +14,7 @@ env:
   CLEANER_DIRS: test/unit exceptions alternator api auth cdc compaction db dht gms index lang message mutation mutation_writer node_ops raft redis replica service
   SEASTAR_BAD_INCLUDE_OUTPUT_PATH: build/seastar-bad-include.log

-permissions:
-  contents: read
+permissions: {}

 # cancel the in-progress run upon a repush
 concurrency:
@@ -35,6 +34,8 @@ jobs:
       - uses: actions/checkout@v4
         with:
           submodules: true
+      - run: |
+          sudo dnf -y install clang-tools-extra
       - name: Generate compilation database
         run: |
           cmake \
```
**.github/workflows/read-toolchain.yaml** (vendored; 2 changes)

```diff
@@ -10,8 +10,6 @@ on:
 jobs:
   read-toolchain:
     runs-on: ubuntu-latest
-    permissions:
-      contents: read
     outputs:
       image: ${{ steps.read.outputs.image }}
     steps:
```
53
.github/workflows/trigger-scylla-ci.yaml
vendored
53
.github/workflows/trigger-scylla-ci.yaml
vendored
@@ -1,66 +1,21 @@
name: Trigger Scylla CI Route
permissions:
  contents: read

on:
  issue_comment:
    types: [created]
  pull_request_target:
    types:
      - unlabeled

jobs:
  trigger-jenkins:
    if: (github.event_name == 'issue_comment' && github.event.comment.user.login != 'scylladbbot') || github.event.label.name == 'conflicts'
    if: github.event.comment.user.login != 'scylladbbot' && contains(github.event.comment.body, '@scylladbbot') && contains(github.event.comment.body, 'trigger-ci')
    runs-on: ubuntu-latest
    steps:
      - name: Verify Org Membership
        id: verify_author
        env:
          EVENT_NAME: ${{ github.event_name }}
          PR_AUTHOR: ${{ github.event.pull_request.user.login }}
          PR_ASSOCIATION: ${{ github.event.pull_request.author_association }}
          COMMENT_AUTHOR: ${{ github.event.comment.user.login }}
          COMMENT_ASSOCIATION: ${{ github.event.comment.author_association }}
        shell: bash
        run: |
          if [[ "$EVENT_NAME" == "pull_request_target" ]]; then
            AUTHOR="$PR_AUTHOR"
            ASSOCIATION="$PR_ASSOCIATION"
          else
            AUTHOR="$COMMENT_AUTHOR"
            ASSOCIATION="$COMMENT_ASSOCIATION"
          fi
          if [[ "$ASSOCIATION" == "MEMBER" || "$ASSOCIATION" == "OWNER" ]]; then
            echo "member=true" >> $GITHUB_OUTPUT
          else
            echo "::warning::${AUTHOR} is not a member of scylladb (association: ${ASSOCIATION}); skipping CI trigger."
            echo "member=false" >> $GITHUB_OUTPUT
          fi

      - name: Validate Comment Trigger
        if: github.event_name == 'issue_comment'
        id: verify_comment
        env:
          COMMENT_BODY: ${{ github.event.comment.body }}
        shell: bash
        run: |
          CLEAN_BODY=$(echo "$COMMENT_BODY" | grep -v '^[[:space:]]*>')

          if echo "$CLEAN_BODY" | grep -qi '@scylladbbot' && echo "$CLEAN_BODY" | grep -qi 'trigger-ci'; then
            echo "trigger=true" >> $GITHUB_OUTPUT
          else
            echo "trigger=false" >> $GITHUB_OUTPUT
          fi

      - name: Trigger Scylla-CI-Route Jenkins Job
        if: steps.verify_author.outputs.member == 'true' && (github.event_name == 'pull_request_target' || steps.verify_comment.outputs.trigger == 'true')
        env:
          JENKINS_USER: ${{ secrets.JENKINS_USERNAME }}
          JENKINS_API_TOKEN: ${{ secrets.JENKINS_TOKEN }}
          JENKINS_URL: "https://jenkins.scylladb.com"
          PR_NUMBER: "${{ github.event.issue.number || github.event.pull_request.number }}"
          PR_REPO_NAME: "${{ github.event.repository.full_name }}"
        run: |
          PR_NUMBER=${{ github.event.issue.number }}
          PR_REPO_NAME=${{ github.event.repository.full_name }}
          curl -X POST "$JENKINS_URL/job/releng/job/Scylla-CI-Route/buildWithParameters?PR_NUMBER=$PR_NUMBER&PR_REPO_NAME=$PR_REPO_NAME" \
            --user "$JENKINS_USER:$JENKINS_API_TOKEN" --fail
            --user "$JENKINS_USER:$JENKINS_API_TOKEN" --fail -i -v
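The comment trigger above strips quoted reply lines before looking for the bot mention, so quoting someone else's `@scylladbbot trigger-ci` in a reply does not re-trigger CI. A minimal standalone sketch of that filter (the `should_trigger` wrapper is hypothetical, not part of the workflow):

```shell
# Hypothetical should_trigger wrapper around the workflow's filter:
# drop quoted lines (leading '>'), then require both '@scylladbbot'
# and 'trigger-ci' in what remains (case-insensitive, like grep -qi).
should_trigger() {
  local body="$1"
  local clean
  clean=$(printf '%s\n' "$body" | grep -v '^[[:space:]]*>')
  if printf '%s\n' "$clean" | grep -qi '@scylladbbot' && \
     printf '%s\n' "$clean" | grep -qi 'trigger-ci'; then
    echo "trigger=true"
  else
    echo "trigger=false"
  fi
}

should_trigger '@scylladbbot trigger-ci'           # trigger=true
should_trigger '> quoted @scylladbbot trigger-ci'  # trigger=false
```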
242
.github/workflows/trigger_ci.yaml
vendored
@@ -1,242 +0,0 @@
name: Trigger next gating

on:
  pull_request_target:
    types: [opened, reopened, synchronize]
  issue_comment:
    types: [created]

jobs:
  trigger-ci:
    runs-on: ubuntu-latest
    steps:
      - name: Dump GitHub context
        env:
          GITHUB_CONTEXT: ${{ toJson(github) }}
        run: echo "$GITHUB_CONTEXT"
      - name: Checkout PR code
        uses: actions/checkout@v3
        with:
          fetch-depth: 0 # Needed to access full history
          ref: ${{ github.event.pull_request.head.ref }}

      - name: Fetch before commit if needed
        run: |
          if ! git cat-file -e ${{ github.event.before }} 2>/dev/null; then
            echo "Fetching before commit ${{ github.event.before }}"
            git fetch --depth=1 origin ${{ github.event.before }}
          fi

      - name: Compare commits for file changes
        if: github.action == 'synchronize'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          echo "Base: ${{ github.event.before }}"
          echo "Head: ${{ github.event.after }}"

          TREE_BEFORE=$(git show -s --format=%T ${{ github.event.before }})
          TREE_AFTER=$(git show -s --format=%T ${{ github.event.after }})

          echo "TREE_BEFORE=$TREE_BEFORE" >> $GITHUB_ENV
          echo "TREE_AFTER=$TREE_AFTER" >> $GITHUB_ENV

      - name: Check if last push has file changes
        run: |
          if [[ "${{ env.TREE_BEFORE }}" == "${{ env.TREE_AFTER }}" ]]; then
            echo "No file changes detected in the last push, only commit message edit."
            echo "has_file_changes=false" >> $GITHUB_ENV
          else
            echo "File changes detected in the last push."
            echo "has_file_changes=true" >> $GITHUB_ENV
          fi

      - name: Rule 1 - Check PR draft or conflict status
        run: |
          # Check if PR is in draft mode
          IS_DRAFT="${{ github.event.pull_request.draft }}"

          # Check if PR has 'conflict' label
          HAS_CONFLICT_LABEL="false"
          LABELS='${{ toJson(github.event.pull_request.labels) }}'
          if echo "$LABELS" | jq -r '.[].name' | grep -q "^conflict$"; then
            HAS_CONFLICT_LABEL="true"
          fi

          # Set draft_or_conflict variable
          if [[ "$IS_DRAFT" == "true" || "$HAS_CONFLICT_LABEL" == "true" ]]; then
            echo "draft_or_conflict=true" >> $GITHUB_ENV
            echo "✅ Rule 1: PR is in draft mode or has conflict label - setting draft_or_conflict=true"
          else
            echo "draft_or_conflict=false" >> $GITHUB_ENV
            echo "✅ Rule 1: PR is ready and has no conflict label - setting draft_or_conflict=false"
          fi

          echo "Draft status: $IS_DRAFT"
          echo "Has conflict label: $HAS_CONFLICT_LABEL"
          echo "Result: draft_or_conflict = $draft_or_conflict"

      - name: Rule 2 - Check labels
        run: |
          # Check if PR has P0 or P1 labels
          HAS_P0_P1_LABEL="false"
          LABELS='${{ toJson(github.event.pull_request.labels) }}'
          if echo "$LABELS" | jq -r '.[].name' | grep -E "^(P0|P1)$" > /dev/null; then
            HAS_P0_P1_LABEL="true"
          fi

          # Check if PR already has force_on_cloud label
          echo "HAS_FORCE_ON_CLOUD_LABEL=false" >> $GITHUB_ENV
          if echo "$LABELS" | jq -r '.[].name' | grep -q "^force_on_cloud$"; then
            HAS_FORCE_ON_CLOUD_LABEL="true"
            echo "HAS_FORCE_ON_CLOUD_LABEL=true" >> $GITHUB_ENV
          fi

          echo "Has P0/P1 label: $HAS_P0_P1_LABEL"
          echo "Has force_on_cloud label: $HAS_FORCE_ON_CLOUD_LABEL"

          # Add force_on_cloud label if PR has P0/P1 and doesn't already have force_on_cloud
          if [[ "$HAS_P0_P1_LABEL" == "true" && "$HAS_FORCE_ON_CLOUD_LABEL" == "false" ]]; then
            echo "✅ Rule 2: PR has P0 or P1 label - adding force_on_cloud label"
            curl -X POST \
              -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
              -H "Accept: application/vnd.github.v3+json" \
              "https://api.github.com/repos/${{ github.repository }}/issues/${{ github.event.pull_request.number }}/labels" \
              -d '{"labels":["force_on_cloud"]}'
          elif [[ "$HAS_P0_P1_LABEL" == "true" && "$HAS_FORCE_ON_CLOUD_LABEL" == "true" ]]; then
            echo "✅ Rule 2: PR has P0 or P1 label and already has force_on_cloud label - no action needed"
          else
            echo "✅ Rule 2: PR does not have P0 or P1 label - no force_on_cloud label needed"
          fi

          SKIP_UNIT_TEST_CUSTOM="false"
          if echo "$LABELS" | jq -r '.[].name' | grep -q "^ci/skip_unit-tests_custom$"; then
            SKIP_UNIT_TEST_CUSTOM="true"
          fi
          echo "SKIP_UNIT_TEST_CUSTOM=$SKIP_UNIT_TEST_CUSTOM" >> $GITHUB_ENV

      - name: Rule 3 - Analyze changed files and set build requirements
        run: |
          # Get list of changed files
          CHANGED_FILES=$(git diff --name-only ${{ github.event.pull_request.base.sha }} ${{ github.event.pull_request.head.sha }})
          echo "Changed files:"
          echo "$CHANGED_FILES"
          echo ""

          # Initialize all requirements to false
          REQUIRE_BUILD="false"
          REQUIRE_DTEST="false"
          REQUIRE_UNITTEST="false"
          REQUIRE_ARTIFACTS="false"
          REQUIRE_SCYLLA_GDB="false"

          # Check each file against patterns
          while IFS= read -r file; do
            if [[ -n "$file" ]]; then
              echo "Checking file: $file"

              # Build pattern: ^(?!scripts\/pull_github_pr.sh).*$
              # Everything except scripts/pull_github_pr.sh
              if [[ "$file" != "scripts/pull_github_pr.sh" ]]; then
                REQUIRE_BUILD="true"
                echo " ✓ Matches build pattern"
              fi

              # Dtest pattern: ^(?!test(.py|\/)|dist\/docker\/|dist\/common\/scripts\/).*$
              # Everything except test files, dist/docker/, dist/common/scripts/
              if [[ ! "$file" =~ ^test\.(py|/).*$ ]] && [[ ! "$file" =~ ^dist/docker/.*$ ]] && [[ ! "$file" =~ ^dist/common/scripts/.*$ ]]; then
                REQUIRE_DTEST="true"
                echo " ✓ Matches dtest pattern"
              fi

              # Unittest pattern: ^(?!dist\/docker\/|dist\/common\/scripts).*$
              # Everything except dist/docker/, dist/common/scripts/
              if [[ ! "$file" =~ ^dist/docker/.*$ ]] && [[ ! "$file" =~ ^dist/common/scripts.*$ ]]; then
                REQUIRE_UNITTEST="true"
                echo " ✓ Matches unittest pattern"
              fi

              # Artifacts pattern: ^(?:dist|tools\/toolchain).*$
              # Files starting with dist or tools/toolchain
              if [[ "$file" =~ ^dist.*$ ]] || [[ "$file" =~ ^tools/toolchain.*$ ]]; then
                REQUIRE_ARTIFACTS="true"
                echo " ✓ Matches artifacts pattern"
              fi

              # Scylla GDB pattern: ^(scylla-gdb.py).*$
              # Files starting with scylla-gdb.py
              if [[ "$file" =~ ^scylla-gdb\.py.*$ ]]; then
                REQUIRE_SCYLLA_GDB="true"
                echo " ✓ Matches scylla_gdb pattern"
              fi
            fi
          done <<< "$CHANGED_FILES"

          # Set environment variables
          echo "requireBuild=$REQUIRE_BUILD" >> $GITHUB_ENV
          echo "requireDtest=$REQUIRE_DTEST" >> $GITHUB_ENV
          echo "requireUnittest=$REQUIRE_UNITTEST" >> $GITHUB_ENV
          echo "requireArtifacts=$REQUIRE_ARTIFACTS" >> $GITHUB_ENV
          echo "requireScyllaGdb=$REQUIRE_SCYLLA_GDB" >> $GITHUB_ENV

          echo ""
          echo "✅ Rule 3: File analysis complete"
          echo "Build required: $REQUIRE_BUILD"
          echo "Dtest required: $REQUIRE_DTEST"
          echo "Unittest required: $REQUIRE_UNITTEST"
          echo "Artifacts required: $REQUIRE_ARTIFACTS"
          echo "Scylla GDB required: $REQUIRE_SCYLLA_GDB"

      - name: Determine Jenkins Job Name
        run: |
          if [[ "${{ github.ref_name }}" == "next" ]]; then
            FOLDER_NAME="scylla-master"
          elif [[ "${{ github.ref_name }}" == "next-enterprise" ]]; then
            FOLDER_NAME="scylla-enterprise"
          else
            VERSION=$(echo "${{ github.ref_name }}" | awk -F'-' '{print $2}')
            if [[ "$VERSION" =~ ^202[0-4]\.[0-9]+$ ]]; then
              FOLDER_NAME="enterprise-$VERSION"
            elif [[ "$VERSION" =~ ^[0-9]+\.[0-9]+$ ]]; then
              FOLDER_NAME="scylla-$VERSION"
            fi
          fi
          echo "JOB_NAME=${FOLDER_NAME}/job/scylla-ci" >> $GITHUB_ENV

      - name: Trigger Jenkins Job
        if: env.draft_or_conflict == 'false' && env.has_file_changes == 'true' && github.action == 'opened' || github.action == 'reopened'
        env:
          JENKINS_USER: ${{ secrets.JENKINS_USERNAME }}
          JENKINS_API_TOKEN: ${{ secrets.JENKINS_TOKEN }}
          JENKINS_URL: "https://jenkins.scylladb.com"
          SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}
        run: |
          PR_NUMBER=${{ github.event.issue.number }}
          PR_REPO_NAME=${{ github.event.repository.full_name }}
          echo "Triggering Jenkins Job: $JOB_NAME"
          curl -X POST \
            "$JENKINS_URL/job/$JOB_NAME/buildWithParameters? \
            PR_NUMBER=$PR_NUMBER& \
            RUN_DTEST=$REQUIRE_DTEST& \
            RUN_ONLY_SCYLLA_GDB=$REQUIRE_SCYLLA_GDB& \
            RUN_UNIT_TEST=$REQUIRE_UNITTEST& \
            FORCE_ON_CLOUD=$HAS_FORCE_ON_CLOUD_LABEL& \
            SKIP_UNIT_TEST_CUSTOM=$SKIP_UNIT_TEST_CUSTOM& \
            RUN_ARTIFACT_TESTS=$REQUIRE_ARTIFACTS" \
            --fail \
            --user "$JENKINS_USER:$JENKINS_API_TOKEN" \
            -i -v
  trigger-ci-via-comment:
    if: github.event.comment.user.login != 'scylladbbot' && contains(github.event.comment.body, '@scylladbbot') && contains(github.event.comment.body, 'trigger-ci')
    runs-on: ubuntu-latest
    steps:
      - name: Trigger Scylla-CI Jenkins Job
        env:
          JENKINS_USER: ${{ secrets.JENKINS_USERNAME }}
          JENKINS_API_TOKEN: ${{ secrets.JENKINS_TOKEN }}
          JENKINS_URL: "https://jenkins.scylladb.com"
        run: |
          PR_NUMBER=${{ github.event.issue.number }}
          PR_REPO_NAME=${{ github.event.repository.full_name }}
          curl -X POST "$JENKINS_URL/job/$JOB_NAME/buildWithParameters?PR_NUMBER=$PR_NUMBER" \
            --user "$JENKINS_USER:$JENKINS_API_TOKEN" --fail -i -v
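The Rule 3 step of the deleted workflow classifies each changed file into the CI stages it should switch on. A standalone sketch of those rules, assuming a hypothetical `classify_file` helper that prints the matched flags:

```shell
# Sketch of the removed workflow's Rule 3 file-classification logic.
# classify_file is a hypothetical helper: it prints which requirement
# flags (build/dtest/unittest/artifacts/scylla_gdb) a changed file turns on.
classify_file() {
  local file="$1" out=""
  # Build: everything except scripts/pull_github_pr.sh
  if [[ "$file" != "scripts/pull_github_pr.sh" ]]; then out+="build "; fi
  # Dtest: everything except test files, dist/docker/, dist/common/scripts/
  if [[ ! "$file" =~ ^test\.(py|/).*$ ]] && [[ ! "$file" =~ ^dist/docker/.*$ ]] \
     && [[ ! "$file" =~ ^dist/common/scripts/.*$ ]]; then
    out+="dtest "
  fi
  # Unittest: everything except dist/docker/ and dist/common/scripts
  if [[ ! "$file" =~ ^dist/docker/.*$ ]] && [[ ! "$file" =~ ^dist/common/scripts.*$ ]]; then
    out+="unittest "
  fi
  # Artifacts: files under dist or tools/toolchain
  if [[ "$file" =~ ^dist.*$ ]] || [[ "$file" =~ ^tools/toolchain.*$ ]]; then out+="artifacts "; fi
  # Scylla GDB: scylla-gdb.py itself
  if [[ "$file" =~ ^scylla-gdb\.py.*$ ]]; then out+="scylla_gdb "; fi
  echo "${out% }"
}

classify_file "cql3/query.cc"          # build dtest unittest
classify_file "dist/docker/Dockerfile" # build artifacts
```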
3
.github/workflows/trigger_jenkins.yaml
vendored
@@ -1,8 +1,5 @@
name: Trigger next gating

permissions:
  contents: read

on:
  push:
    branches:
1
.gitignore
vendored
@@ -37,3 +37,4 @@ clang_build
.idea/
nuke
rust/target
@@ -116,7 +116,6 @@ list(APPEND absl_cxx_flags
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
  list(APPEND ABSL_GCC_FLAGS ${absl_cxx_flags})
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
  list(APPEND absl_cxx_flags "-Wno-deprecated-builtins")
  list(APPEND ABSL_LLVM_FLAGS ${absl_cxx_flags})
endif()
set(ABSL_DEFAULT_LINKOPTS

@@ -164,45 +163,7 @@ file(MAKE_DIRECTORY "${scylla_gen_build_dir}")
include(add_version_library)
generate_scylla_version()

option(Scylla_USE_PRECOMPILED_HEADER "Use precompiled header for Scylla" ON)
add_library(scylla-precompiled-header STATIC exported_templates.cc)
target_link_libraries(scylla-precompiled-header PRIVATE
  absl::headers
  absl::btree
  absl::hash
  absl::raw_hash_set
  Seastar::seastar
  Snappy::snappy
  systemd
  ZLIB::ZLIB
  lz4::lz4_static
  zstd::zstd_static)
if (Scylla_USE_PRECOMPILED_HEADER)
  set(Scylla_USE_PRECOMPILED_HEADER_USE ON)
  find_program(DISTCC_EXEC NAMES distcc OPTIONAL)
  if (DISTCC_EXEC)
    if(DEFINED ENV{DISTCC_HOSTS})
      set(Scylla_USE_PRECOMPILED_HEADER_USE OFF)
      message(STATUS "Disabling precompiled header usage because distcc exists and DISTCC_HOSTS is set, assuming you're using distributed compilation.")
    else()
      file(REAL_PATH "~/.distcc/hosts" DIST_CC_HOSTS_PATH EXPAND_TILDE)
      if (EXISTS ${DIST_CC_HOSTS_PATH})
        set(Scylla_USE_PRECOMPILED_HEADER_USE OFF)
        message(STATUS "Disabling precompiled header usage because distcc and ~/.distcc/hosts exists, assuming you're using distributed compilation.")
      endif()
    endif()
  endif()
  if (Scylla_USE_PRECOMPILED_HEADER_USE)
    message(STATUS "Using precompiled header for Scylla - remember to add `sloppiness = pch_defines,time_macros` to ccache.conf, if you're using ccache.")
    target_precompile_headers(scylla-precompiled-header PRIVATE "stdafx.hh")
    target_compile_definitions(scylla-precompiled-header PRIVATE SCYLLA_USE_PRECOMPILED_HEADER)
  endif()
else()
  set(Scylla_USE_PRECOMPILED_HEADER_USE OFF)
endif()

add_library(scylla-main STATIC)

target_sources(scylla-main
  PRIVATE
    absl-flat_hash_map.cc

@@ -247,7 +208,6 @@ target_link_libraries(scylla-main
    ZLIB::ZLIB
    lz4::lz4_static
    zstd::zstd_static
    scylla-precompiled-header
  )

option(Scylla_CHECK_HEADERS

@@ -300,6 +260,7 @@ add_subdirectory(locator)
add_subdirectory(message)
add_subdirectory(mutation)
add_subdirectory(mutation_writer)
add_subdirectory(node_ops)
add_subdirectory(readers)
add_subdirectory(replica)
add_subdirectory(raft)
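The CMake block above disables the precompiled header when distributed compilation via distcc looks likely. The same heuristic can be sketched in shell (the `pch_enabled` function is hypothetical): PCH stays on unless distcc is installed and either `DISTCC_HOSTS` is set or `~/.distcc/hosts` exists.

```shell
# Hypothetical pch_enabled helper mirroring the CMake heuristic:
# report OFF when distcc is installed and either DISTCC_HOSTS is set
# or ~/.distcc/hosts exists (distributed compilation likely), else ON.
pch_enabled() {
  if command -v distcc >/dev/null 2>&1; then
    if [ -n "${DISTCC_HOSTS:-}" ] || [ -e "$HOME/.distcc/hosts" ]; then
      echo OFF
      return
    fi
  fi
  echo ON
}

pch_enabled
```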
@@ -43,7 +43,7 @@ For further information, please see:

[developer documentation]: HACKING.md
[build documentation]: docs/dev/building.md
[docker image build documentation]: dist/docker/redhat/README.md
[docker image build documentation]: dist/docker/debian/README.md

## Running Scylla

@@ -78,7 +78,7 @@ fi

# Default scylla product/version tags
PRODUCT=scylla
VERSION=2026.2.0-dev
VERSION=2026.1.0-dev

if test -f version
then
@@ -18,7 +18,6 @@ target_sources(alternator
    consumed_capacity.cc
    ttl.cc
    parsed_expression_cache.cc
    http_compression.cc
    ${cql_grammar_srcs})
target_include_directories(alternator
  PUBLIC

@@ -35,8 +34,5 @@ target_link_libraries(alternator
    idl
    absl::headers)

if (Scylla_USE_PRECOMPILED_HEADER_USE)
  target_precompile_headers(alternator REUSE_FROM scylla-precompiled-header)
endif()
check_headers(check-headers alternator
  GLOB_RECURSE ${CMAKE_CURRENT_SOURCE_DIR}/*.hh)
@@ -13,8 +13,7 @@
#include <string_view>
#include "alternator/auth.hh"
#include <fmt/format.h>
#include "db/consistency_level_type.hh"
#include "db/system_keyspace.hh"
#include "auth/password_authenticator.hh"
#include "service/storage_proxy.hh"
#include "alternator/executor.hh"
#include "cql3/selection/selection.hh"

@@ -26,8 +25,8 @@ namespace alternator {

static logging::logger alogger("alternator-auth");

future<std::string> get_key_from_roles(service::storage_proxy& proxy, std::string username) {
    schema_ptr schema = proxy.data_dictionary().find_schema(db::system_keyspace::NAME, "roles");
future<std::string> get_key_from_roles(service::storage_proxy& proxy, auth::service& as, std::string username) {
    schema_ptr schema = proxy.data_dictionary().find_schema(auth::get_auth_ks_name(as.query_processor()), "roles");
    partition_key pk = partition_key::from_single_value(*schema, utf8_type->decompose(username));
    dht::partition_range_vector partition_ranges{dht::partition_range(dht::decorate_key(*schema, pk))};
    std::vector<query::clustering_range> bounds{query::clustering_range::make_open_ended_both_sides()};

@@ -40,7 +39,7 @@ future<std::string> get_key_from_roles(service::storage_proxy& proxy, std::strin
    auto partition_slice = query::partition_slice(std::move(bounds), {}, query::column_id_vector{salted_hash_col->id, can_login_col->id}, selection->get_query_options());
    auto command = ::make_lw_shared<query::read_command>(schema->id(), schema->version(), partition_slice,
            proxy.get_max_result_size(partition_slice), query::tombstone_limit(proxy.get_tombstone_limit()));
    auto cl = db::consistency_level::LOCAL_ONE;
    auto cl = auth::password_authenticator::consistency_for_user(username);

    service::client_state client_state{service::client_state::internal_tag()};
    service::storage_proxy::coordinator_query_result qr = co_await proxy.query(schema, std::move(command), std::move(partition_ranges), cl,

@@ -20,6 +20,6 @@ namespace alternator {

using key_cache = utils::loading_cache<std::string, std::string, 1>;

future<std::string> get_key_from_roles(service::storage_proxy& proxy, std::string username);
future<std::string> get_key_from_roles(service::storage_proxy& proxy, auth::service& as, std::string username);

}
@@ -42,7 +42,7 @@ comparison_operator_type get_comparison_operator(const rjson::value& comparison_
    if (!comparison_operator.IsString()) {
        throw api_error::validation(fmt::format("Invalid comparison operator definition {}", rjson::print(comparison_operator)));
    }
    std::string op = rjson::to_string(comparison_operator);
    std::string op = comparison_operator.GetString();
    auto it = ops.find(op);
    if (it == ops.end()) {
        throw api_error::validation(fmt::format("Unsupported comparison operator {}", op));

@@ -377,8 +377,8 @@ bool check_compare(const rjson::value* v1, const rjson::value& v2, const Compara
        return cmp(unwrap_number(*v1, cmp.diagnostic), unwrap_number(v2, cmp.diagnostic));
    }
    if (kv1.name == "S") {
        return cmp(rjson::to_string_view(kv1.value),
                rjson::to_string_view(kv2.value));
        return cmp(std::string_view(kv1.value.GetString(), kv1.value.GetStringLength()),
                std::string_view(kv2.value.GetString(), kv2.value.GetStringLength()));
    }
    if (kv1.name == "B") {
        auto d_kv1 = unwrap_bytes(kv1.value, v1_from_query);

@@ -470,9 +470,9 @@ static bool check_BETWEEN(const rjson::value* v, const rjson::value& lb, const r
        return check_BETWEEN(unwrap_number(*v, diag), unwrap_number(lb, diag), unwrap_number(ub, diag), bounds_from_query);
    }
    if (kv_v.name == "S") {
        return check_BETWEEN(rjson::to_string_view(kv_v.value),
                rjson::to_string_view(kv_lb.value),
                rjson::to_string_view(kv_ub.value),
        return check_BETWEEN(std::string_view(kv_v.value.GetString(), kv_v.value.GetStringLength()),
                std::string_view(kv_lb.value.GetString(), kv_lb.value.GetStringLength()),
                std::string_view(kv_ub.value.GetString(), kv_ub.value.GetStringLength()),
                bounds_from_query);
    }
    if (kv_v.name == "B") {

@@ -618,7 +618,7 @@ conditional_operator_type get_conditional_operator(const rjson::value& req) {
// Check if the existing values of the item (previous_item) match the
// conditions given by the Expected and ConditionalOperator parameters
// (if they exist) in the request (an UpdateItem, PutItem or DeleteItem).
// This function can throw a ValidationException API error if there
// This function can throw an ValidationException API error if there
// are errors in the format of the condition itself.
bool verify_expected(const rjson::value& req, const rjson::value* previous_item) {
    const rjson::value* expected = rjson::find(req, "Expected");
@@ -8,8 +8,6 @@

#include "consumed_capacity.hh"
#include "error.hh"
#include "utils/rjson.hh"
#include <fmt/format.h>

namespace alternator {

@@ -34,18 +32,18 @@ bool consumed_capacity_counter::should_add_capacity(const rjson::value& request)
    if (!return_consumed->IsString()) {
        throw api_error::validation("Non-string ReturnConsumedCapacity field in request");
    }
    std::string_view consumed = rjson::to_string_view(*return_consumed);
    std::string consumed = return_consumed->GetString();
    if (consumed == "INDEXES") {
        throw api_error::validation("INDEXES consumed capacity is not supported");
    }
    if (consumed != "TOTAL") {
        throw api_error::validation(fmt::format("Unknown consumed capacity {}", consumed));
        throw api_error::validation("Unknown consumed capacity "+ consumed);
    }
    return true;
}

void consumed_capacity_counter::add_consumed_capacity_to_response_if_needed(rjson::value& response) const noexcept {
    if (_should_add_to_response) {
    if (_should_add_to_reponse) {
        auto consumption = rjson::empty_object();
        rjson::add(consumption, "CapacityUnits", get_consumed_capacity_units());
        rjson::add(response, "ConsumedCapacity", std::move(consumption));

@@ -28,9 +28,9 @@ namespace alternator {
class consumed_capacity_counter {
public:
    consumed_capacity_counter() = default;
    consumed_capacity_counter(bool should_add_to_response) : _should_add_to_response(should_add_to_response){}
    consumed_capacity_counter(bool should_add_to_reponse) : _should_add_to_reponse(should_add_to_reponse){}
    bool operator()() const noexcept {
        return _should_add_to_response;
        return _should_add_to_reponse;
    }

    consumed_capacity_counter& operator +=(uint64_t bytes);

@@ -44,7 +44,7 @@ public:
    uint64_t _total_bytes = 0;
    static bool should_add_capacity(const rjson::value& request);
protected:
    bool _should_add_to_response = false;
    bool _should_add_to_reponse = false;
};

class rcu_consumed_capacity_counter : public consumed_capacity_counter {
@@ -28,7 +28,6 @@ static logging::logger logger("alternator_controller");
controller::controller(
        sharded<gms::gossiper>& gossiper,
        sharded<service::storage_proxy>& proxy,
        sharded<service::storage_service>& ss,
        sharded<service::migration_manager>& mm,
        sharded<db::system_distributed_keyspace>& sys_dist_ks,
        sharded<cdc::generation_service>& cdc_gen_svc,

@@ -40,7 +39,6 @@ controller::controller(
    : protocol_server(sg)
    , _gossiper(gossiper)
    , _proxy(proxy)
    , _ss(ss)
    , _mm(mm)
    , _sys_dist_ks(sys_dist_ks)
    , _cdc_gen_svc(cdc_gen_svc)

@@ -91,7 +89,7 @@ future<> controller::start_server() {
    auto get_timeout_in_ms = [] (const db::config& cfg) -> utils::updateable_value<uint32_t> {
        return cfg.alternator_timeout_in_ms;
    };
    _executor.start(std::ref(_gossiper), std::ref(_proxy), std::ref(_ss), std::ref(_mm), std::ref(_sys_dist_ks),
    _executor.start(std::ref(_gossiper), std::ref(_proxy), std::ref(_mm), std::ref(_sys_dist_ks),
            sharded_parameter(get_cdc_metadata, std::ref(_cdc_gen_svc)), _ssg.value(),
            sharded_parameter(get_timeout_in_ms, std::ref(_config))).get();
    _server.start(std::ref(_executor), std::ref(_proxy), std::ref(_gossiper), std::ref(_auth_service), std::ref(_sl_controller)).get();

@@ -105,23 +103,11 @@ future<> controller::start_server() {
        alternator_port = _config.alternator_port();
        _listen_addresses.push_back({addr, *alternator_port});
    }
    std::optional<uint16_t> alternator_port_proxy_protocol;
    if (_config.alternator_port_proxy_protocol()) {
        alternator_port_proxy_protocol = _config.alternator_port_proxy_protocol();
        _listen_addresses.push_back({addr, *alternator_port_proxy_protocol});
    }
    std::optional<uint16_t> alternator_https_port;
    std::optional<uint16_t> alternator_https_port_proxy_protocol;
    std::optional<tls::credentials_builder> creds;
    if (_config.alternator_https_port() || _config.alternator_https_port_proxy_protocol()) {
        if (_config.alternator_https_port()) {
            alternator_https_port = _config.alternator_https_port();
            _listen_addresses.push_back({addr, *alternator_https_port});
        }
        if (_config.alternator_https_port_proxy_protocol()) {
            alternator_https_port_proxy_protocol = _config.alternator_https_port_proxy_protocol();
            _listen_addresses.push_back({addr, *alternator_https_port_proxy_protocol});
        }
    if (_config.alternator_https_port()) {
        alternator_https_port = _config.alternator_https_port();
        _listen_addresses.push_back({addr, *alternator_https_port});
        creds.emplace();
        auto opts = _config.alternator_encryption_options();
        if (opts.empty()) {

@@ -147,29 +133,19 @@ future<> controller::start_server() {
        }
    }
    _server.invoke_on_all(
            [this, addr, alternator_port, alternator_https_port, alternator_port_proxy_protocol, alternator_https_port_proxy_protocol, creds = std::move(creds)] (server& server) mutable {
        return server.init(addr, alternator_port, alternator_https_port, alternator_port_proxy_protocol, alternator_https_port_proxy_protocol, creds,
            [this, addr, alternator_port, alternator_https_port, creds = std::move(creds)] (server& server) mutable {
        return server.init(addr, alternator_port, alternator_https_port, creds,
                _config.alternator_enforce_authorization,
                _config.alternator_warn_authorization,
                _config.alternator_max_users_query_size_in_trace_output,
                &_memory_limiter.local().get_semaphore(),
                _config.max_concurrent_requests_per_shard);
    }).handle_exception([this, addr, alternator_port, alternator_https_port, alternator_port_proxy_protocol, alternator_https_port_proxy_protocol] (std::exception_ptr ep) {
        logger.error("Failed to set up Alternator HTTP server on {} port {}, TLS port {}, proxy-protocol port {}, TLS proxy-protocol port {}: {}",
                addr,
                alternator_port ? std::to_string(*alternator_port) : "OFF",
                alternator_https_port ? std::to_string(*alternator_https_port) : "OFF",
                alternator_port_proxy_protocol ? std::to_string(*alternator_port_proxy_protocol) : "OFF",
                alternator_https_port_proxy_protocol ? std::to_string(*alternator_https_port_proxy_protocol) : "OFF",
                ep);
    }).handle_exception([this, addr, alternator_port, alternator_https_port] (std::exception_ptr ep) {
        logger.error("Failed to set up Alternator HTTP server on {} port {}, TLS port {}: {}",
                addr, alternator_port ? std::to_string(*alternator_port) : "OFF", alternator_https_port ? std::to_string(*alternator_https_port) : "OFF", ep);
        return stop_server().then([ep = std::move(ep)] { return make_exception_future<>(ep); });
    }).then([addr, alternator_port, alternator_https_port, alternator_port_proxy_protocol, alternator_https_port_proxy_protocol] {
        logger.info("Alternator server listening on {}, HTTP port {}, HTTPS port {}, proxy-protocol port {}, TLS proxy-protocol port {}",
                addr,
                alternator_port ? std::to_string(*alternator_port) : "OFF",
                alternator_https_port ? std::to_string(*alternator_https_port) : "OFF",
                alternator_port_proxy_protocol ? std::to_string(*alternator_port_proxy_protocol) : "OFF",
                alternator_https_port_proxy_protocol ? std::to_string(*alternator_https_port_proxy_protocol) : "OFF");
    }).then([addr, alternator_port, alternator_https_port] {
        logger.info("Alternator server listening on {}, HTTP port {}, HTTPS port {}",
                addr, alternator_port ? std::to_string(*alternator_port) : "OFF", alternator_https_port ? std::to_string(*alternator_https_port) : "OFF");
    }).get();
    });
}

@@ -192,7 +168,7 @@ future<> controller::request_stop_server() {
    });
}

future<utils::chunked_vector<foreign_ptr<std::unique_ptr<client_data>>>> controller::get_client_data() {
future<utils::chunked_vector<client_data>> controller::get_client_data() {
    return _server.local().get_client_data();
}

@@ -15,7 +15,6 @@

namespace service {
class storage_proxy;
class storage_service;
class migration_manager;
class memory_limiter;
}

@@ -58,7 +57,6 @@ class server;
class controller : public protocol_server {
    sharded<gms::gossiper>& _gossiper;
    sharded<service::storage_proxy>& _proxy;
    sharded<service::storage_service>& _ss;
    sharded<service::migration_manager>& _mm;
    sharded<db::system_distributed_keyspace>& _sys_dist_ks;
    sharded<cdc::generation_service>& _cdc_gen_svc;

@@ -76,7 +74,6 @@ public:
    controller(
        sharded<gms::gossiper>& gossiper,
        sharded<service::storage_proxy>& proxy,
        sharded<service::storage_service>& ss,
        sharded<service::migration_manager>& mm,
        sharded<db::system_distributed_keyspace>& sys_dist_ks,
        sharded<cdc::generation_service>& cdc_gen_svc,

@@ -96,7 +93,7 @@ public:
    // This virtual function is called (on each shard separately) when the
    // virtual table "system.clients" is read. It is expected to generate a
    // list of clients connected to this server (on this shard).
    virtual future<utils::chunked_vector<foreign_ptr<std::unique_ptr<client_data>>>> get_client_data() override;
    virtual future<utils::chunked_vector<client_data>> get_client_data() override;
};

}
File diff suppressed because it is too large
@@ -17,13 +17,11 @@
|
||||
#include "service/client_state.hh"
|
||||
#include "service_permit.hh"
|
||||
#include "db/timeout_clock.hh"
|
||||
#include "db/config.hh"
|
||||
|
||||
#include "alternator/error.hh"
|
||||
#include "stats.hh"
|
||||
#include "utils/rjson.hh"
|
||||
#include "utils/updateable_value.hh"
|
||||
#include "utils/simple_value_with_expiry.hh"
|
||||
|
||||
#include "tracing/trace_state.hh"
|
||||
|
||||
@@ -42,8 +40,6 @@ namespace cql3::selection {
|
||||
|
||||
namespace service {
|
||||
class storage_proxy;
|
||||
class cas_shard;
|
||||
class storage_service;
|
||||
}
|
||||
|
||||
namespace cdc {
|
||||
@@ -60,9 +56,7 @@ class schema_builder;
|
||||
|
||||
namespace alternator {

enum class table_status;
class rmw_operation;
class put_or_delete_item;

schema_ptr get_table(service::storage_proxy& proxy, const rjson::value& request);
bool is_alternator_keyspace(const sstring& ks_name);
@@ -140,24 +134,17 @@ class expression_cache;

class executor : public peering_sharded_service<executor> {
gms::gossiper& _gossiper;
service::storage_service& _ss;
service::storage_proxy& _proxy;
service::migration_manager& _mm;
db::system_distributed_keyspace& _sdks;
cdc::metadata& _cdc_metadata;
utils::updateable_value<bool> _enforce_authorization;
utils::updateable_value<bool> _warn_authorization;
// An smp_service_group to be used for limiting the concurrency when
// forwarding Alternator request between shards - if necessary for LWT.
smp_service_group _ssg;

std::unique_ptr<parsed::expression_cache> _parsed_expression_cache;

struct describe_table_info_manager;
std::unique_ptr<describe_table_info_manager> _describe_table_info_manager;

future<> cache_newly_calculated_size_on_all_shards(schema_ptr schema, std::uint64_t size_in_bytes, std::chrono::nanoseconds ttl);
future<> fill_table_size(rjson::value &table_description, schema_ptr schema, bool deleting);
public:
using client_state = service::client_state;
// request_return_type is the return type of the executor methods, which
@@ -183,7 +170,6 @@ public:

executor(gms::gossiper& gossiper,
service::storage_proxy& proxy,
service::storage_service& ss,
service::migration_manager& mm,
db::system_distributed_keyspace& sdks,
cdc::metadata& cdc_metadata,
@@ -231,18 +217,6 @@ private:
friend class rmw_operation;

static void describe_key_schema(rjson::value& parent, const schema&, std::unordered_map<std::string,std::string> * = nullptr, const std::map<sstring, sstring> *tags = nullptr);
future<rjson::value> fill_table_description(schema_ptr schema, table_status tbl_status, service::client_state& client_state, tracing::trace_state_ptr trace_state, service_permit permit);
future<executor::request_return_type> create_table_on_shard0(service::client_state&& client_state, tracing::trace_state_ptr trace_state, rjson::value request, bool enforce_authorization, bool warn_authorization, const db::tablets_mode_t::mode tablets_mode);

future<> do_batch_write(
std::vector<std::pair<schema_ptr, put_or_delete_item>> mutation_builders,
service::client_state& client_state,
tracing::trace_state_ptr trace_state,
service_permit permit);

future<> cas_write(schema_ptr schema, service::cas_shard cas_shard, const dht::decorated_key& dk,
const std::vector<put_or_delete_item>& mutation_builders, service::client_state& client_state,
tracing::trace_state_ptr trace_state, service_permit permit);

public:
static void describe_key_schema(rjson::value& parent, const schema& schema, std::unordered_map<std::string,std::string>&, const std::map<sstring, sstring> *tags = nullptr);
@@ -290,7 +264,7 @@ bool is_big(const rjson::value& val, int big_size = 100'000);
// Check CQL's Role-Based Access Control (RBAC) permission (MODIFY,
// SELECT, DROP, etc.) on the given table. When permission is denied an
// appropriate user-readable api_error::access_denied is thrown.
future<> verify_permission(bool enforce_authorization, bool warn_authorization, const service::client_state&, const schema_ptr&, auth::permission, alternator::stats& stats);
future<> verify_permission(bool enforce_authorization, const service::client_state&, const schema_ptr&, auth::permission);

/**
 * Make return type for serializing the object "streamed",

@@ -50,7 +50,7 @@ public:
_operators.emplace_back(i);
check_depth_limit();
}
void add_dot(std::string name) {
void add_dot(std::string(name)) {
_operators.emplace_back(std::move(name));
check_depth_limit();
}
@@ -85,7 +85,7 @@ struct constant {
}
};

// "value" is a value used in the right hand side of an assignment
// "value" is is a value used in the right hand side of an assignment
// expression, "SET a = ...". It can be a constant (a reference to a value
// included in the request, e.g., ":val"), a path to an attribute from the
// existing item (e.g., "a.b[3].c"), or a function of other such values.
@@ -205,7 +205,7 @@ public:
// The supported primitive conditions are:
// 1. Binary operators - v1 OP v2, where OP is =, <>, <, <=, >, or >= and
// v1 and v2 are values - from the item (an attribute path), the query
// (a ":val" reference), or a function of the above (only the size()
// (a ":val" reference), or a function of the the above (only the size()
// function is supported).
// 2. Ternary operator - v1 BETWEEN v2 and v3 (means v1 >= v2 AND v1 <= v3).
// 3. N-ary operator - v1 IN ( v2, v3, ... )

@@ -1,301 +0,0 @@
/*
 * Copyright 2025-present ScyllaDB
 */

/*
 * SPDX-License-Identifier: LicenseRef-ScyllaDB-Source-Available-1.0
 */

#include "alternator/http_compression.hh"
#include "alternator/server.hh"
#include <seastar/coroutine/maybe_yield.hh>
#include <zlib.h>

static logging::logger slogger("alternator-http-compression");

namespace alternator {


static constexpr size_t compressed_buffer_size = 1024;
class zlib_compressor {
z_stream _zs;
temporary_buffer<char> _output_buf;
noncopyable_function<future<>(temporary_buffer<char>&&)> _write_func;
public:
zlib_compressor(bool gzip, int compression_level, noncopyable_function<future<>(temporary_buffer<char>&&)> write_func)
: _write_func(std::move(write_func)) {
memset(&_zs, 0, sizeof(_zs));
if (deflateInit2(&_zs, std::clamp(compression_level, Z_NO_COMPRESSION, Z_BEST_COMPRESSION), Z_DEFLATED,
(gzip ? 16 : 0) + MAX_WBITS, 8, Z_DEFAULT_STRATEGY) != Z_OK) {
// Should only happen if memory allocation fails
throw std::bad_alloc();
}
}
~zlib_compressor() {
deflateEnd(&_zs);
}
future<> close() {
return compress(nullptr, 0, true);
}

future<> compress(const char* buf, size_t len, bool is_last_chunk = false) {
_zs.next_in = reinterpret_cast<unsigned char*>(const_cast<char*>(buf));
_zs.avail_in = (uInt) len;
int mode = is_last_chunk ? Z_FINISH : Z_NO_FLUSH;
while(_zs.avail_in > 0 || is_last_chunk) {
co_await coroutine::maybe_yield();
if (_output_buf.empty()) {
if (is_last_chunk) {
uint32_t max_buffer_size = 0;
deflatePending(&_zs, &max_buffer_size, nullptr);
max_buffer_size += deflateBound(&_zs, _zs.avail_in) + 1;
_output_buf = temporary_buffer<char>(std::min(compressed_buffer_size, (size_t) max_buffer_size));
} else {
_output_buf = temporary_buffer<char>(compressed_buffer_size);
}
_zs.next_out = reinterpret_cast<unsigned char*>(_output_buf.get_write());
_zs.avail_out = compressed_buffer_size;
}
int e = deflate(&_zs, mode);
if (e < Z_OK) {
throw api_error::internal("Error during compression of response body");
}
if (e == Z_STREAM_END || _zs.avail_out < compressed_buffer_size / 4) {
_output_buf.trim(compressed_buffer_size - _zs.avail_out);
co_await _write_func(std::move(_output_buf));
if (e == Z_STREAM_END) {
break;
}
}
}
}
};

// Helper string_view functions for parsing Accept-Encoding header
struct case_insensitive_cmp_sv {
bool operator()(std::string_view s1, std::string_view s2) const {
return std::equal(s1.begin(), s1.end(), s2.begin(), s2.end(),
[](char a, char b) { return ::tolower(a) == ::tolower(b); });
}
};
static inline std::string_view trim_left(std::string_view sv) {
while (!sv.empty() && std::isspace(static_cast<unsigned char>(sv.front())))
sv.remove_prefix(1);
return sv;
}
static inline std::string_view trim_right(std::string_view sv) {
while (!sv.empty() && std::isspace(static_cast<unsigned char>(sv.back())))
sv.remove_suffix(1);
return sv;
}
static inline std::string_view trim(std::string_view sv) {
return trim_left(trim_right(sv));
}

inline std::vector<std::string_view> split(std::string_view text, char separator) {
std::vector<std::string_view> tokens;
if (text == "") {
return tokens;
}

while (true) {
auto pos = text.find_first_of(separator);
if (pos != std::string_view::npos) {
tokens.emplace_back(text.data(), pos);
text.remove_prefix(pos + 1);
} else {
tokens.emplace_back(text);
break;
}
}
return tokens;
}

constexpr response_compressor::compression_type response_compressor::get_compression_type(std::string_view encoding) {
for (size_t i = 0; i < static_cast<size_t>(compression_type::count); ++i) {
if (case_insensitive_cmp_sv{}(encoding, compression_names[i])) {
return static_cast<compression_type>(i);
}
}
return compression_type::unknown;
}

response_compressor::compression_type response_compressor::find_compression(std::string_view accept_encoding, size_t response_size) {
std::optional<float> ct_q[static_cast<size_t>(compression_type::count)];
ct_q[static_cast<size_t>(compression_type::none)] = std::numeric_limits<float>::min(); // enabled, but lowest priority
compression_type selected_ct = compression_type::none;

std::vector<std::string_view> entries = split(accept_encoding, ',');
for (auto& e : entries) {
std::vector<std::string_view> params = split(e, ';');
if (params.size() == 0) {
continue;
}
compression_type ct = get_compression_type(trim(params[0]));
if (ct == compression_type::unknown) {
continue; // ignore unknown encoding types
}
if (ct_q[static_cast<size_t>(ct)].has_value() && ct_q[static_cast<size_t>(ct)] != 0.0f) {
continue; // already processed this encoding
}
if (response_size < _threshold[static_cast<size_t>(ct)]) {
continue; // below threshold treat as unknown
}
for (size_t i = 1; i < params.size(); ++i) { // find "q=" parameter
auto pos = params[i].find("q=");
if (pos == std::string_view::npos) {
continue;
}
std::string_view param = params[i].substr(pos + 2);
param = trim(param);
// parse quality value
float q_value = 1.0f;
auto [ptr, ec] = std::from_chars(param.data(), param.data() + param.size(), q_value);
if (ec != std::errc() || ptr != param.data() + param.size()) {
continue;
}
if (q_value < 0.0) {
q_value = 0.0;
} else if (q_value > 1.0) {
q_value = 1.0;
}
ct_q[static_cast<size_t>(ct)] = q_value;
break; // we parsed quality value
}
if (!ct_q[static_cast<size_t>(ct)].has_value()) {
ct_q[static_cast<size_t>(ct)] = 1.0f; // default quality value
}
// keep the highest encoding (in the order, unless 'any')
if (selected_ct == compression_type::any) {
if (ct_q[static_cast<size_t>(ct)] >= ct_q[static_cast<size_t>(selected_ct)]) {
selected_ct = ct;
}
} else {
if (ct_q[static_cast<size_t>(ct)] > ct_q[static_cast<size_t>(selected_ct)]) {
selected_ct = ct;
}
}
}
if (selected_ct == compression_type::any) {
// select any not mentioned or highest quality
selected_ct = compression_type::none;
for (size_t i = 0; i < static_cast<size_t>(compression_type::compressions_count); ++i) {
if (!ct_q[i].has_value()) {
return static_cast<compression_type>(i);
}
if (ct_q[i] > ct_q[static_cast<size_t>(selected_ct)]) {
selected_ct = static_cast<compression_type>(i);
}
}
}
return selected_ct;
}

static future<chunked_content> compress(response_compressor::compression_type ct, const db::config& cfg, std::string str) {
chunked_content compressed;
auto write = [&compressed](temporary_buffer<char>&& buf) -> future<> {
compressed.push_back(std::move(buf));
return make_ready_future<>();
};
zlib_compressor compressor(ct != response_compressor::compression_type::deflate,
cfg.alternator_response_gzip_compression_level(), std::move(write));
co_await compressor.compress(str.data(), str.size(), true);
co_return compressed;
}

static sstring flatten(chunked_content&& cc) {
size_t total_size = 0;
for (const auto& chunk : cc) {
total_size += chunk.size();
}
sstring result = sstring{ sstring::initialized_later{}, total_size };
size_t offset = 0;
for (const auto& chunk : cc) {
std::copy(chunk.begin(), chunk.end(), result.begin() + offset);
offset += chunk.size();
}
return result;
}

future<std::unique_ptr<http::reply>> response_compressor::generate_reply(std::unique_ptr<http::reply> rep, sstring accept_encoding, const char* content_type, std::string&& response_body) {
response_compressor::compression_type ct = find_compression(accept_encoding, response_body.size());
if (ct != response_compressor::compression_type::none) {
rep->add_header("Content-Encoding", get_encoding_name(ct));
rep->set_content_type(content_type);
return compress(ct, cfg, std::move(response_body)).then([rep = std::move(rep)] (chunked_content compressed) mutable {
rep->_content = flatten(std::move(compressed));
return make_ready_future<std::unique_ptr<http::reply>>(std::move(rep));
});
} else {
// Note that despite the move, there is a copy here -
// as str is std::string and rep->_content is sstring.
rep->_content = std::move(response_body);
rep->set_content_type(content_type);
}
return make_ready_future<std::unique_ptr<http::reply>>(std::move(rep));
}

template<typename Compressor>
class compressed_data_sink_impl : public data_sink_impl {
output_stream<char> _out;
Compressor _compressor;
public:
template<typename... Args>
compressed_data_sink_impl(output_stream<char>&& out, Args&&... args)
: _out(std::move(out)), _compressor(std::forward<Args>(args)..., [this](temporary_buffer<char>&& buf) {
return _out.write(std::move(buf));
}) { }

future<> put(std::span<temporary_buffer<char>> data) override {
return data_sink_impl::fallback_put(data, [this] (temporary_buffer<char>&& buf) {
return do_put(std::move(buf));
});
}

private:
future<> do_put(temporary_buffer<char> buf) {
co_return co_await _compressor.compress(buf.get(), buf.size());
}
future<> close() override {
return _compressor.close().then([this] {
return _out.close();
});
}
};

executor::body_writer compress(response_compressor::compression_type ct, const db::config& cfg, executor::body_writer&& bw) {
return [bw = std::move(bw), ct, level = cfg.alternator_response_gzip_compression_level()](output_stream<char>&& out) mutable -> future<> {
output_stream_options opts;
opts.trim_to_size = true;
std::unique_ptr<data_sink_impl> data_sink_impl;
switch (ct) {
case response_compressor::compression_type::gzip:
data_sink_impl = std::make_unique<compressed_data_sink_impl<zlib_compressor>>(std::move(out), true, level);
break;
case response_compressor::compression_type::deflate:
data_sink_impl = std::make_unique<compressed_data_sink_impl<zlib_compressor>>(std::move(out), false, level);
break;
case response_compressor::compression_type::none:
case response_compressor::compression_type::any:
case response_compressor::compression_type::unknown:
on_internal_error(slogger, "Compression not selected");
default:
on_internal_error(slogger, "Unsupported compression type for data sink");
}
return bw(output_stream<char>(data_sink(std::move(data_sink_impl)), compressed_buffer_size, opts));
};
}

future<std::unique_ptr<http::reply>> response_compressor::generate_reply(std::unique_ptr<http::reply> rep, sstring accept_encoding, const char* content_type, executor::body_writer&& body_writer) {
response_compressor::compression_type ct = find_compression(accept_encoding, std::numeric_limits<size_t>::max());
if (ct != response_compressor::compression_type::none) {
rep->add_header("Content-Encoding", get_encoding_name(ct));
rep->write_body(content_type, compress(ct, cfg, std::move(body_writer)));
} else {
rep->write_body(content_type, std::move(body_writer));
}
return make_ready_future<std::unique_ptr<http::reply>>(std::move(rep));
}

} // namespace alternator
@@ -1,91 +0,0 @@
/*
 * Copyright 2025-present ScyllaDB
 */

/*
 * SPDX-License-Identifier: LicenseRef-ScyllaDB-Source-Available-1.0
 */

#pragma once

#include "alternator/executor.hh"
#include <seastar/http/httpd.hh>
#include "db/config.hh"

namespace alternator {

class response_compressor {
public:
enum class compression_type {
gzip,
deflate,
compressions_count,
any = compressions_count,
none,
count,
unknown = count
};
static constexpr std::string_view compression_names[] = {
"gzip",
"deflate",
"*",
"identity"
};

static sstring get_encoding_name(compression_type ct) {
return sstring(compression_names[static_cast<size_t>(ct)]);
}
static constexpr compression_type get_compression_type(std::string_view encoding);

sstring get_accepted_encoding(const http::request& req) {
if (get_threshold() == 0) {
return "";
}
return req.get_header("Accept-Encoding");
}
compression_type find_compression(std::string_view accept_encoding, size_t response_size);

response_compressor(const db::config& cfg)
: cfg(cfg)
,_gzip_level_observer(
cfg.alternator_response_gzip_compression_level.observe([this](int v) {
update_threshold();
}))
,_gzip_threshold_observer(
cfg.alternator_response_compression_threshold_in_bytes.observe([this](uint32_t v) {
update_threshold();
}))
{
update_threshold();
}
response_compressor(const response_compressor& rhs) : response_compressor(rhs.cfg) {}

private:
const db::config& cfg;
utils::observable<int>::observer _gzip_level_observer;
utils::observable<uint32_t>::observer _gzip_threshold_observer;
uint32_t _threshold[static_cast<size_t>(compression_type::count)];

size_t get_threshold() { return _threshold[static_cast<size_t>(compression_type::any)]; }
void update_threshold() {
_threshold[static_cast<size_t>(compression_type::none)] = std::numeric_limits<uint32_t>::max();
_threshold[static_cast<size_t>(compression_type::any)] = std::numeric_limits<uint32_t>::max();
uint32_t gzip = cfg.alternator_response_gzip_compression_level() <= 0 ? std::numeric_limits<uint32_t>::max()
: cfg.alternator_response_compression_threshold_in_bytes();
_threshold[static_cast<size_t>(compression_type::gzip)] = gzip;
_threshold[static_cast<size_t>(compression_type::deflate)] = gzip;
for (size_t i = 0; i < static_cast<size_t>(compression_type::compressions_count); ++i) {
if (_threshold[i] < _threshold[static_cast<size_t>(compression_type::any)]) {
_threshold[static_cast<size_t>(compression_type::any)] = _threshold[i];
}
}
}

public:
future<std::unique_ptr<http::reply>> generate_reply(std::unique_ptr<http::reply> rep,
sstring accept_encoding, const char* content_type, std::string&& response_body);
future<std::unique_ptr<http::reply>> generate_reply(std::unique_ptr<http::reply> rep,
sstring accept_encoding, const char* content_type, executor::body_writer&& body_writer);
};

}
@@ -282,23 +282,15 @@ std::string type_to_string(data_type type) {
return it->second;
}

std::optional<bytes> try_get_key_column_value(const rjson::value& item, const column_definition& column) {
bytes get_key_column_value(const rjson::value& item, const column_definition& column) {
std::string column_name = column.name_as_text();
const rjson::value* key_typed_value = rjson::find(item, column_name);
if (!key_typed_value) {
return std::nullopt;
throw api_error::validation(fmt::format("Key column {} not found", column_name));
}
return get_key_from_typed_value(*key_typed_value, column);
}

bytes get_key_column_value(const rjson::value& item, const column_definition& column) {
auto value = try_get_key_column_value(item, column);
if (!value) {
throw api_error::validation(fmt::format("Key column {} not found", column.name_as_text()));
}
return std::move(*value);
}

// Parses the JSON encoding for a key value, which is a map with a single
// entry whose key is the type and the value is the encoded value.
// If this type does not match the desired "type_str", an api_error::validation
@@ -388,38 +380,20 @@ clustering_key ck_from_json(const rjson::value& item, schema_ptr schema) {
return clustering_key::make_empty();
}
std::vector<bytes> raw_ck;
// Note: it's possible to get more than one clustering column here, as
// Alternator can be used to read scylla internal tables.
// FIXME: this is a loop, but we really allow only one clustering key column.
for (const column_definition& cdef : schema->clustering_key_columns()) {
auto raw_value = get_key_column_value(item, cdef);
bytes raw_value = get_key_column_value(item, cdef);
raw_ck.push_back(std::move(raw_value));
}

return clustering_key::from_exploded(raw_ck);
}

clustering_key_prefix ck_prefix_from_json(const rjson::value& item, schema_ptr schema) {
if (schema->clustering_key_size() == 0) {
return clustering_key_prefix::make_empty();
}
std::vector<bytes> raw_ck;
for (const column_definition& cdef : schema->clustering_key_columns()) {
auto raw_value = try_get_key_column_value(item, cdef);
if (!raw_value) {
break;
}
raw_ck.push_back(std::move(*raw_value));
}

return clustering_key_prefix::from_exploded(raw_ck);
}

position_in_partition pos_from_json(const rjson::value& item, schema_ptr schema) {
const bool is_alternator_ks = is_alternator_keyspace(schema->ks_name());
if (is_alternator_ks) {
return position_in_partition::for_key(ck_from_json(item, schema));
auto ck = ck_from_json(item, schema);
if (is_alternator_keyspace(schema->ks_name())) {
return position_in_partition::for_key(std::move(ck));
}

const auto region_item = rjson::find(item, scylla_paging_region);
const auto weight_item = rjson::find(item, scylla_paging_weight);
if (bool(region_item) != bool(weight_item)) {
@@ -439,9 +413,8 @@ position_in_partition pos_from_json(const rjson::value& item, schema_ptr schema)
} else {
throw std::runtime_error(fmt::format("Invalid value for weight: {}", weight_view));
}
return position_in_partition(region, weight, region == partition_region::clustered ? std::optional(ck_prefix_from_json(item, schema)) : std::nullopt);
return position_in_partition(region, weight, region == partition_region::clustered ? std::optional(std::move(ck)) : std::nullopt);
}
auto ck = ck_from_json(item, schema);
if (ck.is_empty()) {
return position_in_partition::for_partition_start();
}
@@ -496,7 +469,7 @@ const std::pair<std::string, const rjson::value*> unwrap_set(const rjson::value&
return {"", nullptr};
}
auto it = v.MemberBegin();
const std::string it_key = rjson::to_string(it->name);
const std::string it_key = it->name.GetString();
if (it_key != "SS" && it_key != "BS" && it_key != "NS") {
return {std::move(it_key), nullptr};
}

@@ -55,7 +55,7 @@ partition_key pk_from_json(const rjson::value& item, schema_ptr schema);
clustering_key ck_from_json(const rjson::value& item, schema_ptr schema);
position_in_partition pos_from_json(const rjson::value& item, schema_ptr schema);

// If v encodes a number (i.e., it is a {"N": [...]}), returns an object representing it. Otherwise,
// If v encodes a number (i.e., it is a {"N": [...]}, returns an object representing it. Otherwise,
// raises ValidationException with diagnostic.
big_decimal unwrap_number(const rjson::value& v, std::string_view diagnostic);


@@ -13,7 +13,6 @@
#include <seastar/http/function_handlers.hh>
#include <seastar/http/short_streams.hh>
#include <seastar/core/coroutine.hh>
#include <seastar/coroutine/maybe_yield.hh>
#include <seastar/util/defer.hh>
#include <seastar/util/short_streams.hh>
#include "seastarx.hh"
@@ -32,9 +31,6 @@
#include "utils/overloaded_functor.hh"
#include "utils/aws_sigv4.hh"
#include "client_data.hh"
#include "utils/updateable_value.hh"
#include <zlib.h>
#include "alternator/http_compression.hh"

static logging::logger slogger("alternator-server");

@@ -112,12 +108,9 @@ class api_handler : public handler_base {
// type applies to all replies, both success and error.
static constexpr const char* REPLY_CONTENT_TYPE = "application/x-amz-json-1.0";
public:
api_handler(const std::function<future<executor::request_return_type>(std::unique_ptr<request> req)>& _handle,
const db::config& config) : _response_compressor(config), _f_handle(
api_handler(const std::function<future<executor::request_return_type>(std::unique_ptr<request> req)>& _handle) : _f_handle(
[this, _handle](std::unique_ptr<request> req, std::unique_ptr<reply> rep) {
sstring accept_encoding = _response_compressor.get_accepted_encoding(*req);
return seastar::futurize_invoke(_handle, std::move(req)).then_wrapped(
[this, rep = std::move(rep), accept_encoding=std::move(accept_encoding)](future<executor::request_return_type> resf) mutable {
return seastar::futurize_invoke(_handle, std::move(req)).then_wrapped([this, rep = std::move(rep)](future<executor::request_return_type> resf) mutable {
if (resf.failed()) {
// Exceptions of type api_error are wrapped as JSON and
// returned to the client as expected. Other types of
@@ -137,20 +130,22 @@ public:
return make_ready_future<std::unique_ptr<reply>>(std::move(rep));
}
auto res = resf.get();
return std::visit(overloaded_functor {
std::visit(overloaded_functor {
[&] (std::string&& str) {
return _response_compressor.generate_reply(std::move(rep), std::move(accept_encoding),
REPLY_CONTENT_TYPE, std::move(str));
// Note that despite the move, there is a copy here -
// as str is std::string and rep->_content is sstring.
rep->_content = std::move(str);
rep->set_content_type(REPLY_CONTENT_TYPE);
},
[&] (executor::body_writer&& body_writer) {
return _response_compressor.generate_reply(std::move(rep), std::move(accept_encoding),
REPLY_CONTENT_TYPE, std::move(body_writer));
rep->write_body(REPLY_CONTENT_TYPE, std::move(body_writer));
},
[&] (const api_error& err) {
generate_error_reply(*rep, err);
return make_ready_future<std::unique_ptr<reply>>(std::move(rep));
}
}, std::move(res));

return make_ready_future<std::unique_ptr<reply>>(std::move(rep));
});
}) { }

@@ -179,7 +174,6 @@ protected:
slogger.trace("api_handler error case: {}", rep._content);
}

response_compressor _response_compressor;
future_handler_function _f_handle;
};

@@ -276,57 +270,24 @@ protected:
}
};

// This function increments the authentication_failures counter, and may also
// log a warn-level message and/or throw an exception, depending on what
// enforce_authorization and warn_authorization are set to.
// The username and client address are only used for logging purposes -
// they are not included in the error message returned to the client, since
// the client knows who it is.
// Note that if enforce_authorization is false, this function will return
// without throwing. So a caller that doesn't want to continue after an
// authentication_error must explicitly return after calling this function.
template<typename Exception>
static void authentication_error(alternator::stats& stats, bool enforce_authorization, bool warn_authorization, Exception&& e, std::string_view user, gms::inet_address client_address) {
stats.authentication_failures++;
if (enforce_authorization) {
if (warn_authorization) {
slogger.warn("alternator_warn_authorization=true: {} for user {}, client address {}", e.what(), user, client_address);
}
throw std::move(e);
} else {
if (warn_authorization) {
slogger.warn("If you set alternator_enforce_authorization=true the following will be enforced: {} for user {}, client address {}", e.what(), user, client_address);
}
}
}
|
||||
|
||||
future<std::string> server::verify_signature(const request& req, const chunked_content& content) {
    if (!_enforce_authorization.get() && !_warn_authorization.get()) {
    if (!_enforce_authorization) {
        slogger.debug("Skipping authorization");
        return make_ready_future<std::string>();
    }
    auto host_it = req._headers.find("Host");
    if (host_it == req._headers.end()) {
        authentication_error(_executor._stats, _enforce_authorization.get(), _warn_authorization.get(),
            api_error::invalid_signature("Host header is mandatory for signature verification"),
            "", req.get_client_address());
        return make_ready_future<std::string>();
        throw api_error::invalid_signature("Host header is mandatory for signature verification");
    }
    auto authorization_it = req._headers.find("Authorization");
    if (authorization_it == req._headers.end()) {
        authentication_error(_executor._stats, _enforce_authorization.get(), _warn_authorization.get(),
            api_error::missing_authentication_token("Authorization header is mandatory for signature verification"),
            "", req.get_client_address());
        return make_ready_future<std::string>();
        throw api_error::missing_authentication_token("Authorization header is mandatory for signature verification");
    }
    std::string host = host_it->second;
    std::string_view authorization_header = authorization_it->second;
    auto pos = authorization_header.find_first_of(' ');
    if (pos == std::string_view::npos || authorization_header.substr(0, pos) != "AWS4-HMAC-SHA256") {
        authentication_error(_executor._stats, _enforce_authorization.get(), _warn_authorization.get(),
            api_error::invalid_signature(fmt::format("Authorization header must use AWS4-HMAC-SHA256 algorithm: {}", authorization_header)),
            "", req.get_client_address());
        return make_ready_future<std::string>();
        throw api_error::invalid_signature(fmt::format("Authorization header must use AWS4-HMAC-SHA256 algorithm: {}", authorization_header));
    }
    authorization_header.remove_prefix(pos+1);
    std::string credential;
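The checks above split a SigV4 `Authorization` header into an algorithm token and a `Credential=` scope of exactly five `/`-separated components (user/access key, datestamp, region, service, and the `aws4_request` terminator). A simplified standalone sketch of that parsing, using hypothetical helper names (`has_sigv4_algorithm`, `split_on`) rather than Scylla's actual `split()`:

```cpp
#include <cassert>
#include <string>
#include <string_view>
#include <vector>

// True if the header starts with the SigV4 algorithm token followed by a
// space, e.g. "AWS4-HMAC-SHA256 Credential=..., SignedHeaders=..., Signature=...".
inline bool has_sigv4_algorithm(std::string_view header) {
    auto pos = header.find_first_of(' ');
    return pos != std::string_view::npos && header.substr(0, pos) == "AWS4-HMAC-SHA256";
}

// Split a string on a separator; the credential scope must yield 5 parts.
inline std::vector<std::string> split_on(std::string_view s, char sep) {
    std::vector<std::string> out;
    size_t start = 0;
    while (true) {
        size_t pos = s.find(sep, start);
        out.emplace_back(s.substr(start, pos - start)); // substr clamps at npos
        if (pos == std::string_view::npos) {
            break;
        }
        start = pos + 1;
    }
    return out;
}
```

The real implementation also extracts the `Credential=`, `SignedHeaders=` and `Signature=` key/value pairs, which the hunk below elides.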
@@ -361,9 +322,7 @@ future<std::string> server::verify_signature(const request& req, const chunked_c

    std::vector<std::string_view> credential_split = split(credential, '/');
    if (credential_split.size() != 5) {
        authentication_error(_executor._stats, _enforce_authorization.get(), _warn_authorization.get(),
            api_error::validation(fmt::format("Incorrect credential information format: {}", credential)), "", req.get_client_address());
        return make_ready_future<std::string>();
        throw api_error::validation(fmt::format("Incorrect credential information format: {}", credential));
    }
    std::string user(credential_split[0]);
    std::string datestamp(credential_split[1]);
@@ -374,81 +333,39 @@ future<std::string> server::verify_signature(const request& req, const chunked_c
    for (const auto& header : signed_headers) {
        signed_headers_map.emplace(header, std::string_view());
    }
    std::vector<std::string> modified_values;
    for (auto& header : req._headers) {
        std::string header_str;
        header_str.resize(header.first.size());
        std::transform(header.first.begin(), header.first.end(), header_str.begin(), ::tolower);
        auto it = signed_headers_map.find(header_str);
        if (it != signed_headers_map.end()) {
            // replace multiple spaces in the header value header.second with
            // a single space, as required by AWS SigV4 header canonicalization.
            // If we modify the value, we need to save it in modified_values
            // to keep it alive.
            std::string value;
            value.reserve(header.second.size());
            bool prev_space = false;
            bool modified = false;
            for (char ch : header.second) {
                if (ch == ' ') {
                    if (!prev_space) {
                        value += ch;
                        prev_space = true;
                    } else {
                        modified = true; // skip a space
                    }
                } else {
                    value += ch;
                    prev_space = false;
                }
            }
            if (modified) {
                modified_values.emplace_back(std::move(value));
                it->second = std::string_view(modified_values.back());
            } else {
                it->second = std::string_view(header.second);
            }
            it->second = std::string_view(header.second);
        }
    }

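The canonicalization loop above can be isolated into a small function. This is a standalone sketch mirroring its space-collapsing logic (the function name is illustrative); runs of spaces in a signed header value collapse to a single space, while leading spaces and other characters are preserved:

```cpp
#include <string>
#include <string_view>

// Collapse consecutive spaces in a header value to a single space, as
// AWS SigV4 header canonicalization requires. Mirrors the diff's loop,
// but returns a fresh string instead of tracking a "modified" flag.
inline std::string collapse_spaces(std::string_view in) {
    std::string out;
    out.reserve(in.size());
    bool prev_space = false;
    for (char ch : in) {
        if (ch == ' ') {
            if (!prev_space) {
                out += ch;   // keep the first space of a run
            }
            prev_space = true;
        } else {
            out += ch;
            prev_space = false;
        }
    }
    return out;
}
```

The real code avoids the copy when nothing changes, keeping a `string_view` into the original header instead.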
    auto cache_getter = [&proxy = _proxy] (std::string username) {
        return get_key_from_roles(proxy, std::move(username));
    auto cache_getter = [&proxy = _proxy, &as = _auth_service] (std::string username) {
        return get_key_from_roles(proxy, as, std::move(username));
    };
    return _key_cache.get_ptr(user, cache_getter).then_wrapped([this, &req, &content,
    return _key_cache.get_ptr(user, cache_getter).then([this, &req, &content,
            user = std::move(user),
            host = std::move(host),
            datestamp = std::move(datestamp),
            signed_headers_str = std::move(signed_headers_str),
            signed_headers_map = std::move(signed_headers_map),
            modified_values = std::move(modified_values),
            region = std::move(region),
            service = std::move(service),
            user_signature = std::move(user_signature)] (future<key_cache::value_ptr> key_ptr_fut) {
        key_cache::value_ptr key_ptr(nullptr);
        try {
            key_ptr = key_ptr_fut.get();
        } catch (const api_error& e) {
            authentication_error(_executor._stats, _enforce_authorization.get(), _warn_authorization.get(),
                e, user, req.get_client_address());
            return std::string();
        }
            user_signature = std::move(user_signature)] (key_cache::value_ptr key_ptr) {
        std::string signature;
        try {
            signature = utils::aws::get_signature(user, *key_ptr, std::string_view(host), "/", req._method,
                datestamp, signed_headers_str, signed_headers_map, &content, region, service, "");
        } catch (const std::exception& e) {
            authentication_error(_executor._stats, _enforce_authorization.get(), _warn_authorization.get(),
                api_error::invalid_signature(fmt::format("invalid signature: {}", e.what())),
                user, req.get_client_address());
            return std::string();
            throw api_error::invalid_signature(e.what());
        }

        if (signature != std::string_view(user_signature)) {
            _key_cache.remove(user);
            authentication_error(_executor._stats, _enforce_authorization.get(), _warn_authorization.get(),
                api_error::unrecognized_client("wrong signature"),
                user, req.get_client_address());
            return std::string();
            throw api_error::unrecognized_client("The security token included in the request is invalid.");
        }
        return user;
    });
@@ -584,108 +501,6 @@ read_entire_stream(input_stream<char>& inp, size_t length_limit) {
    co_return ret;
}

// safe_gzip_zstream is an exception-safe wrapper for zlib's z_stream.
// The "z_stream" struct is used by zlib to hold state while decompressing a
// stream of data. It allocates memory which must be freed with inflateEnd(),
// which the destructor of this class does.
class safe_gzip_zstream {
    z_stream _zs;
public:
    // If gzip is true, decode a gzip header (for "Content-Encoding: gzip").
    // Otherwise, a zlib header (for "Content-Encoding: deflate").
    safe_gzip_zstream(bool gzip = true) {
        memset(&_zs, 0, sizeof(_zs));
        if (inflateInit2(&_zs, gzip ? 16 + MAX_WBITS : MAX_WBITS) != Z_OK) {
            // Should only happen if memory allocation fails
            throw std::bad_alloc();
        }
    }
    ~safe_gzip_zstream() {
        inflateEnd(&_zs);
    }
    z_stream* operator->() {
        return &_zs;
    }
    z_stream* get() {
        return &_zs;
    }
    void reset() {
        inflateReset(&_zs);
    }
};

// ungzip() takes a chunked_content of a compressed request body, and returns
// the uncompressed content as a chunked_content. If gzip is true, we expect a
// gzip header (for "Content-Encoding: gzip"); if gzip is false, we expect a
// zlib header (for "Content-Encoding: deflate").
// If the uncompressed content exceeds length_limit, an error is thrown.
static future<chunked_content>
ungzip(chunked_content&& compressed_body, size_t length_limit, bool gzip = true) {
    chunked_content ret;
    // output_buf can be any size - when uncompressing input_buf, it doesn't
    // need to fit in a single output_buf, we'll use multiple output_buf for
    // a single input_buf if needed.
    constexpr size_t OUTPUT_BUF_SIZE = 4096;
    temporary_buffer<char> output_buf;
    safe_gzip_zstream strm(gzip);
    bool complete_stream = false; // empty input is not a valid gzip/deflate
    size_t total_out_bytes = 0;
    for (const temporary_buffer<char>& input_buf : compressed_body) {
        if (input_buf.empty()) {
            continue;
        }
        complete_stream = false;
        strm->next_in = (Bytef*) input_buf.get();
        strm->avail_in = (uInt) input_buf.size();
        do {
            co_await coroutine::maybe_yield();
            if (output_buf.empty()) {
                output_buf = temporary_buffer<char>(OUTPUT_BUF_SIZE);
            }
            strm->next_out = (Bytef*) output_buf.get();
            strm->avail_out = OUTPUT_BUF_SIZE;
            int e = inflate(strm.get(), Z_NO_FLUSH);
            size_t out_bytes = OUTPUT_BUF_SIZE - strm->avail_out;
            if (out_bytes > 0) {
                // If output_buf is nearly full, we save it as-is in ret. But
                // if it holds only a little data, it is better to copy it to a
                // small buffer.
                if (out_bytes > OUTPUT_BUF_SIZE/2) {
                    ret.push_back(std::move(output_buf).prefix(out_bytes));
                    // output_buf is now empty. If this loop finds more input,
                    // we'll allocate a new output buffer.
                } else {
                    ret.push_back(temporary_buffer<char>(output_buf.get(), out_bytes));
                }
                total_out_bytes += out_bytes;
                if (total_out_bytes > length_limit) {
                    throw api_error::payload_too_large(fmt::format("Request content length limit of {} bytes exceeded", length_limit));
                }
            }
            if (e == Z_STREAM_END) {
                // There may be more input after the first gzip stream - in
                // either this input_buf or the next one. The additional input
                // should be a second concatenated gzip. We need to allow that
                // by resetting the gzip stream and continuing the input loop
                // until there's no more input.
                strm.reset();
                if (strm->avail_in == 0) {
                    complete_stream = true;
                    break;
                }
            } else if (e != Z_OK && e != Z_BUF_ERROR) {
                // DynamoDB returns an InternalServerError when given a bad
                // gzip request body. See test test_broken_gzip_content
                throw api_error::internal("Error during gzip decompression of request body");
            }
        } while (strm->avail_in > 0 || strm->avail_out == 0);
    }
    if (!complete_stream) {
        // The gzip stream was not properly finished with Z_STREAM_END
        throw api_error::internal("Truncated gzip in request body");
    }
    co_return ret;
}

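`ungzip()` above decides, for every burst of decompressed output, whether to hand off the whole 4 KiB buffer (zero copy) or to copy a small tail into a right-sized chunk. A minimal sketch of just that heuristic, with `std::string` standing in for `seastar::temporary_buffer` and an illustrative helper name:

```cpp
#include <string>
#include <vector>
#include <utility>

constexpr size_t BUF_SIZE = 4096;

// If the buffer is more than half full, move it out wholesale (zero copy,
// keeping only the filled prefix); otherwise copy just the produced bytes
// into a right-sized chunk so the large buffer can be reused.
inline void emit_chunk(std::vector<std::string>& out, std::string& buf, size_t bytes) {
    if (bytes > BUF_SIZE / 2) {
        buf.resize(bytes);               // keep only the filled prefix
        out.push_back(std::move(buf));
        buf.clear();                     // caller allocates a fresh buffer next time
    } else {
        out.push_back(buf.substr(0, bytes)); // small tail: copy, reuse buf
    }
}
```

The trade-off is between holding a mostly-empty 4 KiB allocation alive in the result versus paying for a copy; the half-full threshold bounds both costs.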
future<executor::request_return_type> server::handle_api_request(std::unique_ptr<request> req) {
    _executor._stats.total_operations++;
    sstring target = req->get_header("X-Amz-Target");
@@ -710,7 +525,7 @@ future<executor::request_return_type> server::handle_api_request(std::unique_ptr
        ++_executor._stats.requests_blocked_memory;
    }
    auto units = co_await std::move(units_fut);
    throwing_assert(req->content_stream);
    SCYLLA_ASSERT(req->content_stream);
    chunked_content content = co_await read_entire_stream(*req->content_stream, request_content_length_limit);
    // If the request had no Content-Length, we reserved too many units
    // so we need to return some.
@@ -723,32 +538,11 @@ future<executor::request_return_type> server::handle_api_request(std::unique_ptr
        units.return_units(mem_estimate - new_mem_estimate);
    }
    auto username = co_await verify_signature(*req, content);
    // If the request is compressed, uncompress it now, after we checked
    // the signature (the signature is computed on the compressed content).
    // We apply the request_content_length_limit again to the uncompressed
    // content - we don't want to allow a tiny compressed request to
    // expand to a huge uncompressed request.
    sstring content_encoding = req->get_header("Content-Encoding");
    if (content_encoding == "gzip") {
        content = co_await ungzip(std::move(content), request_content_length_limit);
    } else if (content_encoding == "deflate") {
        content = co_await ungzip(std::move(content), request_content_length_limit, false);
    } else if (!content_encoding.empty()) {
        // DynamoDB returns a 500 error for unsupported Content-Encoding.
        // I'm not sure if this is the best error code, but let's do it too.
        // See the test test_garbage_content_encoding confirming this case.
        co_return api_error::internal("Unsupported Content-Encoding");
    }

    // As long as the system_clients_entry object is alive, this request will
    // be visible in the "system.clients" virtual table. When requested, this
    // entry will be formatted by server::ongoing_request::make_client_data().
    auto user_agent_header = co_await _connection_options_keys_and_values.get_or_load(req->get_header("User-Agent"), [] (const client_options_cache_key_type&) {
        return make_ready_future<options_cache_value_type>(options_cache_value_type{});
    });

    auto system_clients_entry = _ongoing_requests.emplace(
        req->get_client_address(), std::move(user_agent_header),
        req->get_client_address(), req->get_header("User-Agent"),
        username, current_scheduling_group(),
        req->get_protocol_name() == "https");

@@ -771,7 +565,7 @@ future<executor::request_return_type> server::handle_api_request(std::unique_ptr
    if (!username.empty()) {
        client_state.set_login(auth::authenticated_user(username));
    }
    client_state.maybe_update_per_service_level_params();
    co_await client_state.maybe_update_per_service_level_params();

    tracing::trace_state_ptr trace_state = maybe_trace_query(client_state, username, op, content, _max_users_query_size_in_trace_output.get());
    tracing::trace(trace_state, "{}", op);
@@ -793,7 +587,7 @@ future<executor::request_return_type> server::handle_api_request(std::unique_ptr
void server::set_routes(routes& r) {
    api_handler* req_handler = new api_handler([this] (std::unique_ptr<request> req) mutable {
        return handle_api_request(std::move(req));
    }, _proxy.data_dictionary().get_config());
    });

    r.put(operation_type::POST, "/", req_handler);
    r.put(operation_type::GET, "/", new health_handler(_pending_requests));
@@ -824,6 +618,7 @@ server::server(executor& exec, service::storage_proxy& proxy, gms::gossiper& gos
    , _auth_service(auth_service)
    , _sl_controller(sl_controller)
    , _key_cache(1024, 1min, slogger)
    , _enforce_authorization(false)
    , _max_users_query_size_in_trace_output(1024)
    , _enabled_servers{}
    , _pending_requests("alternator::server::pending_requests")
@@ -904,38 +699,27 @@ server::server(executor& exec, service::storage_proxy& proxy, gms::gossiper& gos
} {
}

future<> server::init(net::inet_address addr, std::optional<uint16_t> port, std::optional<uint16_t> https_port,
        std::optional<uint16_t> port_proxy_protocol, std::optional<uint16_t> https_port_proxy_protocol,
        std::optional<tls::credentials_builder> creds,
        utils::updateable_value<bool> enforce_authorization, utils::updateable_value<bool> warn_authorization, utils::updateable_value<uint64_t> max_users_query_size_in_trace_output,
future<> server::init(net::inet_address addr, std::optional<uint16_t> port, std::optional<uint16_t> https_port, std::optional<tls::credentials_builder> creds,
        utils::updateable_value<bool> enforce_authorization, utils::updateable_value<uint64_t> max_users_query_size_in_trace_output,
        semaphore* memory_limiter, utils::updateable_value<uint32_t> max_concurrent_requests) {
    _memory_limiter = memory_limiter;
    _enforce_authorization = std::move(enforce_authorization);
    _warn_authorization = std::move(warn_authorization);
    _max_concurrent_requests = std::move(max_concurrent_requests);
    _max_users_query_size_in_trace_output = std::move(max_users_query_size_in_trace_output);
    if (!port && !https_port && !port_proxy_protocol && !https_port_proxy_protocol) {
    if (!port && !https_port) {
        return make_exception_future<>(std::runtime_error("Either regular port or TLS port"
            " must be specified in order to init an alternator HTTP server instance"));
    }
    return seastar::async([this, addr, port, https_port, port_proxy_protocol, https_port_proxy_protocol, creds] {
    return seastar::async([this, addr, port, https_port, creds] {
        _executor.start().get();

        if (port || port_proxy_protocol) {
        if (port) {
            set_routes(_http_server._routes);
            _http_server.set_content_streaming(true);
            if (port) {
                _http_server.listen(socket_address{addr, *port}).get();
            }
            if (port_proxy_protocol) {
                listen_options lo;
                lo.reuse_address = true;
                lo.proxy_protocol = true;
                _http_server.listen(socket_address{addr, *port_proxy_protocol}, lo).get();
            }
            _http_server.listen(socket_address{addr, *port}).get();
            _enabled_servers.push_back(std::ref(_http_server));
        }
        if (https_port || https_port_proxy_protocol) {
        if (https_port) {
            set_routes(_https_server._routes);
            _https_server.set_content_streaming(true);

@@ -955,15 +739,7 @@ future<> server::init(net::inet_address addr, std::optional<uint16_t> port, std:
            } else {
                _credentials = creds->build_server_credentials();
            }
            if (https_port) {
                _https_server.listen(socket_address{addr, *https_port}, _credentials).get();
            }
            if (https_port_proxy_protocol) {
                listen_options lo;
                lo.reuse_address = true;
                lo.proxy_protocol = true;
                _https_server.listen(socket_address{addr, *https_port_proxy_protocol}, lo, _credentials).get();
            }
            _https_server.listen(socket_address{addr, *https_port}, _credentials).get();
            _enabled_servers.push_back(std::ref(_https_server));
        }
    });
@@ -1036,15 +812,16 @@ client_data server::ongoing_request::make_client_data() const {
    // and keep "driver_version" unset.
    cd.driver_name = _user_agent;
    // Leave "protocol_version" unset, it has no meaning in Alternator.
    // Leave "hostname", "ssl_protocol" and "ssl_cipher_suite" unset for Alternator.
    // Note: CQL sets ssl_protocol and ssl_cipher_suite via generic_server::connection base class.
    // Leave "hostname", "ssl_protocol" and "ssl_cipher_suite" unset.
    // As reported in issue #9216, we never set these fields in CQL
    // either (see cql_server::connection::make_client_data()).
    return cd;
}

future<utils::chunked_vector<foreign_ptr<std::unique_ptr<client_data>>>> server::get_client_data() {
    utils::chunked_vector<foreign_ptr<std::unique_ptr<client_data>>> ret;
future<utils::chunked_vector<client_data>> server::get_client_data() {
    utils::chunked_vector<client_data> ret;
    co_await _ongoing_requests.for_each_gently([&ret] (const ongoing_request& r) {
        ret.emplace_back(make_foreign(std::make_unique<client_data>(r.make_client_data())));
        ret.emplace_back(r.make_client_data());
    });
    co_return ret;
}

@@ -47,7 +47,6 @@ class server : public peering_sharded_service<server> {

    key_cache _key_cache;
    utils::updateable_value<bool> _enforce_authorization;
    utils::updateable_value<bool> _warn_authorization;
    utils::updateable_value<uint64_t> _max_users_query_size_in_trace_output;
    utils::small_vector<std::reference_wrapper<seastar::httpd::http_server>, 2> _enabled_servers;
    named_gate _pending_requests;
@@ -55,7 +54,6 @@ class server : public peering_sharded_service<server> {
    // though it isn't really relevant for Alternator which defines its own
    // timeouts separately. We can create this object only once.
    updateable_timeout_config _timeout_config;
    client_options_cache_type _connection_options_keys_and_values;

    alternator_callbacks_map _callbacks;

@@ -89,7 +87,7 @@ class server : public peering_sharded_service<server> {
    // is called when reading the "system.clients" virtual table.
    struct ongoing_request {
        socket_address _client_address;
        client_options_cache_entry_type _user_agent;
        sstring _user_agent;
        sstring _username;
        scheduling_group _scheduling_group;
        bool _is_https;
@@ -100,17 +98,15 @@ class server : public peering_sharded_service<server> {
public:
    server(executor& executor, service::storage_proxy& proxy, gms::gossiper& gossiper, auth::service& service, qos::service_level_controller& sl_controller);

    future<> init(net::inet_address addr, std::optional<uint16_t> port, std::optional<uint16_t> https_port,
            std::optional<uint16_t> port_proxy_protocol, std::optional<uint16_t> https_port_proxy_protocol,
            std::optional<tls::credentials_builder> creds,
            utils::updateable_value<bool> enforce_authorization, utils::updateable_value<bool> warn_authorization, utils::updateable_value<uint64_t> max_users_query_size_in_trace_output,
    future<> init(net::inet_address addr, std::optional<uint16_t> port, std::optional<uint16_t> https_port, std::optional<tls::credentials_builder> creds,
            utils::updateable_value<bool> enforce_authorization, utils::updateable_value<uint64_t> max_users_query_size_in_trace_output,
            semaphore* memory_limiter, utils::updateable_value<uint32_t> max_concurrent_requests);
    future<> stop();
    // get_client_data() is called (on each shard separately) when the virtual
    // table "system.clients" is read. It is expected to generate a list of
    // clients connected to this server (on this shard). This function is
    // called by alternator::controller::get_client_data().
    future<utils::chunked_vector<foreign_ptr<std::unique_ptr<client_data>>>> get_client_data();
    future<utils::chunked_vector<client_data>> get_client_data();
private:
    void set_routes(seastar::httpd::routes& r);
    // If verification succeeds, returns the authenticated user's username

@@ -14,6 +14,20 @@
namespace alternator {

const char* ALTERNATOR_METRICS = "alternator";
static seastar::metrics::histogram estimated_histogram_to_metrics(const utils::estimated_histogram& histogram) {
    seastar::metrics::histogram res;
    res.buckets.resize(histogram.bucket_offsets.size());
    uint64_t cumulative_count = 0;
    res.sample_count = histogram._count;
    res.sample_sum = histogram._sample_sum;
    for (size_t i = 0; i < res.buckets.size(); i++) {
        auto& v = res.buckets[i];
        v.upper_bound = histogram.bucket_offsets[i];
        cumulative_count += histogram.buckets[i];
        v.count = cumulative_count;
    }
    return res;
}
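`estimated_histogram_to_metrics()` above converts per-bucket counts into cumulative counts, because Prometheus-style histogram buckets report the running total of samples up to each upper bound. A standalone sketch of just that conversion step:

```cpp
#include <cstdint>
#include <vector>

// Convert per-bucket sample counts into cumulative counts: each output
// bucket holds the running sum of all counts up to and including it,
// as Prometheus-style histogram buckets require.
inline std::vector<uint64_t> to_cumulative(const std::vector<uint64_t>& per_bucket) {
    std::vector<uint64_t> cumulative;
    cumulative.reserve(per_bucket.size());
    uint64_t running = 0;
    for (uint64_t c : per_bucket) {
        running += c;
        cumulative.push_back(running);
    }
    return cumulative;
}
```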

static seastar::metrics::label column_family_label("cf");
static seastar::metrics::label keyspace_label("ks");
@@ -137,21 +151,21 @@ static void register_metrics_with_optional_table(seastar::metrics::metric_groups
        seastar::metrics::make_counter("batch_item_count", seastar::metrics::description("The total number of items processed across all batches"), labels,
            stats.api_operations.batch_get_item_batch_total)(op("BatchGetItem")).aggregate(aggregate_labels).set_skip_when_empty(),
        seastar::metrics::make_histogram("batch_item_count_histogram", seastar::metrics::description("Histogram of the number of items in a batch request"), labels,
            [&stats]{ return to_metrics_histogram(stats.api_operations.batch_get_item_histogram);})(op("BatchGetItem")).aggregate({seastar::metrics::shard_label}).set_skip_when_empty(),
            [&stats]{ return estimated_histogram_to_metrics(stats.api_operations.batch_get_item_histogram);})(op("BatchGetItem")).aggregate({seastar::metrics::shard_label}).set_skip_when_empty(),
        seastar::metrics::make_histogram("batch_item_count_histogram", seastar::metrics::description("Histogram of the number of items in a batch request"), labels,
            [&stats]{ return to_metrics_histogram(stats.api_operations.batch_write_item_histogram);})(op("BatchWriteItem")).aggregate({seastar::metrics::shard_label}).set_skip_when_empty(),
            [&stats]{ return estimated_histogram_to_metrics(stats.api_operations.batch_write_item_histogram);})(op("BatchWriteItem")).aggregate({seastar::metrics::shard_label}).set_skip_when_empty(),
        seastar::metrics::make_histogram("operation_size_kb", seastar::metrics::description("Histogram of item sizes involved in a request"), labels,
            [&stats]{ return to_metrics_histogram(stats.operation_sizes.get_item_op_size_kb);})(op("GetItem")).aggregate({seastar::metrics::shard_label}).set_skip_when_empty(),
            [&stats]{ return estimated_histogram_to_metrics(stats.operation_sizes.get_item_op_size_kb);})(op("GetItem")).aggregate({seastar::metrics::shard_label}).set_skip_when_empty(),
        seastar::metrics::make_histogram("operation_size_kb", seastar::metrics::description("Histogram of item sizes involved in a request"), labels,
            [&stats]{ return to_metrics_histogram(stats.operation_sizes.put_item_op_size_kb);})(op("PutItem")).aggregate({seastar::metrics::shard_label}).set_skip_when_empty(),
            [&stats]{ return estimated_histogram_to_metrics(stats.operation_sizes.put_item_op_size_kb);})(op("PutItem")).aggregate({seastar::metrics::shard_label}).set_skip_when_empty(),
        seastar::metrics::make_histogram("operation_size_kb", seastar::metrics::description("Histogram of item sizes involved in a request"), labels,
            [&stats]{ return to_metrics_histogram(stats.operation_sizes.delete_item_op_size_kb);})(op("DeleteItem")).aggregate({seastar::metrics::shard_label}).set_skip_when_empty(),
            [&stats]{ return estimated_histogram_to_metrics(stats.operation_sizes.delete_item_op_size_kb);})(op("DeleteItem")).aggregate({seastar::metrics::shard_label}).set_skip_when_empty(),
        seastar::metrics::make_histogram("operation_size_kb", seastar::metrics::description("Histogram of item sizes involved in a request"), labels,
            [&stats]{ return to_metrics_histogram(stats.operation_sizes.update_item_op_size_kb);})(op("UpdateItem")).aggregate({seastar::metrics::shard_label}).set_skip_when_empty(),
            [&stats]{ return estimated_histogram_to_metrics(stats.operation_sizes.update_item_op_size_kb);})(op("UpdateItem")).aggregate({seastar::metrics::shard_label}).set_skip_when_empty(),
        seastar::metrics::make_histogram("operation_size_kb", seastar::metrics::description("Histogram of item sizes involved in a request"), labels,
            [&stats]{ return to_metrics_histogram(stats.operation_sizes.batch_get_item_op_size_kb);})(op("BatchGetItem")).aggregate({seastar::metrics::shard_label}).set_skip_when_empty(),
            [&stats]{ return estimated_histogram_to_metrics(stats.operation_sizes.batch_get_item_op_size_kb);})(op("BatchGetItem")).aggregate({seastar::metrics::shard_label}).set_skip_when_empty(),
        seastar::metrics::make_histogram("operation_size_kb", seastar::metrics::description("Histogram of item sizes involved in a request"), labels,
            [&stats]{ return to_metrics_histogram(stats.operation_sizes.batch_write_item_op_size_kb);})(op("BatchWriteItem")).aggregate({seastar::metrics::shard_label}).set_skip_when_empty(),
            [&stats]{ return estimated_histogram_to_metrics(stats.operation_sizes.batch_write_item_op_size_kb);})(op("BatchWriteItem")).aggregate({seastar::metrics::shard_label}).set_skip_when_empty(),
    });

    seastar::metrics::label expression_label("expression");
@@ -174,16 +188,6 @@ static void register_metrics_with_optional_table(seastar::metrics::metric_groups
        seastar::metrics::make_total_operations("expression_cache_misses", stats.expression_cache.requests[stats::expression_types::PROJECTION_EXPRESSION].misses,
            seastar::metrics::description("Counts number of misses of cached expressions"), labels)(expression_label("ProjectionExpression")).aggregate(aggregate_labels).set_skip_when_empty()
    });

    // Only register the following metrics for the global metrics, not per-table
    if (!has_table) {
        metrics.add_group("alternator", {
            seastar::metrics::make_counter("authentication_failures", stats.authentication_failures,
                seastar::metrics::description("total number of authentication failures"), labels).aggregate({seastar::metrics::shard_label}).set_skip_when_empty(),
            seastar::metrics::make_counter("authorization_failures", stats.authorization_failures,
                seastar::metrics::description("total number of authorization failures"), labels).aggregate({seastar::metrics::shard_label}).set_skip_when_empty(),
        });
    }
}

void register_metrics(seastar::metrics::metric_groups& metrics, const stats& stats) {

@@ -16,8 +16,6 @@
|
||||
#include "cql3/stats.hh"
|
||||
|
||||
namespace alternator {
|
||||
using batch_histogram = utils::estimated_histogram_with_max<128>;
|
||||
using op_size_histogram = utils::estimated_histogram_with_max<512>;
|
||||
|
||||
// Object holding per-shard statistics related to Alternator.
|
||||
// While this object is alive, these metrics are also registered to be
|
||||
@@ -78,46 +76,35 @@ public:
|
||||
utils::timed_rate_moving_average_summary_and_histogram batch_get_item_latency;
|
||||
utils::timed_rate_moving_average_summary_and_histogram get_records_latency;
|
||||
|
||||
batch_histogram batch_get_item_histogram;
|
||||
batch_histogram batch_write_item_histogram;
|
||||
utils::estimated_histogram batch_get_item_histogram{22}; // a histogram that covers the range 1 - 100
|
||||
utils::estimated_histogram batch_write_item_histogram{22}; // a histogram that covers the range 1 - 100
|
||||
} api_operations;
|
||||
// Operation size metrics
|
||||
struct {
|
||||
// Item size statistics collected per table and aggregated per node.
|
||||
// Each histogram covers the range 0 - 512. Resolves #25143.
|
||||
// Each histogram covers the range 0 - 446. Resolves #25143.
// A size is the retrieved item's size.
op_size_histogram get_item_op_size_kb;
utils::estimated_histogram get_item_op_size_kb{30};
// A size is the maximum of the new item's size and the old item's size.
op_size_histogram put_item_op_size_kb;
utils::estimated_histogram put_item_op_size_kb{30};
// A size is the deleted item's size. If the deleted item's size is
// unknown (i.e. read-before-write wasn't necessary and it wasn't
// forced by a configuration option), it won't be recorded on the
// histogram.
op_size_histogram delete_item_op_size_kb;
utils::estimated_histogram delete_item_op_size_kb{30};
// A size is the maximum of the existing item's size and the estimated size
// of the update. This will be changed to the maximum of the existing item's
// size and the new item's size in a subsequent PR.
op_size_histogram update_item_op_size_kb;
utils::estimated_histogram update_item_op_size_kb{30};

// A size is the sum of the sizes of all items per table. This means
// that a single BatchGetItem / BatchWriteItem updates the histogram
// for each table that it has items in.
// The sizes are the retrieved items' sizes grouped per table.
op_size_histogram batch_get_item_op_size_kb;
utils::estimated_histogram batch_get_item_op_size_kb{30};
// The sizes are the written items' sizes grouped per table.
op_size_histogram batch_write_item_op_size_kb;
utils::estimated_histogram batch_write_item_op_size_kb{30};
} operation_sizes;
// Count of authentication and authorization failures, counted if either
// alternator_enforce_authorization or alternator_warn_authorization are
// set to true. If both are false, no authentication or authorization
// checks are performed, so failures are not recognized or counted.
// "authentication" failure means the request was not signed with a valid
// user and key combination. "authorization" failure means the request was
// authenticated to a valid user - but this user did not have permissions
// to perform the operation (considering RBAC settings and the user's
// superuser status).
uint64_t authentication_failures = 0;
uint64_t authorization_failures = 0;
// Miscellaneous event counters
uint64_t total_operations = 0;
uint64_t unsupported_operations = 0;
@@ -142,7 +129,7 @@ public:
cql3::cql_stats cql_stats;

// Enumeration of expression types only for stats
// if needed it can be extended e.g. per operation
// if needed it can be extended e.g. per operation
enum expression_types {
    UPDATE_EXPRESSION,
    CONDITION_EXPRESSION,
@@ -166,7 +153,7 @@ struct table_stats {
void register_metrics(seastar::metrics::metric_groups& metrics, const stats& stats);

inline uint64_t bytes_to_kb_ceil(uint64_t bytes) {
    return (bytes) / 1024;
    return (bytes + 1023) / 1024;
}

}
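The hunk above replaces plain truncating division with the classic ceiling-division idiom, so that a partial kilobyte counts as one. A minimal standalone sketch of the corrected helper (not Scylla's build, just the arithmetic):

```cpp
#include <cstdint>

// Round a byte count up to whole kilobytes: adding (divisor - 1) before
// dividing makes any remainder bump the quotient by one.
inline uint64_t bytes_to_kb_ceil(uint64_t bytes) {
    return (bytes + 1023) / 1024;
}
```

With truncating division, 1 byte would be reported as 0 KB; the ceiling form reports it as 1 KB, which is what a size histogram wants.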
@@ -33,8 +33,6 @@
#include "data_dictionary/data_dictionary.hh"
#include "utils/rjson.hh"

static logging::logger elogger("alternator-streams");

/**
 * Base template type to implement rapidjson::internal::TypeHelper<...>:s
 * for types that are ostreamable/string constructible/castable.
@@ -430,25 +428,6 @@ using namespace std::chrono_literals;
// Dynamo docs say no data shall live longer than 24h.
static constexpr auto dynamodb_streams_max_window = 24h;

// find the parent shard in previous generation for the given child shard
// takes care of wrap-around case in vnodes
// prev_streams must be sorted by token
const cdc::stream_id& find_parent_shard_in_previous_generation(db_clock::time_point prev_timestamp, const utils::chunked_vector<cdc::stream_id>& prev_streams, const cdc::stream_id& child) {
    if (prev_streams.empty()) {
        // something is really wrong - streams are empty
        // let's try internal_error in hope it will be noticed and fixed
        on_internal_error(elogger, fmt::format("streams are empty for cdc generation at {} ({})", prev_timestamp, prev_timestamp.time_since_epoch().count()));
    }
    auto it = std::lower_bound(prev_streams.begin(), prev_streams.end(), child.token(), [](const cdc::stream_id& id, const dht::token& t) {
        return id.token() < t;
    });
    if (it == prev_streams.end()) {
        // wrap around case - take first
        it = prev_streams.begin();
    }
    return *it;
}
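The parent lookup above is a `std::lower_bound` over token-sorted streams with a vnode wrap-around fallback. A hedged sketch of the same logic on plain integer tokens (a hypothetical stand-in for `cdc::stream_id`, not the real type):

```cpp
#include <algorithm>
#include <vector>

// Find the parent "shard" token for a child token: the first entry whose
// token is >= the child's token. If every token is smaller, the vnode ring
// wraps around, so the first entry is the parent.
int find_parent_token(const std::vector<int>& sorted_tokens, int child_token) {
    auto it = std::lower_bound(sorted_tokens.begin(), sorted_tokens.end(), child_token);
    if (it == sorted_tokens.end()) {
        it = sorted_tokens.begin(); // wrap-around case - take first
    }
    return *it; // assumes sorted_tokens is non-empty, like the real code
}
```

The precondition mirrored from the original: the input must be sorted by token, and emptiness is a caller-side internal error.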

future<executor::request_return_type> executor::describe_stream(client_state& client_state, service_permit permit, rjson::value request) {
    _stats.api_operations.describe_stream++;

@@ -512,7 +491,7 @@ future<executor::request_return_type> executor::describe_stream(client_state& cl

    if (!opts.enabled()) {
        rjson::add(ret, "StreamDescription", std::move(stream_desc));
        co_return rjson::print(std::move(ret));
        return make_ready_future<executor::request_return_type>(rjson::print(std::move(ret)));
    }

    // TODO: label
@@ -523,113 +502,123 @@ future<executor::request_return_type> executor::describe_stream(client_state& cl
// filter out cdc generations older than the table or now() - cdc::ttl (typically dynamodb_streams_max_window - 24h)
auto low_ts = std::max(as_timepoint(schema->id()), db_clock::now() - ttl);

std::map<db_clock::time_point, cdc::streams_version> topologies = co_await _sdks.cdc_get_versioned_streams(low_ts, { normal_token_owners });
auto e = topologies.end();
auto prev = e;
auto shards = rjson::empty_array();
return _sdks.cdc_get_versioned_streams(low_ts, { normal_token_owners }).then([db, shard_start, limit, ret = std::move(ret), stream_desc = std::move(stream_desc)] (std::map<db_clock::time_point, cdc::streams_version> topologies) mutable {

std::optional<shard_id> last;
auto e = topologies.end();
auto prev = e;
auto shards = rjson::empty_array();

auto i = topologies.begin();
// if we're a paged query, skip to the generation where we left off.
if (shard_start) {
    i = topologies.find(shard_start->time);
}
std::optional<shard_id> last;

// for parent-child stuff we need id:s to be sorted by token
// (see explanation above) since we want to find closest
// token boundary when determining parent.
// #7346 - we processed and searched children/parents in
// stored order, which is not necessarily token order,
// so the finding of "closest" token boundary (using upper bound)
// could give somewhat weird results.
static auto token_cmp = [](const cdc::stream_id& id1, const cdc::stream_id& id2) {
    return id1.token() < id2.token();
};
auto i = topologies.begin();
// if we're a paged query, skip to the generation where we left off.
if (shard_start) {
    i = topologies.find(shard_start->time);
}

// #7409 - shards must be returned in lexicographical order,
// normal bytes compare is string_traits<int8_t>::compare.
// thus bytes 0x8000 is less than 0x0000. By doing unsigned
// compare instead we inadvertently will sort in string lexical.
static auto id_cmp = [](const cdc::stream_id& id1, const cdc::stream_id& id2) {
    return compare_unsigned(id1.to_bytes(), id2.to_bytes()) < 0;
};
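The #7409 comment above is the crux of `id_cmp`: comparing bytes as signed `int8_t` orders `0x80` (-128) before `0x00`, which is not the lexicographic order of the encoded shard names. A hedged standalone sketch of an unsigned byte comparison (`compare_unsigned_bytes` is a hypothetical helper, not Scylla's `compare_unsigned`):

```cpp
#include <algorithm>
#include <cstring>
#include <string>

// Compare two byte strings as unsigned values. memcmp is specified to
// compare bytes as unsigned char, so 0x00 sorts before 0x80 - the
// lexicographic order a signed int8_t comparison would invert.
int compare_unsigned_bytes(const std::string& a, const std::string& b) {
    int r = std::memcmp(a.data(), b.data(), std::min(a.size(), b.size()));
    if (r != 0) {
        return r;
    }
    // Equal common prefix: the shorter string sorts first.
    return a.size() < b.size() ? -1 : (a.size() > b.size() ? 1 : 0);
}
```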

// need a prev even if we are skipping stuff
if (i != topologies.begin()) {
    prev = std::prev(i);
}

for (; limit > 0 && i != e; prev = i, ++i) {
    auto& [ts, sv] = *i;

    last = std::nullopt;

    auto lo = sv.streams.begin();
    auto end = sv.streams.end();
    // for parent-child stuff we need id:s to be sorted by token
    // (see explanation above) since we want to find closest
    // token boundary when determining parent.
    // #7346 - we processed and searched children/parents in
    // stored order, which is not necessarily token order,
    // so the finding of "closest" token boundary (using upper bound)
    // could give somewhat weird results.
    static auto token_cmp = [](const cdc::stream_id& id1, const cdc::stream_id& id2) {
        return id1.token() < id2.token();
    };

    // #7409 - shards must be returned in lexicographical order,
    std::sort(lo, end, id_cmp);
    // normal bytes compare is string_traits<int8_t>::compare.
    // thus bytes 0x8000 is less than 0x0000. By doing unsigned
    // compare instead we inadvertently will sort in string lexical.
    static auto id_cmp = [](const cdc::stream_id& id1, const cdc::stream_id& id2) {
        return compare_unsigned(id1.to_bytes(), id2.to_bytes()) < 0;
    };

    if (shard_start) {
        // find next shard position
        lo = std::upper_bound(lo, end, shard_start->id, id_cmp);
        shard_start = std::nullopt;
    // need a prev even if we are skipping stuff
    if (i != topologies.begin()) {
        prev = std::prev(i);
    }

    if (lo != end && prev != e) {
        // We want older stuff sorted in token order so we can find matching
        // token range when determining parent shard.
        std::stable_sort(prev->second.streams.begin(), prev->second.streams.end(), token_cmp);
    }

    auto expired = [&]() -> std::optional<db_clock::time_point> {
        auto j = std::next(i);
        if (j == e) {
            return std::nullopt;
        }
        // add this so we sort of match potential
        // sequence numbers in get_records result.
        return j->first + confidence_interval(db);
    }();

    while (lo != end) {
        auto& id = *lo++;

        auto shard = rjson::empty_object();

        if (prev != e) {
            auto& pid = find_parent_shard_in_previous_generation(prev->first, prev->second.streams, id);
            rjson::add(shard, "ParentShardId", shard_id(prev->first, pid));
        }

        last.emplace(ts, id);
        rjson::add(shard, "ShardId", *last);
        auto range = rjson::empty_object();
        rjson::add(range, "StartingSequenceNumber", sequence_number(utils::UUID_gen::min_time_UUID(ts.time_since_epoch())));
        if (expired) {
            rjson::add(range, "EndingSequenceNumber", sequence_number(utils::UUID_gen::min_time_UUID(expired->time_since_epoch())));
        }

        rjson::add(shard, "SequenceNumberRange", std::move(range));
        rjson::push_back(shards, std::move(shard));

        if (--limit == 0) {
            break;
        }
for (; limit > 0 && i != e; prev = i, ++i) {
    auto& [ts, sv] = *i;

    last = std::nullopt;

    auto lo = sv.streams.begin();
    auto end = sv.streams.end();

    // #7409 - shards must be returned in lexicographical order,
    std::sort(lo, end, id_cmp);

    if (shard_start) {
        // find next shard position
        lo = std::upper_bound(lo, end, shard_start->id, id_cmp);
        shard_start = std::nullopt;
    }

    if (lo != end && prev != e) {
        // We want older stuff sorted in token order so we can find matching
        // token range when determining parent shard.
        std::stable_sort(prev->second.streams.begin(), prev->second.streams.end(), token_cmp);
    }

    auto expired = [&]() -> std::optional<db_clock::time_point> {
        auto j = std::next(i);
        if (j == e) {
            return std::nullopt;
        }
        // add this so we sort of match potential
        // sequence numbers in get_records result.
        return j->first + confidence_interval(db);
    }();

    while (lo != end) {
        auto& id = *lo++;

        auto shard = rjson::empty_object();

        if (prev != e) {
            auto& pids = prev->second.streams;
            auto pid = std::upper_bound(pids.begin(), pids.end(), id.token(), [](const dht::token& t, const cdc::stream_id& id) {
                return t < id.token();
            });
            if (pid != pids.begin()) {
                pid = std::prev(pid);
            }
            if (pid != pids.end()) {
                rjson::add(shard, "ParentShardId", shard_id(prev->first, *pid));
            }
}
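The inlined parent search just above uses the other standard-library idiom: `std::upper_bound` finds the first stream whose token is strictly greater than the child's, and stepping back one lands on the candidate at or before it. A hedged sketch on integer tokens (hypothetical stand-ins, no wrap-around handling, matching this variant):

```cpp
#include <algorithm>
#include <vector>

// Find the last entry whose token is <= child_token: upper_bound returns the
// first entry strictly greater, and std::prev steps back to the candidate
// parent. When everything is greater, we stay at begin().
int parent_at_or_before(const std::vector<int>& sorted_tokens, int child_token) {
    auto it = std::upper_bound(sorted_tokens.begin(), sorted_tokens.end(), child_token);
    if (it != sorted_tokens.begin()) {
        it = std::prev(it);
    }
    return *it; // assumes sorted_tokens is non-empty
}
```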

        last.emplace(ts, id);
        rjson::add(shard, "ShardId", *last);
        auto range = rjson::empty_object();
        rjson::add(range, "StartingSequenceNumber", sequence_number(utils::UUID_gen::min_time_UUID(ts.time_since_epoch())));
        if (expired) {
            rjson::add(range, "EndingSequenceNumber", sequence_number(utils::UUID_gen::min_time_UUID(expired->time_since_epoch())));
        }

        rjson::add(shard, "SequenceNumberRange", std::move(range));
        rjson::push_back(shards, std::move(shard));

        if (--limit == 0) {
            break;
        }

        last = std::nullopt;
    }
}
}

if (last) {
    rjson::add(stream_desc, "LastEvaluatedShardId", *last);
}
if (last) {
    rjson::add(stream_desc, "LastEvaluatedShardId", *last);
}

rjson::add(stream_desc, "Shards", std::move(shards));
rjson::add(ret, "StreamDescription", std::move(stream_desc));

co_return rjson::print(std::move(ret));
rjson::add(stream_desc, "Shards", std::move(shards));
rjson::add(ret, "StreamDescription", std::move(stream_desc));

return make_ready_future<executor::request_return_type>(rjson::print(std::move(ret)));
});
}

enum class shard_iterator_type {
@@ -838,7 +827,7 @@ future<executor::request_return_type> executor::get_records(client_state& client

tracing::add_table_name(trace_state, schema->ks_name(), schema->cf_name());

co_await verify_permission(_enforce_authorization, _warn_authorization, client_state, schema, auth::permission::SELECT, _stats);
co_await verify_permission(_enforce_authorization, client_state, schema, auth::permission::SELECT);

db::consistency_level cl = db::consistency_level::LOCAL_QUORUM;
partition_key pk = iter.shard.id.to_partition_key(*schema);
@@ -909,169 +898,172 @@ future<executor::request_return_type> executor::get_records(client_state& client
auto command = ::make_lw_shared<query::read_command>(schema->id(), schema->version(), partition_slice, _proxy.get_max_result_size(partition_slice),
        query::tombstone_limit(_proxy.get_tombstone_limit()), query::row_limit(limit * mul));

service::storage_proxy::coordinator_query_result qr = co_await _proxy.query(schema, std::move(command), std::move(partition_ranges), cl, service::storage_proxy::coordinator_query_options(default_timeout(), std::move(permit), client_state));
cql3::selection::result_set_builder builder(*selection, gc_clock::now());
query::result_view::consume(*qr.query_result, partition_slice, cql3::selection::result_set_builder::visitor(builder, *schema, *selection));
co_return co_await _proxy.query(schema, std::move(command), std::move(partition_ranges), cl, service::storage_proxy::coordinator_query_options(default_timeout(), std::move(permit), client_state)).then(
        [this, schema, partition_slice = std::move(partition_slice), selection = std::move(selection), start_time = std::move(start_time), limit, key_names = std::move(key_names), attr_names = std::move(attr_names), type, iter, high_ts] (service::storage_proxy::coordinator_query_result qr) mutable {
    cql3::selection::result_set_builder builder(*selection, gc_clock::now());
    query::result_view::consume(*qr.query_result, partition_slice, cql3::selection::result_set_builder::visitor(builder, *schema, *selection));

auto result_set = builder.build();
auto records = rjson::empty_array();
auto result_set = builder.build();
auto records = rjson::empty_array();

auto& metadata = result_set->get_metadata();
auto& metadata = result_set->get_metadata();

auto op_index = std::distance(metadata.get_names().begin(),
        std::find_if(metadata.get_names().begin(), metadata.get_names().end(), [](const lw_shared_ptr<cql3::column_specification>& cdef) {
            return cdef->name->name() == op_column_name;
        })
);
auto ts_index = std::distance(metadata.get_names().begin(),
        std::find_if(metadata.get_names().begin(), metadata.get_names().end(), [](const lw_shared_ptr<cql3::column_specification>& cdef) {
            return cdef->name->name() == timestamp_column_name;
        })
);
auto eor_index = std::distance(metadata.get_names().begin(),
        std::find_if(metadata.get_names().begin(), metadata.get_names().end(), [](const lw_shared_ptr<cql3::column_specification>& cdef) {
            return cdef->name->name() == eor_column_name;
        })
);
auto op_index = std::distance(metadata.get_names().begin(),
        std::find_if(metadata.get_names().begin(), metadata.get_names().end(), [](const lw_shared_ptr<cql3::column_specification>& cdef) {
            return cdef->name->name() == op_column_name;
        })
);
auto ts_index = std::distance(metadata.get_names().begin(),
        std::find_if(metadata.get_names().begin(), metadata.get_names().end(), [](const lw_shared_ptr<cql3::column_specification>& cdef) {
            return cdef->name->name() == timestamp_column_name;
        })
);
auto eor_index = std::distance(metadata.get_names().begin(),
        std::find_if(metadata.get_names().begin(), metadata.get_names().end(), [](const lw_shared_ptr<cql3::column_specification>& cdef) {
            return cdef->name->name() == eor_column_name;
        })
);

std::optional<utils::UUID> timestamp;
auto dynamodb = rjson::empty_object();
auto record = rjson::empty_object();
const auto dc_name = _proxy.get_token_metadata_ptr()->get_topology().get_datacenter();
std::optional<utils::UUID> timestamp;
auto dynamodb = rjson::empty_object();
auto record = rjson::empty_object();
const auto dc_name = _proxy.get_token_metadata_ptr()->get_topology().get_datacenter();

using op_utype = std::underlying_type_t<cdc::operation>;
using op_utype = std::underlying_type_t<cdc::operation>;

auto maybe_add_record = [&] {
    if (!dynamodb.ObjectEmpty()) {
        rjson::add(record, "dynamodb", std::move(dynamodb));
        dynamodb = rjson::empty_object();
    }
    if (!record.ObjectEmpty()) {
        rjson::add(record, "awsRegion", rjson::from_string(dc_name));
        rjson::add(record, "eventID", event_id(iter.shard.id, *timestamp));
        rjson::add(record, "eventSource", "scylladb:alternator");
        rjson::add(record, "eventVersion", "1.1");
        rjson::push_back(records, std::move(record));
        record = rjson::empty_object();
        --limit;
    }
};
auto maybe_add_record = [&] {
    if (!dynamodb.ObjectEmpty()) {
        rjson::add(record, "dynamodb", std::move(dynamodb));
        dynamodb = rjson::empty_object();
    }
    if (!record.ObjectEmpty()) {
        rjson::add(record, "awsRegion", rjson::from_string(dc_name));
        rjson::add(record, "eventID", event_id(iter.shard.id, *timestamp));
        rjson::add(record, "eventSource", "scylladb:alternator");
        rjson::add(record, "eventVersion", "1.1");
        rjson::push_back(records, std::move(record));
        record = rjson::empty_object();
        --limit;
    }
};

for (auto& row : result_set->rows()) {
    auto op = static_cast<cdc::operation>(value_cast<op_utype>(data_type_for<op_utype>()->deserialize(*row[op_index])));
    auto ts = value_cast<utils::UUID>(data_type_for<utils::UUID>()->deserialize(*row[ts_index]));
    auto eor = row[eor_index].has_value() ? value_cast<bool>(boolean_type->deserialize(*row[eor_index])) : false;
for (auto& row : result_set->rows()) {
    auto op = static_cast<cdc::operation>(value_cast<op_utype>(data_type_for<op_utype>()->deserialize(*row[op_index])));
    auto ts = value_cast<utils::UUID>(data_type_for<utils::UUID>()->deserialize(*row[ts_index]));
    auto eor = row[eor_index].has_value() ? value_cast<bool>(boolean_type->deserialize(*row[eor_index])) : false;

    if (!dynamodb.HasMember("Keys")) {
        auto keys = rjson::empty_object();
        describe_single_item(*selection, row, key_names, keys);
        rjson::add(dynamodb, "Keys", std::move(keys));
        rjson::add(dynamodb, "ApproximateCreationDateTime", utils::UUID_gen::unix_timestamp_in_sec(ts).count());
        rjson::add(dynamodb, "SequenceNumber", sequence_number(ts));
        rjson::add(dynamodb, "StreamViewType", type);
        // TODO: SizeBytes
    }
    if (!dynamodb.HasMember("Keys")) {
        auto keys = rjson::empty_object();
        describe_single_item(*selection, row, key_names, keys);
        rjson::add(dynamodb, "Keys", std::move(keys));
        rjson::add(dynamodb, "ApproximateCreationDateTime", utils::UUID_gen::unix_timestamp_in_sec(ts).count());
        rjson::add(dynamodb, "SequenceNumber", sequence_number(ts));
        rjson::add(dynamodb, "StreamViewType", type);
        // TODO: SizeBytes
    }

    /**
     * We merge rows with same timestamp into a single event.
     * This is pretty much needed, because a CDC row typically
     * encodes ~half the info of an alternator write.
     *
     * A big, big downside to how alternator records are written
     * (i.e. CQL), is that the distinction between INSERT and UPDATE
     * is somewhat lost/unmappable to actual eventName.
     * A write (currently) always looks like an insert+modify
     * regardless of whether we wrote an existing record or not.
     *
     * Maybe RMW ops could be done slightly differently so
     * we can distinguish them here...
     *
     * For now, all writes will become MODIFY.
     *
     * Note: we do not check the current pre/post
     * flags on CDC log, instead we use data to
     * drive what is returned. This is (afaict)
     * consistent with dynamo streams
     */
    switch (op) {
    case cdc::operation::pre_image:
    case cdc::operation::post_image:
    {
        auto item = rjson::empty_object();
        describe_single_item(*selection, row, attr_names, item, nullptr, true);
        describe_single_item(*selection, row, key_names, item);
        rjson::add(dynamodb, op == cdc::operation::pre_image ? "OldImage" : "NewImage", std::move(item));
        break;
    }
    case cdc::operation::update:
        rjson::add(record, "eventName", "MODIFY");
        break;
    case cdc::operation::insert:
        rjson::add(record, "eventName", "INSERT");
        break;
    case cdc::operation::service_row_delete:
    case cdc::operation::service_partition_delete:
    {
        auto user_identity = rjson::empty_object();
        rjson::add(user_identity, "Type", "Service");
        rjson::add(user_identity, "PrincipalId", "dynamodb.amazonaws.com");
        rjson::add(record, "userIdentity", std::move(user_identity));
        rjson::add(record, "eventName", "REMOVE");
        break;
    }
    default:
        rjson::add(record, "eventName", "REMOVE");
        break;
    }
    if (eor) {
        maybe_add_record();
        timestamp = ts;
        if (limit == 0) {
    /**
     * We merge rows with same timestamp into a single event.
     * This is pretty much needed, because a CDC row typically
     * encodes ~half the info of an alternator write.
     *
     * A big, big downside to how alternator records are written
     * (i.e. CQL), is that the distinction between INSERT and UPDATE
     * is somewhat lost/unmappable to actual eventName.
     * A write (currently) always looks like an insert+modify
     * regardless of whether we wrote an existing record or not.
     *
     * Maybe RMW ops could be done slightly differently so
     * we can distinguish them here...
     *
     * For now, all writes will become MODIFY.
     *
     * Note: we do not check the current pre/post
     * flags on CDC log, instead we use data to
     * drive what is returned. This is (afaict)
     * consistent with dynamo streams
     */
    switch (op) {
    case cdc::operation::pre_image:
    case cdc::operation::post_image:
    {
        auto item = rjson::empty_object();
        describe_single_item(*selection, row, attr_names, item, nullptr, true);
        describe_single_item(*selection, row, key_names, item);
        rjson::add(dynamodb, op == cdc::operation::pre_image ? "OldImage" : "NewImage", std::move(item));
        break;
    }
    case cdc::operation::update:
        rjson::add(record, "eventName", "MODIFY");
        break;
    case cdc::operation::insert:
        rjson::add(record, "eventName", "INSERT");
        break;
    case cdc::operation::service_row_delete:
    case cdc::operation::service_partition_delete:
    {
        auto user_identity = rjson::empty_object();
        rjson::add(user_identity, "Type", "Service");
        rjson::add(user_identity, "PrincipalId", "dynamodb.amazonaws.com");
        rjson::add(record, "userIdentity", std::move(user_identity));
        rjson::add(record, "eventName", "REMOVE");
        break;
    }
    default:
        rjson::add(record, "eventName", "REMOVE");
        break;
    }
    if (eor) {
        maybe_add_record();
        timestamp = ts;
        if (limit == 0) {
            break;
        }
    }
}
}
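The row loop above folds consecutive CDC rows sharing a timestamp into one stream event, closing the event when the end-of-record flag is set. A hedged standalone sketch of that grouping pattern (a hypothetical `cdc_row` type standing in for the real CDC schema):

```cpp
#include <string>
#include <vector>

struct cdc_row {
    long ts;            // timestamp shared by all rows of one logical write
    std::string data;   // payload fragment carried by this row
    bool end_of_record; // true on the last row of the write
};

// Merge consecutive rows into one event per logical write: fragments
// accumulate until a row with end_of_record set closes the event.
std::vector<std::string> merge_rows(const std::vector<cdc_row>& rows) {
    std::vector<std::string> events;
    std::string current;
    for (const auto& r : rows) {
        current += r.data;
        if (r.end_of_record) {
            events.push_back(current);
            current.clear();
        }
    }
    return events;
}
```

In the real code the accumulation fills a JSON record (Keys, images, eventName) rather than a string, but the closing condition is the same end-of-record column.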

auto ret = rjson::empty_object();
auto nrecords = records.Size();
rjson::add(ret, "Records", std::move(records));
auto ret = rjson::empty_object();
auto nrecords = records.Size();
rjson::add(ret, "Records", std::move(records));

if (nrecords != 0) {
    // #9642. Set next iterators threshold to > last
    shard_iterator next_iter(iter.table, iter.shard, *timestamp, false);
    // Note that here we unconditionally return NextShardIterator,
    // without checking if maybe we reached the end-of-shard. If the
    // shard did end, then the next read will have nrecords == 0 and
    // will notice the end of shard and not return NextShardIterator.
    rjson::add(ret, "NextShardIterator", next_iter);
    _stats.api_operations.get_records_latency.mark(std::chrono::steady_clock::now() - start_time);
    co_return rjson::print(std::move(ret));
}
if (nrecords != 0) {
    // #9642. Set next iterators threshold to > last
    shard_iterator next_iter(iter.table, iter.shard, *timestamp, false);
    // Note that here we unconditionally return NextShardIterator,
    // without checking if maybe we reached the end-of-shard. If the
    // shard did end, then the next read will have nrecords == 0 and
    // will notice the end of shard and not return NextShardIterator.
    rjson::add(ret, "NextShardIterator", next_iter);
    _stats.api_operations.get_records_latency.mark(std::chrono::steady_clock::now() - start_time);
    return make_ready_future<executor::request_return_type>(rjson::print(std::move(ret)));
}

// ugh. figure out if we are at end-of-shard
auto normal_token_owners = _proxy.get_token_metadata_ptr()->count_normal_token_owners();
// ugh. figure out if we are at end-of-shard
auto normal_token_owners = _proxy.get_token_metadata_ptr()->count_normal_token_owners();

db_clock::time_point ts = co_await _sdks.cdc_current_generation_timestamp({ normal_token_owners });
auto& shard = iter.shard;
return _sdks.cdc_current_generation_timestamp({ normal_token_owners }).then([this, iter, high_ts, start_time, ret = std::move(ret)](db_clock::time_point ts) mutable {
    auto& shard = iter.shard;

    if (shard.time < ts && ts < high_ts) {
        // The DynamoDB documentation states that when a shard is
        // closed, reading it until the end has NextShardIterator
        // "set to null". Our test test_streams_closed_read
        // confirms that by "null" they meant not set at all.
    } else {
        // We could have returned the same iterator again, but we did
        // a search from it until high_ts and found nothing, so we
        // can also start the next search from high_ts.
        // TODO: but why? It's simpler just to leave the iterator be.
        shard_iterator next_iter(iter.table, iter.shard, utils::UUID_gen::min_time_UUID(high_ts.time_since_epoch()), true);
        rjson::add(ret, "NextShardIterator", iter);
    }
    _stats.api_operations.get_records_latency.mark(std::chrono::steady_clock::now() - start_time);
    if (is_big(ret)) {
        co_return make_streamed(std::move(ret));
    }
    co_return rjson::print(std::move(ret));
    if (shard.time < ts && ts < high_ts) {
        // The DynamoDB documentation states that when a shard is
        // closed, reading it until the end has NextShardIterator
        // "set to null". Our test test_streams_closed_read
        // confirms that by "null" they meant not set at all.
    } else {
        // We could have returned the same iterator again, but we did
        // a search from it until high_ts and found nothing, so we
        // can also start the next search from high_ts.
        // TODO: but why? It's simpler just to leave the iterator be.
        shard_iterator next_iter(iter.table, iter.shard, utils::UUID_gen::min_time_UUID(high_ts.time_since_epoch()), true);
        rjson::add(ret, "NextShardIterator", iter);
    }
    _stats.api_operations.get_records_latency.mark(std::chrono::steady_clock::now() - start_time);
    if (is_big(ret)) {
        return make_ready_future<executor::request_return_type>(make_streamed(std::move(ret)));
    }
    return make_ready_future<executor::request_return_type>(rjson::print(std::move(ret)));
});
});
}

bool executor::add_stream_options(const rjson::value& stream_specification, schema_builder& builder, service::storage_proxy& sp) {
@@ -1081,7 +1073,9 @@ bool executor::add_stream_options(const rjson::value& stream_specification, sche
}

if (stream_enabled->GetBool()) {
    if (!sp.features().alternator_streams) {
    auto db = sp.data_dictionary();

    if (!db.features().alternator_streams) {
        throw api_error::validation("StreamSpecification: alternator streams feature not enabled in cluster.");
    }

@@ -46,7 +46,6 @@
#include "alternator/executor.hh"
#include "alternator/controller.hh"
#include "alternator/serialization.hh"
#include "alternator/ttl_tag.hh"
#include "dht/sharder.hh"
#include "db/config.hh"
#include "db/tags/utils.hh"
@@ -58,10 +57,19 @@ static logging::logger tlogger("alternator_ttl");

namespace alternator {

// We write the expiration-time attribute enabled on a table in a
// tag TTL_TAG_KEY.
// Currently, the *value* of this tag is simply the name of the attribute,
// and the expiration scanner interprets it as an Alternator attribute name -
// It can refer to a real column or if that doesn't exist, to a member of
// the ":attrs" map column. Although this is designed for Alternator, it may
// be good enough for CQL as well (there, the ":attrs" column won't exist).
extern const sstring TTL_TAG_KEY;

future<executor::request_return_type> executor::update_time_to_live(client_state& client_state, service_permit permit, rjson::value request) {
    _stats.api_operations.update_time_to_live++;
    if (!_proxy.features().alternator_ttl) {
        co_return api_error::unknown_operation("UpdateTimeToLive not yet supported. Upgrade all nodes to a version that supports it.");
    if (!_proxy.data_dictionary().features().alternator_ttl) {
        co_return api_error::unknown_operation("UpdateTimeToLive not yet supported. Experimental support is available if the 'alternator-ttl' experimental feature is enabled on all nodes.");
    }

    schema_ptr schema = get_table(_proxy, request);
@@ -85,9 +93,9 @@ future<executor::request_return_type> executor::update_time_to_live(client_state
    if (v->GetStringLength() < 1 || v->GetStringLength() > 255) {
        co_return api_error::validation("The length of AttributeName must be between 1 and 255");
    }
    sstring attribute_name = rjson::to_sstring(*v);
    sstring attribute_name(v->GetString(), v->GetStringLength());

    co_await verify_permission(_enforce_authorization, _warn_authorization, client_state, schema, auth::permission::ALTER, _stats);
    co_await verify_permission(_enforce_authorization, client_state, schema, auth::permission::ALTER);
    co_await db::modify_tags(_mm, schema->ks_name(), schema->cf_name(), [&](std::map<sstring, sstring>& tags_map) {
        if (enabled) {
            if (tags_map.contains(TTL_TAG_KEY)) {
@@ -133,7 +141,7 @@ future<executor::request_return_type> executor::describe_time_to_live(client_sta

// expiration_service is a sharded service responsible for cleaning up expired
// items in all tables with per-item expiration enabled. Currently, this means
// Alternator tables with TTL configured via an UpdateTimeToLive request.
// Alternator tables with TTL configured via a UpdateTimeToLive request.
//
// Here is a brief overview of how the expiration service works:
//
@@ -316,7 +324,9 @@ static future<std::vector<std::pair<dht::token_range, locator::host_id>>> get_se

const auto& tm = *erm->get_token_metadata_ptr();
const auto& sorted_tokens = tm.sorted_tokens();
std::vector<std::pair<dht::token_range, locator::host_id>> ret;
throwing_assert(!sorted_tokens.empty());
if (sorted_tokens.empty()) {
on_internal_error(tlogger, "Token metadata is empty");
}
auto prev_tok = sorted_tokens.back();
for (const auto& tok : sorted_tokens) {
co_await coroutine::maybe_yield();

@@ -553,7 +563,7 @@ static future<> scan_table_ranges(

expiration_service::stats& expiration_stats)
{
const schema_ptr& s = scan_ctx.s;
throwing_assert(partition_ranges.size() == 1); // otherwise issue #9167 will cause incorrect results.
SCYLLA_ASSERT (partition_ranges.size() == 1); // otherwise issue #9167 will cause incorrect results.
auto p = service::pager::query_pagers::pager(proxy, s, scan_ctx.selection, *scan_ctx.query_state_ptr,
*scan_ctx.query_options, scan_ctx.command, std::move(partition_ranges), nullptr);
while (!p->is_exhausted()) {

@@ -583,7 +593,7 @@ static future<> scan_table_ranges(

if (retries >= 10) {
// Don't get stuck forever asking the same page, maybe there's
// a bug or a real problem in several replicas. Give up on
// this scan and retry the scan from a random position later,
// this scan an retry the scan from a random position later,
// in the next scan period.
throw runtime_exception("scanner thread failed after too many timeouts for the same page");
}

@@ -630,38 +640,13 @@ static future<> scan_table_ranges(

}
} else {
// For a real column to contain an expiration time, it
// must be a numeric type. We currently support decimal
// (used by Alternator TTL) as well as bigint, int and
// timestamp (used by CQL per-row TTL).
switch (meta[*expiration_column]->type->get_kind()) {
case abstract_type::kind::decimal:
// Used by Alternator TTL for key columns not stored
// in the map. The value is in seconds, fractional
// part is ignored.
expired = is_expired(value_cast<big_decimal>(v), now);
break;
case abstract_type::kind::long_kind:
// Used by CQL per-row TTL. The value is in seconds.
expired = is_expired(gc_clock::time_point(std::chrono::seconds(value_cast<int64_t>(v))), now);
break;
case abstract_type::kind::int32:
// Used by CQL per-row TTL. The value is in seconds.
// Using int type is not recommended because it will
// overflow in 2038, but we support it to allow users
// to use existing int columns for expiration.
expired = is_expired(gc_clock::time_point(std::chrono::seconds(value_cast<int32_t>(v))), now);
break;
case abstract_type::kind::timestamp:
// Used by CQL per-row TTL. The value is in milliseconds
// but we truncate it to gc_clock's precision (whole seconds).
expired = is_expired(gc_clock::time_point(std::chrono::duration_cast<gc_clock::duration>(value_cast<db_clock::time_point>(v).time_since_epoch())), now);
break;
default:
// Should never happen - we verified the column's type
// before starting the scan.
[[unlikely]]
on_internal_error(tlogger, format("expiration scanner value of unsupported type {} in column {}", meta[*expiration_column]->type->cql3_type_name(), scan_ctx.column_name) );
}
// must be a numeric type.
// FIXME: Currently we only support decimal_type (which is
// what Alternator uses), but other numeric types can be
// supported as well to make this feature more useful in CQL.
// Note that kind::decimal is also checked above.
big_decimal n = value_cast<big_decimal>(v);
expired = is_expired(n, now);
}
if (expired) {
expiration_stats.items_deleted++;
@@ -723,12 +708,16 @@ static future<bool> scan_table(

co_return false;
}
// attribute_name may be one of the schema's columns (in Alternator, this
// means a key column, in CQL it's a regular column), or an element in
// Alternator's attrs map encoded in Alternator's JSON encoding (which we
// decode). If attribute_name is a real column, in Alternator it will have
// the type decimal, counting seconds since the UNIX epoch, while in CQL
// it will one of the types bigint or int (counting seconds) or timestamp
// (counting milliseconds).
// means it's a key column), or an element in Alternator's attrs map
// encoded in Alternator's JSON encoding.
// FIXME: To make this less Alternators-specific, we should encode in the
// single key's value three things:
// 1. The name of a column
// 2. Optionally if column is a map, a member in the map
// 3. The deserializer for the value: CQL or Alternator (JSON).
// The deserializer can be guessed: If the given column or map item is
// numeric, it can be used directly. If it is a "bytes" type, it needs to
// be deserialized using Alternator's deserializer.
bytes column_name = to_bytes(*attribute_name);
const column_definition *cd = s->get_column_definition(column_name);
std::optional<std::string> member;

@@ -747,14 +736,11 @@ static future<bool> scan_table(

data_type column_type = cd->type;
// Verify that the column has the right type: If "member" exists
// the column must be a map, and if it doesn't, the column must
// be decimal_type (Alternator), bigint, int or timestamp (CQL).
// If the column has the wrong type nothing can get expired in
// this table, and it's pointless to scan it.
// (currently) be a decimal_type. If the column has the wrong type
// nothing can get expired in this table, and it's pointless to
// scan it.
if ((member && column_type->get_kind() != abstract_type::kind::map) ||
(!member && column_type->get_kind() != abstract_type::kind::decimal &&
column_type->get_kind() != abstract_type::kind::long_kind &&
column_type->get_kind() != abstract_type::kind::int32 &&
column_type->get_kind() != abstract_type::kind::timestamp)) {
(!member && column_type->get_kind() != abstract_type::kind::decimal)) {
tlogger.info("table {} TTL column has unsupported type, not scanning", s->cf_name());
co_return false;
}

@@ -767,7 +753,7 @@ static future<bool> scan_table(

auto my_host_id = erm->get_topology().my_host_id();
const auto &tablet_map = erm->get_token_metadata().tablets().get_tablet_map(s->id());
for (std::optional tablet = tablet_map.first_tablet(); tablet; tablet = tablet_map.next_tablet(*tablet)) {
auto tablet_primary_replica = tablet_map.get_primary_replica(*tablet, erm->get_topology());
auto tablet_primary_replica = tablet_map.get_primary_replica(*tablet);
// check if this is the primary replica for the current tablet
if (tablet_primary_replica.host == my_host_id && tablet_primary_replica.shard == this_shard_id()) {
co_await scan_tablet(*tablet, proxy, abort_source, page_sem, expiration_stats, scan_ctx, tablet_map);

@@ -781,7 +767,7 @@ static future<bool> scan_table(

// by tasking another node to take over scanning of the dead node's primary
// ranges. What we do here is that this node will also check expiration
// on its *secondary* ranges - but only those whose primary owner is down.
auto tablet_secondary_replica = tablet_map.get_secondary_replica(*tablet, erm->get_topology()); // throws if no secondary replica
auto tablet_secondary_replica = tablet_map.get_secondary_replica(*tablet); // throws if no secondary replica
if (tablet_secondary_replica.host == my_host_id && tablet_secondary_replica.shard == this_shard_id()) {
if (!gossiper.is_alive(tablet_primary_replica.host)) {
co_await scan_tablet(*tablet, proxy, abort_source, page_sem, expiration_stats, scan_ctx, tablet_map);

@@ -892,10 +878,12 @@ future<> expiration_service::run() {

future<> expiration_service::start() {
// Called by main() on each shard to start the expiration-service
// thread. Just runs run() in the background and allows stop().
if (!shutting_down()) {
_end = run().handle_exception([] (std::exception_ptr ep) {
tlogger.error("expiration_service failed: {}", ep);
});
if (_db.features().alternator_ttl) {
if (!shutting_down()) {
_end = run().handle_exception([] (std::exception_ptr ep) {
tlogger.error("expiration_service failed: {}", ep);
});
}
}
return make_ready_future<>();
}
@@ -30,7 +30,7 @@ namespace alternator {

// expiration_service is a sharded service responsible for cleaning up expired
// items in all tables with per-item expiration enabled. Currently, this means
// Alternator tables with TTL configured via an UpdateTimeToLive request.
// Alternator tables with TTL configured via a UpdateTimeToLeave request.
class expiration_service final : public seastar::peering_sharded_service<expiration_service> {
public:
// Object holding per-shard statistics related to the expiration service.

@@ -52,7 +52,7 @@ private:

data_dictionary::database _db;
service::storage_proxy& _proxy;
gms::gossiper& _gossiper;
// _end is set by start(), and resolves when the background service
// _end is set by start(), and resolves when the the background service
// started by it ends. To ask the background service to end, _abort_source
// should be triggered. stop() below uses both _abort_source and _end.
std::optional<future<>> _end;
@@ -1,26 +0,0 @@

/*
* Copyright 2026-present ScyllaDB
*/

/*
* SPDX-License-Identifier: LicenseRef-ScyllaDB-Source-Available-1.0
*/

#pragma once

#include "seastarx.hh"
#include <seastar/core/sstring.hh>

namespace alternator {
// We use the table tag TTL_TAG_KEY ("system:ttl_attribute") to remember
// which attribute was chosen as the expiration-time attribute for
// Alternator's TTL and CQL's per-row TTL features.
// Currently, the *value* of this tag is simply the name of the attribute:
// It can refer to a real column or if that doesn't exist, to a member of
// the ":attrs" map column (which Alternator uses).
extern const sstring TTL_TAG_KEY;
} // namespace alternator

// let users use TTL_TAG_KEY without the "alternator::" prefix,
// to make it easier to move it to a different namespace later.
using alternator::TTL_TAG_KEY;
@@ -31,7 +31,6 @@ set(swagger_files

api-doc/column_family.json
api-doc/commitlog.json
api-doc/compaction_manager.json
api-doc/client_routes.json
api-doc/config.json
api-doc/cql_server_test.json
api-doc/endpoint_snitch_info.json

@@ -69,7 +68,6 @@ target_sources(api

PRIVATE
api.cc
cache_service.cc
client_routes.cc
collectd.cc
column_family.cc
commitlog.cc

@@ -108,8 +106,5 @@ target_link_libraries(api

wasmtime_bindings
absl::headers)

if (Scylla_USE_PRECOMPILED_HEADER_USE)
target_precompile_headers(api REUSE_FROM scylla-precompiled-header)
endif()
check_headers(check-headers api
GLOB_RECURSE ${CMAKE_CURRENT_SOURCE_DIR}/*.hh)
@@ -12,7 +12,7 @@

"operations":[
{
"method":"POST",
"summary":"Resets authorized prepared statements cache",
"summary":"Reset cache",
"type":"void",
"nickname":"authorization_cache_reset",
"produces":[
@@ -1,23 +0,0 @@

, "client_routes_entry": {
"id": "client_routes_entry",
"summary": "An entry storing client routes",
"properties": {
"connection_id": {"type": "string"},
"host_id": {"type": "string", "format": "uuid"},
"address": {"type": "string"},
"port": {"type": "integer"},
"tls_port": {"type": "integer"},
"alternator_port": {"type": "integer"},
"alternator_https_port": {"type": "integer"}
},
"required": ["connection_id", "host_id", "address"]
}
, "client_routes_key": {
"id": "client_routes_key",
"summary": "A key of client_routes_entry",
"properties": {
"connection_id": {"type": "string"},
"host_id": {"type": "string", "format": "uuid"}
}
}
@@ -1,74 +0,0 @@

, "/v2/client-routes":{
"get": {
"description":"List all client route entries",
"operationId":"get_client_routes",
"tags":["client_routes"],
"produces":[
"application/json"
],
"parameters":[],
"responses":{
"200":{
"schema":{
"type":"array",
"items":{ "$ref":"#/definitions/client_routes_entry" }
}
},
"default":{
"description":"unexpected error",
"schema":{"$ref":"#/definitions/ErrorModel"}
}
}
},
"post": {
"description":"Upsert one or more client route entries",
"operationId":"set_client_routes",
"tags":["client_routes"],
"parameters":[
{
"name":"body",
"in":"body",
"required":true,
"schema":{
"type":"array",
"items":{ "$ref":"#/definitions/client_routes_entry" }
}
}
],
"responses":{
"200":{ "description": "OK" },
"default":{
"description":"unexpected error",
"schema":{ "$ref":"#/definitions/ErrorModel" }
}
}
},
"delete": {
"description":"Delete one or more client route entries",
"operationId":"delete_client_routes",
"tags":["client_routes"],
"parameters":[
{
"name":"body",
"in":"body",
"required":true,
"schema":{
"type":"array",
"items":{ "$ref":"#/definitions/client_routes_key" }
}
}
],
"responses":{
"200":{
"description": "OK"
},
"default":{
"description":"unexpected error",
"schema":{
"$ref":"#/definitions/ErrorModel"
}
}
}
}
}
@@ -243,7 +243,7 @@

"GOSSIP_DIGEST_SYN",
"GOSSIP_DIGEST_ACK2",
"GOSSIP_SHUTDOWN",
"UNUSED__DEFINITIONS_UPDATE",
"DEFINITIONS_UPDATE",
"TRUNCATE",
"UNUSED__REPLICATION_FINISHED",
"MIGRATION_REQUEST",
@@ -220,25 +220,6 @@

}
]
},
{
"path":"/storage_service/nodes/excluded",
"operations":[
{
"method":"GET",
"summary":"Retrieve host ids of nodes which are marked as excluded",
"type":"array",
"items":{
"type":"string"
},
"nickname":"get_excluded_nodes",
"produces":[
"application/json"
],
"parameters":[
]
}
]
},
{
"path":"/storage_service/nodes/joining",
"operations":[

@@ -961,14 +942,6 @@

"type":"string",
"paramType":"query",
"enum": ["all", "dc", "rack", "node"]
},
{
"name":"primary_replica_only",
"description":"Load the sstables and stream to the primary replica node within the scope, if one is specified. If not, stream to the global primary replica.",
"required":false,
"allowMultiple":false,
"type":"boolean",
"paramType":"query"
}
]
}

@@ -1055,7 +1028,7 @@

]
},
{
"path":"/storage_service/cleanup_all/",
"path":"/storage_service/cleanup_all",
"operations":[
{
"method":"POST",

@@ -1065,30 +1038,6 @@

"produces":[
"application/json"
],
"parameters":[
{
"name":"global",
"description":"true if cleanup of entire cluster is requested",
"required":false,
"allowMultiple":false,
"type":"boolean",
"paramType":"query"
}
]
}
]
},
{
"path":"/storage_service/mark_node_as_clean",
"operations":[
{
"method":"POST",
"summary":"Mark the node as clean. After that the node will not be considered as needing cleanup during automatic cleanup which is triggered by some topology operations",
"type":"void",
"nickname":"reset_cleanup_needed",
"produces":[
"application/json"
],
"parameters":[]
}
]

@@ -1295,45 +1244,6 @@

}
]
},
{
"path":"/storage_service/logstor_compaction",
"operations":[
{
"method":"POST",
"summary":"Trigger compaction of the key-value storage",
"type":"void",
"nickname":"logstor_compaction",
"produces":[
"application/json"
],
"parameters":[
{
"name":"major",
"description":"When true, perform a major compaction",
"required":false,
"allowMultiple":false,
"type":"boolean",
"paramType":"query"
}
]
}
]
},
{
"path":"/storage_service/logstor_flush",
"operations":[
{
"method":"POST",
"summary":"Trigger flush of logstor storage",
"type":"void",
"nickname":"logstor_flush",
"produces":[
"application/json"
],
"parameters":[]
}
]
},
{
"path":"/storage_service/active_repair/",
"operations":[
@@ -1661,30 +1571,6 @@

}
]
},
{
"path":"/storage_service/exclude_node",
"operations":[
{
"method":"POST",
"summary":"Marks the node as permanently down (excluded).",
"type":"void",
"nickname":"exclude_node",
"produces":[
"application/json"
],
"parameters":[
{
"name":"hosts",
"description":"Comma-separated list of host ids to exclude",
"required":true,
"allowMultiple":false,
"type":"string",
"paramType":"query"
}
]
}
]
},
{
"path":"/storage_service/removal_status",
"operations":[

@@ -3124,48 +3010,6 @@

}
]
},

{
"path":"/storage_service/tablets/snapshots",
"operations":[
{
"method":"POST",
"summary":"Takes the snapshot for the given keyspaces/tables. A snapshot name must be specified.",
"type":"void",
"nickname":"take_cluster_snapshot",
"produces":[
"application/json"
],
"parameters":[
{
"name":"tag",
"description":"the tag given to the snapshot",
"required":true,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"keyspace",
"description":"Keyspace(s) to snapshot. Multiple keyspaces can be provided using a comma-separated list. If omitted, snapshot all keyspaces.",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"table",
"description":"Table(s) to snapshot. Multiple tables (in a single keyspace) can be provided using a comma-separated list. If omitted, snapshot all tables in the given keyspace(s).",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
}
]
}
]
},

{
"path":"/storage_service/quiesce_topology",
"operations":[

@@ -3268,38 +3112,6 @@

}
]
},
{
"path":"/storage_service/logstor_info",
"operations":[
{
"method":"GET",
"summary":"Logstor segment information for one table",
"type":"table_logstor_info",
"nickname":"logstor_info",
"produces":[
"application/json"
],
"parameters":[
{
"name":"keyspace",
"description":"The keyspace",
"required":true,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"table",
"description":"table name",
"required":true,
"allowMultiple":false,
"type":"string",
"paramType":"query"
}
]
}
]
},
{
"path":"/storage_service/retrain_dict",
"operations":[

@@ -3708,47 +3520,6 @@

}
}
},
"logstor_hist_bucket":{
"id":"logstor_hist_bucket",
"properties":{
"bucket":{
"type":"long"
},
"count":{
"type":"long"
},
"min_data_size":{
"type":"long"
},
"max_data_size":{
"type":"long"
}
}
},
"table_logstor_info":{
"id":"table_logstor_info",
"description":"Per-table logstor segment distribution",
"properties":{
"keyspace":{
"type":"string"
},
"table":{
"type":"string"
},
"compaction_groups":{
"type":"long"
},
"segments":{
"type":"long"
},
"data_size_histogram":{
"type":"array",
"items":{
"$ref":"logstor_hist_bucket"
}
}
}
},
"tablet_repair_result":{
"id":"tablet_repair_result",
"description":"Tablet repair result",

@@ -209,21 +209,6 @@

"parameters":[]
}
]
},
{
"path":"/system/chosen_sstable_version",
"operations":[
{
"method":"GET",
"summary":"Get sstable version currently chosen for use in new sstables",
"type":"string",
"nickname":"get_chosen_sstable_version",
"produces":[
"application/json"
],
"parameters":[]
}
]
}
]
}
@@ -42,14 +42,6 @@

"allowMultiple":false,
"type":"boolean",
"paramType":"query"
},
{
"name":"consider_only_existing_data",
"description":"Set to \"true\" to flush all memtables and force tombstone garbage collection to check only the sstables being compacted (false by default). The memtable, commitlog and other uncompacted sstables will not be checked during tombstone garbage collection.",
"required":false,
"allowMultiple":false,
"type":"boolean",
"paramType":"query"
}
]
}
19
api/api.cc
@@ -37,7 +37,6 @@

#include "raft.hh"
#include "gms/gossip_address_map.hh"
#include "service_levels.hh"
#include "client_routes.hh"

logging::logger apilog("api");

@@ -68,11 +67,9 @@ future<> set_server_init(http_context& ctx) {

rb02->set_api_doc(r);
rb02->register_api_file(r, "swagger20_header");
rb02->register_api_file(r, "metrics");
rb02->register_api_file(r, "client_routes");
rb->register_function(r, "system",
"The system related API");
rb02->add_definitions_file(r, "metrics");
rb02->add_definitions_file(r, "client_routes");
set_system(ctx, r);
rb->register_function(r, "error_injection",
"The error injection API");

@@ -122,9 +119,9 @@ future<> unset_thrift_controller(http_context& ctx) {

return ctx.http_server.set_routes([&ctx] (routes& r) { unset_thrift_controller(ctx, r); });
}

future<> set_server_storage_service(http_context& ctx, sharded<service::storage_service>& ss, sharded<db::snapshot_ctl>& ssc, service::raft_group0_client& group0_client) {
return ctx.http_server.set_routes([&ctx, &ss, &ssc, &group0_client] (routes& r) {
set_storage_service(ctx, r, ss, ssc, group0_client);
future<> set_server_storage_service(http_context& ctx, sharded<service::storage_service>& ss, service::raft_group0_client& group0_client) {
return ctx.http_server.set_routes([&ctx, &ss, &group0_client] (routes& r) {
set_storage_service(ctx, r, ss, group0_client);
});
}

@@ -132,16 +129,6 @@ future<> unset_server_storage_service(http_context& ctx) {

return ctx.http_server.set_routes([&ctx] (routes& r) { unset_storage_service(ctx, r); });
}

future<> set_server_client_routes(http_context& ctx, sharded<service::client_routes_service>& cr) {
return ctx.http_server.set_routes([&ctx, &cr] (routes& r) {
set_client_routes(ctx, r, cr);
});
}

future<> unset_server_client_routes(http_context& ctx) {
return ctx.http_server.set_routes([&ctx] (routes& r) { unset_client_routes(ctx, r); });
}

future<> set_load_meter(http_context& ctx, service::load_meter& lm) {
return ctx.http_server.set_routes([&ctx, &lm] (routes& r) { set_load_meter(ctx, r, lm); });
}
25
api/api.hh
@@ -23,6 +23,31 @@

namespace api {

template<class T>
std::vector<T> map_to_key_value(const std::map<sstring, sstring>& map) {
std::vector<T> res;
res.reserve(map.size());

for (const auto& [key, value] : map) {
res.push_back(T());
res.back().key = key;
res.back().value = value;
}
return res;
}

template<class T, class MAP>
std::vector<T>& map_to_key_value(const MAP& map, std::vector<T>& res) {
res.reserve(res.size() + std::size(map));

for (const auto& [key, value] : map) {
T val;
val.key = fmt::to_string(key);
val.value = fmt::to_string(value);
res.push_back(val);
}
return res;
}
template <typename T, typename S = T>
T map_sum(T&& dest, const S& src) {
for (const auto& i : src) {

@@ -29,7 +29,6 @@ class storage_proxy;

class storage_service;
class raft_group0_client;
class raft_group_registry;
class client_routes_service;

} // namespace service

@@ -98,10 +97,8 @@ future<> set_server_config(http_context& ctx, db::config& cfg);

future<> unset_server_config(http_context& ctx);
future<> set_server_snitch(http_context& ctx, sharded<locator::snitch_ptr>& snitch);
future<> unset_server_snitch(http_context& ctx);
future<> set_server_storage_service(http_context& ctx, sharded<service::storage_service>& ss, sharded<db::snapshot_ctl>&, service::raft_group0_client&);
future<> set_server_storage_service(http_context& ctx, sharded<service::storage_service>& ss, service::raft_group0_client&);
future<> unset_server_storage_service(http_context& ctx);
future<> set_server_client_routes(http_context& ctx, sharded<service::client_routes_service>& cr);
future<> unset_server_client_routes(http_context& ctx);
future<> set_server_sstables_loader(http_context& ctx, sharded<sstables_loader>& sst_loader);
future<> unset_server_sstables_loader(http_context& ctx);
future<> set_server_view_builder(http_context& ctx, sharded<db::view::view_builder>& vb, sharded<gms::gossiper>& g);
@@ -1,176 +0,0 @@

/*
* Copyright (C) 2025-present ScyllaDB
*
*/

/*
* SPDX-License-Identifier: LicenseRef-ScyllaDB-Source-Available-1.0
*/

#include <seastar/http/short_streams.hh>

#include "client_routes.hh"
#include "api/api.hh"
#include "service/storage_service.hh"
#include "service/client_routes.hh"
#include "utils/rjson.hh"


#include "api/api-doc/client_routes.json.hh"

using namespace seastar::httpd;
using namespace std::chrono_literals;
using namespace json;

extern logging::logger apilog;

namespace api {

static void validate_client_routes_endpoint(sharded<service::client_routes_service>& cr, sstring endpoint_name) {
if (!cr.local().get_feature_service().client_routes) {
apilog.warn("{}: called before the cluster feature was enabled", endpoint_name);
throw std::runtime_error(fmt::format("{} requires all nodes to support the CLIENT_ROUTES cluster feature", endpoint_name));
}
}

static sstring parse_string(const char* name, rapidjson::Value const& v) {
const auto it = v.FindMember(name);
if (it == v.MemberEnd()) {
throw bad_param_exception(fmt::format("Missing '{}'", name));
}
if (!it->value.IsString()) {
throw bad_param_exception(fmt::format("'{}' must be a string", name));
}
return {it->value.GetString(), it->value.GetStringLength()};
}

static std::optional<uint32_t> parse_port(const char* name, rapidjson::Value const& v) {
const auto it = v.FindMember(name);
if (it == v.MemberEnd()) {
return std::nullopt;
}
if (!it->value.IsInt()) {
throw bad_param_exception(fmt::format("'{}' must be an integer", name));
}
auto port = it->value.GetInt();
if (port < 1 || port > 65535) {
throw bad_param_exception(fmt::format("'{}' value={} is outside the allowed port range", name, port));
}
return port;
}

static std::vector<service::client_routes_service::client_route_entry> parse_set_client_array(const rapidjson::Document& root) {
if (!root.IsArray()) {
throw bad_param_exception("Body must be a JSON array");
}

std::vector<service::client_routes_service::client_route_entry> v;
v.reserve(root.GetArray().Size());
for (const auto& element : root.GetArray()) {
if (!element.IsObject()) { throw bad_param_exception("Each element must be object"); }

const auto port = parse_port("port", element);
const auto tls_port = parse_port("tls_port", element);
const auto alternator_port = parse_port("alternator_port", element);
const auto alternator_https_port = parse_port("alternator_https_port", element);

if (!port.has_value() && !tls_port.has_value() && !alternator_port.has_value() && !alternator_https_port.has_value()) {
throw bad_param_exception("At least one port field ('port', 'tls_port', 'alternator_port', 'alternator_https_port') must be specified");
}

v.emplace_back(
parse_string("connection_id", element),
utils::UUID{parse_string("host_id", element)},
parse_string("address", element),
port,
tls_port,
alternator_port,
alternator_https_port
);
}

return v;
}

static
future<json::json_return_type>
rest_set_client_routes(http_context& ctx, sharded<service::client_routes_service>& cr, std::unique_ptr<http::request> req) {
validate_client_routes_endpoint(cr, "rest_set_client_routes");

rapidjson::Document root;
auto content = co_await util::read_entire_stream_contiguous(*req->content_stream);
root.Parse(content.c_str());

co_await cr.local().set_client_routes(parse_set_client_array(root));
co_return seastar::json::json_void();
}

static std::vector<service::client_routes_service::client_route_key> parse_delete_client_array(const rapidjson::Document& root) {
if (!root.IsArray()) {
throw bad_param_exception("Body must be a JSON array");
}

std::vector<service::client_routes_service::client_route_key> v;
v.reserve(root.GetArray().Size());
for (const auto& element : root.GetArray()) {
v.emplace_back(
parse_string("connection_id", element),
utils::UUID{parse_string("host_id", element)}
);
}

return v;
}

static
future<json::json_return_type>
rest_delete_client_routes(http_context& ctx, sharded<service::client_routes_service>& cr, std::unique_ptr<http::request> req) {
validate_client_routes_endpoint(cr, "delete_client_routes");

rapidjson::Document root;
auto content = co_await util::read_entire_stream_contiguous(*req->content_stream);
root.Parse(content.c_str());

co_await cr.local().delete_client_routes(parse_delete_client_array(root));
co_return seastar::json::json_void();
}

static
future<json::json_return_type>
rest_get_client_routes(http_context& ctx, sharded<service::client_routes_service>& cr, std::unique_ptr<http::request> req) {
validate_client_routes_endpoint(cr, "get_client_routes");

co_return co_await cr.invoke_on(0, [] (service::client_routes_service& cr) -> future<json::json_return_type> {
co_return json::json_return_type(stream_range_as_array(co_await cr.get_client_routes(), [](const service::client_routes_service::client_route_entry & entry) {
seastar::httpd::client_routes_json::client_routes_entry obj;
obj.connection_id = entry.connection_id;
obj.host_id = fmt::to_string(entry.host_id);
obj.address = entry.address;
if (entry.port.has_value()) { obj.port = entry.port.value(); }
if (entry.tls_port.has_value()) { obj.tls_port = entry.tls_port.value(); }
|
||||
if (entry.alternator_port.has_value()) { obj.alternator_port = entry.alternator_port.value(); }
|
||||
if (entry.alternator_https_port.has_value()) { obj.alternator_https_port = entry.alternator_https_port.value(); }
|
||||
return obj;
|
||||
}));
|
||||
});
|
||||
}
|
||||
|
||||
void set_client_routes(http_context& ctx, routes& r, sharded<service::client_routes_service>& cr) {
|
||||
seastar::httpd::client_routes_json::set_client_routes.set(r, [&ctx, &cr] (std::unique_ptr<seastar::http::request> req) {
|
||||
return rest_set_client_routes(ctx, cr, std::move(req));
|
||||
});
|
||||
seastar::httpd::client_routes_json::delete_client_routes.set(r, [&ctx, &cr] (std::unique_ptr<seastar::http::request> req) {
|
||||
return rest_delete_client_routes(ctx, cr, std::move(req));
|
||||
});
|
||||
seastar::httpd::client_routes_json::get_client_routes.set(r, [&ctx, &cr] (std::unique_ptr<seastar::http::request> req) {
|
||||
return rest_get_client_routes(ctx, cr, std::move(req));
|
||||
});
|
||||
}
|
||||
|
||||
void unset_client_routes(http_context& ctx, routes& r) {
|
||||
seastar::httpd::client_routes_json::set_client_routes.unset(r);
|
||||
seastar::httpd::client_routes_json::delete_client_routes.unset(r);
|
||||
seastar::httpd::client_routes_json::get_client_routes.unset(r);
|
||||
}
|
||||
|
||||
}
|
||||
@@ -1,20 +0,0 @@
/*
 * Copyright (C) 2025-present ScyllaDB
 *
 */

/*
 * SPDX-License-Identifier: LicenseRef-ScyllaDB-Source-Available-1.0
 */
#pragma once

#include <seastar/core/sharded.hh>
#include <seastar/json/json_elements.hh>
#include "api/api_init.hh"

namespace api {

void set_client_routes(http_context& ctx, httpd::routes& r, sharded<service::client_routes_service>& cr);
void unset_client_routes(http_context& ctx, httpd::routes& r);

}
@@ -18,9 +18,7 @@
#include "utils/assert.hh"
#include "utils/estimated_histogram.hh"
#include <algorithm>
#include <sstream>
#include "db/data_listeners.hh"
#include "utils/hash.hh"
#include "storage_service.hh"
#include "compaction/compaction_manager.hh"
#include "unimplemented.hh"
@@ -68,13 +66,6 @@ static future<json::json_return_type> get_cf_stats(sharded<replica::database>&
    }, std::plus<int64_t>());
}

static future<json::json_return_type> get_cf_stats(sharded<replica::database>& db,
        std::function<int64_t(const replica::column_family_stats&)> f) {
    return map_reduce_cf(db, int64_t(0), [f](const replica::column_family& cf) {
        return f(cf.get_stats());
    }, std::plus<int64_t>());
}

static future<json::json_return_type> for_tables_on_all_shards(sharded<replica::database>& db, std::vector<table_info> tables, std::function<future<>(replica::table&)> set) {
    return do_with(std::move(tables), [&db, set] (const std::vector<table_info>& tables) {
        return db.invoke_on_all([&tables, set] (replica::database& db) {
@@ -344,56 +335,6 @@ uint64_t accumulate_on_active_memtables(replica::table& t, noncopyable_function<
    return ret;
}

static
future<json::json_return_type>
rest_toppartitions_generic(sharded<replica::database>& db, std::unique_ptr<http::request> req) {
    bool filters_provided = false;

    std::unordered_set<std::tuple<sstring, sstring>, utils::tuple_hash> table_filters {};
    if (auto filters = req->get_query_param("table_filters"); !filters.empty()) {
        filters_provided = true;
        std::stringstream ss { filters };
        std::string filter;
        while (!filters.empty() && ss.good()) {
            std::getline(ss, filter, ',');
            table_filters.emplace(parse_fully_qualified_cf_name(filter));
        }
    }

    std::unordered_set<sstring> keyspace_filters {};
    if (auto filters = req->get_query_param("keyspace_filters"); !filters.empty()) {
        filters_provided = true;
        std::stringstream ss { filters };
        std::string filter;
        while (!filters.empty() && ss.good()) {
            std::getline(ss, filter, ',');
            keyspace_filters.emplace(std::move(filter));
        }
    }

    // when the query is empty return immediately
    if (filters_provided && table_filters.empty() && keyspace_filters.empty()) {
        apilog.debug("toppartitions query: processing results");
        cf::toppartitions_query_results results;

        results.read_cardinality = 0;
        results.write_cardinality = 0;

        return make_ready_future<json::json_return_type>(results);
    }

    api::req_param<std::chrono::milliseconds, unsigned> duration{*req, "duration", 1000ms};
    api::req_param<unsigned> capacity(*req, "capacity", 256);
    api::req_param<unsigned> list_size(*req, "list_size", 10);

    apilog.info("toppartitions query: #table_filters={} #keyspace_filters={} duration={} list_size={} capacity={}",
        !table_filters.empty() ? std::to_string(table_filters.size()) : "all", !keyspace_filters.empty() ? std::to_string(keyspace_filters.size()) : "all", duration.value, list_size.value, capacity.value);

    return seastar::do_with(db::toppartitions_query(db, std::move(table_filters), std::move(keyspace_filters), duration.value, list_size, capacity), [] (db::toppartitions_query& q) {
        return run_toppartitions_query(q);
    });
}

void set_column_family(http_context& ctx, routes& r, sharded<replica::database>& db) {
    cf::get_column_family_name.set(r, [&db] (const_req req){
        std::vector<sstring> res;
@@ -1099,10 +1040,6 @@ void set_column_family(http_context& ctx, routes& r, sharded<replica::database>&
        });
    });

    ss::toppartitions_generic.set(r, [&db] (std::unique_ptr<http::request> req) {
        return rest_toppartitions_generic(db, std::move(req));
    });

    cf::force_major_compaction.set(r, [&ctx, &db](std::unique_ptr<http::request> req) -> future<json::json_return_type> {
        if (!req->get_query_param("split_output").empty()) {
            fail(unimplemented::cause::API);
@@ -1129,14 +1066,10 @@ void set_column_family(http_context& ctx, routes& r, sharded<replica::database>&
    });

    ss::get_load.set(r, [&db] (std::unique_ptr<http::request> req) {
        return get_cf_stats(db, [](const replica::column_family_stats& stats) {
            return stats.live_disk_space_used.on_disk;
        });
        return get_cf_stats(db, &replica::column_family_stats::live_disk_space_used);
    });
    ss::get_metrics_load.set(r, [&db] (std::unique_ptr<http::request> req) {
        return get_cf_stats(db, [](const replica::column_family_stats& stats) {
            return stats.live_disk_space_used.on_disk;
        });
        return get_cf_stats(db, &replica::column_family_stats::live_disk_space_used);
    });

    ss::get_keyspaces.set(r, [&db] (const_req req) {
@@ -1269,7 +1202,6 @@ void unset_column_family(http_context& ctx, routes& r) {
    cf::get_sstable_count_per_level.unset(r);
    cf::get_sstables_for_key.unset(r);
    cf::toppartitions.unset(r);
    ss::toppartitions_generic.unset(r);
    cf::force_major_compaction.unset(r);
    ss::get_load.unset(r);
    ss::get_metrics_load.unset(r);

@@ -17,8 +17,9 @@
#include "gms/feature_service.hh"
#include "schema/schema_builder.hh"
#include "sstables/sstables_manager.hh"
#include "utils/hash.hh"
#include <optional>
#include <stdexcept>
#include <sstream>
#include <time.h>
#include <algorithm>
#include <functional>
@@ -503,7 +504,6 @@ void set_sstables_loader(http_context& ctx, routes& r, sharded<sstables_loader>&
    auto bucket = req->get_query_param("bucket");
    auto prefix = req->get_query_param("prefix");
    auto scope = parse_stream_scope(req->get_query_param("scope"));
    auto primary_replica_only = validate_bool_x(req->get_query_param("primary_replica_only"), false);

    rjson::chunked_content content = co_await util::read_entire_stream(*req->content_stream);
    rjson::value parsed = rjson::parse(std::move(content));
@@ -513,16 +513,7 @@ void set_sstables_loader(http_context& ctx, routes& r, sharded<sstables_loader>&
    auto sstables = parsed.GetArray() |
        std::views::transform([] (const auto& s) { return sstring(rjson::to_string_view(s)); }) |
        std::ranges::to<std::vector>();
    apilog.info("Restore invoked with following parameters: keyspace={}, table={}, endpoint={}, bucket={}, prefix={}, sstables_count={}, scope={}, primary_replica_only={}",
        keyspace,
        table,
        endpoint,
        bucket,
        prefix,
        sstables.size(),
        scope,
        primary_replica_only);
    auto task_id = co_await sst_loader.local().download_new_sstables(keyspace, table, prefix, std::move(sstables), endpoint, bucket, scope, primary_replica_only);
    auto task_id = co_await sst_loader.local().download_new_sstables(keyspace, table, prefix, std::move(sstables), endpoint, bucket, scope);
    co_return json::json_return_type(fmt::to_string(task_id));
});

@@ -534,15 +525,13 @@ void unset_sstables_loader(http_context& ctx, routes& r) {
}

void set_view_builder(http_context& ctx, routes& r, sharded<db::view::view_builder>& vb, sharded<gms::gossiper>& g) {
    ss::view_build_statuses.set(r, [&ctx, &vb, &g] (std::unique_ptr<http::request> req) -> future<json::json_return_type> {
    ss::view_build_statuses.set(r, [&ctx, &vb, &g] (std::unique_ptr<http::request> req) {
        auto keyspace = validate_keyspace(ctx, req);
        auto view = req->get_path_param("view");
        co_return json::json_return_type(stream_range_as_array(co_await vb.local().view_build_statuses(std::move(keyspace), std::move(view), g.local()), [] (const auto& i) {
            storage_service_json::mapper res;
            res.key = i.first;
            res.value = i.second;
            return res;
        }));
        return vb.local().view_build_statuses(std::move(keyspace), std::move(view), g.local()).then([] (std::unordered_map<sstring, sstring> status) {
            std::vector<storage_service_json::mapper> res;
            return make_ready_future<json::json_return_type>(map_to_key_value(std::move(status), res));
        });
    });

    cf::get_built_indexes.set(r, [&vb](std::unique_ptr<http::request> req) -> future<json::json_return_type> {
@@ -556,13 +545,17 @@ void set_view_builder(http_context& ctx, routes& r, sharded<db::view::view_build
                vp.insert(b.second);
            }
        }
        std::vector<sstring> res;
        replica::database& db = vb.local().get_db();
        auto uuid = validate_table(db, ks, cf_name);
        replica::column_family& cf = db.find_column_family(uuid);
        co_return cf.get_index_manager().list_indexes()
            | std::views::transform([] (const auto& i) { return i.metadata().name(); })
            | std::views::filter([&vp] (const auto& n) { return vp.contains(secondary_index::index_table_name(n)); })
            | std::ranges::to<std::vector>();
        res.reserve(cf.get_index_manager().list_indexes().size());
        for (auto&& i : cf.get_index_manager().list_indexes()) {
            if (vp.contains(secondary_index::index_table_name(i.metadata().name()))) {
                res.emplace_back(i.metadata().name());
            }
        }
        co_return res;
    });

}
@@ -580,16 +573,6 @@ static future<json::json_return_type> describe_ring_as_json_for_table(const shar
    co_return json::json_return_type(stream_range_as_array(co_await ss.local().describe_ring_for_table(keyspace, table), token_range_endpoints_to_json));
}

namespace {
template <typename Key, typename Value>
storage_service_json::mapper map_to_json(const std::pair<Key, Value>& i) {
    storage_service_json::mapper val;
    val.key = fmt::to_string(i.first);
    val.value = fmt::to_string(i.second);
    return val;
}
}

static
future<json::json_return_type>
rest_get_token_endpoint(http_context& ctx, sharded<service::storage_service>& ss, std::unique_ptr<http::request> req) {
@@ -607,7 +590,62 @@ rest_get_token_endpoint(http_context& ctx, sharded<service::storage_service>& ss
        throw bad_param_exception("Either provide both keyspace and table (for tablet table) or neither (for vnodes)");
    }

    co_return json::json_return_type(stream_range_as_array(token_endpoints, &map_to_json<dht::token, gms::inet_address>));
    co_return json::json_return_type(stream_range_as_array(token_endpoints, [](const auto& i) {
        storage_service_json::mapper val;
        val.key = fmt::to_string(i.first);
        val.value = fmt::to_string(i.second);
        return val;
    }));
}

static
future<json::json_return_type>
rest_toppartitions_generic(http_context& ctx, std::unique_ptr<http::request> req) {
    bool filters_provided = false;

    std::unordered_set<std::tuple<sstring, sstring>, utils::tuple_hash> table_filters {};
    if (auto filters = req->get_query_param("table_filters"); !filters.empty()) {
        filters_provided = true;
        std::stringstream ss { filters };
        std::string filter;
        while (!filters.empty() && ss.good()) {
            std::getline(ss, filter, ',');
            table_filters.emplace(parse_fully_qualified_cf_name(filter));
        }
    }

    std::unordered_set<sstring> keyspace_filters {};
    if (auto filters = req->get_query_param("keyspace_filters"); !filters.empty()) {
        filters_provided = true;
        std::stringstream ss { filters };
        std::string filter;
        while (!filters.empty() && ss.good()) {
            std::getline(ss, filter, ',');
            keyspace_filters.emplace(std::move(filter));
        }
    }

    // when the query is empty return immediately
    if (filters_provided && table_filters.empty() && keyspace_filters.empty()) {
        apilog.debug("toppartitions query: processing results");
        httpd::column_family_json::toppartitions_query_results results;

        results.read_cardinality = 0;
        results.write_cardinality = 0;

        return make_ready_future<json::json_return_type>(results);
    }

    api::req_param<std::chrono::milliseconds, unsigned> duration{*req, "duration", 1000ms};
    api::req_param<unsigned> capacity(*req, "capacity", 256);
    api::req_param<unsigned> list_size(*req, "list_size", 10);

    apilog.info("toppartitions query: #table_filters={} #keyspace_filters={} duration={} list_size={} capacity={}",
        !table_filters.empty() ? std::to_string(table_filters.size()) : "all", !keyspace_filters.empty() ? std::to_string(keyspace_filters.size()) : "all", duration.value, list_size.value, capacity.value);

    return seastar::do_with(db::toppartitions_query(ctx.db, std::move(table_filters), std::move(keyspace_filters), duration.value, list_size, capacity), [] (db::toppartitions_query& q) {
        return run_toppartitions_query(q);
    });
}

static
@@ -641,6 +679,7 @@ rest_get_range_to_endpoint_map(http_context& ctx, sharded<service::storage_servi
        table_id = validate_table(ctx.db.local(), keyspace, table);
    }

    std::vector<ss::maplist_mapper> res;
    co_return stream_range_as_array(co_await ss.local().get_range_to_address_map(keyspace, table_id),
        [](const std::pair<dht::token_range, inet_address_vector_replica_set>& entry){
            ss::maplist_mapper m;
@@ -724,43 +763,25 @@ rest_cdc_streams_check_and_repair(sharded<service::storage_service>& ss, std::un
static
future<json::json_return_type>
rest_cleanup_all(http_context& ctx, sharded<service::storage_service>& ss, std::unique_ptr<http::request> req) {
    bool global = true;
    if (auto global_param = req->get_query_param("global"); !global_param.empty()) {
        global = validate_bool(global_param);
    }

    apilog.info("cleanup_all global={}", global);

    if (global) {
        co_await ss.invoke_on(0, [] (service::storage_service& ss) -> future<> {
            co_return co_await ss.do_clusterwide_vnodes_cleanup();
        });
    apilog.info("cleanup_all");
    auto done = co_await ss.invoke_on(0, [] (service::storage_service& ss) -> future<bool> {
        if (!ss.is_topology_coordinator_enabled()) {
            co_return false;
        }
        co_await ss.do_clusterwide_vnodes_cleanup();
        co_return true;
    });
    if (done) {
        co_return json::json_return_type(0);
    }
    // fall back to the local cleanup if local cleanup is requested
    // fall back to the local global cleanup if topology coordinator is not enabled
    auto& db = ctx.db;
    auto& compaction_module = db.local().get_compaction_manager().get_task_manager_module();
    auto task = co_await compaction_module.make_and_start_task<compaction::global_cleanup_compaction_task_impl>({}, db);
    co_await task->done();

    // Mark this node as clean
    co_await ss.invoke_on(0, [] (service::storage_service& ss) -> future<> {
        co_await ss.reset_cleanup_needed();
    });

    co_return json::json_return_type(0);
}

static
future<json::json_return_type>
rest_reset_cleanup_needed(http_context& ctx, sharded<service::storage_service>& ss, std::unique_ptr<http::request> req) {
    apilog.info("reset_cleanup_needed");
    co_await ss.invoke_on(0, [] (service::storage_service& ss) {
        return ss.reset_cleanup_needed();
    });
    co_return json_void();
}

static
future<json::json_return_type>
rest_force_flush(http_context& ctx, std::unique_ptr<http::request> req) {
@@ -783,31 +804,9 @@ rest_force_keyspace_flush(http_context& ctx, std::unique_ptr<http::request> req)

static
future<json::json_return_type>
rest_logstor_compaction(http_context& ctx, std::unique_ptr<http::request> req) {
    bool major = false;
    if (auto major_param = req->get_query_param("major"); !major_param.empty()) {
        major = validate_bool(major_param);
    }
    apilog.info("logstor_compaction: major={}", major);
    auto& db = ctx.db;
    co_await replica::database::trigger_logstor_compaction_on_all_shards(db, major);
    co_return json_void();
}

static
future<json::json_return_type>
rest_logstor_flush(http_context& ctx, std::unique_ptr<http::request> req) {
    apilog.info("logstor_flush");
    auto& db = ctx.db;
    co_await replica::database::flush_logstor_separator_on_all_shards(db);
    co_return json_void();
}

static
future<json::json_return_type>
rest_decommission(sharded<service::storage_service>& ss, sharded<db::snapshot_ctl>& ssc, std::unique_ptr<http::request> req) {
rest_decommission(sharded<service::storage_service>& ss, std::unique_ptr<http::request> req) {
    apilog.info("decommission");
    return ss.local().decommission(ssc).then([] {
    return ss.local().decommission().then([] {
        return make_ready_future<json::json_return_type>(json_void());
    });
}
@@ -845,25 +844,6 @@ rest_remove_node(sharded<service::storage_service>& ss, std::unique_ptr<http::re
    });
}

static
future<json::json_return_type>
rest_exclude_node(sharded<service::storage_service>& ss, std::unique_ptr<http::request> req) {
    auto hosts = utils::split_comma_separated_list(req->get_query_param("hosts"))
        | std::views::transform([] (const sstring& s) { return locator::host_id(utils::UUID(s)); })
        | std::ranges::to<std::vector<locator::host_id>>();

    auto& topo = ss.local().get_token_metadata().get_topology();
    for (auto host : hosts) {
        if (!topo.has_node(host)) {
            throw bad_param_exception(fmt::format("Host ID {} does not belong to this cluster", host));
        }
    }

    apilog.info("exclude_node: hosts={}", hosts);
    co_await ss.local().mark_excluded(hosts);
    co_return json_void();
}

static
future<json::json_return_type>
rest_get_removal_status(sharded<service::storage_service>& ss, std::unique_ptr<http::request> req) {
@@ -1284,7 +1264,10 @@ rest_get_ownership(http_context& ctx, sharded<service::storage_service>& ss, std
        throw httpd::bad_param_exception("storage_service/ownership cannot be used when a keyspace uses tablets");
    }

    co_return json::json_return_type(stream_range_as_array(co_await ss.local().get_ownership(), &map_to_json<gms::inet_address, float>));
    return ss.local().get_ownership().then([] (auto&& ownership) {
        std::vector<storage_service_json::mapper> res;
        return make_ready_future<json::json_return_type>(map_to_key_value(ownership, res));
    });
}

static
@@ -1301,7 +1284,10 @@ rest_get_effective_ownership(http_context& ctx, sharded<service::storage_service
        }
    }

    co_return json::json_return_type(stream_range_as_array(co_await ss.local().effective_ownership(keyspace_name, table_name), &map_to_json<gms::inet_address, float>));
    return ss.local().effective_ownership(keyspace_name, table_name).then([] (auto&& ownership) {
        std::vector<storage_service_json::mapper> res;
        return make_ready_future<json::json_return_type>(map_to_key_value(ownership, res));
    });
}

static
@@ -1311,7 +1297,7 @@ rest_estimate_compression_ratios(http_context& ctx, sharded<service::storage_ser
        apilog.warn("estimate_compression_ratios: called before the cluster feature was enabled");
        throw std::runtime_error("estimate_compression_ratios requires all nodes to support the SSTABLE_COMPRESSION_DICTS cluster feature");
    }
    auto ticket = co_await get_units(ss.local().get_do_sample_sstables_concurrency_limiter(), 1);
    auto ticket = get_units(ss.local().get_do_sample_sstables_concurrency_limiter(), 1);
    auto ks = api::req_param<sstring>(*req, "keyspace", {}).value;
    auto cf = api::req_param<sstring>(*req, "cf", {}).value;
    apilog.debug("estimate_compression_ratios: called with ks={} cf={}", ks, cf);
@@ -1377,7 +1363,7 @@ rest_retrain_dict(http_context& ctx, sharded<service::storage_service>& ss, serv
        apilog.warn("retrain_dict: called before the cluster feature was enabled");
        throw std::runtime_error("retrain_dict requires all nodes to support the SSTABLE_COMPRESSION_DICTS cluster feature");
    }
    auto ticket = co_await get_units(ss.local().get_do_sample_sstables_concurrency_limiter(), 1);
    auto ticket = get_units(ss.local().get_do_sample_sstables_concurrency_limiter(), 1);
    auto ks = api::req_param<sstring>(*req, "keyspace", {}).value;
    auto cf = api::req_param<sstring>(*req, "cf", {}).value;
    apilog.debug("retrain_dict: called with ks={} cf={}", ks, cf);
@@ -1523,54 +1509,6 @@ rest_sstable_info(http_context& ctx, std::unique_ptr<http::request> req) {
    });
}

static
future<json::json_return_type>
rest_logstor_info(http_context& ctx, std::unique_ptr<http::request> req) {
    auto keyspace = api::req_param<sstring>(*req, "keyspace", {}).value;
    auto table = api::req_param<sstring>(*req, "table", {}).value;
    if (table.empty()) {
        table = api::req_param<sstring>(*req, "cf", {}).value;
    }

    if (keyspace.empty()) {
        throw bad_param_exception("The query parameter 'keyspace' is required");
    }
    if (table.empty()) {
        throw bad_param_exception("The query parameter 'table' is required");
    }

    keyspace = validate_keyspace(ctx, keyspace);
    auto tid = validate_table(ctx.db.local(), keyspace, table);

    auto& cf = ctx.db.local().find_column_family(tid);
    if (!cf.uses_logstor()) {
        throw bad_param_exception(fmt::format("Table {}.{} does not use logstor", keyspace, table));
    }

    return do_with(replica::logstor::table_segment_stats{}, [keyspace = std::move(keyspace), table = std::move(table), tid, &ctx] (replica::logstor::table_segment_stats& merged_stats) {
        return ctx.db.map_reduce([&merged_stats](replica::logstor::table_segment_stats&& shard_stats) {
            merged_stats += shard_stats;
        }, [tid](const replica::database& db) {
            return db.get_logstor_table_segment_stats(tid);
        }).then([&merged_stats, keyspace = std::move(keyspace), table = std::move(table)] {
            ss::table_logstor_info result;
            result.keyspace = keyspace;
            result.table = table;
            result.compaction_groups = merged_stats.compaction_group_count;
            result.segments = merged_stats.segment_count;

            for (const auto& bucket : merged_stats.histogram) {
                ss::logstor_hist_bucket hist;
                hist.count = bucket.count;
                hist.max_data_size = bucket.max_data_size;
                result.data_size_histogram.push(std::move(hist));
            }

            return make_ready_future<json::json_return_type>(stream_object(result));
        });
    });
}

static
future<json::json_return_type>
rest_reload_raft_topology_state(sharded<service::storage_service>& ss, service::raft_group0_client& group0_client, std::unique_ptr<http::request> req) {
@@ -1583,14 +1521,26 @@ rest_reload_raft_topology_state(sharded<service::storage_service>& ss, service::
static
future<json::json_return_type>
rest_upgrade_to_raft_topology(sharded<service::storage_service>& ss, std::unique_ptr<http::request> req) {
    apilog.info("Requested to schedule upgrade to raft topology, but this version does not need it since it uses raft topology by default.");
    apilog.info("Requested to schedule upgrade to raft topology");
    try {
        co_await ss.invoke_on(0, [] (auto& ss) {
            return ss.start_upgrade_to_raft_topology();
        });
    } catch (...) {
        auto ex = std::current_exception();
        apilog.error("Failed to schedule upgrade to raft topology: {}", ex);
        std::rethrow_exception(std::move(ex));
    }
    co_return json_void();
}

static
future<json::json_return_type>
rest_raft_topology_upgrade_status(sharded<service::storage_service>& ss, std::unique_ptr<http::request> req) {
    co_return sstring("done");
    const auto ustate = co_await ss.invoke_on(0, [] (auto& ss) {
        return ss.get_topology_upgrade_state();
    });
    co_return sstring(format("{}", ustate));
}

static
@@ -1800,8 +1750,9 @@ rest_bind(FuncType func, BindArgs&... args) {
    return std::bind_front(func, std::ref(args)...);
}

void set_storage_service(http_context& ctx, routes& r, sharded<service::storage_service>& ss, sharded<db::snapshot_ctl>& ssc, service::raft_group0_client& group0_client) {
void set_storage_service(http_context& ctx, routes& r, sharded<service::storage_service>& ss, service::raft_group0_client& group0_client) {
    ss::get_token_endpoint.set(r, rest_bind(rest_get_token_endpoint, ctx, ss));
    ss::toppartitions_generic.set(r, rest_bind(rest_toppartitions_generic, ctx));
    ss::get_release_version.set(r, rest_bind(rest_get_release_version, ss));
    ss::get_scylla_release_version.set(r, rest_bind(rest_get_scylla_release_version, ss));
    ss::get_schema_version.set(r, rest_bind(rest_get_schema_version, ss));
@@ -1813,15 +1764,11 @@ void set_storage_service(http_context& ctx, routes& r, sharded<service::storage_
    ss::get_natural_endpoints_v2.set(r, rest_bind(rest_get_natural_endpoints_v2, ctx, ss));
    ss::cdc_streams_check_and_repair.set(r, rest_bind(rest_cdc_streams_check_and_repair, ss));
    ss::cleanup_all.set(r, rest_bind(rest_cleanup_all, ctx, ss));
    ss::reset_cleanup_needed.set(r, rest_bind(rest_reset_cleanup_needed, ctx, ss));
    ss::force_flush.set(r, rest_bind(rest_force_flush, ctx));
    ss::force_keyspace_flush.set(r, rest_bind(rest_force_keyspace_flush, ctx));
    ss::decommission.set(r, rest_bind(rest_decommission, ss, ssc));
    ss::logstor_compaction.set(r, rest_bind(rest_logstor_compaction, ctx));
    ss::logstor_flush.set(r, rest_bind(rest_logstor_flush, ctx));
    ss::decommission.set(r, rest_bind(rest_decommission, ss));
    ss::move.set(r, rest_bind(rest_move, ss));
    ss::remove_node.set(r, rest_bind(rest_remove_node, ss));
    ss::exclude_node.set(r, rest_bind(rest_exclude_node, ss));
    ss::get_removal_status.set(r, rest_bind(rest_get_removal_status, ss));
    ss::force_remove_completion.set(r, rest_bind(rest_force_remove_completion, ss));
    ss::set_logging_level.set(r, rest_bind(rest_set_logging_level));
@@ -1867,7 +1814,6 @@ void set_storage_service(http_context& ctx, routes& r, sharded<service::storage_
    ss::retrain_dict.set(r, rest_bind(rest_retrain_dict, ctx, ss, group0_client));
    ss::estimate_compression_ratios.set(r, rest_bind(rest_estimate_compression_ratios, ctx, ss));
    ss::sstable_info.set(r, rest_bind(rest_sstable_info, ctx));
    ss::logstor_info.set(r, rest_bind(rest_logstor_info, ctx));
    ss::reload_raft_topology_state.set(r, rest_bind(rest_reload_raft_topology_state, ss, group0_client));
    ss::upgrade_to_raft_topology.set(r, rest_bind(rest_upgrade_to_raft_topology, ss));
    ss::raft_topology_upgrade_status.set(r, rest_bind(rest_raft_topology_upgrade_status, ss));
@@ -1884,6 +1830,7 @@ void set_storage_service(http_context& ctx, routes& r, sharded<service::storage_

void unset_storage_service(http_context& ctx, routes& r) {
    ss::get_token_endpoint.unset(r);
    ss::toppartitions_generic.unset(r);
    ss::get_release_version.unset(r);
    ss::get_scylla_release_version.unset(r);
    ss::get_schema_version.unset(r);
@@ -1894,15 +1841,11 @@ void unset_storage_service(http_context& ctx, routes& r) {
    ss::get_natural_endpoints.unset(r);
    ss::cdc_streams_check_and_repair.unset(r);
    ss::cleanup_all.unset(r);
    ss::reset_cleanup_needed.unset(r);
    ss::force_flush.unset(r);
    ss::force_keyspace_flush.unset(r);
    ss::logstor_compaction.unset(r);
    ss::logstor_flush.unset(r);
    ss::decommission.unset(r);
    ss::move.unset(r);
    ss::remove_node.unset(r);
    ss::exclude_node.unset(r);
    ss::get_removal_status.unset(r);
    ss::force_remove_completion.unset(r);
    ss::set_logging_level.unset(r);
@@ -1946,7 +1889,6 @@ void unset_storage_service(http_context& ctx, routes& r) {
    ss::get_ownership.unset(r);
    ss::get_effective_ownership.unset(r);
    ss::sstable_info.unset(r);
    ss::logstor_info.unset(r);
    ss::reload_raft_topology_state.unset(r);
    ss::upgrade_to_raft_topology.unset(r);
    ss::raft_topology_upgrade_status.unset(r);
@@ -2026,16 +1968,12 @@ void set_snapshot(http_context& ctx, routes& r, sharded<db::snapshot_ctl>& snap_
    auto tag = req->get_query_param("tag");
    auto column_families = split(req->get_query_param("cf"), ",");
    auto sfopt = req->get_query_param("sf");
    auto tcopt = req->get_query_param("tc");

    db::snapshot_options opts = {
        .skip_flush = strcasecmp(sfopt.c_str(), "true") == 0,
    };
    auto sf = db::snapshot_ctl::skip_flush(strcasecmp(sfopt.c_str(), "true") == 0);
|
||||
|
||||
std::vector<sstring> keynames = split(req->get_query_param("kn"), ",");
|
||||
try {
|
||||
if (column_families.empty()) {
|
||||
co_await snap_ctl.local().take_snapshot(tag, keynames, opts);
|
||||
co_await snap_ctl.local().take_snapshot(tag, keynames, sf);
|
||||
} else {
|
||||
if (keynames.empty()) {
|
||||
throw httpd::bad_param_exception("The keyspace of column families must be specified");
|
||||
@@ -2043,7 +1981,7 @@ void set_snapshot(http_context& ctx, routes& r, sharded<db::snapshot_ctl>& snap_
|
||||
if (keynames.size() > 1) {
|
||||
throw httpd::bad_param_exception("Only one keyspace allowed when specifying a column family");
|
||||
}
|
||||
co_await snap_ctl.local().take_column_family_snapshot(keynames[0], column_families, tag, opts);
|
||||
co_await snap_ctl.local().take_column_family_snapshot(keynames[0], column_families, tag, sf);
|
||||
}
|
||||
co_return json_void();
|
||||
} catch (...) {
|
||||
@@ -2052,27 +1990,6 @@ void set_snapshot(http_context& ctx, routes& r, sharded<db::snapshot_ctl>& snap_
|
||||
}
|
||||
});
|
||||
|
||||
ss::take_cluster_snapshot.set(r, [&snap_ctl](std::unique_ptr<http::request> req) -> future<json::json_return_type> {
|
||||
apilog.info("take_cluster_snapshot: {}", req->get_query_params());
|
||||
auto tag = req->get_query_param("tag");
|
||||
auto column_families = split(req->get_query_param("table"), ",");
|
||||
// Note: not published/active. Retain as internal option, but...
|
||||
auto sfopt = req->get_query_param("skip_flush");
|
||||
|
||||
db::snapshot_options opts = {
|
||||
.skip_flush = strcasecmp(sfopt.c_str(), "true") == 0,
|
||||
};
|
||||
|
||||
std::vector<sstring> keynames = split(req->get_query_param("keyspace"), ",");
|
||||
try {
|
||||
co_await snap_ctl.local().take_cluster_column_family_snapshot(keynames, column_families, tag, opts);
|
||||
co_return json_void();
|
||||
} catch (...) {
|
||||
apilog.error("take_cluster_snapshot failed: {}", std::current_exception());
|
||||
throw;
|
||||
}
|
||||
});
|
||||
|
||||
ss::del_snapshot.set(r, [&snap_ctl](std::unique_ptr<http::request> req) -> future<json::json_return_type> {
|
||||
apilog.info("del_snapshot: {}", req->get_query_params());
|
||||
auto tag = req->get_query_param("tag");
|
||||
@@ -2099,8 +2016,7 @@ void set_snapshot(http_context& ctx, routes& r, sharded<db::snapshot_ctl>& snap_
|
||||
auto info = parse_scrub_options(ctx, std::move(req));
|
||||
|
||||
if (!info.snapshot_tag.empty()) {
|
||||
db::snapshot_options opts = {.skip_flush = false};
|
||||
co_await snap_ctl.local().take_column_family_snapshot(info.keyspace, info.column_families, info.snapshot_tag, opts);
|
||||
co_await snap_ctl.local().take_column_family_snapshot(info.keyspace, info.column_families, info.snapshot_tag, db::snapshot_ctl::skip_flush::no);
|
||||
}
|
||||
|
||||
compaction::compaction_stats stats;
|
||||
@@ -2163,7 +2079,6 @@ void unset_snapshot(http_context& ctx, routes& r) {
|
||||
ss::start_backup.unset(r);
|
||||
cf::get_true_snapshots_size.unset(r);
|
||||
cf::get_all_true_snapshots_size.unset(r);
|
||||
ss::decommission.unset(r);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
@@ -66,7 +66,7 @@ struct scrub_info {

scrub_info parse_scrub_options(const http_context& ctx, std::unique_ptr<http::request> req);

void set_storage_service(http_context& ctx, httpd::routes& r, sharded<service::storage_service>& ss, sharded<db::snapshot_ctl>&, service::raft_group0_client&);
void set_storage_service(http_context& ctx, httpd::routes& r, sharded<service::storage_service>& ss, service::raft_group0_client&);
void unset_storage_service(http_context& ctx, httpd::routes& r);
void set_sstables_loader(http_context& ctx, httpd::routes& r, sharded<sstables_loader>& sst_loader);
void unset_sstables_loader(http_context& ctx, httpd::routes& r);

@@ -190,13 +190,6 @@ void set_system(http_context& ctx, routes& r) {
return make_ready_future<json::json_return_type>(seastar::to_sstring(format));
});
});

hs::get_chosen_sstable_version.set(r, [&ctx] (std::unique_ptr<request> req) {
return smp::submit_to(0, [&ctx] {
auto format = ctx.db.local().get_user_sstables_manager().get_preferred_sstable_version();
return make_ready_future<json::json_return_type>(seastar::to_sstring(format));
});
});
}

}

@@ -9,7 +9,6 @@
#include <seastar/core/chunked_fifo.hh>
#include <seastar/core/coroutine.hh>
#include <seastar/coroutine/exception.hh>
#include <seastar/coroutine/maybe_yield.hh>
#include <seastar/http/exception.hh>

#include "task_manager.hh"
@@ -265,7 +264,7 @@ void set_task_manager(http_context& ctx, routes& r, sharded<tasks::task_manager>
if (id) {
module->unregister_task(id);
}
co_await coroutine::maybe_yield();
co_await maybe_yield();
}
});
co_return json_void();

130
api/tasks.cc
@@ -38,78 +38,76 @@ static auto wrap_ks_cf(http_context &ctx, ks_cf_func f) {
};
}

static future<shared_ptr<compaction::major_keyspace_compaction_task_impl>> force_keyspace_compaction(http_context& ctx, std::unique_ptr<http::request> req) {
auto& db = ctx.db;
auto [ keyspace, table_infos ] = parse_table_infos(ctx, *req, "cf");
auto flush = validate_bool_x(req->get_query_param("flush_memtables"), true);
auto consider_only_existing_data = validate_bool_x(req->get_query_param("consider_only_existing_data"), false);
apilog.info("force_keyspace_compaction: keyspace={} tables={}, flush={} consider_only_existing_data={}", keyspace, table_infos, flush, consider_only_existing_data);

auto& compaction_module = db.local().get_compaction_manager().get_task_manager_module();
std::optional<compaction::flush_mode> fmopt;
if (!flush && !consider_only_existing_data) {
fmopt = compaction::flush_mode::skip;
}
return compaction_module.make_and_start_task<compaction::major_keyspace_compaction_task_impl>({}, std::move(keyspace), tasks::task_id::create_null_id(), db, table_infos, fmopt, consider_only_existing_data);
}

static future<shared_ptr<compaction::upgrade_sstables_compaction_task_impl>> upgrade_sstables(http_context& ctx, std::unique_ptr<http::request> req, sstring keyspace, std::vector<table_info> table_infos) {
auto& db = ctx.db;
bool exclude_current_version = req_param<bool>(*req, "exclude_current_version", false);

apilog.info("upgrade_sstables: keyspace={} tables={} exclude_current_version={}", keyspace, table_infos, exclude_current_version);

auto& compaction_module = db.local().get_compaction_manager().get_task_manager_module();
return compaction_module.make_and_start_task<compaction::upgrade_sstables_compaction_task_impl>({}, std::move(keyspace), db, table_infos, exclude_current_version);
}

static future<shared_ptr<compaction::cleanup_keyspace_compaction_task_impl>> force_keyspace_cleanup(http_context& ctx, sharded<service::storage_service>& ss, std::unique_ptr<http::request> req) {
auto& db = ctx.db;
auto [keyspace, table_infos] = parse_table_infos(ctx, *req);
const auto& rs = db.local().find_keyspace(keyspace).get_replication_strategy();
if (rs.is_local() || !rs.is_vnode_based()) {
auto reason = rs.is_local() ? "require" : "support";
apilog.info("Keyspace {} does not {} cleanup", keyspace, reason);
co_return nullptr;
}
apilog.info("force_keyspace_cleanup: keyspace={} tables={}", keyspace, table_infos);
if (!co_await ss.local().is_vnodes_cleanup_allowed(keyspace)) {
auto msg = "Can not perform cleanup operation when topology changes";
apilog.warn("force_keyspace_cleanup: keyspace={} tables={}: {}", keyspace, table_infos, msg);
co_await coroutine::return_exception(std::runtime_error(msg));
}

auto& compaction_module = db.local().get_compaction_manager().get_task_manager_module();
co_return co_await compaction_module.make_and_start_task<compaction::cleanup_keyspace_compaction_task_impl>(
{}, std::move(keyspace), db, table_infos, compaction::flush_mode::all_tables, tasks::is_user_task::yes);
}

void set_tasks_compaction_module(http_context& ctx, routes& r, sharded<service::storage_service>& ss, sharded<db::snapshot_ctl>& snap_ctl) {
t::force_keyspace_compaction_async.set(r, [&ctx](std::unique_ptr<http::request> req) -> future<json::json_return_type> {
auto task = co_await force_keyspace_compaction(ctx, std::move(req));
auto& db = ctx.db;
auto [ keyspace, table_infos ] = parse_table_infos(ctx, *req, "cf");
auto flush = validate_bool_x(req->get_query_param("flush_memtables"), true);
apilog.debug("force_keyspace_compaction_async: keyspace={} tables={}, flush={}", keyspace, table_infos, flush);

auto& compaction_module = db.local().get_compaction_manager().get_task_manager_module();
std::optional<compaction::flush_mode> fmopt;
if (!flush) {
fmopt = compaction::flush_mode::skip;
}
auto task = co_await compaction_module.make_and_start_task<compaction::major_keyspace_compaction_task_impl>({}, std::move(keyspace), tasks::task_id::create_null_id(), db, table_infos, fmopt);

co_return json::json_return_type(task->get_status().id.to_sstring());
});

ss::force_keyspace_compaction.set(r, [&ctx](std::unique_ptr<http::request> req) -> future<json::json_return_type> {
auto task = co_await force_keyspace_compaction(ctx, std::move(req));
auto& db = ctx.db;
auto [ keyspace, table_infos ] = parse_table_infos(ctx, *req, "cf");
auto flush = validate_bool_x(req->get_query_param("flush_memtables"), true);
auto consider_only_existing_data = validate_bool_x(req->get_query_param("consider_only_existing_data"), false);
apilog.info("force_keyspace_compaction: keyspace={} tables={}, flush={} consider_only_existing_data={}", keyspace, table_infos, flush, consider_only_existing_data);

auto& compaction_module = db.local().get_compaction_manager().get_task_manager_module();
std::optional<compaction::flush_mode> fmopt;
if (!flush && !consider_only_existing_data) {
fmopt = compaction::flush_mode::skip;
}
auto task = co_await compaction_module.make_and_start_task<compaction::major_keyspace_compaction_task_impl>({}, std::move(keyspace), tasks::task_id::create_null_id(), db, table_infos, fmopt, consider_only_existing_data);
co_await task->done();
co_return json_void();
});

t::force_keyspace_cleanup_async.set(r, [&ctx, &ss](std::unique_ptr<http::request> req) -> future<json::json_return_type> {
tasks::task_id id = tasks::task_id::create_null_id();
auto task = co_await force_keyspace_cleanup(ctx, ss, std::move(req));
if (task) {
id = task->get_status().id;
auto& db = ctx.db;
auto [keyspace, table_infos] = parse_table_infos(ctx, *req);
apilog.info("force_keyspace_cleanup_async: keyspace={} tables={}", keyspace, table_infos);
if (!co_await ss.local().is_vnodes_cleanup_allowed(keyspace)) {
auto msg = "Can not perform cleanup operation when topology changes";
apilog.warn("force_keyspace_cleanup_async: keyspace={} tables={}: {}", keyspace, table_infos, msg);
co_await coroutine::return_exception(std::runtime_error(msg));
}
co_return json::json_return_type(id.to_sstring());

auto& compaction_module = db.local().get_compaction_manager().get_task_manager_module();
auto task = co_await compaction_module.make_and_start_task<compaction::cleanup_keyspace_compaction_task_impl>({}, std::move(keyspace), db, table_infos, compaction::flush_mode::all_tables, tasks::is_user_task::yes);

co_return json::json_return_type(task->get_status().id.to_sstring());
});

ss::force_keyspace_cleanup.set(r, [&ctx, &ss](std::unique_ptr<http::request> req) -> future<json::json_return_type> {
auto task = co_await force_keyspace_cleanup(ctx, ss, std::move(req));
if (task) {
co_await task->done();
auto& db = ctx.db;
auto [keyspace, table_infos] = parse_table_infos(ctx, *req);
const auto& rs = db.local().find_keyspace(keyspace).get_replication_strategy();
if (rs.is_local() || !rs.is_vnode_based()) {
auto reason = rs.is_local() ? "require" : "support";
apilog.info("Keyspace {} does not {} cleanup", keyspace, reason);
co_return json::json_return_type(0);
}
apilog.info("force_keyspace_cleanup: keyspace={} tables={}", keyspace, table_infos);
if (!co_await ss.local().is_vnodes_cleanup_allowed(keyspace)) {
auto msg = "Can not perform cleanup operation when topology changes";
apilog.warn("force_keyspace_cleanup: keyspace={} tables={}: {}", keyspace, table_infos, msg);
co_await coroutine::return_exception(std::runtime_error(msg));
}

auto& compaction_module = db.local().get_compaction_manager().get_task_manager_module();
auto task = co_await compaction_module.make_and_start_task<compaction::cleanup_keyspace_compaction_task_impl>(
{}, std::move(keyspace), db, table_infos, compaction::flush_mode::all_tables, tasks::is_user_task::yes);
co_await task->done();
co_return json::json_return_type(0);
});

@@ -131,12 +129,25 @@ void set_tasks_compaction_module(http_context& ctx, routes& r, sharded<service::
}));

t::upgrade_sstables_async.set(r, wrap_ks_cf(ctx, [] (http_context& ctx, std::unique_ptr<http::request> req, sstring keyspace, std::vector<table_info> table_infos) -> future<json::json_return_type> {
auto task = co_await upgrade_sstables(ctx, std::move(req), std::move(keyspace), std::move(table_infos));
auto& db = ctx.db;
bool exclude_current_version = req_param<bool>(*req, "exclude_current_version", false);

apilog.info("upgrade_sstables: keyspace={} tables={} exclude_current_version={}", keyspace, table_infos, exclude_current_version);

auto& compaction_module = db.local().get_compaction_manager().get_task_manager_module();
auto task = co_await compaction_module.make_and_start_task<compaction::upgrade_sstables_compaction_task_impl>({}, std::move(keyspace), db, table_infos, exclude_current_version);

co_return json::json_return_type(task->get_status().id.to_sstring());
}));

ss::upgrade_sstables.set(r, wrap_ks_cf(ctx, [] (http_context& ctx, std::unique_ptr<http::request> req, sstring keyspace, std::vector<table_info> table_infos) -> future<json::json_return_type> {
auto task = co_await upgrade_sstables(ctx, std::move(req), std::move(keyspace), std::move(table_infos));
auto& db = ctx.db;
bool exclude_current_version = req_param<bool>(*req, "exclude_current_version", false);

apilog.info("upgrade_sstables: keyspace={} tables={} exclude_current_version={}", keyspace, table_infos, exclude_current_version);

auto& compaction_module = db.local().get_compaction_manager().get_task_manager_module();
auto task = co_await compaction_module.make_and_start_task<compaction::upgrade_sstables_compaction_task_impl>({}, std::move(keyspace), db, table_infos, exclude_current_version);
co_await task->done();
co_return json::json_return_type(0);
}));
@@ -146,8 +157,7 @@ void set_tasks_compaction_module(http_context& ctx, routes& r, sharded<service::
auto info = parse_scrub_options(ctx, std::move(req));

if (!info.snapshot_tag.empty()) {
db::snapshot_options opts = {.skip_flush = false};
co_await snap_ctl.local().take_column_family_snapshot(info.keyspace, info.column_families, info.snapshot_tag, opts);
co_await snap_ctl.local().take_column_family_snapshot(info.keyspace, info.column_families, info.snapshot_tag, db::snapshot_ctl::skip_flush::no);
}

auto& compaction_module = db.local().get_compaction_manager().get_task_manager_module();

@@ -62,17 +62,6 @@ void set_token_metadata(http_context& ctx, routes& r, sharded<locator::shared_to
return addr | std::ranges::to<std::vector>();
});

ss::get_excluded_nodes.set(r, [&tm](const_req req) {
const auto& local_tm = *tm.local().get();
std::vector<sstring> eps;
local_tm.get_topology().for_each_node([&] (auto& node) {
if (node.is_excluded()) {
eps.push_back(node.host_id().to_sstring());
}
});
return eps;
});

ss::get_joining_nodes.set(r, [&tm, &g](const_req req) {
const auto& local_tm = *tm.local().get();
const auto& points = local_tm.get_bootstrap_tokens();
@@ -141,7 +130,6 @@ void unset_token_metadata(http_context& ctx, routes& r) {
ss::get_leaving_nodes.unset(r);
ss::get_moving_nodes.unset(r);
ss::get_joining_nodes.unset(r);
ss::get_excluded_nodes.unset(r);
ss::get_host_id_map.unset(r);
httpd::endpoint_snitch_info_json::get_datacenter.unset(r);
httpd::endpoint_snitch_info_json::get_rack.unset(r);

@@ -5,7 +5,6 @@ target_sources(scylla_audit
PRIVATE
audit.cc
audit_cf_storage_helper.cc
audit_composite_storage_helper.cc
audit_syslog_storage_helper.cc)
target_include_directories(scylla_audit
PUBLIC
@@ -17,7 +16,4 @@ target_link_libraries(scylla_audit
PRIVATE
cql3)

if (Scylla_USE_PRECOMPILED_HEADER_USE)
target_precompile_headers(scylla_audit REUSE_FROM scylla-precompiled-header)
endif()
add_whole_archive(audit scylla_audit)

120
audit/audit.cc
@@ -13,11 +13,9 @@
|
||||
#include "cql3/statements/batch_statement.hh"
|
||||
#include "cql3/statements/modification_statement.hh"
|
||||
#include "storage_helper.hh"
|
||||
#include "audit_cf_storage_helper.hh"
|
||||
#include "audit_syslog_storage_helper.hh"
|
||||
#include "audit_composite_storage_helper.hh"
|
||||
#include "audit.hh"
|
||||
#include "../db/config.hh"
|
||||
#include "utils/class_registrator.hh"
|
||||
|
||||
#include <boost/algorithm/string/split.hpp>
|
||||
#include <boost/algorithm/string/trim.hpp>
|
||||
@@ -28,47 +26,6 @@ namespace audit {
|
||||
|
||||
logging::logger logger("audit");
|
||||
|
||||
static std::set<sstring> parse_audit_modes(const sstring& data) {
|
||||
std::set<sstring> result;
|
||||
if (!data.empty()) {
|
||||
std::vector<sstring> audit_modes;
|
||||
boost::split(audit_modes, data, boost::is_any_of(","));
|
||||
if (audit_modes.empty()) {
|
||||
return {};
|
||||
}
|
||||
for (sstring& audit_mode : audit_modes) {
|
||||
boost::trim(audit_mode);
|
||||
if (audit_mode == "none") {
|
||||
return {};
|
||||
}
|
||||
if (audit_mode != "table" && audit_mode != "syslog") {
|
||||
throw audit_exception(fmt::format("Bad configuration: invalid 'audit': {}", audit_mode));
|
||||
}
|
||||
result.insert(std::move(audit_mode));
|
||||
}
|
||||
}
|
||||
return result;
|
||||
}
|
||||
|
||||
static std::unique_ptr<storage_helper> create_storage_helper(const std::set<sstring>& audit_modes, cql3::query_processor& qp, service::migration_manager& mm) {
|
||||
SCYLLA_ASSERT(!audit_modes.empty() && !audit_modes.contains("none"));
|
||||
|
||||
std::vector<std::unique_ptr<storage_helper>> helpers;
|
||||
for (const sstring& audit_mode : audit_modes) {
|
||||
if (audit_mode == "table") {
|
||||
helpers.emplace_back(std::make_unique<audit_cf_storage_helper>(qp, mm));
|
||||
} else if (audit_mode == "syslog") {
|
||||
helpers.emplace_back(std::make_unique<audit_syslog_storage_helper>(qp, mm));
|
||||
}
|
||||
}
|
||||
|
||||
SCYLLA_ASSERT(!helpers.empty());
|
||||
if (helpers.size() == 1) {
|
||||
return std::move(helpers.front());
|
||||
}
|
||||
return std::make_unique<audit_composite_storage_helper>(std::move(helpers));
|
||||
}
|
||||
|
||||
static sstring category_to_string(statement_category category)
|
||||
{
|
||||
switch (category) {
|
||||
@@ -146,9 +103,7 @@ static std::set<sstring> parse_audit_keyspaces(const sstring& data) {
|
||||
}
|
||||
|
||||
audit::audit(locator::shared_token_metadata& token_metadata,
|
||||
cql3::query_processor& qp,
|
||||
service::migration_manager& mm,
|
||||
std::set<sstring>&& audit_modes,
|
||||
sstring&& storage_helper_name,
|
||||
std::set<sstring>&& audited_keyspaces,
|
||||
std::map<sstring, std::set<sstring>>&& audited_tables,
|
||||
category_set&& audited_categories,
|
||||
@@ -157,21 +112,28 @@ audit::audit(locator::shared_token_metadata& token_metadata,
|
||||
, _audited_keyspaces(std::move(audited_keyspaces))
|
||||
, _audited_tables(std::move(audited_tables))
|
||||
, _audited_categories(std::move(audited_categories))
|
||||
, _storage_helper_class_name(std::move(storage_helper_name))
|
||||
, _cfg(cfg)
|
||||
, _cfg_keyspaces_observer(cfg.audit_keyspaces.observe([this] (sstring const& new_value){ update_config<std::set<sstring>>(new_value, parse_audit_keyspaces, _audited_keyspaces); }))
|
||||
, _cfg_tables_observer(cfg.audit_tables.observe([this] (sstring const& new_value){ update_config<std::map<sstring, std::set<sstring>>>(new_value, parse_audit_tables, _audited_tables); }))
|
||||
, _cfg_categories_observer(cfg.audit_categories.observe([this] (sstring const& new_value){ update_config<category_set>(new_value, parse_audit_categories, _audited_categories); }))
|
||||
{
|
||||
_storage_helper_ptr = create_storage_helper(std::move(audit_modes), qp, mm);
|
||||
}
|
||||
{ }
|
||||
|
||||
audit::~audit() = default;
|
||||
|
||||
future<> audit::start_audit(const db::config& cfg, sharded<locator::shared_token_metadata>& stm, sharded<cql3::query_processor>& qp, sharded<service::migration_manager>& mm) {
|
||||
std::set<sstring> audit_modes = parse_audit_modes(cfg.audit());
|
||||
if (audit_modes.empty()) {
|
||||
future<> audit::create_audit(const db::config& cfg, sharded<locator::shared_token_metadata>& stm) {
|
||||
sstring storage_helper_name;
|
||||
if (cfg.audit() == "table") {
|
||||
storage_helper_name = "audit_cf_storage_helper";
|
||||
} else if (cfg.audit() == "syslog") {
|
||||
storage_helper_name = "audit_syslog_storage_helper";
|
||||
} else if (cfg.audit() == "none") {
|
||||
// Audit is off
|
||||
logger.info("Audit is disabled");
|
||||
|
||||
return make_ready_future<>();
|
||||
} else {
|
||||
throw audit_exception(fmt::format("Bad configuration: invalid 'audit': {}", cfg.audit()));
|
||||
}
|
||||
category_set audited_categories = parse_audit_categories(cfg.audit_categories());
|
||||
std::map<sstring, std::set<sstring>> audited_tables = parse_audit_tables(cfg.audit_tables());
|
||||
@@ -181,20 +143,19 @@ future<> audit::start_audit(const db::config& cfg, sharded<locator::shared_token
|
||||
cfg.audit(), cfg.audit_categories(), cfg.audit_keyspaces(), cfg.audit_tables());
|
||||
|
||||
return audit_instance().start(std::ref(stm),
|
||||
std::ref(qp),
|
||||
std::ref(mm),
|
||||
std::move(audit_modes),
|
||||
std::move(storage_helper_name),
|
||||
std::move(audited_keyspaces),
|
||||
std::move(audited_tables),
|
||||
std::move(audited_categories),
|
||||
std::cref(cfg))
|
||||
.then([&cfg] {
|
||||
if (!audit_instance().local_is_initialized()) {
|
||||
return make_ready_future<>();
|
||||
}
|
||||
return audit_instance().invoke_on_all([&cfg] (audit& local_audit) {
|
||||
return local_audit.start(cfg);
|
||||
});
|
||||
std::cref(cfg));
|
||||
}
|
||||
|
||||
future<> audit::start_audit(const db::config& cfg, sharded<cql3::query_processor>& qp, sharded<service::migration_manager>& mm) {
|
||||
if (!audit_instance().local_is_initialized()) {
|
||||
return make_ready_future<>();
|
||||
}
|
||||
return audit_instance().invoke_on_all([&cfg, &qp, &mm] (audit& local_audit) {
|
||||
return local_audit.start(cfg, qp.local(), mm.local());
|
||||
});
|
||||
}
|
||||
|
||||
@@ -209,14 +170,26 @@ future<> audit::stop_audit() {
|
||||
});
|
||||
}
|
||||
|
||||
audit_info_ptr audit::create_audit_info(statement_category cat, const sstring& keyspace, const sstring& table, bool batch) {
|
||||
audit_info_ptr audit::create_audit_info(statement_category cat, const sstring& keyspace, const sstring& table) {
|
||||
if (!audit_instance().local_is_initialized()) {
|
||||
return nullptr;
|
||||
}
|
||||
return std::make_unique<audit_info>(cat, keyspace, table, batch);
|
||||
return std::make_unique<audit_info>(cat, keyspace, table);
|
||||
}
|
||||
|
||||
future<> audit::start(const db::config& cfg) {
|
||||
audit_info_ptr audit::create_no_audit_info() {
|
||||
return audit_info_ptr();
|
||||
}
|
||||
|
||||
future<> audit::start(const db::config& cfg, cql3::query_processor& qp, service::migration_manager& mm) {
|
||||
try {
|
||||
_storage_helper_ptr = create_object<storage_helper>(_storage_helper_class_name, qp, mm);
|
||||
} catch (no_such_class& e) {
|
||||
logger.error("Can't create audit storage helper {}: not supported", _storage_helper_class_name);
|
||||
throw;
|
||||
} catch (...) {
|
||||
throw;
|
||||
}
|
||||
return _storage_helper_ptr->start(cfg);
|
||||
}
|
||||
|
||||
@@ -263,21 +236,18 @@ future<> audit::log_login(const sstring& username, socket_address client_ip, boo
|
||||
}
|
||||
|
||||
future<> inspect(shared_ptr<cql3::cql_statement> statement, service::query_state& query_state, const cql3::query_options& options, bool error) {
|
||||
auto audit_info = statement->get_audit_info();
|
||||
if (!audit_info) {
|
||||
return make_ready_future<>();
|
||||
}
|
||||
if (audit_info->batch()) {
|
||||
cql3::statements::batch_statement* batch = static_cast<cql3::statements::batch_statement*>(statement.get());
|
||||
cql3::statements::batch_statement* batch = dynamic_cast<cql3::statements::batch_statement*>(statement.get());
|
||||
if (batch != nullptr) {
|
||||
return do_for_each(batch->statements().begin(), batch->statements().end(), [&query_state, &options, error] (auto&& m) {
|
||||
return inspect(m.statement, query_state, options, error);
|
||||
});
|
||||
} else {
|
||||
if (audit::local_audit_instance().should_log(audit_info)) {
|
||||
auto audit_info = statement->get_audit_info();
|
||||
if (bool(audit_info) && audit::local_audit_instance().should_log(audit_info)) {
|
||||
return audit::local_audit_instance().log(audit_info, query_state, options, error);
|
||||
}
|
||||
return make_ready_future<>();
|
||||
}
|
||||
return make_ready_future<>();
|
||||
}
|
||||
|
||||
future<> inspect_login(const sstring& username, socket_address client_ip, bool error) {
|
||||
|
||||
@@ -75,13 +75,11 @@ class audit_info final {
|
||||
sstring _keyspace;
|
||||
sstring _table;
|
||||
sstring _query;
|
||||
bool _batch;
|
||||
public:
|
||||
audit_info(statement_category cat, sstring keyspace, sstring table, bool batch)
|
||||
audit_info(statement_category cat, sstring keyspace, sstring table)
|
||||
: _category(cat)
|
||||
, _keyspace(std::move(keyspace))
|
||||
, _table(std::move(table))
|
||||
, _batch(batch)
|
||||
{ }
|
||||
void set_query_string(const std::string_view& query_string) {
|
||||
_query = sstring(query_string);
|
||||
@@ -91,7 +89,6 @@ public:
|
||||
const sstring& query() const { return _query; }
|
||||
sstring category_string() const;
|
||||
statement_category category() const { return _category; }
|
||||
bool batch() const { return _batch; }
|
||||
};
|
||||
|
||||
using audit_info_ptr = std::unique_ptr<audit_info>;
|
||||
@@ -105,6 +102,7 @@ class audit final : public seastar::async_sharded_service<audit> {
|
||||
std::map<sstring, std::set<sstring>> _audited_tables;
|
||||
category_set _audited_categories;
|
||||
|
||||
sstring _storage_helper_class_name;
|
||||
std::unique_ptr<storage_helper> _storage_helper_ptr;
|
||||
|
||||
const db::config& _cfg;
|
||||
@@ -127,19 +125,18 @@ public:
|
||||
static audit& local_audit_instance() {
|
||||
return audit_instance().local();
|
||||
}
|
||||
static future<> start_audit(const db::config& cfg, sharded<locator::shared_token_metadata>& stm, sharded<cql3::query_processor>& qp, sharded<service::migration_manager>& mm);
|
||||
static future<> create_audit(const db::config& cfg, sharded<locator::shared_token_metadata>& stm);
|
||||
static future<> start_audit(const db::config& cfg, sharded<cql3::query_processor>& qp, sharded<service::migration_manager>& mm);
|
||||
static future<> stop_audit();
|
||||
static audit_info_ptr create_audit_info(statement_category cat, const sstring& keyspace, const sstring& table, bool batch = false);
|
||||
audit(locator::shared_token_metadata& stm,
|
||||
cql3::query_processor& qp,
|
||||
service::migration_manager& mm,
|
||||
std::set<sstring>&& audit_modes,
|
||||
static audit_info_ptr create_audit_info(statement_category cat, const sstring& keyspace, const sstring& table);
|
||||
static audit_info_ptr create_no_audit_info();
|
||||
audit(locator::shared_token_metadata& stm, sstring&& storage_helper_name,
|
||||
std::set<sstring>&& audited_keyspaces,
|
||||
std::map<sstring, std::set<sstring>>&& audited_tables,
|
||||
category_set&& audited_categories,
|
||||
const db::config& cfg);
|
||||
~audit();
|
||||
future<> start(const db::config& cfg);
|
||||
future<> start(const db::config& cfg, cql3::query_processor& qp, service::migration_manager& mm);
|
||||
future<> stop();
|
||||
future<> shutdown();
|
||||
bool should_log(const audit_info* audit_info) const;
|
||||
|
||||
@@ -11,6 +11,7 @@
#include "cql3/query_processor.hh"
#include "data_dictionary/keyspace_metadata.hh"
#include "utils/UUID_gen.hh"
#include "utils/class_registrator.hh"
#include "cql3/query_options.hh"
#include "cql3/statements/ks_prop_defs.hh"
#include "service/migration_manager.hh"
@@ -197,4 +198,7 @@ cql3::query_options audit_cf_storage_helper::make_login_data(socket_address node
    return cql3::query_options(cql3::default_cql_config, db::consistency_level::ONE, std::nullopt, std::move(values), false, cql3::query_options::specific_options::DEFAULT);
}

using registry = class_registrator<storage_helper, audit_cf_storage_helper, cql3::query_processor&, service::migration_manager&>;
static registry registrator1("audit_cf_storage_helper");

}
@@ -1,68 +0,0 @@
/*
 * Copyright (C) 2025 ScyllaDB
 */

/*
 * SPDX-License-Identifier: LicenseRef-ScyllaDB-Source-Available-1.0
 */

#include <seastar/core/loop.hh>
#include <seastar/core/future-util.hh>

#include "audit/audit_composite_storage_helper.hh"

#include "utils/class_registrator.hh"

namespace audit {

audit_composite_storage_helper::audit_composite_storage_helper(std::vector<std::unique_ptr<storage_helper>>&& storage_helpers)
    : _storage_helpers(std::move(storage_helpers))
{}

future<> audit_composite_storage_helper::start(const db::config& cfg) {
    auto res = seastar::parallel_for_each(
        _storage_helpers,
        [&cfg] (std::unique_ptr<storage_helper>& h) {
            return h->start(cfg);
        }
    );
    return res;
}

future<> audit_composite_storage_helper::stop() {
    auto res = seastar::parallel_for_each(
        _storage_helpers,
        [] (std::unique_ptr<storage_helper>& h) {
            return h->stop();
        }
    );
    return res;
}

future<> audit_composite_storage_helper::write(const audit_info* audit_info,
        socket_address node_ip,
        socket_address client_ip,
        db::consistency_level cl,
        const sstring& username,
        bool error) {
    return seastar::parallel_for_each(
        _storage_helpers,
        [audit_info, node_ip, client_ip, cl, &username, error](std::unique_ptr<storage_helper>& h) {
            return h->write(audit_info, node_ip, client_ip, cl, username, error);
        }
    );
}

future<> audit_composite_storage_helper::write_login(const sstring& username,
        socket_address node_ip,
        socket_address client_ip,
        bool error) {
    return seastar::parallel_for_each(
        _storage_helpers,
        [&username, node_ip, client_ip, error](std::unique_ptr<storage_helper>& h) {
            return h->write_login(username, node_ip, client_ip, error);
        }
    );
}

} // namespace audit
@@ -1,37 +0,0 @@
/*
 * Copyright (C) 2025 ScyllaDB
 */

/*
 * SPDX-License-Identifier: LicenseRef-ScyllaDB-Source-Available-1.0
 */
#pragma once

#include "audit/audit.hh"
#include <seastar/core/future.hh>

#include "storage_helper.hh"

namespace audit {

class audit_composite_storage_helper : public storage_helper {
    std::vector<std::unique_ptr<storage_helper>> _storage_helpers;

public:
    explicit audit_composite_storage_helper(std::vector<std::unique_ptr<storage_helper>>&&);
    virtual ~audit_composite_storage_helper() = default;
    virtual future<> start(const db::config& cfg) override;
    virtual future<> stop() override;
    virtual future<> write(const audit_info* audit_info,
            socket_address node_ip,
            socket_address client_ip,
            db::consistency_level cl,
            const sstring& username,
            bool error) override;
    virtual future<> write_login(const sstring& username,
            socket_address node_ip,
            socket_address client_ip,
            bool error) override;
};

} // namespace audit
@@ -21,6 +21,7 @@
#include <fmt/chrono.h>

#include "cql3/query_processor.hh"
#include "utils/class_registrator.hh"

namespace cql3 {

@@ -53,10 +54,10 @@ static std::string json_escape(std::string_view str) {

}

future<> audit_syslog_storage_helper::syslog_send_helper(temporary_buffer<char> msg) {
future<> audit_syslog_storage_helper::syslog_send_helper(const sstring& msg) {
    try {
        auto lock = co_await get_units(_semaphore, 1, std::chrono::hours(1));
        co_await _sender.send(_syslog_address, std::span(&msg, 1));
        co_await _sender.send(_syslog_address, net::packet{msg.data(), msg.size()});
    }
    catch (const std::exception& e) {
        auto error_msg = seastar::format(
@@ -90,7 +91,7 @@ future<> audit_syslog_storage_helper::start(const db::config& cfg) {
        co_return;
    }

    co_await syslog_send_helper(temporary_buffer<char>::copy_of("Initializing syslog audit backend."));
    co_await syslog_send_helper("Initializing syslog audit backend.");
}

future<> audit_syslog_storage_helper::stop() {
@@ -120,7 +121,7 @@ future<> audit_syslog_storage_helper::write(const audit_info* audit_info,
        audit_info->table(),
        username);

    co_await syslog_send_helper(std::move(msg).release());
    co_await syslog_send_helper(msg);
}

future<> audit_syslog_storage_helper::write_login(const sstring& username,
@@ -139,7 +140,10 @@ future<> audit_syslog_storage_helper::write_login(const sstring& username,
        client_ip,
        username);

    co_await syslog_send_helper(std::move(msg).release());
    co_await syslog_send_helper(msg.c_str());
}

using registry = class_registrator<storage_helper, audit_syslog_storage_helper, cql3::query_processor&, service::migration_manager&>;
static registry registrator1("audit_syslog_storage_helper");

}
@@ -26,7 +26,7 @@ class audit_syslog_storage_helper : public storage_helper {
    net::datagram_channel _sender;
    seastar::semaphore _semaphore;

    future<> syslog_send_helper(seastar::temporary_buffer<char> msg);
    future<> syslog_send_helper(const sstring& msg);
public:
    explicit audit_syslog_storage_helper(cql3::query_processor&, service::migration_manager&);
    virtual ~audit_syslog_storage_helper();
@@ -9,7 +9,6 @@ target_sources(scylla_auth
  allow_all_authorizer.cc
  authenticated_user.cc
  authenticator.cc
  cache.cc
  certificate_authenticator.cc
  common.cc
  default_authorizer.cc
@@ -17,14 +16,15 @@ target_sources(scylla_auth
  password_authenticator.cc
  passwords.cc
  permission.cc
  permissions_cache.cc
  resource.cc
  role_or_anonymous.cc
  roles-metadata.cc
  sasl_challenge.cc
  saslauthd_authenticator.cc
  service.cc
  standard_role_manager.cc
  transitional.cc
  maintenance_socket_authenticator.cc
  maintenance_socket_role_manager.cc)
target_include_directories(scylla_auth
  PUBLIC
@@ -44,8 +44,5 @@ target_link_libraries(scylla_auth

add_whole_archive(auth scylla_auth)

if (Scylla_USE_PRECOMPILED_HEADER_USE)
  target_precompile_headers(scylla_auth REUSE_FROM scylla-precompiled-header)
endif()
check_headers(check-headers scylla_auth
  GLOB_RECURSE ${CMAKE_CURRENT_SOURCE_DIR}/*.hh)
@@ -9,9 +9,20 @@
#include "auth/allow_all_authenticator.hh"

#include "service/migration_manager.hh"
#include "utils/alien_worker.hh"
#include "utils/class_registrator.hh"

namespace auth {

constexpr std::string_view allow_all_authenticator_name("org.apache.cassandra.auth.AllowAllAuthenticator");

// To ensure correct initialization order, we unfortunately need to use a string literal.
static const class_registrator<
    authenticator,
    allow_all_authenticator,
    cql3::query_processor&,
    ::service::raft_group0_client&,
    ::service::migration_manager&,
    utils::alien_worker&> registration("org.apache.cassandra.auth.AllowAllAuthenticator");

}

@@ -12,8 +12,8 @@

#include "auth/authenticated_user.hh"
#include "auth/authenticator.hh"
#include "auth/cache.hh"
#include "auth/common.hh"
#include "utils/alien_worker.hh"

namespace cql3 {
class query_processor;
@@ -29,7 +29,7 @@ extern const std::string_view allow_all_authenticator_name;

class allow_all_authenticator final : public authenticator {
public:
    allow_all_authenticator(cql3::query_processor&, ::service::raft_group0_client&, ::service::migration_manager&, cache&) {
    allow_all_authenticator(cql3::query_processor&, ::service::raft_group0_client&, ::service::migration_manager&, utils::alien_worker&) {
    }

    virtual future<> start() override {
@@ -9,9 +9,18 @@
#include "auth/allow_all_authorizer.hh"

#include "auth/common.hh"
#include "utils/class_registrator.hh"

namespace auth {

constexpr std::string_view allow_all_authorizer_name("org.apache.cassandra.auth.AllowAllAuthorizer");

// To ensure correct initialization order, we unfortunately need to use a string literal.
static const class_registrator<
    authorizer,
    allow_all_authorizer,
    cql3::query_processor&,
    ::service::raft_group0_client&,
    ::service::migration_manager&> registration("org.apache.cassandra.auth.AllowAllAuthorizer");

}

@@ -26,7 +26,7 @@ extern const std::string_view allow_all_authorizer_name;

class allow_all_authorizer final : public authorizer {
public:
    allow_all_authorizer(cql3::query_processor&) {
    allow_all_authorizer(cql3::query_processor&, ::service::raft_group0_client&, ::service::migration_manager&) {
    }

    virtual future<> start() override {
357 auth/cache.cc
@@ -1,357 +0,0 @@
/*
 * Copyright (C) 2017-present ScyllaDB
 */

/*
 * SPDX-License-Identifier: LicenseRef-ScyllaDB-Source-Available-1.0
 */

#include "auth/cache.hh"
#include "auth/common.hh"
#include "auth/role_or_anonymous.hh"
#include "auth/roles-metadata.hh"
#include "cql3/query_processor.hh"
#include "cql3/untyped_result_set.hh"
#include "db/consistency_level_type.hh"
#include "db/system_keyspace.hh"
#include "schema/schema.hh"
#include <iterator>
#include <seastar/core/abort_source.hh>
#include <seastar/coroutine/maybe_yield.hh>
#include <seastar/core/format.hh>
#include <seastar/core/metrics.hh>
#include <seastar/core/do_with.hh>

namespace auth {

logging::logger logger("auth-cache");

cache::cache(cql3::query_processor& qp, abort_source& as) noexcept
    : _current_version(0)
    , _qp(qp)
    , _loading_sem(1)
    , _as(as)
    , _permission_loader(nullptr)
    , _permission_loader_sem(8) {
    namespace sm = seastar::metrics;
    _metrics.add_group("auth_cache", {
        sm::make_gauge("roles", [this] { return _roles.size(); },
            sm::description("Number of roles currently cached")),
        sm::make_gauge("permissions", [this] {
            return _cached_permissions_count;
        }, sm::description("Total number of permission sets currently cached across all roles"))
    });
}

void cache::set_permission_loader(permission_loader_func loader) {
    _permission_loader = std::move(loader);
}

lw_shared_ptr<const cache::role_record> cache::get(std::string_view role) const noexcept {
    auto it = _roles.find(role);
    if (it == _roles.end()) {
        return {};
    }
    return it->second;
}

void cache::for_each_role(const std::function<void(const role_name_t&, const role_record&)>& func) const {
    for (const auto& [name, record] : _roles) {
        func(name, *record);
    }
}

size_t cache::roles_count() const noexcept {
    return _roles.size();
}

future<permission_set> cache::get_permissions(const role_or_anonymous& role, const resource& r) {
    std::unordered_map<resource, permission_set>* perms_cache;
    lw_shared_ptr<role_record> role_ptr;

    if (is_anonymous(role)) {
        perms_cache = &_anonymous_permissions;
    } else {
        const auto& role_name = *role.name;
        auto role_it = _roles.find(role_name);
        if (role_it == _roles.end()) {
            // Role might have been deleted but there are some connections
            // left which reference it. They should no longer have access to anything.
            return make_ready_future<permission_set>(permissions::NONE);
        }
        role_ptr = role_it->second;
        perms_cache = &role_ptr->cached_permissions;
    }

    if (auto it = perms_cache->find(r); it != perms_cache->end()) {
        return make_ready_future<permission_set>(it->second);
    }
    // keep alive role_ptr as it holds perms_cache (except anonymous)
    return do_with(std::move(role_ptr), [this, &role, &r, perms_cache] (auto& role_ptr) {
        return load_permissions(role, r, perms_cache);
    });
}

future<permission_set> cache::load_permissions(const role_or_anonymous& role, const resource& r, std::unordered_map<resource, permission_set>* perms_cache) {
    SCYLLA_ASSERT(_permission_loader);
    auto units = co_await get_units(_permission_loader_sem, 1, _as);

    // Check again; perhaps we were blocked and another call loaded
    // the permissions already. This is a protection against a storm of misses.
    if (auto it = perms_cache->find(r); it != perms_cache->end()) {
        co_return it->second;
    }
    auto perms = co_await _permission_loader(role, r);
    add_permissions(*perms_cache, r, perms);
    co_return perms;
}

future<> cache::prune(const resource& r) {
    auto units = co_await get_units(_loading_sem, 1, _as);
    _anonymous_permissions.erase(r);
    for (auto& it : _roles) {
        // Pruning can run concurrently with other functions, but it
        // can only cause an extra cached_permissions reload via get_permissions.
        remove_permissions(it.second->cached_permissions, r);
        co_await coroutine::maybe_yield();
    }
}

future<> cache::reload_all_permissions() noexcept {
    SCYLLA_ASSERT(_permission_loader);
    auto units = co_await get_units(_loading_sem, 1, _as);
    auto copy_keys = [] (const std::unordered_map<resource, permission_set>& m) {
        std::vector<resource> keys;
        keys.reserve(m.size());
        for (const auto& [res, _] : m) {
            keys.push_back(res);
        }
        return keys;
    };
    const role_or_anonymous anon;
    for (const auto& res : copy_keys(_anonymous_permissions)) {
        _anonymous_permissions[res] = co_await _permission_loader(anon, res);
    }
    for (auto& [role, entry] : _roles) {
        auto& perms_cache = entry->cached_permissions;
        auto r = role_or_anonymous(role);
        for (const auto& res : copy_keys(perms_cache)) {
            perms_cache[res] = co_await _permission_loader(r, res);
        }
    }
    logger.debug("Reloaded auth cache with {} entries", _roles.size());
}

future<lw_shared_ptr<cache::role_record>> cache::fetch_role(const role_name_t& role) const {
    auto rec = make_lw_shared<role_record>();
    rec->version = _current_version;

    auto fetch = [this, &role](const sstring& q) {
        return _qp.execute_internal(q, db::consistency_level::LOCAL_ONE,
            internal_distributed_query_state(), {role},
            cql3::query_processor::cache_internal::yes);
    };
    // roles
    {
        static const sstring q = format("SELECT * FROM {}.{} WHERE role = ?", db::system_keyspace::NAME, meta::roles_table::name);
        auto rs = co_await fetch(q);
        if (!rs->empty()) {
            auto& r = rs->one();
            rec->is_superuser = r.get_or<bool>("is_superuser", false);
            rec->can_login = r.get_or<bool>("can_login", false);
            rec->salted_hash = r.get_or<sstring>("salted_hash", "");
            if (r.has("member_of")) {
                auto mo = r.get_set<sstring>("member_of");
                rec->member_of.insert(
                    std::make_move_iterator(mo.begin()),
                    std::make_move_iterator(mo.end()));
            }
        } else {
            // role got deleted
            co_return nullptr;
        }
    }
    // members
    {
        static const sstring q = format("SELECT role, member FROM {}.{} WHERE role = ?", db::system_keyspace::NAME, ROLE_MEMBERS_CF);
        auto rs = co_await fetch(q);
        for (const auto& r : *rs) {
            rec->members.insert(r.get_as<sstring>("member"));
            co_await coroutine::maybe_yield();
        }
    }
    // attributes
    {
        static const sstring q = format("SELECT role, name, value FROM {}.{} WHERE role = ?", db::system_keyspace::NAME, ROLE_ATTRIBUTES_CF);
        auto rs = co_await fetch(q);
        for (const auto& r : *rs) {
            rec->attributes[r.get_as<sstring>("name")] =
                r.get_as<sstring>("value");
            co_await coroutine::maybe_yield();
        }
    }
    // permissions
    {
        static const sstring q = format("SELECT role, resource, permissions FROM {}.{} WHERE role = ?", db::system_keyspace::NAME, PERMISSIONS_CF);
        auto rs = co_await fetch(q);
        for (const auto& r : *rs) {
            auto resource = r.get_as<sstring>("resource");
            auto perms_strings = r.get_set<sstring>("permissions");
            std::unordered_set<sstring> perms_set(perms_strings.begin(), perms_strings.end());
            auto pset = permissions::from_strings(perms_set);
            rec->permissions[std::move(resource)] = std::move(pset);
            co_await coroutine::maybe_yield();
        }
    }
    co_return rec;
}

future<> cache::prune_all() noexcept {
    for (auto it = _roles.begin(); it != _roles.end(); ) {
        if (it->second->version != _current_version) {
            remove_role(it++);
            co_await coroutine::maybe_yield();
        } else {
            ++it;
        }
    }
    co_return;
}

future<> cache::load_all() {
    SCYLLA_ASSERT(this_shard_id() == 0);
    auto units = co_await get_units(_loading_sem, 1, _as);

    ++_current_version;

    logger.info("Loading all roles");
    const uint32_t page_size = 128;
    auto loader = [this](const cql3::untyped_result_set::row& r) -> future<stop_iteration> {
        const auto name = r.get_as<sstring>("role");
        auto role = co_await fetch_role(name);
        if (role) {
            add_role(name, role);
        }
        co_return stop_iteration::no;
    };
    co_await _qp.query_internal(format("SELECT * FROM {}.{}",
        db::system_keyspace::NAME, meta::roles_table::name),
        db::consistency_level::LOCAL_ONE, {}, page_size, loader);

    co_await prune_all();
    for (const auto& [name, role] : _roles) {
        co_await distribute_role(name, role);
    }
    co_await container().invoke_on_others([this](cache& c) -> future<> {
        auto units = co_await get_units(c._loading_sem, 1, c._as);
        c._current_version = _current_version;
        co_await c.prune_all();
    });
}

future<> cache::gather_inheriting_roles(std::unordered_set<role_name_t>& roles, lw_shared_ptr<cache::role_record> role, const role_name_t& name) {
    if (!role) {
        // Role might have been removed or not yet added; either way
        // its members will be handled by another top-level call to this function.
        co_return;
    }
    for (const auto& member_name : role->members) {
        bool is_new = roles.insert(member_name).second;
        if (!is_new) {
            continue;
        }
        lw_shared_ptr<cache::role_record> member_role;
        auto r = _roles.find(member_name);
        if (r != _roles.end()) {
            member_role = r->second;
        }
        co_await gather_inheriting_roles(roles, member_role, member_name);
    }
}

future<> cache::load_roles(std::unordered_set<role_name_t> roles) {
    SCYLLA_ASSERT(this_shard_id() == 0);
    auto units = co_await get_units(_loading_sem, 1, _as);

    std::unordered_set<role_name_t> roles_to_clear_perms;
    for (const auto& name : roles) {
        logger.info("Loading role {}", name);
        auto role = co_await fetch_role(name);
        if (role) {
            add_role(name, role);
            co_await gather_inheriting_roles(roles_to_clear_perms, role, name);
        } else {
            if (auto it = _roles.find(name); it != _roles.end()) {
                auto old_role = it->second;
                remove_role(it);
                co_await gather_inheriting_roles(roles_to_clear_perms, old_role, name);
            }
        }
        co_await distribute_role(name, role);
    }

    co_await container().invoke_on_all([&roles_to_clear_perms] (cache& c) -> future<> {
        for (const auto& name : roles_to_clear_perms) {
            c.clear_role_permissions(name);
            co_await coroutine::maybe_yield();
        }
    });
}

future<> cache::distribute_role(const role_name_t& name, lw_shared_ptr<role_record> role) {
    auto role_ptr = role.get();
    co_await container().invoke_on_others([&name, role_ptr](cache& c) -> future<> {
        auto units = co_await get_units(c._loading_sem, 1, c._as);
        if (!role_ptr) {
            c.remove_role(name);
            co_return;
        }
        auto role_copy = make_lw_shared<role_record>(*role_ptr);
        c.add_role(name, std::move(role_copy));
    });
}

bool cache::includes_table(const table_id& id) noexcept {
    return id == db::system_keyspace::roles()->id()
        || id == db::system_keyspace::role_members()->id()
        || id == db::system_keyspace::role_attributes()->id()
        || id == db::system_keyspace::role_permissions()->id();
}

void cache::add_role(const role_name_t& name, lw_shared_ptr<role_record> role) {
    if (auto it = _roles.find(name); it != _roles.end()) {
        _cached_permissions_count -= it->second->cached_permissions.size();
    }
    _cached_permissions_count += role->cached_permissions.size();
    _roles[name] = std::move(role);
}

void cache::remove_role(const role_name_t& name) {
    if (auto it = _roles.find(name); it != _roles.end()) {
        remove_role(it);
    }
}

void cache::remove_role(roles_map::iterator it) {
    _cached_permissions_count -= it->second->cached_permissions.size();
    _roles.erase(it);
}

void cache::clear_role_permissions(const role_name_t& name) {
    if (auto it = _roles.find(name); it != _roles.end()) {
        _cached_permissions_count -= it->second->cached_permissions.size();
        it->second->cached_permissions.clear();
    }
}

void cache::add_permissions(std::unordered_map<resource, permission_set>& cache, const resource& r, permission_set perms) {
    if (cache.emplace(r, perms).second) {
        ++_cached_permissions_count;
    }
}

void cache::remove_permissions(std::unordered_map<resource, permission_set>& cache, const resource& r) {
    _cached_permissions_count -= cache.erase(r);
}

} // namespace auth
102 auth/cache.hh
@@ -1,102 +0,0 @@
/*
 * Copyright (C) 2025-present ScyllaDB
 */

/*
 * SPDX-License-Identifier: LicenseRef-ScyllaDB-Source-Available-1.0
 */

#pragma once

#include <seastar/core/abort_source.hh>
#include <string_view>
#include <unordered_set>
#include <unordered_map>

#include <seastar/core/sstring.hh>
#include <seastar/core/future.hh>
#include <seastar/core/sharded.hh>
#include <seastar/core/shared_ptr.hh>
#include <seastar/core/semaphore.hh>
#include <seastar/core/metrics_registration.hh>

#include "absl-flat_hash_map.hh"

#include "auth/permission.hh"
#include "auth/common.hh"
#include "auth/resource.hh"
#include "auth/role_or_anonymous.hh"

namespace cql3 { class query_processor; }

namespace auth {

class cache : public peering_sharded_service<cache> {
public:
    using role_name_t = sstring;
    using version_tag_t = char;
    using permission_loader_func = std::function<future<permission_set>(const role_or_anonymous&, const resource&)>;

    struct role_record {
        bool can_login = false;
        bool is_superuser = false;
        std::unordered_set<role_name_t> member_of;
        std::unordered_set<role_name_t> members;
        sstring salted_hash;
        std::unordered_map<sstring, sstring, sstring_hash, sstring_eq> attributes;
        std::unordered_map<sstring, permission_set, sstring_hash, sstring_eq> permissions;
    private:
        friend cache;
        // cached permissions include effects of role's inheritance
        std::unordered_map<resource, permission_set> cached_permissions;
        version_tag_t version; // used for seamless cache reloads
    };

    explicit cache(cql3::query_processor& qp, abort_source& as) noexcept;
    lw_shared_ptr<const role_record> get(std::string_view role) const noexcept;
    void set_permission_loader(permission_loader_func loader);
    future<permission_set> get_permissions(const role_or_anonymous& role, const resource& r);
    future<> prune(const resource& r);
    future<> reload_all_permissions() noexcept;
    future<> load_all();
    future<> load_roles(std::unordered_set<role_name_t> roles);
    static bool includes_table(const table_id&) noexcept;

    // Returns the number of roles in the cache.
    size_t roles_count() const noexcept;

    // The callback doesn't suspend (no co_await), so it observes the state
    // of the cache atomically.
    void for_each_role(const std::function<void(const role_name_t&, const role_record&)>& func) const;

private:
    using roles_map = absl::flat_hash_map<role_name_t, lw_shared_ptr<role_record>, sstring_hash, sstring_eq>;
    roles_map _roles;
    // anonymous permissions map exists mainly due to compatibility with
    // higher layers which use role_or_anonymous to get permissions.
    std::unordered_map<resource, permission_set> _anonymous_permissions;
    version_tag_t _current_version;
    cql3::query_processor& _qp;
    semaphore _loading_sem; // protects iteration of _roles map
    abort_source& _as;
    permission_loader_func _permission_loader;
    semaphore _permission_loader_sem; // protects against reload storms on a single role change
    metrics::metric_groups _metrics;
    size_t _cached_permissions_count = 0;

    future<lw_shared_ptr<role_record>> fetch_role(const role_name_t& role) const;
    future<> prune_all() noexcept;
    future<> distribute_role(const role_name_t& name, const lw_shared_ptr<role_record> role);
    future<> gather_inheriting_roles(std::unordered_set<role_name_t>& roles, lw_shared_ptr<cache::role_record> role, const role_name_t& name);

    void add_role(const role_name_t& name, lw_shared_ptr<role_record> role);
    void remove_role(const role_name_t& name);
    void remove_role(roles_map::iterator it);
    void clear_role_permissions(const role_name_t& name);
    void add_permissions(std::unordered_map<resource, permission_set>& cache, const resource& r, permission_set perms);
    void remove_permissions(std::unordered_map<resource, permission_set>& cache, const resource& r);

    future<permission_set> load_permissions(const role_or_anonymous& role, const resource& r, std::unordered_map<resource, permission_set>* perms_cache);
};

} // namespace auth
@@ -8,16 +8,18 @@
 */

#include "auth/certificate_authenticator.hh"
#include "auth/cache.hh"

#include <boost/regex.hpp>
#include <fmt/ranges.h>

#include "utils/class_registrator.hh"
#include "utils/to_string.hh"
#include "data_dictionary/data_dictionary.hh"
#include "cql3/query_processor.hh"
#include "db/config.hh"

static const auto CERT_AUTH_NAME = "com.scylladb.auth.CertificateAuthenticator";
const std::string_view auth::certificate_authenticator_name(CERT_AUTH_NAME);

static logging::logger clogger("certificate_authenticator");

@@ -27,11 +29,18 @@ static const std::string cfg_query_attr = "query";
static const std::string cfg_source_subject = "SUBJECT";
static const std::string cfg_source_altname = "ALTNAME";

static const class_registrator<auth::authenticator
    , auth::certificate_authenticator
    , cql3::query_processor&
    , ::service::raft_group0_client&
    , ::service::migration_manager&
    , utils::alien_worker&> cert_auth_reg(CERT_AUTH_NAME);

enum class auth::certificate_authenticator::query_source {
    subject, altname
};

auth::certificate_authenticator::certificate_authenticator(cql3::query_processor& qp, ::service::raft_group0_client&, ::service::migration_manager&, auth::cache&)
auth::certificate_authenticator::certificate_authenticator(cql3::query_processor& qp, ::service::raft_group0_client&, ::service::migration_manager&, utils::alien_worker&)
    : _queries([&] {
        auto& conf = qp.db().get_config();
        auto queries = conf.auth_certificate_role_queries();
@@ -66,9 +75,9 @@ auth::certificate_authenticator::certificate_authenticator(cql3::query_processor
            throw std::invalid_argument(fmt::format("Invalid source: {}", map.at(cfg_source_attr)));
        }
        continue;
    } catch (const std::out_of_range&) {
    } catch (std::out_of_range&) {
        // just fall through
    } catch (const boost::regex_error&) {
    } catch (boost::regex_error&) {
        std::throw_with_nested(std::invalid_argument(fmt::format("Invalid query expression: {}", map.at(cfg_query_attr))));
    }
}
@@ -89,7 +98,7 @@ future<> auth::certificate_authenticator::stop() {
}

std::string_view auth::certificate_authenticator::qualified_java_name() const {
    return "com.scylladb.auth.CertificateAuthenticator";
    return certificate_authenticator_name;
}

bool auth::certificate_authenticator::require_authentication() const {

@@ -10,6 +10,7 @@
#pragma once

#include "auth/authenticator.hh"
#include "utils/alien_worker.hh"
#include <boost/regex_fwd.hpp> // IWYU pragma: keep

namespace cql3 {
@@ -25,13 +26,13 @@ class raft_group0_client;

namespace auth {

class cache;
extern const std::string_view certificate_authenticator_name;

class certificate_authenticator : public authenticator {
    enum class query_source;
    std::vector<std::pair<query_source, boost::regex>> _queries;
public:
    certificate_authenticator(cql3::query_processor&, ::service::raft_group0_client&, ::service::migration_manager&, cache&);
    certificate_authenticator(cql3::query_processor&, ::service::raft_group0_client&, ::service::migration_manager&, utils::alien_worker&);
    ~certificate_authenticator();

    future<> start() override;

110
auth/common.cc
@@ -14,11 +14,18 @@
#include <seastar/core/sharded.hh>

#include "mutation/canonical_mutation.hh"
#include "schema/schema_fwd.hh"
#include "mutation/timestamp.hh"
#include "utils/assert.hh"
#include "utils/exponential_backoff_retry.hh"
#include "cql3/query_processor.hh"
#include "cql3/statements/create_table_statement.hh"
#include "schema/schema_builder.hh"
#include "service/migration_manager.hh"
#include "service/raft/group0_state_machine.hh"
#include "timeout_config.hh"
#include "utils/error_injection.hh"
#include "db/system_keyspace.hh"

namespace auth {

@@ -26,14 +33,22 @@ namespace meta {

namespace legacy {
constinit const std::string_view AUTH_KS("system_auth");
constinit const std::string_view USERS_CF("users");
} // namespace legacy
constinit const std::string_view AUTH_PACKAGE_NAME("org.apache.cassandra.auth.");
} // namespace meta

static logging::logger auth_log("auth");

std::string default_superuser(cql3::query_processor& qp) {
return qp.db().get_config().auth_superuser_name();
bool legacy_mode(cql3::query_processor& qp) {
return qp.auth_version < db::auth_version_t::v2;
}

std::string_view get_auth_ks_name(cql3::query_processor& qp) {
if (legacy_mode(qp)) {
return meta::legacy::AUTH_KS;
}
return db::system_keyspace::NAME;
}

// Func must support being invoked more than once.
@@ -50,6 +65,47 @@ future<> do_after_system_ready(seastar::abort_source& as, seastar::noncopyable_f
}).discard_result();
}

static future<> create_legacy_metadata_table_if_missing_impl(
std::string_view table_name,
cql3::query_processor& qp,
std::string_view cql,
::service::migration_manager& mm) {
SCYLLA_ASSERT(this_shard_id() == 0); // once_among_shards makes sure a function is executed on shard 0 only

auto db = qp.db();
auto parsed_statement = cql3::query_processor::parse_statement(cql, cql3::dialect{});
auto& parsed_cf_statement = static_cast<cql3::statements::raw::cf_statement&>(*parsed_statement);

parsed_cf_statement.prepare_keyspace(meta::legacy::AUTH_KS);

auto statement = static_pointer_cast<cql3::statements::create_table_statement>(
parsed_cf_statement.prepare(db, qp.get_cql_stats())->statement);

const auto schema = statement->get_cf_meta_data(qp.db());
const auto uuid = generate_legacy_id(schema->ks_name(), schema->cf_name());

schema_builder b(schema);
b.set_uuid(uuid);
schema_ptr table = b.build();

if (!db.has_schema(table->ks_name(), table->cf_name())) {
auto group0_guard = co_await mm.start_group0_operation();
auto ts = group0_guard.write_timestamp();
try {
co_return co_await mm.announce(co_await ::service::prepare_new_column_family_announcement(qp.proxy(), table, ts),
std::move(group0_guard), format("auth: create {} metadata table", table->cf_name()));
} catch (exceptions::already_exists_exception&) {}
}
}

future<> create_legacy_metadata_table_if_missing(
std::string_view table_name,
cql3::query_processor& qp,
std::string_view cql,
::service::migration_manager& mm) noexcept {
return futurize_invoke(create_legacy_metadata_table_if_missing_impl, table_name, qp, cql, mm);
}

::service::query_state& internal_distributed_query_state() noexcept {
#ifdef DEBUG
// Give the much slower debug tests more headroom for completing auth queries.
@@ -84,6 +140,56 @@ static future<> announce_mutations_with_guard(
return group0_client.add_entry(std::move(group0_cmd), std::move(group0_guard), as, timeout);
}

future<> announce_mutations_with_batching(
::service::raft_group0_client& group0_client,
start_operation_func_t start_operation_func,
std::function<::service::mutations_generator(api::timestamp_type t)> gen,
seastar::abort_source& as,
std::optional<::service::raft_timeout> timeout) {
// account for command's overhead, it's better to use smaller threshold than constantly bounce off the limit
size_t memory_threshold = group0_client.max_command_size() * 0.75;
utils::get_local_injector().inject("auth_announce_mutations_command_max_size",
[&memory_threshold] {
memory_threshold = 1000;
});

size_t memory_usage = 0;
utils::chunked_vector<canonical_mutation> muts;

// guard has to be taken before we execute code in gen as
// it can do read-before-write and we want announce_mutations
// operation to be linearizable with other such calls,
// for instance if we do select and then delete in gen
// we want both to operate on the same data or fail
// if someone else modified it in the middle
std::optional<::service::group0_guard> group0_guard;
group0_guard = co_await start_operation_func(as);
auto timestamp = group0_guard->write_timestamp();

auto g = gen(timestamp);
while (auto mut = co_await g()) {
muts.push_back(canonical_mutation{*mut});
memory_usage += muts.back().representation().size();
if (memory_usage >= memory_threshold) {
if (!group0_guard) {
group0_guard = co_await start_operation_func(as);
timestamp = group0_guard->write_timestamp();
}
co_await announce_mutations_with_guard(group0_client, std::move(muts), std::move(*group0_guard), as, timeout);
group0_guard = std::nullopt;
memory_usage = 0;
muts = {};
}
}
if (!muts.empty()) {
if (!group0_guard) {
group0_guard = co_await start_operation_func(as);
timestamp = group0_guard->write_timestamp();
}
co_await announce_mutations_with_guard(group0_client, std::move(muts), std::move(*group0_guard), as, timeout);
}
}

future<> announce_mutations(
cql3::query_processor& qp,
::service::raft_group0_client& group0_client,

@@ -21,7 +21,12 @@

using namespace std::chrono_literals;

namespace replica {
class database;
}

namespace service {
class migration_manager;
class query_state;
}

@@ -35,17 +40,20 @@ namespace meta {

namespace legacy {
extern constinit const std::string_view AUTH_KS;
extern constinit const std::string_view USERS_CF;
} // namespace legacy

constexpr std::string_view DEFAULT_SUPERUSER_NAME("cassandra");
extern constinit const std::string_view AUTH_PACKAGE_NAME;

} // namespace meta

constexpr std::string_view PERMISSIONS_CF = "role_permissions";
constexpr std::string_view ROLE_MEMBERS_CF = "role_members";
constexpr std::string_view ROLE_ATTRIBUTES_CF = "role_attributes";
// This is a helper to check whether auth-v2 is on.
bool legacy_mode(cql3::query_processor& qp);

std::string default_superuser(cql3::query_processor& qp);
// We have legacy implementation using different keyspace
// and need to parametrize depending on runtime feature.
std::string_view get_auth_ks_name(cql3::query_processor& qp);

template <class Task>
future<> once_among_shards(Task&& f) {
@@ -59,6 +67,12 @@ future<> once_among_shards(Task&& f) {
// Func must support being invoked more than once.
future<> do_after_system_ready(seastar::abort_source& as, seastar::noncopyable_function<future<>()> func);

future<> create_legacy_metadata_table_if_missing(
std::string_view table_name,
cql3::query_processor&,
std::string_view cql,
::service::migration_manager&) noexcept;

///
/// Time-outs for internal, non-local CQL queries.
///
@@ -66,6 +80,20 @@ future<> do_after_system_ready(seastar::abort_source& as, seastar::noncopyable_f

::service::raft_timeout get_raft_timeout() noexcept;

// Execute update query via group0 mechanism, mutations will be applied on all nodes.
// Use this function when need to perform read before write on a single guard or if
// you have more than one mutation and potentially exceed single command size limit.
using start_operation_func_t = std::function<future<::service::group0_guard>(abort_source&)>;
future<> announce_mutations_with_batching(
::service::raft_group0_client& group0_client,
// since we can operate also in topology coordinator context where we need stronger
// guarantees than start_operation from group0_client gives we allow to inject custom
// function here
start_operation_func_t start_operation_func,
std::function<::service::mutations_generator(api::timestamp_type t)> gen,
seastar::abort_source& as,
std::optional<::service::raft_timeout> timeout);

// Execute update query via group0 mechanism, mutations will be applied on all nodes.
future<> announce_mutations(
cql3::query_processor& qp,

@@ -26,6 +26,7 @@ extern "C" {
#include "cql3/untyped_result_set.hh"
#include "exceptions/exceptions.hh"
#include "utils/log.hh"
#include "utils/class_registrator.hh"

namespace auth {

@@ -36,17 +37,115 @@ std::string_view default_authorizer::qualified_java_name() const {
static constexpr std::string_view ROLE_NAME = "role";
static constexpr std::string_view RESOURCE_NAME = "resource";
static constexpr std::string_view PERMISSIONS_NAME = "permissions";
static constexpr std::string_view PERMISSIONS_CF = "role_permissions";

static logging::logger alogger("default_authorizer");

default_authorizer::default_authorizer(cql3::query_processor& qp)
: _qp(qp) {
// To ensure correct initialization order, we unfortunately need to use a string literal.
static const class_registrator<
authorizer,
default_authorizer,
cql3::query_processor&,
::service::raft_group0_client&,
::service::migration_manager&> password_auth_reg("org.apache.cassandra.auth.CassandraAuthorizer");

default_authorizer::default_authorizer(cql3::query_processor& qp, ::service::raft_group0_client& g0, ::service::migration_manager& mm)
: _qp(qp)
, _migration_manager(mm) {
}

default_authorizer::~default_authorizer() {
}

static const sstring legacy_table_name{"permissions"};

bool default_authorizer::legacy_metadata_exists() const {
return _qp.db().has_schema(meta::legacy::AUTH_KS, legacy_table_name);
}

future<bool> default_authorizer::legacy_any_granted() const {
static const sstring query = seastar::format("SELECT * FROM {}.{} LIMIT 1", meta::legacy::AUTH_KS, PERMISSIONS_CF);

return _qp.execute_internal(
query,
db::consistency_level::LOCAL_ONE,
{},
cql3::query_processor::cache_internal::yes).then([](::shared_ptr<cql3::untyped_result_set> results) {
return !results->empty();
});
}

future<> default_authorizer::migrate_legacy_metadata() {
alogger.info("Starting migration of legacy permissions metadata.");
static const sstring query = seastar::format("SELECT * FROM {}.{}", meta::legacy::AUTH_KS, legacy_table_name);

return _qp.execute_internal(
query,
db::consistency_level::LOCAL_ONE,
cql3::query_processor::cache_internal::no).then([this](::shared_ptr<cql3::untyped_result_set> results) {
return do_for_each(*results, [this](const cql3::untyped_result_set_row& row) {
return do_with(
row.get_as<sstring>("username"),
parse_resource(row.get_as<sstring>(RESOURCE_NAME)),
::service::group0_batch::unused(),
[this, &row](const auto& username, const auto& r, auto& mc) {
const permission_set perms = permissions::from_strings(row.get_set<sstring>(PERMISSIONS_NAME));
return grant(username, perms, r, mc);
});
}).finally([results] {});
}).then([] {
alogger.info("Finished migrating legacy permissions metadata.");
}).handle_exception([](std::exception_ptr ep) {
alogger.error("Encountered an error during migration!");
std::rethrow_exception(ep);
});
}

future<> default_authorizer::start_legacy() {
static const sstring create_table = fmt::format(
"CREATE TABLE {}.{} ("
"{} text,"
"{} text,"
"{} set<text>,"
"PRIMARY KEY({}, {})"
") WITH gc_grace_seconds={}",
meta::legacy::AUTH_KS,
PERMISSIONS_CF,
ROLE_NAME,
RESOURCE_NAME,
PERMISSIONS_NAME,
ROLE_NAME,
RESOURCE_NAME,
90 * 24 * 60 * 60); // 3 months.

return once_among_shards([this] {
return create_legacy_metadata_table_if_missing(
PERMISSIONS_CF,
_qp,
create_table,
_migration_manager).then([this] {
_finished = do_after_system_ready(_as, [this] {
return async([this] {
_migration_manager.wait_for_schema_agreement(_qp.db().real_database(), db::timeout_clock::time_point::max(), &_as).get();

if (legacy_metadata_exists()) {
if (!legacy_any_granted().get()) {
migrate_legacy_metadata().get();
return;
}

alogger.warn("Ignoring legacy permissions metadata since role permissions exist.");
}
});
});
});
});
}

future<> default_authorizer::start() {
if (legacy_mode(_qp)) {
return start_legacy();
}
return make_ready_future<>();
}

@@ -63,7 +162,7 @@ default_authorizer::authorize(const role_or_anonymous& maybe_role, const resourc

const sstring query = seastar::format("SELECT {} FROM {}.{} WHERE {} = ? AND {} = ?",
PERMISSIONS_NAME,
db::system_keyspace::NAME,
get_auth_ks_name(_qp),
PERMISSIONS_CF,
ROLE_NAME,
RESOURCE_NAME);
@@ -87,13 +186,21 @@ default_authorizer::modify(
std::string_view op,
::service::group0_batch& mc) {
const sstring query = seastar::format("UPDATE {}.{} SET {} = {} {} ? WHERE {} = ? AND {} = ?",
db::system_keyspace::NAME,
get_auth_ks_name(_qp),
PERMISSIONS_CF,
PERMISSIONS_NAME,
PERMISSIONS_NAME,
op,
ROLE_NAME,
RESOURCE_NAME);
if (legacy_mode(_qp)) {
co_return co_await _qp.execute_internal(
query,
db::consistency_level::ONE,
internal_distributed_query_state(),
{permissions::to_strings(set), sstring(role_name), resource.name()},
cql3::query_processor::cache_internal::no).discard_result();
}
co_await collect_mutations(_qp, mc, query,
{permissions::to_strings(set), sstring(role_name), resource.name()});
}
@@ -112,7 +219,7 @@ future<std::vector<permission_details>> default_authorizer::list_all() const {
ROLE_NAME,
RESOURCE_NAME,
PERMISSIONS_NAME,
db::system_keyspace::NAME,
get_auth_ks_name(_qp),
PERMISSIONS_CF);

const auto results = co_await _qp.execute_internal(
@@ -137,16 +244,74 @@ future<std::vector<permission_details>> default_authorizer::list_all() const {
future<> default_authorizer::revoke_all(std::string_view role_name, ::service::group0_batch& mc) {
try {
const sstring query = seastar::format("DELETE FROM {}.{} WHERE {} = ?",
db::system_keyspace::NAME,
get_auth_ks_name(_qp),
PERMISSIONS_CF,
ROLE_NAME);
co_await collect_mutations(_qp, mc, query, {sstring(role_name)});
} catch (const exceptions::request_execution_exception& e) {
if (legacy_mode(_qp)) {
co_await _qp.execute_internal(
query,
db::consistency_level::ONE,
internal_distributed_query_state(),
{sstring(role_name)},
cql3::query_processor::cache_internal::no).discard_result();
} else {
co_await collect_mutations(_qp, mc, query, {sstring(role_name)});
}
} catch (exceptions::request_execution_exception& e) {
alogger.warn("CassandraAuthorizer failed to revoke all permissions of {}: {}", role_name, e);
}
}

future<> default_authorizer::revoke_all_legacy(const resource& resource) {
static const sstring query = seastar::format("SELECT {} FROM {}.{} WHERE {} = ? ALLOW FILTERING",
ROLE_NAME,
get_auth_ks_name(_qp),
PERMISSIONS_CF,
RESOURCE_NAME);

return _qp.execute_internal(
query,
db::consistency_level::LOCAL_ONE,
{resource.name()},
cql3::query_processor::cache_internal::no).then_wrapped([this, resource](future<::shared_ptr<cql3::untyped_result_set>> f) {
try {
auto res = f.get();
return parallel_for_each(
res->begin(),
res->end(),
[this, res, resource](const cql3::untyped_result_set::row& r) {
static const sstring query = seastar::format("DELETE FROM {}.{} WHERE {} = ? AND {} = ?",
get_auth_ks_name(_qp),
PERMISSIONS_CF,
ROLE_NAME,
RESOURCE_NAME);

return _qp.execute_internal(
query,
db::consistency_level::LOCAL_ONE,
{r.get_as<sstring>(ROLE_NAME), resource.name()},
cql3::query_processor::cache_internal::no).discard_result().handle_exception(
[resource](auto ep) {
try {
std::rethrow_exception(ep);
} catch (exceptions::request_execution_exception& e) {
alogger.warn("CassandraAuthorizer failed to revoke all permissions on {}: {}", resource, e);
}

});
});
} catch (exceptions::request_execution_exception& e) {
alogger.warn("CassandraAuthorizer failed to revoke all permissions on {}: {}", resource, e);
return make_ready_future();
}
});
}

future<> default_authorizer::revoke_all(const resource& resource, ::service::group0_batch& mc) {
if (legacy_mode(_qp)) {
co_return co_await revoke_all_legacy(resource);
}

if (resource.kind() == resource_kind::data &&
data_resource_view(resource).is_keyspace()) {
revoke_all_keyspace_resources(resource, mc);
@@ -157,7 +322,7 @@ future<> default_authorizer::revoke_all(const resource& resource, ::service::gro
auto gen = [this, name] (api::timestamp_type t) -> ::service::mutations_generator {
const sstring query = seastar::format("SELECT {} FROM {}.{} WHERE {} = ? ALLOW FILTERING",
ROLE_NAME,
db::system_keyspace::NAME,
get_auth_ks_name(_qp),
PERMISSIONS_CF,
RESOURCE_NAME);
auto res = co_await _qp.execute_internal(
@@ -167,7 +332,7 @@ future<> default_authorizer::revoke_all(const resource& resource, ::service::gro
cql3::query_processor::cache_internal::no);
for (const auto& r : *res) {
const sstring query = seastar::format("DELETE FROM {}.{} WHERE {} = ? AND {} = ?",
db::system_keyspace::NAME,
get_auth_ks_name(_qp),
PERMISSIONS_CF,
ROLE_NAME,
RESOURCE_NAME);
@@ -192,7 +357,7 @@ void default_authorizer::revoke_all_keyspace_resources(const resource& ks_resour
const sstring query = seastar::format("SELECT {}, {} FROM {}.{}",
ROLE_NAME,
RESOURCE_NAME,
db::system_keyspace::NAME,
get_auth_ks_name(_qp),
PERMISSIONS_CF);
auto res = co_await _qp.execute_internal(
query,
@@ -207,7 +372,7 @@ void default_authorizer::revoke_all_keyspace_resources(const resource& ks_resour
continue;
}
const sstring query = seastar::format("DELETE FROM {}.{} WHERE {} = ? AND {} = ?",
db::system_keyspace::NAME,
get_auth_ks_name(_qp),
PERMISSIONS_CF,
ROLE_NAME,
RESOURCE_NAME);

@@ -27,12 +27,14 @@ namespace auth {
class default_authorizer : public authorizer {
cql3::query_processor& _qp;

::service::migration_manager& _migration_manager;

abort_source _as{};

future<> _finished{make_ready_future<>()};

public:
default_authorizer(cql3::query_processor&);
default_authorizer(cql3::query_processor&, ::service::raft_group0_client&, ::service::migration_manager&);

~default_authorizer();

@@ -57,6 +59,16 @@ public:
virtual const resource_set& protected_resources() const override;

private:
future<> start_legacy();

bool legacy_metadata_exists() const;

future<> revoke_all_legacy(const resource&);

future<bool> legacy_any_granted() const;

future<> migrate_legacy_metadata();

future<> modify(std::string_view, permission_set, const resource&, std::string_view, ::service::group0_batch&);

void revoke_all_keyspace_resources(const resource& ks_resource, ::service::group0_batch& mc);

@@ -24,6 +24,7 @@
#include "exceptions/exceptions.hh"
#include "seastarx.hh"
#include "service/raft/raft_group0_client.hh"
#include "utils/class_registrator.hh"
#include "db/config.hh"
#include "utils/exponential_backoff_retry.hh"

@@ -32,7 +33,7 @@ namespace {
logger mylog{"ldap_role_manager"}; // `log` is taken by math.

struct url_desc_deleter {
void operator()(LDAPURLDesc* p) {
void operator()(LDAPURLDesc *p) {
ldap_free_urldesc(p);
}
};
@@ -40,7 +41,7 @@ struct url_desc_deleter {
using url_desc_ptr = std::unique_ptr<LDAPURLDesc, url_desc_deleter>;

url_desc_ptr parse_url(std::string_view url) {
LDAPURLDesc* desc = nullptr;
LDAPURLDesc *desc = nullptr;
if (ldap_url_parse(url.data(), &desc)) {
mylog.error("error in ldap_url_parse({})", url);
}
@@ -53,12 +54,8 @@ std::vector<sstring> get_attr_values(LDAP* ld, LDAPMessage* res, const char* att
mylog.debug("Analyzing search results");
for (auto e = ldap_first_entry(ld, res); e; e = ldap_next_entry(ld, e)) {
struct deleter {
void operator()(berval** p) {
ldap_value_free_len(p);
}
void operator()(char* p) {
ldap_memfree(p);
}
void operator()(berval** p) { ldap_value_free_len(p); }
void operator()(char* p) { ldap_memfree(p); }
};
const std::unique_ptr<char, deleter> dname(ldap_get_dn(ld, e));
mylog.debug("Analyzing entry {}", dname.get());
@@ -75,37 +72,40 @@ std::vector<sstring> get_attr_values(LDAP* ld, LDAPMessage* res, const char* att
return values;
}

const char* ldap_role_manager_full_name = "com.scylladb.auth.LDAPRoleManager";

} // anonymous namespace

namespace auth {

ldap_role_manager::ldap_role_manager(std::string_view query_template, std::string_view target_attr, std::string_view bind_name, std::string_view bind_password,
uint32_t permissions_update_interval_in_ms, utils::observer<uint32_t> permissions_update_interval_in_ms_observer, cql3::query_processor& qp,
::service::raft_group0_client& rg0c, ::service::migration_manager& mm, cache& cache)
: _std_mgr(qp, rg0c, mm, cache)
, _group0_client(rg0c)
, _query_template(query_template)
, _target_attr(target_attr)
, _bind_name(bind_name)
, _bind_password(bind_password)
, _permissions_update_interval_in_ms(permissions_update_interval_in_ms)
, _permissions_update_interval_in_ms_observer(std::move(permissions_update_interval_in_ms_observer))
, _connection_factory(bind(std::mem_fn(&ldap_role_manager::reconnect), std::ref(*this)))
, _cache(cache)
, _cache_pruner(make_ready_future<>()) {
static const class_registrator<
role_manager,
ldap_role_manager,
cql3::query_processor&,
::service::raft_group0_client&,
::service::migration_manager&> registration(ldap_role_manager_full_name);

ldap_role_manager::ldap_role_manager(
std::string_view query_template, std::string_view target_attr, std::string_view bind_name, std::string_view bind_password,
cql3::query_processor& qp, ::service::raft_group0_client& rg0c, ::service::migration_manager& mm)
: _std_mgr(qp, rg0c, mm), _group0_client(rg0c), _query_template(query_template), _target_attr(target_attr), _bind_name(bind_name)
, _bind_password(bind_password)
, _connection_factory(bind(std::mem_fn(&ldap_role_manager::reconnect), std::ref(*this))) {
}

ldap_role_manager::ldap_role_manager(cql3::query_processor& qp, ::service::raft_group0_client& rg0c, ::service::migration_manager& mm, cache& cache)
: ldap_role_manager(qp.db().get_config().ldap_url_template(), qp.db().get_config().ldap_attr_role(), qp.db().get_config().ldap_bind_dn(),
qp.db().get_config().ldap_bind_passwd(), qp.db().get_config().permissions_update_interval_in_ms(),
qp.db().get_config().permissions_update_interval_in_ms.observe([this](const uint32_t& v) {
_permissions_update_interval_in_ms = v;
}),
qp, rg0c, mm, cache) {
ldap_role_manager::ldap_role_manager(cql3::query_processor& qp, ::service::raft_group0_client& rg0c, ::service::migration_manager& mm)
: ldap_role_manager(
qp.db().get_config().ldap_url_template(),
qp.db().get_config().ldap_attr_role(),
qp.db().get_config().ldap_bind_dn(),
qp.db().get_config().ldap_bind_passwd(),
qp,
rg0c,
mm) {
}

std::string_view ldap_role_manager::qualified_java_name() const noexcept {
return "com.scylladb.auth.LDAPRoleManager";
return ldap_role_manager_full_name;
}

const resource_set& ldap_role_manager::protected_resources() const {
@@ -114,24 +114,9 @@ const resource_set& ldap_role_manager::protected_resources() const {

future<> ldap_role_manager::start() {
if (!parse_url(get_url("dummy-user"))) { // Just need host and port -- any user should do.
return make_exception_future(std::runtime_error(fmt::format("error getting LDAP server address from template {}", _query_template)));
return make_exception_future(
std::runtime_error(fmt::format("error getting LDAP server address from template {}", _query_template)));
}
_cache_pruner = futurize_invoke([this]() -> future<> {
while (true) {
try {
co_await seastar::sleep_abortable(std::chrono::milliseconds(_permissions_update_interval_in_ms), _as);
} catch (const seastar::sleep_aborted&) {
co_return; // ignore
}
co_await _cache.container().invoke_on_all([](cache& c) -> future<> {
try {
co_await c.reload_all_permissions();
} catch (...) {
mylog.warn("Cache reload all permissions failed: {}", std::current_exception());
}
});
}
});
return _std_mgr.start();
}

@@ -165,7 +150,7 @@ future<conn_ptr> ldap_role_manager::connect() {
future<conn_ptr> ldap_role_manager::reconnect() {
unsigned retries_left = 5;
using namespace std::literals::chrono_literals;
conn_ptr conn = co_await exponential_backoff_retry::do_until_value(1s, 32s, _as, [this, &retries_left]() -> future<std::optional<conn_ptr>> {
conn_ptr conn = co_await exponential_backoff_retry::do_until_value(1s, 32s, _as, [this, &retries_left] () -> future<std::optional<conn_ptr>> {
if (!retries_left) {
co_return conn_ptr{};
}
@@ -188,13 +173,7 @@ future<conn_ptr> ldap_role_manager::reconnect() {

future<> ldap_role_manager::stop() {
_as.request_abort();
return std::move(_cache_pruner)
.then([this] {
return _std_mgr.stop();
})
.then([this] {
return _connection_factory.stop();
});
return _std_mgr.stop().then([this] { return _connection_factory.stop(); });
}

future<> ldap_role_manager::create(std::string_view name, const role_config& config, ::service::group0_batch& mc) {
@@ -223,42 +202,43 @@ future<role_set> ldap_role_manager::query_granted(std::string_view grantee_name,
if (!desc) {
return make_exception_future<role_set>(std::runtime_error(format("Error parsing URL {}", url)));
}
return _connection_factory.with_connection(
[this, desc = std::move(desc), grantee_name_ = sstring(grantee_name)](ldap_connection& conn) -> future<role_set> {
sstring grantee_name = std::move(grantee_name_);
ldap_msg_ptr res = co_await conn.search(desc->lud_dn, desc->lud_scope, desc->lud_filter, desc->lud_attrs,
/*attrsonly=*/0, /*serverctrls=*/nullptr, /*clientctrls=*/nullptr,
/*timeout=*/nullptr, /*sizelimit=*/0);
mylog.trace("query_granted: got search results");
const auto mtype = ldap_msgtype(res.get());
if (mtype != LDAP_RES_SEARCH_ENTRY && mtype != LDAP_RES_SEARCH_RESULT && mtype != LDAP_RES_SEARCH_REFERENCE) {
mylog.error("ldap search yielded result {} of type {}", static_cast<const void*>(res.get()), mtype);
co_return coroutine::exception(std::make_exception_ptr(std::runtime_error("ldap_role_manager: search result has wrong type")));
return _connection_factory.with_connection([this, desc = std::move(desc), grantee_name_ = sstring(grantee_name)]
(ldap_connection& conn) -> future<role_set> {
sstring grantee_name = std::move(grantee_name_);
ldap_msg_ptr res = co_await conn.search(desc->lud_dn, desc->lud_scope, desc->lud_filter, desc->lud_attrs,
/*attrsonly=*/0, /*serverctrls=*/nullptr, /*clientctrls=*/nullptr,
/*timeout=*/nullptr, /*sizelimit=*/0);
mylog.trace("query_granted: got search results");
const auto mtype = ldap_msgtype(res.get());
if (mtype != LDAP_RES_SEARCH_ENTRY && mtype != LDAP_RES_SEARCH_RESULT && mtype != LDAP_RES_SEARCH_REFERENCE) {
mylog.error("ldap search yielded result {} of type {}", static_cast<const void*>(res.get()), mtype);
co_return coroutine::exception(std::make_exception_ptr(std::runtime_error("ldap_role_manager: search result has wrong type")));
}
std::vector<sstring> values = get_attr_values(conn.get_ldap(), res.get(), _target_attr.c_str());
auth::role_set valid_roles{grantee_name};

// Each value is a role to be granted.
co_await parallel_for_each(values, [this, &valid_roles] (const sstring& ldap_role) {
return _std_mgr.exists(ldap_role).then([&valid_roles, &ldap_role] (bool exists) {
if (exists) {
valid_roles.insert(ldap_role);
} else {
mylog.error("unrecognized role received from LDAP: {}", ldap_role);
}
std::vector<sstring> values = get_attr_values(conn.get_ldap(), res.get(), _target_attr.c_str());
auth::role_set valid_roles{grantee_name};

// Each value is a role to be granted.
co_await parallel_for_each(values, [this, &valid_roles](const sstring& ldap_role) {
return _std_mgr.exists(ldap_role).then([&valid_roles, &ldap_role](bool exists) {
if (exists) {
valid_roles.insert(ldap_role);
} else {
mylog.error("unrecognized role received from LDAP: {}", ldap_role);
}
});
});

co_return std::move(valid_roles);
});
});

co_return std::move(valid_roles);
});
}

future<role_to_directly_granted_map> ldap_role_manager::query_all_directly_granted(::service::query_state& qs) {
future<role_to_directly_granted_map>
ldap_role_manager::query_all_directly_granted(::service::query_state& qs) {
role_to_directly_granted_map result;
auto roles = co_await query_all(qs);
for (auto& role : roles) {
for (auto& role: roles) {
auto granted_set = co_await query_granted(role, recursive_role_query::no);
for (auto& granted : granted_set) {
for (auto& granted: granted_set) {
if (granted != role) {
result.insert({role, granted});
}
@@ -272,7 +252,7 @@ future<role_set> ldap_role_manager::query_all(::service::query_state& qs) {
}

future<> ldap_role_manager::create_role(std::string_view role_name) {
return smp::submit_to(0, [this, role_name]() -> future<> {
return smp::submit_to(0, [this, role_name] () -> future<> {
int retries = 10;
while (true) {
auto guard = co_await _group0_client.start_operation(_as, ::service::raft_timeout{});
@@ -284,8 +264,8 @@ future<> ldap_role_manager::create_role(std::string_view role_name) {
} catch (const role_already_exists&) {
// ok
} catch (const ::service::group0_concurrent_modification& ex) {
mylog.warn("Failed to auto-create role \"{}\" due to guard conflict.{}.", role_name,
retries ? " Retrying" : " Number of retries exceeded, giving up");
|
||||
mylog.warn("Failed to auto-create role \"{}\" due to guard conflict.{}.",
|
||||
role_name, retries ? " Retrying" : " Number of retries exceeded, giving up");
|
||||
if (retries--) {
|
||||
continue;
|
||||
}
|
||||
@@ -330,7 +310,8 @@ future<bool> ldap_role_manager::can_login(std::string_view role_name) {
|
||||
return _std_mgr.can_login(role_name);
|
||||
}
|
||||
|
||||
future<std::optional<sstring>> ldap_role_manager::get_attribute(std::string_view role_name, std::string_view attribute_name, ::service::query_state& qs) {
|
||||
future<std::optional<sstring>> ldap_role_manager::get_attribute(
|
||||
std::string_view role_name, std::string_view attribute_name, ::service::query_state& qs) {
|
||||
return _std_mgr.get_attribute(role_name, attribute_name, qs);
|
||||
}


@@ -10,12 +10,10 @@
#pragma once

#include <seastar/core/abort_source.hh>
#include <seastar/core/future.hh>
#include <stdexcept>

#include "ent/ldap/ldap_connection.hh"
#include "standard_role_manager.hh"
#include "auth/cache.hh"

namespace auth {

@@ -35,30 +33,22 @@ class ldap_role_manager : public role_manager {
seastar::sstring _target_attr; ///< LDAP entry attribute containing the Scylla role name.
seastar::sstring _bind_name; ///< Username for LDAP simple bind.
seastar::sstring _bind_password; ///< Password for LDAP simple bind.

uint32_t _permissions_update_interval_in_ms;
utils::observer<uint32_t> _permissions_update_interval_in_ms_observer;

mutable ldap_reuser _connection_factory; // Potentially modified by query_granted().
seastar::abort_source _as;
cache& _cache;
seastar::future<> _cache_pruner;
public:
ldap_role_manager(
std::string_view query_template, ///< LDAP query template as described in Scylla documentation.
std::string_view target_attr, ///< LDAP entry attribute containing the Scylla role name.
std::string_view bind_name, ///< LDAP bind credentials.
std::string_view bind_password, ///< LDAP bind credentials.
uint32_t permissions_update_interval_in_ms,
utils::observer<uint32_t> permissions_update_interval_in_ms_observer,
cql3::query_processor& qp, ///< Passed to standard_role_manager.
::service::raft_group0_client& rg0c, ///< Passed to standard_role_manager.
::service::migration_manager& mm, ///< Passed to standard_role_manager.
cache& cache ///< Passed to standard_role_manager.
::service::migration_manager& mm ///< Passed to standard_role_manager.
);

/// Retrieves LDAP configuration entries from qp and invokes the other constructor.
ldap_role_manager(cql3::query_processor& qp, ::service::raft_group0_client& rg0c, ::service::migration_manager& mm, cache& cache);
/// Retrieves LDAP configuration entries from qp and invokes the other constructor. Required by
/// class_registrator<role_manager>.
ldap_role_manager(cql3::query_processor& qp, ::service::raft_group0_client& rg0c, ::service::migration_manager& mm);

/// Thrown when query-template parsing fails.
struct url_error : public std::runtime_error {
@@ -1,31 +0,0 @@
/*
* Copyright (C) 2026-present ScyllaDB
*
* Modified by ScyllaDB
*/

/*
* SPDX-License-Identifier: (LicenseRef-ScyllaDB-Source-Available-1.0 and Apache-2.0)
*/

#include "auth/maintenance_socket_authenticator.hh"


namespace auth {

maintenance_socket_authenticator::~maintenance_socket_authenticator() {
}

future<> maintenance_socket_authenticator::start() {
return make_ready_future<>();
}

future<> maintenance_socket_authenticator::ensure_superuser_is_created() const {
return make_ready_future<>();
}

bool maintenance_socket_authenticator::require_authentication() const {
return false;
}

} // namespace auth
@@ -1,36 +0,0 @@
/*
* Copyright (C) 2026-present ScyllaDB
*
* Modified by ScyllaDB
*/

/*
* SPDX-License-Identifier: (LicenseRef-ScyllaDB-Source-Available-1.0 and Apache-2.0)
*/

#pragma once

#include <seastar/core/shared_future.hh>

#include "password_authenticator.hh"

namespace auth {

// maintenance_socket_authenticator is used for clients connecting to the
// maintenance socket. It does not require authentication,
// while still allowing the managing of roles and their credentials.
class maintenance_socket_authenticator : public password_authenticator {
public:
using password_authenticator::password_authenticator;

virtual ~maintenance_socket_authenticator();

virtual future<> start() override;

virtual future<> ensure_superuser_is_created() const override;

bool require_authentication() const override;
};

} // namespace auth

@@ -1,37 +0,0 @@
/*
* Copyright (C) 2026-present ScyllaDB
*
* Modified by ScyllaDB
*/

/*
* SPDX-License-Identifier: (LicenseRef-ScyllaDB-Source-Available-1.0 and Apache-2.0)
*/

#pragma once

#include "auth/default_authorizer.hh"
#include "auth/permission.hh"

namespace auth {

// maintenance_socket_authorizer is used for clients connecting to the
// maintenance socket. It grants all permissions unconditionally (like
// AllowAllAuthorizer) while still supporting grant/revoke operations
// (delegated to the underlying CassandraAuthorizer / default_authorizer).
class maintenance_socket_authorizer : public default_authorizer {
public:
using default_authorizer::default_authorizer;

~maintenance_socket_authorizer() override = default;

future<> start() override {
return make_ready_future<>();
}

future<permission_set> authorize(const role_or_anonymous&, const resource&) const override {
return make_ready_future<permission_set>(permissions::ALL);
}
};

} // namespace auth
@@ -11,50 +11,23 @@
#include <seastar/core/future.hh>
#include <stdexcept>
#include <string_view>
#include "auth/cache.hh"
#include "cql3/description.hh"
#include "utils/log.hh"
#include "utils/on_internal_error.hh"
#include "utils/class_registrator.hh"

namespace auth {

static logging::logger log("maintenance_socket_role_manager");
constexpr std::string_view maintenance_socket_role_manager_name = "com.scylladb.auth.MaintenanceSocketRoleManager";

future<> maintenance_socket_role_manager::ensure_role_operations_are_enabled() {
if (_is_maintenance_mode) {
on_internal_error(log, "enabling role operations not allowed in maintenance mode");
}
static const class_registrator<
role_manager,
maintenance_socket_role_manager,
cql3::query_processor&,
::service::raft_group0_client&,
::service::migration_manager&> registration(sstring{maintenance_socket_role_manager_name});

if (_std_mgr.has_value()) {
on_internal_error(log, "role operations are already enabled");
}

_std_mgr.emplace(_qp, _group0_client, _migration_manager, _cache);
return _std_mgr->start();
}

void maintenance_socket_role_manager::set_maintenance_mode() {
if (_std_mgr.has_value()) {
on_internal_error(log, "cannot enter maintenance mode after role operations have been enabled");
}
_is_maintenance_mode = true;
}

maintenance_socket_role_manager::maintenance_socket_role_manager(
cql3::query_processor& qp,
::service::raft_group0_client& rg0c,
::service::migration_manager& mm,
cache& c)
: _qp(qp)
, _group0_client(rg0c)
, _migration_manager(mm)
, _cache(c)
, _std_mgr(std::nullopt)
, _is_maintenance_mode(false) {
}

std::string_view maintenance_socket_role_manager::qualified_java_name() const noexcept {
return "com.scylladb.auth.MaintenanceSocketRoleManager";
return maintenance_socket_role_manager_name;
}

const resource_set& maintenance_socket_role_manager::protected_resources() const {
@@ -68,161 +41,81 @@ future<> maintenance_socket_role_manager::start() {
}

future<> maintenance_socket_role_manager::stop() {
return _std_mgr ? _std_mgr->stop() : make_ready_future<>();
}

future<> maintenance_socket_role_manager::ensure_superuser_is_created() {
return _std_mgr ? _std_mgr->ensure_superuser_is_created() : make_ready_future<>();
}

template<typename T = void>
future<T> operation_not_available_in_maintenance_mode_exception(std::string_view operation) {
return make_exception_future<T>(
std::runtime_error(fmt::format("role manager: {} operation not available through maintenance socket in maintenance mode", operation)));
}

template<typename T = void>
future<T> manager_not_ready_exception(std::string_view operation) {
return make_exception_future<T>(
std::runtime_error(fmt::format("role manager: {} operation not available because manager not ready yet (role operations not enabled)", operation)));
}

future<> maintenance_socket_role_manager::validate_operation(std::string_view name) const {
if (_is_maintenance_mode) {
return operation_not_available_in_maintenance_mode_exception(name);
}
if (!_std_mgr) {
return manager_not_ready_exception(name);
}
return make_ready_future<>();
}

future<> maintenance_socket_role_manager::create(std::string_view role_name, const role_config& c, ::service::group0_batch& mc) {
auto f = validate_operation("CREATE");
if (f.failed()) {
return f;
}
return _std_mgr->create(role_name, c, mc);
future<> maintenance_socket_role_manager::ensure_superuser_is_created() {
return make_ready_future<>();
}

template<typename T = void>
future<T> operation_not_supported_exception(std::string_view operation) {
return make_exception_future<T>(
std::runtime_error(fmt::format("role manager: {} operation not supported through maintenance socket", operation)));
}

future<> maintenance_socket_role_manager::create(std::string_view role_name, const role_config&, ::service::group0_batch&) {
return operation_not_supported_exception("CREATE");
}

future<> maintenance_socket_role_manager::drop(std::string_view role_name, ::service::group0_batch& mc) {
auto f = validate_operation("DROP");
if (f.failed()) {
return f;
}
return _std_mgr->drop(role_name, mc);
return operation_not_supported_exception("DROP");
}

future<> maintenance_socket_role_manager::alter(std::string_view role_name, const role_config_update& u, ::service::group0_batch& mc) {
auto f = validate_operation("ALTER");
if (f.failed()) {
return f;
}
return _std_mgr->alter(role_name, u, mc);
future<> maintenance_socket_role_manager::alter(std::string_view role_name, const role_config_update&, ::service::group0_batch&) {
return operation_not_supported_exception("ALTER");
}

future<> maintenance_socket_role_manager::grant(std::string_view grantee_name, std::string_view role_name, ::service::group0_batch& mc) {
auto f = validate_operation("GRANT");
if (f.failed()) {
return f;
}
return _std_mgr->grant(grantee_name, role_name, mc);
return operation_not_supported_exception("GRANT");
}

future<> maintenance_socket_role_manager::revoke(std::string_view revokee_name, std::string_view role_name, ::service::group0_batch& mc) {
auto f = validate_operation("REVOKE");
if (f.failed()) {
return f;
}
return _std_mgr->revoke(revokee_name, role_name, mc);
return operation_not_supported_exception("REVOKE");
}

future<role_set> maintenance_socket_role_manager::query_granted(std::string_view grantee_name, recursive_role_query m) {
auto f = validate_operation("QUERY GRANTED");
if (f.failed()) {
return make_exception_future<role_set>(f.get_exception());
}
return _std_mgr->query_granted(grantee_name, m);
future<role_set> maintenance_socket_role_manager::query_granted(std::string_view grantee_name, recursive_role_query) {
return operation_not_supported_exception<role_set>("QUERY GRANTED");
}

future<role_to_directly_granted_map> maintenance_socket_role_manager::query_all_directly_granted(::service::query_state& qs) {
auto f = validate_operation("QUERY ALL DIRECTLY GRANTED");
if (f.failed()) {
return make_exception_future<role_to_directly_granted_map>(f.get_exception());
}
return _std_mgr->query_all_directly_granted(qs);
future<role_to_directly_granted_map> maintenance_socket_role_manager::query_all_directly_granted(::service::query_state&) {
return operation_not_supported_exception<role_to_directly_granted_map>("QUERY ALL DIRECTLY GRANTED");
}

future<role_set> maintenance_socket_role_manager::query_all(::service::query_state& qs) {
auto f = validate_operation("QUERY ALL");
if (f.failed()) {
return make_exception_future<role_set>(f.get_exception());
}
return _std_mgr->query_all(qs);
future<role_set> maintenance_socket_role_manager::query_all(::service::query_state&) {
return operation_not_supported_exception<role_set>("QUERY ALL");
}

future<bool> maintenance_socket_role_manager::exists(std::string_view role_name) {
auto f = validate_operation("EXISTS");
if (f.failed()) {
return make_exception_future<bool>(f.get_exception());
}
return _std_mgr->exists(role_name);
return operation_not_supported_exception<bool>("EXISTS");
}

future<bool> maintenance_socket_role_manager::is_superuser(std::string_view role_name) {
auto f = validate_operation("IS SUPERUSER");
if (f.failed()) {
return make_exception_future<bool>(f.get_exception());
}
return _std_mgr->is_superuser(role_name);
return make_ready_future<bool>(true);
}

future<bool> maintenance_socket_role_manager::can_login(std::string_view role_name) {
auto f = validate_operation("CAN LOGIN");
if (f.failed()) {
return make_exception_future<bool>(f.get_exception());
}
return _std_mgr->can_login(role_name);
return make_ready_future<bool>(true);
}

future<std::optional<sstring>> maintenance_socket_role_manager::get_attribute(std::string_view role_name, std::string_view attribute_name, ::service::query_state& qs) {
auto f = validate_operation("GET ATTRIBUTE");
if (f.failed()) {
return make_exception_future<std::optional<sstring>>(f.get_exception());
}
return _std_mgr->get_attribute(role_name, attribute_name, qs);
future<std::optional<sstring>> maintenance_socket_role_manager::get_attribute(std::string_view role_name, std::string_view attribute_name, ::service::query_state&) {
return operation_not_supported_exception<std::optional<sstring>>("GET ATTRIBUTE");
}

future<role_manager::attribute_vals> maintenance_socket_role_manager::query_attribute_for_all(std::string_view attribute_name, ::service::query_state& qs) {
auto f = validate_operation("QUERY ATTRIBUTE FOR ALL");
if (f.failed()) {
return make_exception_future<role_manager::attribute_vals>(f.get_exception());
}
return _std_mgr->query_attribute_for_all(attribute_name, qs);
future<role_manager::attribute_vals> maintenance_socket_role_manager::query_attribute_for_all(std::string_view attribute_name, ::service::query_state&) {
return operation_not_supported_exception<role_manager::attribute_vals>("QUERY ATTRIBUTE");
}

future<> maintenance_socket_role_manager::set_attribute(std::string_view role_name, std::string_view attribute_name, std::string_view attribute_value, ::service::group0_batch& mc) {
auto f = validate_operation("SET ATTRIBUTE");
if (f.failed()) {
return f;
}
return _std_mgr->set_attribute(role_name, attribute_name, attribute_value, mc);
return operation_not_supported_exception("SET ATTRIBUTE");
}

future<> maintenance_socket_role_manager::remove_attribute(std::string_view role_name, std::string_view attribute_name, ::service::group0_batch& mc) {
auto f = validate_operation("REMOVE ATTRIBUTE");
if (f.failed()) {
return f;
}
return _std_mgr->remove_attribute(role_name, attribute_name, mc);
return operation_not_supported_exception("REMOVE ATTRIBUTE");
}

future<std::vector<cql3::description>> maintenance_socket_role_manager::describe_role_grants() {
auto f = validate_operation("DESCRIBE ROLE GRANTS");
if (f.failed()) {
return make_exception_future<std::vector<cql3::description>>(f.get_exception());
}
return _std_mgr->describe_role_grants();
return operation_not_supported_exception<std::vector<cql3::description>>("DESCRIBE SCHEMA WITH INTERNALS");
}

} // namespace auth

@@ -8,10 +8,8 @@

#pragma once

#include "auth/cache.hh"
#include "auth/resource.hh"
#include "auth/role_manager.hh"
#include "auth/standard_role_manager.hh"
#include <seastar/core/future.hh>

namespace cql3 {
@@ -25,26 +23,13 @@ class raft_group0_client;

namespace auth {

// This role manager is used by the maintenance socket. It has disabled all role management operations
// in maintenance mode. In normal mode it delegates all operations to a standard_role_manager,
// which is created on demand when the node joins the cluster.
extern const std::string_view maintenance_socket_role_manager_name;

// This role manager is used by the maintenance socket. It has disabled all role management operations to not depend on
// system_auth keyspace, which may be not yet created when the maintenance socket starts listening.
class maintenance_socket_role_manager final : public role_manager {
cql3::query_processor& _qp;
::service::raft_group0_client& _group0_client;
::service::migration_manager& _migration_manager;
cache& _cache;
std::optional<standard_role_manager> _std_mgr;
bool _is_maintenance_mode;

public:
void set_maintenance_mode() override;

// Ensures role management operations are enabled.
// It must be called once the node has joined the cluster.
// In the meantime all role management operations will fail.
future<> ensure_role_operations_are_enabled() override;

maintenance_socket_role_manager(cql3::query_processor&, ::service::raft_group0_client&, ::service::migration_manager&, cache&);
maintenance_socket_role_manager(cql3::query_processor&, ::service::raft_group0_client&, ::service::migration_manager&) {}

virtual std::string_view qualified_java_name() const noexcept override;

@@ -56,21 +41,21 @@ public:

virtual future<> ensure_superuser_is_created() override;

virtual future<> create(std::string_view role_name, const role_config& c, ::service::group0_batch& mc) override;
virtual future<> create(std::string_view role_name, const role_config&, ::service::group0_batch&) override;

virtual future<> drop(std::string_view role_name, ::service::group0_batch& mc) override;

virtual future<> alter(std::string_view role_name, const role_config_update& u, ::service::group0_batch& mc) override;
virtual future<> alter(std::string_view role_name, const role_config_update&, ::service::group0_batch&) override;

virtual future<> grant(std::string_view grantee_name, std::string_view role_name, ::service::group0_batch& mc) override;

virtual future<> revoke(std::string_view revokee_name, std::string_view role_name, ::service::group0_batch& mc) override;

virtual future<role_set> query_granted(std::string_view grantee_name, recursive_role_query m) override;
virtual future<role_set> query_granted(std::string_view grantee_name, recursive_role_query) override;

virtual future<role_to_directly_granted_map> query_all_directly_granted(::service::query_state& qs) override;
virtual future<role_to_directly_granted_map> query_all_directly_granted(::service::query_state&) override;

virtual future<role_set> query_all(::service::query_state& qs) override;
virtual future<role_set> query_all(::service::query_state&) override;

virtual future<bool> exists(std::string_view role_name) override;

@@ -78,19 +63,15 @@ public:

virtual future<bool> can_login(std::string_view role_name) override;

virtual future<std::optional<sstring>> get_attribute(std::string_view role_name, std::string_view attribute_name, ::service::query_state& qs) override;
virtual future<std::optional<sstring>> get_attribute(std::string_view role_name, std::string_view attribute_name, ::service::query_state&) override;

virtual future<role_manager::attribute_vals> query_attribute_for_all(std::string_view attribute_name, ::service::query_state& qs) override;
virtual future<role_manager::attribute_vals> query_attribute_for_all(std::string_view attribute_name, ::service::query_state&) override;

virtual future<> set_attribute(std::string_view role_name, std::string_view attribute_name, std::string_view attribute_value, ::service::group0_batch& mc) override;

virtual future<> remove_attribute(std::string_view role_name, std::string_view attribute_name, ::service::group0_batch& mc) override;

virtual future<std::vector<cql3::description>> describe_role_grants() override;

private:
future<> validate_operation(std::string_view name) const;

};

}
Some files were not shown because too many files have changed in this diff.