main: improve process file limit handling

We check that the number of open files is sufficient for normal
work (with lots of connections and sstables), but we can improve
it a little. Systemd sets a low soft file limit by default (so that
select() doesn't break on file descriptors larger than 1023) and
recommends[1] raising the soft limit to the more generous hard limit
if the application doesn't use select(), as ours does not.

Follow the recommendation and bump the limit. Note that this applies
only to scylla started from the command line, as systemd integration
already raises the soft limit.

[1] http://0pointer.net/blog/file-descriptor-limits.html

Closes #8756
Avi Kivity
2021-05-30 15:02:52 +03:00
committed by Nadav Har'El
parent 7521301b72
commit ec60f44b64

main.cc (16 changed lines)

@@ -239,12 +239,24 @@ public:
 static
 void
-verify_rlimit(bool developer_mode) {
+adjust_and_verify_rlimit(bool developer_mode) {
     struct rlimit lim;
     int r = getrlimit(RLIMIT_NOFILE, &lim);
     if (r == -1) {
         throw std::system_error(errno, std::system_category());
     }
+    // First, try to increase the soft limit to the hard limit
+    // Ref: http://0pointer.net/blog/file-descriptor-limits.html
+    if (lim.rlim_cur < lim.rlim_max) {
+        lim.rlim_cur = lim.rlim_max;
+        r = setrlimit(RLIMIT_NOFILE, &lim);
+        if (r == -1) {
+            startlog.warn("adjusting RLIMIT_NOFILE failed with {}", std::system_error(errno, std::system_category()));
+        }
+    }
     auto recommended = 200'000U;
     auto min = 10'000U;
     if (lim.rlim_cur < min) {
@@ -566,7 +578,7 @@ int main(int ac, char** av) {
         default_sg.set_shares(200);
     }).get();
-    verify_rlimit(cfg->developer_mode());
+    adjust_and_verify_rlimit(cfg->developer_mode());
     verify_adequate_memory_per_shard(cfg->developer_mode());
     if (cfg->partitioner() != "org.apache.cassandra.dht.Murmur3Partitioner") {
         if (cfg->enable_deprecated_partitioners()) {