scylladb/scripts/create-relocatable-package.py
Glauber Costa da260ecd61 systemd: put scylla processes in systemd slices.
It is well known that Seastar applications, like Scylla, do not play
well with external processes: CPU usage from those processes may
confuse the I/O and CPU schedulers and create stalls.

We have also recently seen that anonymous and page cache memory used
by other applications can bring the system to OOM.

Linux has very good infrastructure for resource control, contributed by
amazingly bright engineers, in the form of cgroup controllers. This
infrastructure is exposed by systemd in the form of slices: a
hierarchical structure to which controllers can be attached.

In true systemd fashion, the hierarchy is implicit in the filenames of the
slice files: a "-" symbol defines the hierarchy, so the files that this
patch adds, scylla-server.slice and scylla-helper.slice, essentially create a
"scylla" cgroup at the top level with "server" and "helper" children.

We then mark the services needed to run Scylla as belonging to one
or the other through the Slice= directive.
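
The exact unit file contents are part of this patch and are not reproduced here;
as an illustrative sketch, a service opts into its slice via Slice=, and the
resulting hierarchy can be inspected at runtime:

```
# Illustrative excerpt only -- not the exact unit file shipped by this patch:
#
#   [Service]
#   Slice=scylla-server.slice
#
# Once the services are running, the hierarchy can be inspected with:
systemd-cgls
systemctl status scylla-server.slice scylla-helper.slice
```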

Scylla DBAs can benefit from this setup by using the systemd-run
utility to fire ad-hoc commands.

Say, for example, that someone wants to run a backup and transfer
files to an external object store like S3, while making sure that
the amount of page cache used won't create swap pressure leading to
database timeouts.

One can then run something like:

```
   sudo systemd-run --uid=`id -u scylla` --gid=`id -g scylla` -t --slice=scylla-helper.slice /path/to/my/magical_backup_tool
```

(or even better, the backup tool can itself be a systemd timer)
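
For instance, systemd-run can create a transient timer directly; the calendar
spec below is hypothetical, and the tool path is the same placeholder as above:

```
   sudo systemd-run --uid=`id -u scylla` --gid=`id -g scylla` --slice=scylla-helper.slice --on-calendar='*-*-* 03:00:00' /path/to/my/magical_backup_tool
```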

Changes from last version:
- No longer use the CPUQuota
- Minor typo fixes
- postinstall fixup for small machines

Benchmark results:
==================

Test: read from disk, with 100% disk util using a single i3.xlarge (4 vCPUs).
We have to fill the cache as we read, so this should stress CPU, memory and
disk I/O.

cassandra-stress command:
```
  cassandra-stress read no-warmup duration=5m -rate threads=20 -node 10.2.209.188 -pop dist=uniform\(1..150000000\)
```

Baseline results:

```
Results:
Op rate                   :   13,830 op/s  [READ: 13,830 op/s]
Partition rate            :   13,830 pk/s  [READ: 13,830 pk/s]
Row rate                  :   13,830 row/s [READ: 13,830 row/s]
Latency mean              :    1.4 ms [READ: 1.4 ms]
Latency median            :    1.4 ms [READ: 1.4 ms]
Latency 95th percentile   :    2.4 ms [READ: 2.4 ms]
Latency 99th percentile   :    2.8 ms [READ: 2.8 ms]
Latency 99.9th percentile :    3.4 ms [READ: 3.4 ms]
Latency max               :   12.0 ms [READ: 12.0 ms]
Total partitions          :  4,149,130 [READ: 4,149,130]
Total errors              :          0 [READ: 0]
Total GC count            : 0
Total GC memory           : 0.000 KiB
Total GC time             :    0.0 seconds
Avg GC time               :    NaN ms
StdDev GC time            :    0.0 ms
Total operation time      : 00:05:00
```

Question 1:
===========

Does putting Scylla in a special slice affect its performance?

Results with Scylla running in a slice:

```
Results:
Op rate                   :   13,811 op/s  [READ: 13,811 op/s]
Partition rate            :   13,811 pk/s  [READ: 13,811 pk/s]
Row rate                  :   13,811 row/s [READ: 13,811 row/s]
Latency mean              :    1.4 ms [READ: 1.4 ms]
Latency median            :    1.4 ms [READ: 1.4 ms]
Latency 95th percentile   :    2.2 ms [READ: 2.2 ms]
Latency 99th percentile   :    2.6 ms [READ: 2.6 ms]
Latency 99.9th percentile :    3.3 ms [READ: 3.3 ms]
Latency max               :   23.2 ms [READ: 23.2 ms]
Total partitions          :  4,151,409 [READ: 4,151,409]
Total errors              :          0 [READ: 0]
Total GC count            : 0
Total GC memory           : 0.000 KiB
Total GC time             :    0.0 seconds
Avg GC time               :    NaN ms
StdDev GC time            :    0.0 ms
Total operation time      : 00:05:00
```

*Conclusion*: No significant change

Question 2:
===========

What happens when there is a CPU hog running on the same server as Scylla?

CPU hog:

```
   taskset -c 0 /bin/sh -c "while true; do true; done" &
   taskset -c 1 /bin/sh -c "while true; do true; done" &
   taskset -c 2 /bin/sh -c "while true; do true; done" &
   taskset -c 3 /bin/sh -c "while true; do true; done" &
   sleep 330
```

Scenario 1: CPU hog runs freely:

```
Results:
Op rate                   :    2,939 op/s  [READ: 2,939 op/s]
Partition rate            :    2,939 pk/s  [READ: 2,939 pk/s]
Row rate                  :    2,939 row/s [READ: 2,939 row/s]
Latency mean              :    6.8 ms [READ: 6.8 ms]
Latency median            :    5.3 ms [READ: 5.3 ms]
Latency 95th percentile   :   11.0 ms [READ: 11.0 ms]
Latency 99th percentile   :   14.9 ms [READ: 14.9 ms]
Latency 99.9th percentile :   17.1 ms [READ: 17.1 ms]
Latency max               :   26.3 ms [READ: 26.3 ms]
Total partitions          :    884,460 [READ: 884,460]
Total errors              :          0 [READ: 0]
Total GC count            : 0
Total GC memory           : 0.000 KiB
Total GC time             :    0.0 seconds
Avg GC time               :    NaN ms
StdDev GC time            :    0.0 ms
Total operation time      : 00:05:00
```

Scenario 2: CPU hog runs inside the scylla-helper slice:

```
Results:
Op rate                   :   13,527 op/s  [READ: 13,527 op/s]
Partition rate            :   13,527 pk/s  [READ: 13,527 pk/s]
Row rate                  :   13,527 row/s [READ: 13,527 row/s]
Latency mean              :    1.5 ms [READ: 1.5 ms]
Latency median            :    1.4 ms [READ: 1.4 ms]
Latency 95th percentile   :    2.4 ms [READ: 2.4 ms]
Latency 99th percentile   :    2.9 ms [READ: 2.9 ms]
Latency 99.9th percentile :    3.8 ms [READ: 3.8 ms]
Latency max               :   18.7 ms [READ: 18.7 ms]
Total partitions          :  4,069,934 [READ: 4,069,934]
Total errors              :          0 [READ: 0]
Total GC count            : 0
Total GC memory           : 0.000 KiB
Total GC time             :    0.0 seconds
Avg GC time               :    NaN ms
StdDev GC time            :    0.0 ms
Total operation time      : 00:05:00
```

*Conclusion*: With the systemd slice we can keep performance very close to
the baseline

Question 3:
===========

What happens when there is an I/O hog running on the same server as Scylla?

I/O hog (data in the cluster is 2x the size of memory):

```
while true; do
	find /var/lib/scylla/data -type f -exec grep glauber {} +
done
```

Scenario 1: I/O hog runs freely:

```
Results:
Op rate                   :    7,680 op/s  [READ: 7,680 op/s]
Partition rate            :    7,680 pk/s  [READ: 7,680 pk/s]
Row rate                  :    7,680 row/s [READ: 7,680 row/s]
Latency mean              :    2.6 ms [READ: 2.6 ms]
Latency median            :    1.3 ms [READ: 1.3 ms]
Latency 95th percentile   :    7.8 ms [READ: 7.8 ms]
Latency 99th percentile   :   10.9 ms [READ: 10.9 ms]
Latency 99.9th percentile :   16.9 ms [READ: 16.9 ms]
Latency max               :   40.8 ms [READ: 40.8 ms]
Total partitions          :  2,306,723 [READ: 2,306,723]
Total errors              :          0 [READ: 0]
Total GC count            : 0
Total GC memory           : 0.000 KiB
Total GC time             :    0.0 seconds
Avg GC time               :    NaN ms
StdDev GC time            :    0.0 ms
Total operation time      : 00:05:00
```

Scenario 2: I/O hog runs in the scylla-helper systemd slice:

```
Results:
Op rate                   :   13,277 op/s  [READ: 13,277 op/s]
Partition rate            :   13,277 pk/s  [READ: 13,277 pk/s]
Row rate                  :   13,277 row/s [READ: 13,277 row/s]
Latency mean              :    1.5 ms [READ: 1.5 ms]
Latency median            :    1.4 ms [READ: 1.4 ms]
Latency 95th percentile   :    2.4 ms [READ: 2.4 ms]
Latency 99th percentile   :    2.9 ms [READ: 2.9 ms]
Latency 99.9th percentile :    3.5 ms [READ: 3.5 ms]
Latency max               :  183.4 ms [READ: 183.4 ms]
Total partitions          :  3,984,080 [READ: 3,984,080]
Total errors              :          0 [READ: 0]
Total GC count            : 0
Total GC memory           : 0.000 KiB
Total GC time             :    0.0 seconds
Avg GC time               :    NaN ms
StdDev GC time            :    0.0 ms
Total operation time      : 00:05:00
```

*Conclusion*: With the systemd slice we can keep performance very close to
the baseline

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2019-08-19 14:31:28 -04:00


#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# Copyright (C) 2018 ScyllaDB
#
#
# This file is part of Scylla.
#
# Scylla is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Scylla is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Scylla. If not, see <http://www.gnu.org/licenses/>.
#
import argparse
import io
import os
import subprocess
import tarfile
import pathlib

def ldd(executable):
    '''Given an executable file, return a dictionary with the keys
    containing its shared library dependencies and the values pointing
    at the files they resolve to. A fake key ld.so points at the
    dynamic loader.'''
    libraries = {}
    for ldd_line in subprocess.check_output(
            ['ldd', executable],
            universal_newlines=True).splitlines():
        elements = ldd_line.split()
        if ldd_line.endswith('not found'):
            raise Exception('ldd could not resolve {}'.format(elements[0]))
        if elements[1] != '=>':
            if elements[0].startswith('linux-vdso.so'):
                # provided by kernel
                continue
            libraries['ld.so'] = os.path.realpath(elements[0])
        else:
            libraries[elements[0]] = os.path.realpath(elements[2])
    return libraries

ap = argparse.ArgumentParser(description='Create a relocatable scylla package.')
ap.add_argument('dest',
                help='Destination file (tar format)')
ap.add_argument('--mode', dest='mode', default='release',
                help='Build mode (debug/release) to use')
args = ap.parse_args()
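
# Executables to ship in the relocatable package; their shared-library
# dependencies are discovered below with ldd() and bundled under libreloc/.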
executables = ['build/{}/scylla'.format(args.mode),
               'build/{}/iotune'.format(args.mode),
               '/usr/bin/lscpu',
               '/usr/bin/gawk',
               '/usr/bin/gzip',
               '/usr/sbin/ifconfig',
               '/usr/sbin/ethtool',
               '/usr/bin/netstat',
               '/usr/bin/hwloc-distrib',
               '/usr/bin/hwloc-calc']
output = args.dest

libs = {}
for exe in executables:
    libs.update(ldd(exe))

# manually add libthread_db for debugging threads
libs.update({'libthread_db-1.0.so': '/lib64/libthread_db-1.0.so'})

ld_so = libs['ld.so']

have_gnutls = any([lib.startswith('libgnutls.so')
                   for lib in libs.keys()])
# Although tarfile.open() can write directly to a compressed tar by using
# the "w|gz" mode, it does so using a slow Python implementation. It is as
# much as 3 times faster (!) to output to a pipe running the external gzip
# command. We can complete the compression even faster by using the pigz
# command - a parallel implementation of gzip utilizing all processors
# instead of just one.
gzip_process = subprocess.Popen("pigz > "+output, shell=True, stdin=subprocess.PIPE)
ar = tarfile.open(fileobj=gzip_process.stdin, mode='w|')
pathlib.Path('build/SCYLLA-RELOCATABLE-FILE').touch()
ar.add('build/SCYLLA-RELOCATABLE-FILE', arcname='SCYLLA-RELOCATABLE-FILE')
# This thunk is a shell script that arranges for the executable to be invoked,
# under the following conditions:
#
# - the same argument vector is passed to the executable, including argv[0]
# - the executable name (/proc/pid/comm, shown in top(1)) is the same
# - the dynamic linker is taken from this package rather than the executable's
#   default (which is hardcoded to point to /lib64/ld-linux-x86_64.so or similar)
# - LD_LIBRARY_PATH points to the libreloc/ directory so shared library dependencies
#   are satisfied from there rather than the system default (e.g. /lib64)
#
# To do that, the dynamic linker is invoked using a symbolic link named after the
# executable, not its standard name. We use bash's "exec -a" to set argv[0].
#
# The full tangled web looks like:
#
#   foobar/bin/scylla          a shell script invoking everything
#   foobar/libexec/scylla.bin  the real binary
#   foobar/libexec/scylla      a symlink to ../libreloc/ld.so
#   foobar/libreloc/ld.so      the dynamic linker
#   foobar/libreloc/lib...     all the other libraries
#
# The transformations (done by the thunk and symlinks) are:
#
#   bin/scylla args -> libexec/scylla libexec/scylla.bin args -> libreloc/ld.so libexec/scylla.bin args
thunk = b'''\
#!/bin/bash
x="$(readlink -f "$0")"
b="$(basename "$x")"
d="$(dirname "$x")/.."
ldso="$d/libexec/$b"
realexe="$d/libexec/$b.bin"
export GNUTLS_SYSTEM_PRIORITY_FILE="${GNUTLS_SYSTEM_PRIORITY_FILE-$d/libreloc/gnutls.config}"
LD_LIBRARY_PATH="$d/libreloc" exec -a "$0" "$ldso" "$realexe" "$@"
'''
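
# For each executable: ship the real binary as libexec/<name>.bin, write the
# thunk as bin/<name>, and add the libexec/<name> symlink that points at the
# bundled dynamic loader.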
for exe in executables:
    basename = os.path.basename(exe)
    ar.add(exe, arcname='libexec/' + basename + '.bin')
    ti = tarfile.TarInfo(name='bin/' + basename)
    ti.size = len(thunk)
    ti.mode = 0o755
    ti.mtime = os.stat(exe).st_mtime
    ar.addfile(ti, fileobj=io.BytesIO(thunk))
    ti = tarfile.TarInfo(name='libexec/' + basename)
    ti.type = tarfile.SYMTYPE
    ti.linkname = '../libreloc/ld.so'
    ti.mtime = os.stat(exe).st_mtime
    ar.addfile(ti)
for lib, libfile in libs.items():
    ar.add(libfile, arcname='libreloc/' + lib)
if have_gnutls:
    gnutls_config_nolink = os.path.realpath('/etc/crypto-policies/back-ends/gnutls.config')
    ar.add(gnutls_config_nolink, arcname='libreloc/gnutls.config')
ar.add('conf')
ar.add('dist')
ar.add('build/SCYLLA-RELEASE-FILE', arcname='SCYLLA-RELEASE-FILE')
ar.add('build/SCYLLA-VERSION-FILE', arcname='SCYLLA-VERSION-FILE')
ar.add('build/SCYLLA-PRODUCT-FILE', arcname='SCYLLA-PRODUCT-FILE')
ar.add('seastar/scripts')
ar.add('seastar/dpdk/usertools')
ar.add('install.sh')
# scylla_post_install.sh lives at the top level together with install.sh in the src tree, but while install.sh is
# not distributed in the .rpm and .deb packages, scylla_post_install is, so we'll add it in the package
# together with the other scripts that will end up in /usr/lib/scylla
ar.add('scylla_post_install.sh', arcname="dist/common/scripts/scylla_post_install.sh")
ar.add('scripts/relocate_python_scripts.py', arcname='relocate_python_scripts.py')
ar.add('README.md')
ar.add('README-DPDK.md')
ar.add('NOTICE.txt')
ar.add('ORIGIN')
ar.add('licenses')
ar.add('swagger-ui')
ar.add('api')
ar.add('tools')
ar.add('scylla-gdb.py')
# Complete the tar output, and wait for the gzip process to complete
ar.close()
gzip_process.communicate()