The connection reset code posted an exception on the _data_received promise
to break a waiter (if any), but left the optional<promise<>> engaged. This
caused the connection destructor to attempt to post a new exception on the
same promise, which is not legal.
Fix by disengaging the optional promise, and give the same treatment to
_all_data_acked_promise.
From http://en.cppreference.com/w/cpp/language/constexpr:
A constexpr specifier used in an object declaration implies const.
However, we cannot change from
static constexpr const char* TIME_FORMAT = "%a %b %d %I:%M:%S %Z %Y";
to
static constexpr char* TIME_FORMAT = "%a %b %d %I:%M:%S %Z %Y";
The compiler complains:
In file included from json/formatter.cc:22:0:
json/formatter.hh:132:42: error: deprecated conversion from string
constant to ‘char*’ [-Werror=write-strings]
static constexpr char* TIME_FORMAT = "%a %b %d %I:%M:%S %Z %Y";
This is because, unlike const, constexpr does not modify a type: it
applies to an object (or function), and only incidentally implies
const at the top level.
Obviously, I was sleeping or something when I wrote the reg/unreg code, since
using copy semantics for anchors is equivalent to double unregistrations.
Luckily, unregister was broken as well, so counters did stay active. This,
however, broke things once actual non-persistent counters were added. Doh.
* Anchors must be non-copyable
* The above makes creating std::vector<registration> from an initializer
  list tricky, so a helper type "registrations" was added, which inherits
  from vector<reg> but constructs from initializer_list<type_instance_id>,
  avoiding the illegal copies.
* Both register and unregister were broken (map semantics do not overwrite
  on insert(); only operator[] or an iterator operation does).
* Modified the various registration callsites to use registrations and move
semantics.
Ensure the rte_mbuf is "detached" from the received data in the copy path.
This patch completes the above for the case where the allocation of the
buffer to copy the newly arrived data into has failed.
In that case we prefer to drop the arriving packet instead of consuming
the rte_mbuf from the Rx ring's mempool.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Fixes issue #41
The packet has neither L2 nor L3 headers in tcp::output_one(), so
comparing p.len() to MSS + L2_HDR + L3_HDR + L4_HDR when the payload
size is greater than (MSS - L2_HDR - L3_HDR) gives a wrong negative
decision on whether to send this frame as a TSO frame.
Fix the above by comparing the MSS to the actual payload size.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Reviewed-by: Asias He <asias@cloudius-systems.com>
This is not really HAVE_OSV but virtio-net specific; we may need to change this to 'if (nic == virtio)' later if we want to support virtio-net on Linux or SR-IOV on OSv.
Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
Add an option to enable/disable sending and respecting PAUSE frames as
defined in the 802.3x and 802.3z specifications. We will configure the
link-level PAUSE (as opposed to PFC).
In simple words, Ethernet flow control relies on sending/receiving PAUSE
(XOFF) MAC frames that signal to the sender that the receiver's buffer is
almost full; the idea is to avoid receive buffer overflow. When the
receiver's buffer is freed up, it sends an XON frame to indicate to the
sender that it may transmit again.
- Added DPDK-specific command option to toggle the feature.
- Sending PAUSEs is enabled by default.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Error counters are backend-specific, so we register them separately
for each specific backend.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
This function was always returning a ready future, so I made it
return void and changed the callers to explicitly return a ready future.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
- Moved all qp stats into a separate class: qp_stats.
- Moved the stats update function into the new stats class.
- Added the _stats_plugin_name property to the net::qp class:
default value is "network".
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
- Added a native stack command line parameter to toggle LRO ON/OFF.
- Implemented handling the reception of a clustered packet:
  - Without hugetlbfs: allocate a single buffer to contain the whole
    packet's data and copy its contents into it. If the allocation fails,
    build the "packet" directly from the cluster.
  - With hugetlbfs: create a packet from the cluster mbuf's data buffers.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
New in v3:
- Use RTE_ETHDEV_HAS_LRO_SUPPORT defined in rte_ethdev.h instead of
RTE_ETHDEV_LRO_SUPPORT defined in config/common_linuxapp.
New in v2:
- dpdk_qp<false>::from_mbuf_lro(): Free the cluster after copying
to the allocated buffer.
- Some style cleanups.
byteorder.hh's "net::packed<T>" subclassed the basic unaligned<T> and
added an "adjust_endianness" method. This was completely unnecessary:
While ntoh's generic implementation indeed uses that method, we also
have a specialized overload for ntoh(packed<T>), so the generic
implementation would never be used for packed<T> anyway!
So for net::packed<T> we don't need anything more than unaligned<T>,
and can just make it an alias.
As a bonus, the "hton" function will now work also on unaligned<T>
(such as the result of the convenient unaligned_cast<>), not just on
net::packed<T>.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
The previous patch moved most of the functionality of net::packed<>
into a new template unaligned<> (in core/unaligned.hh). So in this
patch we implement net::packed<> in terms of unaligned<>. The former
is basically the same as the latter, with the addition of the
"adjust_endianness" feature that the networking code expects.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
This value is passed as an opaque parameter to rte_pktmbuf_pool_init().
It should equal the buffer size plus RTE_PKTMBUF_HEADROOM.
The default value is 2K + RTE_PKTMBUF_HEADROOM.
The PMD uses this value minus RTE_PKTMBUF_HEADROOM to configure the Rx
data buffers' size when it configures the Rx HW ring.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Do not store the tcp header in the unacked queue. When a partial ack of a
segment happens, trim off the acked part of the segment. When retransmitting,
recalculate the tcp header and retransmit only the unacked part.
We neglected to set offload_info::needs_csum on reset packets, resulting in
them being ignored by the recipient. Symptoms include connection attempts
to closed ports (seastar = passive end) hanging instead of the active end
learning that the port is closed.
Allocate exactly the available fragment size in order to catch buffer
overflows.
We get similar behaviour in dpdk, since without huge pages, it must copy
the packet into a newly allocated buffer.
Two bugs:
1. get_header<type>(offset) was called with the size given as the offset
2. opt_end failed to account for the mandatory tcp header, and thus was
20 bytes too large, resulting in an overflow.
boost::join() provided by boost/algorithm/string.hpp conflicts with
boost::join() from boost/range/join.hpp. It looks like a boost issue,
but let's not pollute the namespace unnecessarily.
Regarding the change in configure.py, it looks like scollectd.cc is
part of the 'core' package, but it needs 'net/api.hh', so I added
'net/net.cc' to core.