Security update for ceph

This update for ceph fixes the following issues:

Security issues fixed:

- CVE-2018-7262: rgw: malformed HTTP headers can crash rgw (bsc#1081379).
- CVE-2017-16818: User-reachable asserts allow for DoS (bsc#1063014).

Bug fixes:

- bsc#1061461: OSDs keep generating coredumps after adding a new OSD node to the cluster.
- bsc#1079076: RGW OpenSSL fixes.
- bsc#1067088: Upgrade to SES5 restarted all nodes; the majority of OSDs abort during start.
- bsc#1056125: Some OSDs are down when doing performance testing on an rbd image in an EC pool.
- bsc#1087269: The allow_ec_overwrites option was missing from the command options list (see the first example after this list).
- bsc#1051598: Fix the mountpoint check for systemctl enable --runtime (see the second example after this list).
- bsc#1070357: Zabbix mgr module doesn't recover from HEALTH_ERR.
- bsc#1066502: After upgrading a single OSD from SES 4 to SES 5, the OSDs do not rejoin the cluster.
- bsc#1067119: Crushtool decompile creates wrong device entries (device 20 device20) for non-existing / deleted OSDs.
- bsc#1060904: Misleading log level during Keystone authentication.
- bsc#1056967: Monitors go down after pool creation on a cluster with 120 OSDs.
- bsc#1067705: Issues with RGW Multi-Site Federation between SES5 and RH Ceph Storage 2.
- bsc#1059458: Stopping / restarting the RADOS gateway as part of DeepSea stage.4 executions causes a core dump of radosgw.
- bsc#1087493: Commvault cannot reconnect to storage after restarting haproxy.
- bsc#1066182: Container synchronization between two Ceph clusters failed.
- bsc#1081600: Crash in civetweb/RGW.
- bsc#1054061: NFS-GANESHA service fails while trying to list a mountpoint on the client.
- bsc#1074301: OSDs keep aborting: SnapMapper failed asserts.
- bsc#1086340: XFS metadata corruption on an rbd-nbd mapped image with the journaling feature enabled.
- bsc#1080788: fsid mismatch when creating additional OSDs.
- bsc#1071386: Metadata spills onto block.slow.
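
For reference on bsc#1087269: an erasure-coded pool must have overwrites enabled before RBD or CephFS can use it. The following is a minimal sketch, assuming the ceph CLI is installed, an admin keyring is available, and an EC pool named "ecpool" (a placeholder) already exists; it simply shells out to the documented command.

    import subprocess

    # Enable overwrites on the erasure-coded pool "ecpool" (placeholder name),
    # which is required before RBD or CephFS can store data in it.
    subprocess.run(
        ["ceph", "osd", "pool", "set", "ecpool", "allow_ec_overwrites", "true"],
        check=True,
    )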
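
For bsc#1051598: the fix has ceph-disk enable ceph-osd@$ID.service units with systemctl enable --runtime, which keeps the enablement under /run so it does not persist across reboots. A minimal sketch of the equivalent call, using OSD id 0 purely as a placeholder:

    import subprocess

    # Transiently enable the unit for OSD id 0 (placeholder); with --runtime the
    # enablement symlink is created under /run/systemd/system and is lost on reboot.
    subprocess.run(
        ["systemctl", "enable", "--runtime", "ceph-osd@0.service"],
        check=True,
    )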

This update was imported from the SUSE:SLE-12-SP3:Update update project.
