<patchinfo incident="8176">
  <issue id="1061461" tracker="bnc">OSDs keep generating coredumps after adding new OSD node to cluster</issue>
  <issue id="1079076" tracker="bnc">L3: SES5_RGW_SegFault</issue>
  <issue id="1081379" tracker="bnc">VUL-0: CVE-2018-7262: ceph: rgw: malformed http headers can crash rgw</issue>
  <issue id="1067088" tracker="bnc">Upgrade to SES5 restarted all nodes, majority of OSDs aborts during start</issue>
  <issue id="1056125" tracker="bnc">some OSDs are down when doing performance testing on rbd image in EC Pool</issue>
  <issue id="1087269" tracker="bnc">allow_ec_overwrites option not in command options list</issue>
  <issue id="1051598" tracker="bnc">ceph-disk omits "--runtime" when enabling ceph-osd@$ID.service units (was: ERROR: unable to open OSD superblock)</issue>
  <issue id="1070357" tracker="bnc">L3: Zabbix mgr module doesn't recover from HEALTH_ERR</issue>
  <issue id="1066502" tracker="bnc">After upgrading a single OSD from SES 4 to SES 5 the OSDs do not rejoin the cluster</issue>
  <issue id="1067119" tracker="bnc">Crushtool decompile creates wrong device entries (device 20 device20) for not existing / deleted OSDs</issue>
  <issue id="1060904" tracker="bnc">Loglevel misleading during keystone authentication</issue>
  <issue id="1056967" tracker="bnc">Monitors goes down after pool creation on cluster with 120 OSDs</issue>
  <issue id="1067705" tracker="bnc">SES5: Issues with RGW Multi-Site Federation between SES5 and RH Ceph Storage 2</issue>
  <issue id="1059458" tracker="bnc">stopping / restarting rados gateway as part of deepsea stage.4 executions causes core-dump of radosgw</issue>
  <issue id="1087493" tracker="bnc">L3-Question: Commvault cannot reconnect to storage after restarting haproxy</issue>
  <issue id="1066182" tracker="bnc">Container synchronization between two Ceph clusters failed</issue>
  <issue id="1081600" tracker="bnc">crash in civetweb/RGW</issue>
  <issue id="1054061" tracker="bnc">NFS-GANESHA service failing while trying to list mountpoint on client</issue>
  <issue id="1074301" tracker="bnc">L3: OSDs keep aborting: SnapMapper failed asserts</issue>
  <issue id="1063014" tracker="bnc">VUL-0: CVE-2017-16818: ceph: User reachable asserts allow for DoS</issue>
  <issue id="1086340" tracker="bnc">SES5: XFS metadata corruption on rbd-nbd mapped image with journaling feature enabled</issue>
  <issue id="1080788" tracker="bnc">fsid mismatch when creating additional OSDs</issue>
  <issue id="1071386" tracker="bnc">metadata spill onto block.slow</issue>
  <issue id="2018-7262" tracker="cve" />
  <issue id="2017-16818" tracker="cve" />
  <category>security</category>
  <rating>important</rating>
  <packager>smithfarm</packager>
  <description>This update for ceph fixes the following issues:

Security issues fixed:

- CVE-2018-7262: rgw: malformed http headers can crash rgw (bsc#1081379).
- CVE-2017-16818: User reachable asserts allow for DoS (bsc#1063014).

Bug fixes:

- bsc#1061461: OSDs keep generating coredumps after adding new OSD node to cluster.
- bsc#1079076: RGW openssl fixes.
- bsc#1067088: Upgrade to SES5 restarted all nodes, majority of OSDs abort during start.
- bsc#1056125: Some OSDs are down when doing performance testing on rbd image in EC Pool.
- bsc#1087269: allow_ec_overwrites option not in command options list.
- bsc#1051598: Fix mountpoint check for systemctl enable --runtime.
- bsc#1070357: Zabbix mgr module doesn't recover from HEALTH_ERR.
- bsc#1066502: After upgrading a single OSD from SES 4 to SES 5 the OSDs do not rejoin the cluster.
- bsc#1067119: Crushtool decompile creates wrong device entries (device 20 device20) for not existing / deleted OSDs.
- bsc#1060904: Loglevel misleading during keystone authentication.
- bsc#1056967: Monitors go down after pool creation on cluster with 120 OSDs.
- bsc#1067705: Issues with RGW Multi-Site Federation between SES5 and RH Ceph Storage 2.
- bsc#1059458: Stopping / restarting rados gateway as part of deepsea stage.4 executions causes core-dump of radosgw.
- bsc#1087493: Commvault cannot reconnect to storage after restarting haproxy.
- bsc#1066182: Container synchronization between two Ceph clusters failed.
- bsc#1081600: Crash in civetweb/RGW.
- bsc#1054061: NFS-GANESHA service failing while trying to list mountpoint on client.
- bsc#1074301: OSDs keep aborting: SnapMapper failed asserts.
- bsc#1086340: XFS metadata corruption on rbd-nbd mapped image with journaling feature enabled.
- bsc#1080788: fsid mismatch when creating additional OSDs.
- bsc#1071386: Metadata spill onto block.slow.

This update was imported from the SUSE:SLE-12-SP3:Update update project.
</description>
  <summary>Security update for ceph</summary>
</patchinfo>