Recommended update for ceph

This update for ceph fixes the following issues:

Ceph was updated to 14.2.5-371-g3551250731:

This is the upstream Nautilus 14.2.5 point release; see https://ceph.io/releases/v14-2-5-nautilus-released/

* health warnings will be issued if daemons have recently crashed (bsc#1158923)
* pg_num must be a power of two, otherwise HEALTH_WARN (bsc#1158925)
* pool size must be > 1, otherwise HEALTH_WARN (bsc#1158926)
* health warning if average OSD heartbeat ping time exceeds threshold (bsc#1158927)
* changes in the telemetry MGR module (bsc#1158929)
* new OSD daemon command dump_recovery_reservations (bsc#1158930)
* new OSD daemon command dump_scrub_reservations (bsc#1158931)
* RGW now supports S3 Object Lock set of APIs (bsc#1158932)
* RGW now supports List Objects V2 (bsc#1158933)
* mon: keep v1 address type when explicitly set (bsc#1140879)
* doc: mention --namespace option in rados manpage (bsc#1157611)
* mgr/dashboard: Remove env_build from e2e:ci
* ceph-volume: check if we run in an SELinux environment
* qa/dashboard_e2e_tests.sh: Automatically use correct chromedriver version (bsc#1155950)
* rebase on tip of upstream nautilus, SHA1 9989c20373e2294b7479ec4bd6ac5cce80b01645
* rgw: add S3 object lock feature to support object worm (jsc#SES-582)
* os/bluestore: apply garbage collection against excessive blob count growth (bsc#1124556)
* doc: update bluestore cache settings and clarify data fraction (bsc#1131817)
* mgr/dashboard: Allow decreasing the number of PGs of an existing pool (bsc#1132337)
* core: Improve health status for backfill_toofull and recovery_toofull and
  fix backfill_toofull seen on a cluster where the most full OSD is at 1% (bsc#1134365)
* mgr/dashboard: Set RO as the default access_type for RGW NFS exports (bsc#1137227)
* mgr/dashboard: Allow disabling redirection on standby Dashboards (bsc#1140504)
* rgw: dns name is not case sensitive (bsc#1141203)
* os/bluestore: shallow fsck mode and legacy statfs auto repair (bsc#1145571)
* mgr/dashboard: Display WWN and LUN number in iSCSI target details (bsc#1145756)
* mgr/dashboard: access_control: add grafana scope read access to *-manager roles (bsc#1148360)
* mgr/dashboard: internationalization support with AOT enabled (bsc#1148498)
* mgr/dashboard: Fix data point alignment in MDS counters chart (bsc#1153876)
* mgr/balancer: fix a python3 compatibility issue (bsc#1154230)
* mgr/dashboard: add debug mode, and accept expected exception when SSL handshaking (bsc#1155045)
* mgr/{dashboard,prometheus}: return FQDN instead of '0.0.0.0' (bsc#1155463)
* core: Improve health status for backfill_toofull and recovery_toofull and
  fix backfill_toofull seen on a cluster where the most full OSD is at 1% (bsc#1155655)
* mon: ensure prepare_failure() marks no_reply on op (bsc#1156571)
* mgr/dashboard: Automatically use correct chromedriver version
* Revert "rgw_file: introduce fast S3 Unix stats (immutable)"
  because it is incompatible with NFS-Ganesha 2.8
* include hotfix from upstream v14.2.6 release (bsc#1160920):
  * mon/PGMap.h: disable network stats in dump_osd_stats
  * osd_stat_t::dump: Add option for ceph-mgr python callers to skip ping network
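
For administrators, the new health checks and OSD daemon commands introduced in this release can be exercised roughly as follows. This is a sketch, not part of the advisory: it assumes a running Nautilus (>= 14.2.5) cluster, admin keyring access, and an example daemon id `osd.0`.

```shell
# List recent daemon crashes behind the new crash health warning
# (bsc#1158923), then archive them to clear the warning.
ceph crash ls
ceph crash archive-all

# Inspect the new OSD reservation dumps (bsc#1158930, bsc#1158931);
# these must be run on the host where osd.0 is running.
ceph daemon osd.0 dump_recovery_reservations
ceph daemon osd.0 dump_scrub_reservations

# If needed, the new pg_num power-of-two and pool-size warnings
# (bsc#1158925, bsc#1158926) can be disabled cluster-wide.
ceph config set global mon_warn_on_pool_pg_num_not_power_of_two false
ceph config set global mon_warn_on_pool_no_redundancy false
```

Disabling the warnings hides symptoms rather than fixing them; the usual remedy is to set `pg_num` to a power of two and pool `size` to at least 2.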

This update was imported from the SUSE:SLE-15-SP1:Update update project.

Fixed bugs
bnc#1160920
ceph-mgr's finisher queue can grow indefinitely, making python modules/commands unresponsive
bnc#1148360
Ceph dashboard: user can't see performance graphs
bnc#1158927
[tracker bug] Nautilus 14.2.5: health warning if average OSD heartbeat ping time exceeds threshold
bnc#1153876
Dashboard: false alignment of MDS chart data points
bnc#1145756
How to configure LUN number and UUID for iSCSI exported RBDs via ceph dashboard?
bnc#1156571
L3: SES6: 20 slow ops, oldest one blocked for 388 sec, daemons [mon,idm01,mon,skipper] have slow ops.
bnc#1155045
ENGINE Error in HTTPServer.tick on 14.2.4
bnc#1158923
[tracker bug] Nautilus 14.2.5: Ceph will now issue health warnings if daemons have recently crashed
bnc#1154230
L3: Balancer module fails with: 'dict_keys' object does not support indexing
bnc#1155463
SES6: Can not access ceph dashboard, "dashboard": "https://0.0.0.0:8443/"
bnc#1137227
NFS Ganesha Object Gateway exports should default to read-only and warn if RW is requested
bnc#1134365
backfill_toofull while OSDs are not full (Unnecessary HEALTH_ERR, Intermittent HEALTH_ERR on rebuild)
bnc#1132337
Dashboard: Unable to decrease the number of PG's on a pool
bnc#1158930
[tracker bug] Nautilus 14.2.5: new OSD daemon command dump_recovery_reservations
bnc#1140879
safe-to-destroy.sh and smoke.sh tests (from "make check" suite) appear to be incompatible with msgr V1
bnc#1158932
[tracker bug] Nautilus 14.2.5: RGW now supports S3 Object Lock set of APIs
bnc#1158925
[tracker bug] Nautilus 14.2.5: pg_num must be a power of two, otherwise HEALTH_WARN
bnc#1157611
SES6: rados man page is missing information on "-N NAMESPACE" option.
bnc#1158933
[tracker bug] Nautilus 14.2.5: RGW now supports List Objects V2
bnc#1158926
[tracker bug] Nautilus 14.2.5: pool size must be > 1, otherwise HEALTH_WARN
bnc#1158929
[tracker bug] Nautilus 14.2.5: changes in the telemetry MGR module
bnc#1131817
doc: Bluestore: Memory Consumption / Read Cache unclear
bnc#1158931
[tracker bug] Nautilus 14.2.5: new OSD daemon command dump_scrub_reservations
bnc#1141203
RGW REST API failed request with status code 403 signatureDoesNotMatch
bnc#1155950
SessionNotCreatedError: session not created: This version of ChromeDriver only supports Chrome version 76
bnc#1148498
Ceph Dashboard: missing translations
bnc#1155655
(low space): 23 pgs backfill_toofull with 10PiB Free
bnc#1140504
Can not configure passive manager dashboard redirect URL or disable passive manager listener for load balancer setups
bnc#1145571
Cluster upgraded from SES 5.x to SES 6 shows the warning "Legacy BlueStore stats reporting detected on 32 OSD(s)"
bnc#1124556
osd fails to start due to "no available blob id"