Recommended update for ceph

This recommended update for ceph fixes the following issues:

- Update to version 0.94.6+git.1458142870.64cdd1c:
+ ceph.spec.in: use %{_prefix} for ocf instead of hardcoding /usr
+ ceph.spec.in: do not install Ceph RA on systemd platforms
(boo#966645)

- Update to version 0.94.6+git.1456847814.9a3050b:
+ packaging: lsb_release build and runtime dependency (boo#968466)

- Update to version 0.94.6+git.1456783992.2c752aa:
+ Rebase on top of upstream 0.94.6 release:
build/ops: Ceph daemon failed to start, because the service name was
already used
build/ops: LTTng-UST tracing should be dynamically enabled
build/ops: ceph.spec.in License line does not reflect COPYING
build/ops: ceph.spec.in libcephfs_jni1 has no %post and %postun
build/ops: configure.ac: no use to add "+" before ac_ext=c
build/ops: init script reload doesn't work on EL7
build/ops: init-rbdmap uses distro-specific functions
build/ops: logrotate reload error on Ubuntu 14.04
build/ops: miscellaneous spec file fixes
build/ops: pass tcmalloc env through to ceph-osd
build/ops: rbd-replay-* moved from ceph-test-dbg to ceph-common-dbg as well
build/ops: unknown argument --quiet in udevadm settle
common: Objecter: pool op callback may hang forever
common: Objecter: potential null pointer access when do pool_snap_list
common: ThreadPool add/remove work queue methods not thread safe
common: auth/cephx: large amounts of log are produced by osd
common: client nonce collision due to unshared pid namespaces
common: common/Thread:pthread_attr_destroy(thread_attr) when done with it
common: log: Log.cc: Assign LOG_DEBUG priority to syslog calls
common: objecter: cancellation bugs
common: pure virtual method called
common: small probability sigabrt when setting rados_osd_op_timeout
common: wrong conditional for boolean function KeyServer::get_auth()
crush: crash if we see CRUSH_ITEM_NONE in early rule step
doc: man: document listwatchers cmd in "rados" manpage
doc: regenerate man pages, add orphans commands to radosgw-admin(8)
fs: CephFS restriction on removing cache tiers is overly strict
fs: fsstress.sh fails
librados: LibRadosWatchNotify.WatchNotify2Timeout
librbd: ImageWatcher shouldn't block the notification thread
librbd: diff_iterate needs to handle holes in parent images
librbd: fix merge-diff for >2GB diff-files
librbd: invalidate object map on error even w/o holding lock
librbd: reads larger than cache size hang
mds: ceph mds add_data_pool check for EC pool is wrong
mon: MonitorDBStore: get_next_key() only if prefix matches
mon: OSDMonitor: do not assume a session exists in send_incremental()
mon: check for store writeability before participating in election
mon: compact full epochs also
mon: include min_last_epoch_clean as part of PGMap::print_summary and
PGMap::dump
mon: map_cache can become inaccurate if osd does not receive the osdmaps
mon: should not set isvalid = true when cephx_verify_authorizer returns
false
osd: Ceph pools' MAX AVAIL is 0 if some OSDs' weight is 0
osd: FileStore calls syncfs(2) even it is not supported
osd: FileStore: potential memory leak if getattrs fails
osd: IO error on kvm/rbd with an erasure coded pool tier
osd: OSD::build_past_intervals_parallel() shall reset primary and
up_primary when beginning a new past_interval
osd: ReplicatedBackend: populate recovery_info.size for clone (bug symptom
is size mismatch on replicated backend on a clone in scrub)
osd: ReplicatedPG: wrong result code checking logic during sparse_read
osd: ReplicatedPG::hit_set_trim osd/ReplicatedPG.cc: 11006: FAILED
assert(obc)
osd: avoid multi set osd_op.outdata in tier pool
osd: bug with cache/tiering and snapshot reads
osd: ceph osd pool stats broken in hammer
osd: ceph-disk prepare fails if device is a symlink
osd: check for full before changing the cached obc
osd: config_opts: increase suicide timeout to 300 to match recovery
osd: disable filestore_xfs_extsize by default
osd: do not cache unused memory in attrs
osd: dumpling incrementals do not work properly on hammer and newer
osd: filestore: fix peek_queue for OpSequencer
osd: hit set clear repops fired in same epoch as map change -- segfault
since they fall into the new interval even though the repops are cleared
osd: object_info_t::decode() has wrong version
osd: osd/OSD.cc: 2469: FAILED assert(pg_stat_queue.empty()) on shutdown
osd: osd/PG.cc: 288: FAILED assert(info.last_epoch_started >=
info.history.last_epoch_started)
osd: osd/PG.cc: 3837: FAILED assert(0 == "Running incompatible OSD")
osd: osd/ReplicatedPG: Recency fix
osd: pg stuck in replay
osd: race condition detected during send_failures
osd: randomize scrub times
osd: requeue_scrub when kick_object_context_blocked
osd: revert: use GMT time for hitsets
osd: segfault in agent_work
osd: should recalc the min_last_epoch_clean when decode PGMap
osd: smaller object_info_t xattrs
osd: we do not ignore notify from down osds
rbd: QEMU hangs after creating snapshot and stopping VM (boo#967509)
rbd: TaskFinisher::cancel should remove event from SafeTimer
rbd: avoid re-writing old-format image header on resize
rbd: fix bench-write
rbd: rbd-replay does not check for EOF and goes to endless loop
rbd: rbd-replay-prep and rbd-replay improvements
rbd: verify self-managed snapshot functionality on image create
rgw: Make RGW_MAX_PUT_SIZE configurable
rgw: Setting ACL on Object removes ETag
rgw: backport content-type casing
rgw: bucket listing hangs on versioned buckets
rgw: fix wrong etag calculation during POST on S3 bucket
rgw: get bucket location returns region name, not region api name
rgw: missing handling of encoding-type=url when listing keys in bucket
rgw: orphan tool should be careful about removing head objects
rgw: orphans finish segfaults
rgw: rgw-admin: document orphans commands in usage
rgw: Swift API returns more than the real object count and bytes used when
retrieving account metadata
rgw: Swift API cannot get the right URL when using Civetweb with SSL
rgw: value of Swift API's X-Object-Manifest header is not url_decoded
during segment look up
tests: fixed broken Makefiles after integration of lttng into rados
tests: fsx failed to compile
tests: notification slave needs to wait for master
tests: qa: remove legacy OS support from rbd/qemu-iotests
tests: testprofile must be removed before it is re-created
tools: ceph-monstore-tool must do out_store.close()
tools: heavy memory shuffling in rados bench
tools: race condition in rados bench
tools: tool to artificially inflate the leveldb of the mon store for
testing purposes
+ This rebase also fixes boo#967952 by dropping a conflicting downstream
patch

- Update to version 0.94.5+git.1456040245.5d49792:
+ librbd: fixed deadlock while attempting to flush AIO requests
(boo#967509)

- Update to version 0.94.5+git.1453890219.9752e6d:
+ ceph.spec.in: disable udev systemd slices on uninstall
(boo#941628)

- Update to version 0.94.5+git.1453751157.2112e13:
+ get rid of redundancy in ceph_disk: for a plain dmcrypt device,
"create" is the same as open --type plain / plainOpen (boo#957385)
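The equivalence noted above can be sketched with cryptsetup directly; the device path, mapping name, and key file below are illustrative placeholders, not values used by ceph-disk:

```shell
# Legacy syntax: set up a plain (non-LUKS) dmcrypt mapping with the
# "create" action, which takes the mapping name first, then the device.
cryptsetup --key-file /etc/ceph/dmcrypt.key create osd-data /dev/sdb1

# Equivalent modern syntax: "open --type plain" (alias: plainOpen),
# which takes the device first, then the mapping name.
cryptsetup --key-file /etc/ceph/dmcrypt.key open --type plain /dev/sdb1 osd-data
```

Because both invocations produce the same plain-mode mapping, ceph-disk only needs one code path for them.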

Fixed bugs:
- bnc#968466: packaging: ceph should have an explicit lsb-release runtime
  dependency
- bnc#941628: Uninstalling ceph from a host is not cleaning up all the
  systemd services
- bnc#967952: ceph-disk prepare fails on new, empty devices with no
  partition table
- bnc#966645: ceph-resource-agents missing in update repo
- bnc#967509: Attaching rbd backed volumes to nova VMs hangs qemu
- bnc#957385: ceph-disk opens plain crypto devices twice during prepare