# spec file for package perl-Cache-FastMmap
# Copyright (c) 2022 SUSE LLC
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
# upon. The license for this file, and modifications and additions to the
# file, is the same license as for the pristine package itself (unless the
# license for the pristine package is not an Open Source License, in which
# case the license is the MIT License). An "Open Source License" is a
# license that conforms to the Open Source Definition (Version 1.9)
# published by the Open Source Initiative.

# Please submit bugfixes or comments via https://bugs.opensuse.org/

%define cpan_name Cache-FastMmap
Name:           perl-Cache-FastMmap
Version:        1.57
Release:        0
License:        Artistic-1.0 OR GPL-1.0-or-later
Summary:        Uses an mmap'ed file to act as a shared memory interprocess cache
URL:            https://metacpan.org/release/%{cpan_name}
Source0:        %{cpan_name}-%{version}.tar.gz
Source1:        cpanspec.yml
BuildRequires:  perl
BuildRequires:  perl-macros
BuildRequires:  perl(Test::Deep)
Requires:       perl(Test::Deep)
%{perl_requires}

%description

In multi-process environments (eg mod_perl, forking daemons, etc), it's
common to want to cache information, but have that cache shared between
processes. Many solutions already exist, and may suit your situation:

  * MLDBM::Sync - acts as a database, data is not automatically expired, slow

  * IPC::MM - hash implementation is broken, data is not automatically expired

  * Cache::FileCache - lots of features, slow

  * Cache::SharedMemoryCache - lots of features, VERY slow. Uses IPC::ShareLite
which freeze/thaws ALL data at each read/write

  * DBI - use your favourite RDBMS. Can perform well, but needs a DB server
running; very global; socket connection latency

  * Cache::Mmap - similar to this module, in pure perl. Slows down with larger
caches

  * BerkeleyDB - very fast (data ends up mostly in shared memory cache) but
acts as a database overall, so data is not automatically expired

In the case I was working on, I needed:

  * Automatic expiry and space management

  * Very fast access to lots of small items

  * The ability to fetch/store many items in one go

Which is why I developed this module. It tries to be quite efficient
through a number of means:

  * Core code is written in C for performance

  * It uses multiple pages within a file, and uses Fcntl to only lock a page at
a time to reduce contention when multiple processes access the cache.

  * It uses a dual level hashing system (hash to find page, then hash within
each page to find a slot) to make most 'get()' calls O(1) and fast

  * On each 'set()', if there are slots and page space available, only the slot
has to be updated and the data written at the end of the used data space.
If either runs out, a re-organisation of the page is performed to create
new slots/space which is done in an efficient way
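As a rough illustration of the dual-level scheme, here is a toy sketch in
plain Perl. The hash function, page count, and slot count are all invented
for illustration; the module's real lookup happens in its C core:

```perl
# One hash picks the page, a second lookup (here: the same hash with a
# different modulus) picks the slot within that page. Both steps are O(1).
sub djb2 {
    my ($key) = @_;
    my $h = 5381;
    $h = (($h * 33) + ord($_)) & 0xffffffff for split //, $key;
    return $h;
}

my $num_pages      = 89;    # hypothetical
my $slots_per_page = 1024;  # hypothetical

sub locate {
    my ($key) = @_;
    my $h = djb2($key);
    return (page => $h % $num_pages, slot => $h % $slots_per_page);
}

my %where = locate("some_key");
printf "page=%d slot=%d\n", $where{page}, $where{slot};
```

Because each process only needs to lock the one page its key hashes to,
concurrent access to keys on different pages does not contend.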

The class also supports read-through, and write-back or write-through
callbacks to access the real data if it's not in the cache, meaning that
code like this:

  my $Value = $Cache->get($Key);
  if (!defined $Value) {
    $Value = $RealDataSource->get($Key);
    $Cache->set($Key, $Value);
  }

isn't required; instead, you specify in the constructor:

    context => $RealDataSourceHandle,
    read_cb => sub { $_[0]->get($_[1]) },
    write_cb => sub { $_[0]->set($_[1], $_[2]) },

And then:

  my $Value = $Cache->get($Key);

  $Cache->set($Key, $NewValue);

Will just work and will be read/written to the underlying data source as
needed automatically.
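Tying the pieces together, a minimal read-through setup might look like the
sketch below. The mock store, file path, and cache size are invented for
illustration; 'write_action => "write_through"' makes write_cb fire on every
set(), whereas the default write_back defers it until the item is expunged:

```perl
use Cache::FastMmap;

# A tiny mock backing store (hypothetical; anything with get/set works).
package MockStore;
sub new { bless { data => {} }, shift }
sub get { $_[0]{data}{ $_[1] } }
sub set { $_[0]{data}{ $_[1] } = $_[2] }

package main;
my $store = MockStore->new;
$store->set(answer => 42);

my $Cache = Cache::FastMmap->new(
    share_file   => "/tmp/example-cache-$$.fmm",  # illustrative path
    cache_size   => '1m',                         # illustrative size
    context      => $store,
    read_cb      => sub { $_[0]->get($_[1]) },
    write_cb     => sub { $_[0]->set($_[1], $_[2]) },
    write_action => 'write_through',
);

# First get() misses and falls through to read_cb:
my $Value = $Cache->get('answer');

# set() updates the cache and, with write_through, the store as well:
$Cache->set('answer', 43);
```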

%prep
%autosetup  -n %{cpan_name}-%{version}

%build
perl Makefile.PL INSTALLDIRS=vendor OPTIMIZE="%{optflags}"
%{make_build}

%check
make test

%install
%perl_make_install
%perl_process_packlist
%perl_gen_filelist

%files -f %{name}.files
%doc Changes README

%changelog
