

# configuration file for storeBackup.pl
# Generated by storeBackup.pl, 3.5

####################
### explanations ###
####################

# You can set a value specified with '-cf_key' (e.g. logFiles) and
# continue it on the following lines, which have to begin with
# white space:
# logFiles = /var/log/messages  /var/log/cups/access_log
#      /var/log/cups/error_log
# One or more white spaces are interpreted as separators.
# You can use single quotes or double quotes to group strings
# together, e.g. if you have a filename with a blank in its name:
# logFiles = '/var/log/my strange log'
# will result in one filename, not in three.
# If an option should have *no value*, write:
# logFiles =
# If you want the default value, comment it:
;logFiles =
# You can also use environment variables, like $XXX or ${XXX},
# as in a shell. Single quotes will mask environment variables,
# while double quotes will not.
# You can mask $, {, }, ", ' with a backslash (\), e.g. \$
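# For instance (the path below is purely illustrative):
# logFiles = "$HOME/log files/messages"  -> $HOME is expanded
# logFiles = '$HOME/log files/messages'  -> $HOME stays literal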
# Lines beginning with a '#' or ';' are ignored (use this for comments)
#
# You can overwrite settings on the command line. You can also
# remove a setting on the command line by using the --unset
# feature, e.g.:
# '--unset doNotDelete' or '--unset --doNotDelete'
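# A possible invocation (the config file path and values are only
# illustrative):
# storeBackup.pl -f /etc/storeBackup.conf --keepAll 60d --unset doNotDelete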

#####################
### configuration ###
#####################

# source directory (*** must be specified ***)
;sourceDir=

# top level directory of all linked backups (*** must be specified ***)
# storeBackup must know for consistency checking where all your backups
# are. This is done to make sure that your backups are consistent if you
# used --lateLinks.
;backupDir=

# ------------------------------------------------------------------------
# you do not need to specify the options below to get a running configuration
# (but they give you more features and more control)
#


# series directory, default is 'default'
# relative path from backupDir
;series=

# directory for temporary files, default is /tmp
;tmpDir=

# List of other backup directories to consider for
# hard linking. Relative path from backupDir!
# Format (examples):
# backupSeries/2002.08.29_08.25.28 -> consider this backup
# or
# 0:backupSeries    -> last (youngest) backup in <backupDir>/backupSeries
# 1:backupSeries    -> first before last backup in <backupDir>/backupSeries
# n:backupSeries    -> n'th before last backup in <backupDir>/backupSeries
# 3-5:backupSeries  -> 3rd, 4th and 5th in <backupDir>/backupSeries
# all:backupSeries  -> all in <backupDir>/backupSeries
# This option is useful if you want to hard link explicitly
# against specific backup series. For example, 0:myBackup refers
# to the last backup of series 'myBackup'. If you specify
# backup series with otherBackupSeries, then only these backups will be
# used for hard linking.
# You can also use wildcards in series names. See documentation,
# section 'Using Wildcards for Replication' for details.
# Default value is to link to the last backup of all series stored in
# 'backupDir'.
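# EXAMPLE (series names are only illustrative):
# otherBackupSeries = 0:homeSeries all:archiveSeries
#   -> link against the newest backup of 'homeSeries' and
#      against all backups of 'archiveSeries'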
;otherBackupSeries=

# lock file; if it exists, a new instance will terminate if an
# old one is already running; default is /tmp/storeBackup.lock
;lockFile=

# remove the lock file before deleting old backups
# default ('no') is to remove the lock file after deleting
# possible values are 'yes' and 'no'
;unlockBeforeDel=

# continue if one or more of the exception directories
# (see exceptDirs) do not exist ('no' stops processing)
# default is 'no', can be 'yes' or 'no'
;contExceptDirsErr=

# Directories to exclude from the backup (relative path inside of the backup).
# You can use shell type wildcards.
# These directories have to be separated by space or newline.
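# EXAMPLE (paths are only illustrative):
# exceptDirs = tmp var/cache home/*/.cache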
;exceptDirs=

# Directories to include in the backup (relative path inside of the backup).
# You can use shell type wildcards.
# These directories have to be separated by space or newline.
;includeDirs=

# rule for excluding files / only for experienced administrators
# !!! see README file 'including / excluding files and directories'
# EXAMPLE: 
# exceptRule = ( '$size > &::SIZE("3M")' and '$uid eq "hjc"' ) or
#    ( '$mtime > &::DATE("3d4h")' and not '$file =~ m#/tmp/#' )
;exceptRule=

# For explanations, see 'exceptRule'.
;includeRule=

# write a file named .storeBackup.notSaved.bz2 with the
# names of all skipped files; default is 'no', can be 'yes' or 'no'
;writeExcludeLog=

# do not save the specified types of files, allowed: Sbcfpl
# S - file is a socket
# b - file is a block special file
# c - file is a character special file
# f - file is a plain file
# p - file is a named pipe
# l - file is a symbolic link
# Spbc can only be backed up if GNU copy is available.
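# EXAMPLE: skip sockets, block and character special files:
# exceptTypes = Sbc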
;exceptTypes=

# save the specified types of files in an archive instead of
# saving them directly in the file system
# use this if you want to back up those file types but your
# target file system or transport (e.g. sshfs or non-GNU cp)
# does not support those types of files
#   S - file is a socket
#   b - file is a block special file
#   c - file is a character special file
#   p - file is a named pipe
#   l - file is a symbolic link
# you also have to set specialTypeArchiver when using this option
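# EXAMPLE (only an illustration): archive sockets and named pipes:
# archiveTypes = Sp
# (and set e.g. specialTypeArchiver = cpio below)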
;archiveTypes=


# possible values are 'cpio', 'tar' and 'none'; default is 'cpio'
# tar is not able to archive sockets
# cpio is no longer part of the current POSIX standard
;specialTypeArchiver=

# Activate this option if your system's cp is a full-featured GNU
# version. In this case you will also be able to back up several
# special file types like sockets.
# Possible values are 'yes' and 'no'. Default is 'no'
;cpIsGnu=

# make a hard link to existing, identical symlinks in old backups
# use this if your operating system supports hard linking of
# symlinks (Linux does)
# Possible values are 'yes' and 'no'. Default is 'no'
;linkSymlinks=

# execute a job before starting the backup; the lockFile (-L) is
# checked before starting (e.g. can be used for rsync); execution
# stops if the job returns an exit status != 0
;precommand=

# execute a job after finishing the backup, but before erasing old
# backups; reports an error if the job returns an exit status != 0
;postcommand=

# follow symbolic links like directories up to the specified
# depth; default is 0 -> do not follow links
;followLinks=

# only store the contents of file systems named by
# sourceDir and symlinked via followLinks
# possible values are 'yes' and 'no'; default is 'no'
;stayInFileSystem=

# use this only if you write your backup over a high latency
# line like a VPN over the internet
# storeBackup will use more parallelization at the cost of more
# CPU power
# possible values are 'yes' and 'no'; default is 'no'
;highLatency=

# If this option is disabled, then the files in the backup will not
# necessarily have the same permissions and owner as the originals.
# This speeds up backups on network drives a lot. Correct permissions
# are restored by storeBackupRecover.pl no matter what this option is
# set to. Default is 'no'
;ignorePerms=

# suppress (unwanted) warnings in the log files;
# to suppress warnings, the following keys can be used:
#   excDir (suppresses the warning that excluded directories
#          do not exist)
#   fileChange (suppresses the warning that a file has changed during
#              the backup)
#   crSeries (suppresses the warning that storeBackup had to create the
#            'default' series)
#   hashCollision (suppresses the warning if a possible
#                 hash collision is detected)
#   fileNameWithLineFeed (suppresses the warning if a filename
#                        contains a line feed)
#   use_DB_File (suppresses the warning that you should install
#               perl module DB_File for better performance)
#   use_MLDBM (suppresses the warning that you should install
#             perl module MLDBM if you want to use rule functions
#             MARK_DIR or MARK_DIR_REC together with option saveRAM)
#   use_IOCompressBzip2 (suppresses the warning that you should
#                       install perl module IO::Compress::Bzip2
#                       for better performance)
#   noBackupForPeriod (suppresses the warning that there are
#                     no backups for certain periods when using
#                     option keepRelative)
# This option can be repeated multiple times on the command line.
# Example usage in this config file:
# suppressWarning = excDir fileChange crSeries hashCollision
# By default no warnings are suppressed.
;suppressWarning=

# do *not* write hard links to existing files in the backup
# during the backup (yes|no)
# if you set this flag to 'yes', you have to run
# storeBackupUpdateBackup.pl later on your server - see the
# description of that program
# default = no: hard links are written during the backup
;lateLinks=

# only in combination with --lateLinks
# compression of files >= the configured size will be done
# later; the file is (temporarily) copied into the backup
# default = no: no late compression

# repair simple inconsistencies (from lateLinks) automatically
# without asking for confirmation
# default = no: no automatic repair
;autorepair=

# Files with the specified suffixes for which storeBackup will make
# an md5 check on blocks of those files. Checked after the
# --checkBlocksRule(n) options.
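# EXAMPLE (the suffixes are only illustrative):
# checkBlocksSuffix = \.vdi \.iso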
;checkBlocksSuffix=

# Only check files specified in --checkBlocksSuffix if their
# file size is at least this value, default is 100M
;checkBlocksMinSize=

# Block size for files specified with --checkBlocksSuffix
# default is 1M (1 megabyte)
;checkBlocksBS=

# if set, the blocks generated due to checkBlocksSuffix are compressed
# Possible values are 'check', 'yes' and 'no'. Default is 'no'
# 'check' uses COMPRESSION_CHECK (see option compressSuffix)
;checkBlocksCompr=

# Read files specified here in parallel to "normal" ones.
# This only makes sense if they are on a different disk.
# Default value is 'no'
;checkBlocksParallel=

# length of queue to store files before block checking,
# default = 1000
;queueBlock=

# Files for which storeBackup will make an md5 check depending
# on blocks of that file.
# The rules are checked from rule 1 to rule 5. The first match is used
# !!! see README file 'including / excluding files and directories'
# EXAMPLE: 
# checkBlocksRule0 = ( '$size > &::SIZE("3M")' and '$uid eq "hjc"' ) or
#    ( '$mtime > &::DATE("3d4h")' and not '$file =~ m#/tmp/#' )
;checkBlocksRule0=

# Block size for option checkBlocksRule
# default is 1M (1 megabyte)
;checkBlocksBS0=

# if set to 'yes', blocks generated due to this rule will be compressed
# possible values: 'check', 'yes' or 'no', default is 'no'
# 'check' uses COMPRESSION_CHECK (see option compressSuffix)
;checkBlocksCompr0=

# Filter for reading a file that is to be treated as a blocked file,
# e.g. 'gzip -d' if the file is compressed. Default is no read filter.
;checkBlocksRead0=

# Read files specified here in parallel to "normal" ones.
# This only makes sense if they are on a different disk.
# Default value is 'no'
;checkBlocksParallel0=

;checkBlocksRule1=
;checkBlocksBS1=
;checkBlocksCompr1=
;checkBlocksRead1=
;checkBlocksParallel1=

;checkBlocksRule2=
;checkBlocksBS2=
;checkBlocksCompr2=
;checkBlocksRead2=
;checkBlocksParallel2=

;checkBlocksRule3=
;checkBlocksBS3=
;checkBlocksCompr3=
;checkBlocksRead3=
;checkBlocksParallel3=

;checkBlocksRule4=
;checkBlocksBS4=
;checkBlocksCompr4=
;checkBlocksRead4=
;checkBlocksParallel4=

# List of devices for md5 check depending on blocks of these
# devices (e.g. /dev/sdb or /dev/sdb1)
;checkDevices0=

# Directory where to store the backups of the devices
;checkDevicesDir0=

# Block size of option checkDevices0
# default is 1M (1 megabyte)
;checkDevicesBS0=

# if set, the blocks generated due to checkDevices0 are compressed
# possible values: 'check', 'yes' or 'no', default is 'no'
# 'check' uses COMPRESSION_CHECK (see option compressSuffix)
;checkDevicesCompr0=

# Read devices specified here in parallel to "normal" ones.
# This only makes sense if they are on a different disk.
# Default value is 'no'
;checkDevicesParallel0=

;checkDevices1=
;checkDevicesDir1=
;checkDevicesBS1=
;checkDevicesCompr1=
;checkDevicesParallel1=

;checkDevices2=
;checkDevicesDir2=
;checkDevicesBS2=
;checkDevicesCompr2=
;checkDevicesParallel2=

;checkDevices3=
;checkDevicesDir3=
;checkDevicesBS3=
;checkDevicesCompr3=
;checkDevicesParallel3=

;checkDevices4=
;checkDevicesDir4=
;checkDevicesBS4=
;checkDevicesCompr4=
;checkDevicesParallel4=

# write temporary dbm files to --tmpDir
# use this if you do not have enough RAM; default is 'no'
;saveRAM=

# compress command (with options), default is <bzip2>
;compress=

# uncompress command (with options), default is <bzip2 -d>
;uncompress=

# postfix to add after compression, default is <.bz2>
;postfix=
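# EXAMPLE: to use gzip instead of the default bzip2, the three
# settings above would be changed together (shown only as an
# illustration):
# compress = gzip -9
# uncompress = gzip -d
# postfix = .gz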

# do not compress files with the following
# suffixes (uppercase variants included):
# (if you set this to '.*', no files will be compressed)
# Default is \.zip \.bz2 \.gz \.tgz \.jpg \.gif \.tiff? \.mpe?g \.mp[34] \.mpe?[34] \.ogg \.gpg \.png \.lzma \.xz \.mov
;exceptSuffix=

# like --exceptSuffix, but adds to the default
# suffixes instead of replacing them
;addExceptSuffix=


# Like --exceptSuffix, but the files mentioned here *will* be
# compressed. If you choose this option, then files not matched
# by exceptSuffix, addExceptSuffix or these suffixes will be
# rated by the rule function COMPRESSION_CHECK to decide whether
# to compress them or not
;compressSuffix=

# Files smaller than this size will never be compressed but always
# copied. Default is 1024
;minCompressSize=

# alternative to exceptSuffix, comprRule and minCompressSize:
# definition of a rule which files will be compressed
# If this rule is set, exceptSuffix, addExceptSuffix
# and minCompressSize are ignored.
# Default rule _generated_ from the options above is:
# comprRule = '$size > 1024' and not
#   '$file =~ /\.zip\Z|\.bz2\Z|\.gz\Z|\.tgz\Z|\.jpg\Z|\.gif\Z|\.tiff\Z|\.tif\Z|\.mpeg\Z|\.mpg\Z|\.mp3\Z|\.ogg\Z|\.gpg\Z|\.png\Z/i'
# or (e.g. if compressSuffix = .doc .pdf):
#   '$size > 1024 and not $file =~ /\.zip\Z|\.bz2\Z|\.gz\Z|\.tgz\Z|\.jpg\Z|\.gif\Z|\.tiff\Z|\.tif\Z|\.mpeg\Z|\.mpg\Z|\.mp3\Z|\.ogg\Z|\.gpg\Z|\.png\Z/i and ( $file =~ /\.doc\Z|\.pdf\Z/i or &::COMPRESSION_CHECK($file) )'
;comprRule=

# maximal number of parallel compress operations,
# default = chosen automatically
;noCompress=

# length of queue to store files before compression,
# default = 1000
;queueCompress=

# maximal number of parallel copy operations,
# default = 1
;noCopy=

# length of queue to store files before copying,
# default = 1000
;queueCopy=

# write statistics about used space in log file
# default is 'no'
;withUserGroupStat=

# write statistics about used space to the named file
# (the file will be overwritten each time)
# if no file name is given, nothing will be written
# format is:
# identifier uid userName value
# identifier gid groupName value
;userGroupStatFile=

# default is 'no', if you do not want to compress, say 'yes'
;doNotCompressMD5File=

# permissions of .md5checkSumFile, default is 0600
;chmodMD5File=

# verbose messages about exceptRule, includeRule
# and added files; default is 'no'
;verbose=

# generate debug messages, levels are 0 (none, default),
# 1 (some), 2 (many) messages
;debug=

# reset access time in the source directory - but this will
# change ctime (the time of last modification of the file
# status information)
# default is 'no'; if you want this, say 'yes'
;resetAtime=

# do not delete any old backup (e.g. specified via --keepAll or
# --keepWeekday) but print a message instead. This is for testing
# configurations or if you want to delete old backups with
# storeBackupDel.pl.
# Values are 'yes' and 'no'. Default is 'no', which means old
# backups are deleted as configured.
;doNotDelete=

# delete old backups which have not been finished
# this will not happen if doNotDelete is set
# Values are 'yes' and 'no'. Default is 'no' which means not to delete.
;deleteNotFinishedDirs=

# keep backups which are not older than the specified amount
# of time. This is like a default value for all days in
# --keepWeekday. Deletion takes place at the end of the backup run.
# The time range has to be specified in format 'dhms', e.g.
# 10d4h means 10 days and 4 hours
# default = 30d
# An archive flag is not possible with this parameter (see below).
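# EXAMPLE (the value is only an illustration):
# keep all backups for 60 days and 12 hours:
# keepAll = 60d12h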
;keepAll=

# keep backups for the specified days for the specified
# amount of time. Overwrites the default values chosen in
# --keepAll. 'Mon,Wed:40d Sat:60d10m' means:
# keep backups from Mon and Wed 40 days
# keep backups from Sat 60 days + 10 mins
# keep backups from the rest of the days as specified in
# --keepAll (default 30d)
# you can also set the 'archive flag'.
# 'Mon,Wed:a40d5m Sat:60d10m' means:
# keep backups from Mon and Wed 40 days + 5 mins + 'archive'
# keep backups from Sat 60 days + 10 mins
# keep backups from the rest of the days as specified in
# --keepAll (default 30d)
# If you also use the 'archive flag' it means to not
# delete the affected directories via --keepMaxNumber:
# a10d4h means 10 days and 4 hours and 'archive flag'
;keepWeekday=

# do not delete the first backup of a year
# format is timePeriod with possible 'archive flag'
;keepFirstOfYear=

# do not delete the last backup of a year
# format is timePeriod with possible 'archive flag'
;keepLastOfYear=

# do not delete the first backup of a month
# format is timePeriod with possible 'archive flag'
;keepFirstOfMonth=

# do not delete the last backup of a month
# format is timePeriod with possible 'archive flag'
;keepLastOfMonth=

# default: 'Sun'. This value is used for calculating
# --keepFirstOfWeek and --keepLastOfWeek
;firstDayOfWeek=

# do not delete the first backup of a week
# format is timePeriod with possible 'archive flag'
;keepFirstOfWeek=

# do not delete the last backup of a week
# format is timePeriod with possible 'archive flag'
;keepLastOfWeek=

# keep multiple backups of one day up to timePeriod
# format is timePeriod, 'archive flag' is not possible
# default is 7d
;keepDuplicate=

# Keep at least that minimum number of backups. Multiple backups
# of one day are counted as one backup. Default is 10.
;keepMinNumber=

# Try to keep only that maximum number of backups. If you have
# more backups, the following sequence of deletion will happen:
# - delete all duplicates of a day, beginning with the old
#   ones, except the oldest of every day
# - if this is not enough, delete the rest of the backups
#   beginning with the oldest, but *never* a backup with
#   the 'archive flag' or the last backup
;keepMaxNumber=

# Alternative deletion scheme. If you use this option, all
# other keep options are ignored. Preserves backups depending
# on their *relative* age. Example:
#
#   keepRelative = 1d 7d 61d 92d
#
# will (try to) ensure that there is always
#
# - One backup between 1 day and 7 days old
# - One backup between 7 days and ~2 months old
# - One backup between ~2 months and ~3 months old
#
# If there is no backup for a specified timespan (e.g. because the
# last backup was done more than 2 weeks ago) the next older backup
# will be used for this timespan.
;keepRelative =

# print a progress report after each 'number' of files
# Default is 0, which means no reports.
# Additionally you may add a time frame after which a message
# is printed. If you want to print a report every 1000 files and
# after one minute and 10 seconds, use: -P 1000,1m10s
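# in this configuration file, the same would be written as:
# progressReport = 1000,1m10s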
;progressReport=

# print the depth of the directory currently being read during
# the backup; default is 'no', values are 'yes' and 'no'
;printDepth=

# ignore read errors in the source directory; unreadable
# directories do not cause storeBackup.pl to stop processing
# Values are 'yes' and 'no'. Default is 'no' which means not
# to ignore them
;ignoreReadError=

# after a successful backup, set a symbolic link to
# that backup and delete existing older links with the
# same name
;linkToRecent=

# name of the log file (default is STDOUT)
;logFile=

# if you specify a log file with --logFile you can
# additionally print the output to STDOUT with this flag
# Values are 'yes' and 'no'. Default is 'no'.
;plusLogStdout=

# output in logfile without time: 'yes' or 'no'
# default = no
;suppressTime=

# maximal length of log file, default = 1e6
;maxFilelen=

# number of old log files, default = 5
;noOfOldFiles=

# save log files with date and time instead of deleting the
# old ones (see noOfOldFiles): 'yes' or 'no', default = 'no'
;saveLogs=

# compress saved log files (e.g. with 'gzip -9')
# default is 'bzip2'
;compressWith=

# write log file (also) in the backup directory:
# 'yes' or 'no', default is 'no'
# Be aware that this log does not contain all error
# messages of the one specified with --logFile!
# Some errors are possible before the backup
# directory is created.
;logInBackupDir=

# compress the log file in the backup directory:
# 'yes' or 'no', default is 'no'
;compressLogInBackupDir=

# filename to use for writing the above log file,
# default is '.storeBackup.log'
;logInBackupDirFileName=
