Programs and libraries for graph, mesh and hypergraph partitioning
Its purpose is to apply graph theory, with a divide-and-conquer approach, to scientific computing problems such as graph and mesh partitioning, static mapping, and sparse matrix ordering, in application domains ranging from structural mechanics to operating systems and bio-chemistry.
The SCOTCH distribution is a set of programs and libraries which implement the static mapping and sparse matrix reordering algorithms developed within the SCOTCH project.
- Devel package for openSUSE:Factory
- 6 derived packages
- Links to openSUSE:Factory / scotch
- Checkout package:
  osc -A https://api.opensuse.org checkout science/scotch && cd $_
Source Files
Filename | Size
---|---
metis-header.patch | 446 Bytes
scotch-5.1.11-ptesmumps-build.patch | 821 Bytes
scotch-Makefile.inc.in | 438 Bytes
scotch.changes | 2.97 KB
scotch.spec | 16.7 KB
scotch_6.0.3.tar.gz | 4.57 MB
Revision 8 (latest revision is 52)
Matthias Mailänder (Mailaender) accepted request 288207 from Dmitry Roshchin (Dmitry_R) (revision 8)
- Update to version 6.0.3
  * bugfix release
- Add "scotch_" prefix to binaries and man pages to avoid name conflicts
Comments (2)
It doesn't seem possible to link in a serial scotch library with any of the SLE_15 variants. Is this an oversight, or are there symbol conflicts with the respective MPI variants?
I would expect something like this around line 360:
General note for scotch maintainer(s):
with OpenFOAM we noticed some regressions with scotch-7.0.1 (perhaps other versions too) and are thus currently sticking with scotch-6.1.0 locally - so some caution may be needed if/when updating versions here.
In Homebrew (for example), they have aggressively enabled -DSCOTCH_PTHREAD_MPI, which means that any program using ptscotch will fail if MPI is not initialized with MPI_THREAD_MULTIPLE (we normally do not use MPI threading for performance reasons). It would be nice to avoid that here.
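For reference, a rough sketch of the relevant part of a scotch Makefile.inc that keeps pthread support inside the library but leaves out the MPI threading flag the comment warns about. The flag names come from the scotch build system, but the exact set and compiler settings vary by scotch version and platform, so treat this only as an illustration:

```make
# Hypothetical excerpt of a scotch Makefile.inc (exact contents vary
# by scotch version and platform).
CCS     = gcc
CCP     = mpicc

# Thread support within scotch itself is kept enabled ...
CFLAGS  = -O3 -DCOMMON_PTHREAD -DSCOTCH_PTHREAD -DSCOTCH_RENAME

# ... but -DSCOTCH_PTHREAD_MPI is deliberately NOT set, so ptscotch
# does not issue MPI calls from multiple threads and therefore does
# not require MPI to be initialized with MPI_THREAD_MULTIPLE.
```

With a build like this, applications that initialize MPI with a lower threading level (e.g. MPI_THREAD_FUNNELED) can still link and run against ptscotch.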