
mvapich2_2_2-gnu-hpc-2.2-12.2 RPM for aarch64

From openSUSE Leap 15.3 for aarch64

Name: mvapich2_2_2-gnu-hpc
Version: 2.2
Release: 12.2
Distribution: SUSE Linux Enterprise 15
Vendor: SUSE LLC <https://www.suse.com/>
Build date: Sun May 5 08:13:34 2019
Build host: centriq6
Group: Development/Libraries/Parallel
Size: 12343086
Source RPM: mvapich2_2_2-gnu-hpc-2.2-12.2.src.rpm
Packager: https://www.suse.com/
Url: http://mvapich.cse.ohio-state.edu/overview/mvapich2/
Summary: OSU MVAPICH2 MPI package
This is an MPI-3 implementation which includes all MPI-1 features. It
is based on MPICH2 and MVICH.
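Since the package advertises an MPI-3 implementation, a minimal sketch of a program exercising two MPI-3 additions (MPI_Get_library_version and the non-blocking barrier MPI_Ibarrier) may help; this is an illustration only and must be built with the mpicc shipped by this package, not a standalone example.

```c
/* Minimal MPI-3 smoke test (illustrative sketch).
 * MPI_Get_library_version and MPI_Ibarrier were both introduced in MPI-3,
 * so this will only build against an MPI-3 implementation such as MVAPICH2 2.2. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char version[MPI_MAX_LIBRARY_VERSION_STRING];
    int len, rank;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Get_library_version(version, &len);   /* MPI-3 call */
    if (rank == 0)
        printf("%s\n", version);

    MPI_Ibarrier(MPI_COMM_WORLD, &req);       /* MPI-3 non-blocking barrier */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}
```

With this package's install layout, the compiler wrappers live under /usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin, so one would typically load the corresponding Lmod modules (module names are an assumption here) and build with that mpicc before launching via mpirun or mpiexec.hydra.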

Provides

Requires

License

BSD-3-Clause

Changelog

* Thu May 02 2019 nmoreychaisemartin@suse.com
  - Add mvapich2-fix-double-free.patch to fix a segfault
    when running on a machine with no RDMA hardware (bsc#1133797)
* Wed Mar 20 2019 aguerrero@suse.com
  - Add patch to remove obsolete GCC check (bnc#1129421). It also patches
    autogen.sh to get the autotools working in SLE12SP4.
    * 0001-Drop-GCC-check.patch
  - Force a re-run of autotools to properly regenerate the files after
    patching src/binding/cxx/buildiface
* Sun Nov 18 2018 eich@suse.com
  - Add macro _hpc_mvapich2_modules for modules support (bsc#1116458).
* Mon Sep 10 2018 nmoreychaisemartin@suse.com
  - Remove bashism in postun scriptlet
* Wed Sep 05 2018 nmoreychaisemartin@suse.com
  - Fix handling of mpi-selector during updates (bsc#1098653)
* Sun Aug 19 2018 eich@suse.com
  - macros.hpc-mvapich2:
    replace %%compiler_family by %%hpc_compiler_family
* Mon Jul 16 2018 msuchanek@suse.com
  - Use sched_yield instead of pthread_yield (boo#1102421).
    - drop mvapich2-pthread_yield.patch
* Mon Jun 18 2018 nmoreychaisemartin@suse.com
  - Add missing bsc and fate references to changelog
* Tue Jun 12 2018 nmoreychaisemartin@suse.com
  - Disable HPC builds for SLE12 (fate#323655)
* Sun Mar 25 2018 kasimir_@outlook.de
  - Change mvapich2-arm-support.patch to provide missing functions for
    armv6hl
* Fri Feb 09 2018 cgoll@suse.com
  - Fix summary in module files (bnc#1080259)
* Tue Jan 30 2018 eich@suse.com
  - Use macro in mpivars.(c)sh to be independent of changes to the module
    setup for the compiler (boo#1078364).
* Fri Jan 05 2018 eich@suse.com
  - Switch from gcc6 to gcc7 as additional compiler flavor for HPC on SLES.
  - Fix library package requires - use HPC macro (boo#1074890).
* Fri Oct 06 2017 nmoreychaisemartin@suse.com
  - Add conflicts between the macros-devel packages
* Thu Oct 05 2017 nmoreychaisemartin@suse.com
  - Add BuildRequires to libibmad-devel for older release (SLE <= 12.2, Leap <= 42.2)
* Tue Sep 12 2017 eich@suse.com
  - Add HPC specific build targets using environment modules
    (FATE#321712).
* Tue Sep 12 2017 nmoreychaisemartin@suse.com
  - Drop unnecessary dependency on xorg-x11-devel
* Mon Sep 11 2017 nmoreychaisemartin@suse.com
  - Only require verbs libraries for the verbs build.
    libibverbs-devel causes a SEGV when run in a chroot using the
    psm or psm2 conduits
  - Add testsuite packages for all build flavours
* Thu Jul 13 2017 nmoreychaisemartin@suse.com
  - Add LD_LIBRARY_PATH to mpivars.sh and mpivars.csh
* Thu Jul 13 2017 nmoreychaisemartin@suse.com
  - Disable rpath in pkgconfig files
* Wed Jul 05 2017 nmoreychaisemartin@suse.com
  - Remove redundant configure options already passed by %configure
* Mon Jun 26 2017 nmoreychaisemartin@suse.com
  - Change install dir to allow multiple flavors to be installed
    at the same time (bsc#934090)
  - Fix bsc#1045955
    - Fix mvapich2-psm package to use libpsm (TrueScale)
    - Add mvapich2-psm2 package using libpsm2 (OmniPath)
* Mon Jun 26 2017 nmoreychaisemartin@suse.com
  - Use _multibuild to build the various mvapich2-flavours
* Fri Jun 23 2017 nmoreychaisemartin@suse.com
  - Replace dependency on libibmad-devel with infiniband-diags-devel
* Wed Jun 14 2017 nmoreychaisemartin@suse.com
  - Have mvapich2 and mvapich2-psm conflict with each other
  - Cleanup spec file
  - Remove mvapich2-testsuite RPM
* Thu Jun 08 2017 nmoreychaisemartin@suse.com
  - Re-enable ARM compilation
  - Rename and clean up mvapich-s390_get_cycles.patch to
    mvapich2-s390_get_cycles.patch for consistency
  - Cleanup mvapich2-pthread_yield.patch
  - Add mvapich2-arm-support.patch to provide missing functions for
    armv7hl and aarch64
* Thu Jun 08 2017 nmoreychaisemartin@suse.com
  - Remove version dependencies on libibumad, libibverbs and librdmacm
* Tue May 16 2017 nmoreychaisemartin@suse.com
  - Fix mvapich2-testsuite packaging
  - Disable build on armv7
* Wed Mar 29 2017 pth@suse.de
  - Make dependencies on libraries now provided by rdma-core versioned.
* Tue Nov 29 2016 pth@suse.de
  - Create environment module (bsc#1004628).
* Wed Nov 23 2016 pth@suse.de
  - Fix URL.
  - Update to mvapich 2.2 GA. Changes since rc1:
    MVAPICH2 2.2 (09/07/2016)
    * Features and Enhancements (since 2.2rc2):
    - Single node collective tuning for Bridges@PSC, Stampede@TACC and other
      architectures
    - Enable PSM builds when both PSM and PSM2 libraries are present
    - Add support for HCAs that return result of atomics in big endian notation
    - Establish loopback connections by default if HCA supports atomics
    * Bug Fixes (since 2.2rc2):
    - Fix minor error in use of communicator object in collectives
    - Fix missing u_int64_t declaration with PGI compilers
    - Fix memory leak in RMA rendezvous code path
    MVAPICH2 2.2rc2 (08/08/2016)
    * Features and Enhancements (since 2.2rc1):
    - Enhanced performance for MPI_Comm_split through new bitonic algorithm
    - Enable graceful fallback to Shared Memory if LiMIC2 or CMA transfer fails
    - Enable support for multiple MPI initializations
    - Unify process affinity support in Gen2, PSM and PSM2 channels
    - Remove verbs dependency when building the PSM and PSM2 channels
    - Allow processes to request MPI_THREAD_MULTIPLE when socket or NUMA node
      level affinity is specified
    - Point-to-point and collective performance optimization for Intel Knights
      Landing
    - Automatic detection and tuning for InfiniBand EDR HCAs
    - Warn user to reconfigure library if rank type is not large enough to
      represent all ranks in job
    - Collective tuning for Opal@LLNL, Bridges@PSC, and Stampede-1.5@TACC
    - Tuning and architecture detection for Intel Broadwell processors
    - Add ability to avoid using --enable-new-dtags with ld
    - Add LIBTVMPICH specific CFLAGS and LDFLAGS
    * Bug Fixes (since 2.2rc1):
    - Disable optimization that removes use of calloc in ptmalloc hook
      detection code
    - Fix weak alias typos (allows successful compilation with CLANG compiler)
    - Fix issues in PSM large message gather operations
    - Enhance error checking in collective tuning code
    - Fix issues with UD based communication in RoCE mode
    - Fix issues with PMI2 support in singleton mode
    - Fix default binding bug in hydra launcher
    - Fix issues with Checkpoint Restart when launched with mpirun_rsh
    - Fix fortran binding issues with Intel 2016 compilers
    - Fix issues with socket/NUMA node level binding
    - Disable atomics when using Connect-IB with RDMA_CM
    - Fix hang in MPI_Finalize when using hybrid channel
    - Fix memory leaks
* Tue Nov 15 2016 pth@suse.de
  - Update to version 2.2rc1 (fate#319240). Changes since 2.1:
    MVAPICH2 2.2rc1 (03/29/2016)
    * Features and Enhancements (since 2.2b):
    - Support for OpenPower architecture
    - Optimized inter-node and intra-node communication
    - Support for Intel Omni-Path architecture
    - Thanks to Intel for contributing the patch
    - Introduction of a new PSM2 channel for Omni-Path
    - Support for RoCEv2
    - Architecture detection for PSC Bridges system with Omni-Path
    - Enhanced startup performance and reduced memory footprint for storing
      InfiniBand end-point information with SLURM
    - Support for shared memory based PMI operations
    - Availability of an updated patch from the MVAPICH project website
      with this support for SLURM installations
    - Optimized pt-to-pt and collective tuning for Chameleon InfiniBand
      systems at TACC/UoC
    - Enable affinity by default for TrueScale(PSM) and Omni-Path(PSM2)
      channels
    - Enhanced tuning for shared-memory based MPI_Bcast
    - Enhanced debugging support and error messages
    - Update to hwloc version 1.11.2
    * Bug Fixes (since 2.2b):
    - Fix issue in some of the internal algorithms used for MPI_Bcast,
      MPI_Alltoall and MPI_Reduce
    - Fix hang in one of the internal algorithms used for MPI_Scatter
    - Thanks to Ivan Raikov@Stanford for reporting this issue
    - Fix issue with rdma_connect operation
    - Fix issue with Dynamic Process Management feature
    - Fix issue with de-allocating InfiniBand resources in blocking mode
    - Fix build errors caused due to improper compile time guards
    - Thanks to Adam Moody@LLNL for the report
    - Fix finalize hang when running in hybrid or UD-only mode
    - Thanks to Jerome Vienne@TACC for reporting this issue
    - Fix issue in MPI_Win_flush operation
    - Thanks to Nenad Vukicevic for reporting this issue
    - Fix out of memory issues with non-blocking collectives code
    - Thanks to Phanisri Pradeep Pratapa and Fang Liu@GaTech for
      reporting this issue
    - Fix fall-through bug in external32 pack
    - Thanks to Adam Moody@LLNL for the report and patch
    - Fix issue with on-demand connection establishment and blocking mode
    - Thanks to Maksym Planeta@TU Dresden for the report
    - Fix memory leaks in hardware multicast based broadcast code
    - Fix memory leaks in TrueScale(PSM) channel
    - Fix compilation warnings
    MVAPICH2 2.2b (11/12/2015)
    * Features and Enhancements (since 2.2a):
    - Enhanced performance for small messages
    - Enhanced startup performance with SLURM
    - Support for PMIX_Iallgather and PMIX_Ifence
    - Support to enable affinity with asynchronous progress thread
    - Enhanced support for MPIT based performance variables
    - Tuned VBUF size for performance
    - Improved startup performance for QLogic PSM-CH3 channel
    - Thanks to Maksym Planeta@TU Dresden for the patch
    * Bug Fixes (since 2.2a):
    - Fix issue with MPI_Get_count in QLogic PSM-CH3 channel with very large
      messages (>2GB)
    - Fix issues with shared memory collectives and checkpoint-restart
    - Fix hang with checkpoint-restart
    - Fix issue with unlinking shared memory files
    - Fix memory leak with MPIT
    - Fix minor typos and usage of inline and static keywords
    - Thanks to Maksym Planeta@TU Dresden for the patch and suggestions
    - Fix missing MPIDI_FUNC_EXIT
    - Thanks to Maksym Planeta@TU Dresden for the patch
    - Remove unused code
    - Thanks to Maksym Planeta@TU Dresden for the patch
    - Continue with warning if user asks to enable XRC when the system does not
      support XRC
    MVAPICH2 2.2a (08/17/2015)
    * Features and Enhancements (since 2.1 GA):
    - Based on MPICH 3.1.4
    - Support for backing on-demand UD CM information with shared memory
      for minimizing memory footprint
    - Reorganized HCA-aware process mapping
    - Dynamic identification of maximum read/atomic operations supported by HCA
    - Enabling support for intra-node communications in RoCE mode without
      shared memory
    - Updated to hwloc 1.11.0
    - Updated to sm_20 kernel optimizations for MPI Datatypes
    - Automatic detection and tuning for 24-core Haswell architecture
    * Bug Fixes (since 2.1 GA):
    - Fix for error with multi-vbuf design for GPU based communication
    - Fix bugs with hybrid UD/RC/XRC communications
    - Fix for MPICH putfence/getfence for large messages
    - Fix for error in collective tuning framework
    - Fix validation failure with Alltoall with IN_PLACE option
    - Thanks to Mahidhar Tatineni @SDSC for the report
    - Fix bug with MPI_Reduce with IN_PLACE option
    - Thanks to Markus Geimer for the report
    - Fix for compilation failures with multicast disabled
    - Thanks to Devesh Sharma @Emulex for the report
    - Fix bug with MPI_Bcast
    - Fix IPC selection for shared GPU mode systems
    - Fix for build time warnings and memory leaks
    - Fix issues with Dynamic Process Management
    - Thanks to Neil Spruit for the report
    - Fix bug in architecture detection code
    - Thanks to Adam Moody @LLNL for the report
* Fri Oct 14 2016 pth@suse.de
  - Create and include modules file for Mvapich2 (bsc#1004628).
  - Remove mvapich2-fix-implicit-decl.patch as the fix is upstream.
  - Adapt spec file to the changed micro benchmark install directory.
* Sun Jul 24 2016 p.drouand@gmail.com
  - Update to version 2.1
    * Features and Enhancements (since 2.1rc2):
    - Tuning for EDR adapters
    - Optimization of collectives for SDSC Comet system
    - Based on MPICH-3.1.4
    - Enhanced startup performance with mpirun_rsh
    - Checkpoint-Restart Support with DMTCP (Distributed MultiThreaded
      CheckPointing)
    - Thanks to the DMTCP project team (http://dmtcp.sourceforge.net/)
    - Support for handling very large messages in RMA
    - Optimize size of buffer requested for control messages in large message
      transfer
    - Enhanced automatic detection of atomic support
    - Optimized collectives (bcast, reduce, and allreduce) for 4K processes
    - Introduce support to sleep for user specified period before aborting
    - Disable PSM from setting CPU affinity
    - Install PSM error handler to print more verbose error messages
    - Introduce retry mechanism to perform psm_ep_open in PSM channel
    * Bug-Fixes (since 2.1rc2):
    - Relocate reading environment variables in PSM
    - Fix issue with automatic process mapping
    - Fix issue with checkpoint restart when full path is not given
    - Fix issue with Dynamic Process Management
    - Fix issue in CUDA IPC code path
    - Fix corner case in CMA runtime detection
    * Features and Enhancements (since 2.1rc1):
    - Based on MPICH-3.1.4
    - Enhanced startup performance with mpirun_rsh
    - Checkpoint-Restart Support with DMTCP (Distributed MultiThreaded
      CheckPointing)
    - Support for handling very large messages in RMA
    - Optimize size of buffer requested for control messages in large message
      transfer
    - Enhanced automatic detection of atomic support
    - Optimized collectives (bcast, reduce, and allreduce) for 4K processes
    - Introduce support to sleep for user specified period before aborting
    - Disable PSM from setting CPU affinity
    - Install PSM error handler to print more verbose error messages
    - Introduce retry mechanism to perform psm_ep_open in PSM channel
    * Bug-Fixes (since 2.1rc1):
    - Fix failures with shared memory collectives with checkpoint-restart
    - Fix failures with checkpoint-restart when using internal communication
      buffers of different size
    - Fix undeclared variable error when --disable-cxx is specified with
      configure
    - Fix segfault seen during connect/accept with dynamic processes
    - Fix errors with large messages pack/unpack operations in PSM channel
    - Fix for bcast collective tuning
    - Fix assertion errors in one-sided put operations in PSM channel
    - Fix issue with code getting stuck in infinite loop inside ptmalloc
    - Fix assertion error in shared memory large message transfers
    - Fix compilation warnings
    * Features and Enhancements (since 2.1a):
    - Based on MPICH-3.1.3
    - Flexibility to use internal communication buffers of different size for
      improved performance and memory footprint
    - Improve communication performance by removing locks from critical path
    - Enhanced communication performance for small/medium message sizes
    - Support for linking Intel Trace Analyzer and Collector
    - Increase the number of connect retry attempts with RDMA_CM
    - Automatic detection and tuning for Haswell architecture
    * Bug-Fixes (since 2.1a):
    - Fix automatic detection of support for atomics
    - Fix issue with void pointer arithmetic with PGI
    - Fix deadlock in ctxidup MPICH test in PSM channel
    - Fix compile warnings
    * Features and Enhancements (since 2.0):
    - Based on MPICH-3.1.2
    - Support for PMI-2 based startup with SLURM
    - Enhanced startup performance for Gen2/UD-Hybrid channel
    - GPU support for MPI_Scan and MPI_Exscan collective operations
    - Optimize creation of 2-level communicator
    - Collective optimization for PSM-CH3 channel
    - Tuning for IvyBridge architecture
    - Add -export-all option to mpirun_rsh
    - Support for additional MPI-T performance variables (PVARs)
      in the CH3 channel
    - Link with libstdc++ when building with GPU support
      (required by CUDA 6.5)
    * Bug-Fixes (since 2.0):
    - Fix error in large message (>2GB) transfers in CMA code path
    - Fix memory leaks in OFA-IB-CH3 and OFA-IB-Nemesis channels
    - Fix issues with optimizations for broadcast and reduce collectives
    - Fix hang at finalize with Gen2-Hybrid/UD channel
    - Fix issues for collectives with non power-of-two process counts
    - Make ring startup use HCA selected by user
    - Increase counter length for shared-memory collectives
  - Use download Url as source
  - Some other minor improvements
  - Add mvapich2-fix-implicit-decl.patch

Files

/usr/lib/hpc/gnu7/mpi
/usr/lib/hpc/gnu7/mpi/mvapich2
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/hydra_nameserver
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/hydra_persist
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/hydra_pmi_proxy
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/mpic++
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/mpicc
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/mpichversion
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/mpicxx
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/mpiexec
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/mpiexec.hydra
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/mpiexec.mpirun_rsh
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/mpif77
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/mpif90
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/mpifort
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/mpiname
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/mpirun
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/mpirun_rsh
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/mpispawn
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/mpivars
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/mpivars.csh
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/mpivars.sh
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/bin/parkill
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/include
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_allgather
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_allgatherv
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_allreduce
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_alltoall
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_alltoallv
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_barrier
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_bcast
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_gather
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_gatherv
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_iallgather
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_iallgatherv
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_ialltoall
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_ialltoallv
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_ialltoallw
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_ibarrier
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_ibcast
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_igather
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_igatherv
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_iscatter
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_iscatterv
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_reduce
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_reduce_scatter
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_scatter
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/collective/osu_scatterv
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/one-sided
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/one-sided/osu_acc_latency
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/one-sided/osu_cas_latency
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/one-sided/osu_fop_latency
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/one-sided/osu_get_acc_latency
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/one-sided/osu_get_bw
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/one-sided/osu_get_latency
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/one-sided/osu_put_bibw
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/one-sided/osu_put_bw
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/one-sided/osu_put_latency
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/pt2pt
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/pt2pt/osu_bibw
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/pt2pt/osu_bw
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/pt2pt/osu_latency
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/pt2pt/osu_latency_mt
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/pt2pt/osu_mbw_mr
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/pt2pt/osu_multi_lat
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/startup
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/startup/osu_hello
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib/osu-micro-benchmarks/mpi/startup/osu_init
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib64
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib64/libmpi.so.12
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib64/libmpi.so.12.0.5
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib64/libmpicxx.so.12
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib64/libmpicxx.so.12.0.5
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib64/libmpifort.so.12
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/lib64/libmpifort.so.12.0.5
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/share
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/share/man
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/share/man/man1
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/share/man/man1/hydra_nameserver.1
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/share/man/man1/hydra_persist.1
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/share/man/man1/hydra_pmi_proxy.1
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/share/man/man1/mpicc.1
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/share/man/man1/mpicxx.1
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/share/man/man1/mpiexec.1
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/share/man/man1/mpif77.1
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/share/man/man1/mpifort.1
/usr/lib/hpc/gnu7/mpi/mvapich2/2.2/share/man/man3
/usr/lib/hpc/gnu7/mvapich2
/usr/share/doc/mvapich2_2_2-gnu-hpc/CHANGELOG
/usr/share/doc/mvapich2_2_2-gnu-hpc/CHANGES
/usr/share/doc/mvapich2_2_2-gnu-hpc/COPYRIGHT
/usr/share/lmod/moduledeps/gnu-7-mvapich2
/usr/share/lmod/moduledeps/gnu-7/mvapich2
/usr/share/lmod/moduledeps/gnu-7/mvapich2/.version.2.2
/usr/share/lmod/moduledeps/gnu-7/mvapich2/2.2


Generated by rpm2html 1.8.1

Fabrice Bellet, Tue Jul 9 13:54:43 2024