Source: ggml
Section: libs
Priority: optional
Maintainer: Mathieu Baudier <mbaudier@argeo.org>
Standards-Version: 4.7.2
Vcs-Browser: https://git.djapps.eu/?p=pkg/ggml/sources/ggml;a=summary
Vcs-Git: https://git.djapps.eu/pkg/ggml/sources/ggml
Homepage: https://github.com/ggml-org/ggml
Build-Depends: cmake,
               debhelper-compat (= 13),
               pkgconf,
               libvulkan-dev            [amd64] <!pkg.ggml.novulkan>,
               glslc                    [amd64] <!pkg.ggml.novulkan>,
               nvidia-cuda-toolkit-gcc  [amd64] <!pkg.ggml.nocuda>,
Rules-Requires-Root: no

Package: libggml-base0
Architecture: any
Multi-Arch: same
Depends: ${misc:Depends},
         ${shlibs:Depends}
Description: Tensor library for machine learning (base)
 The ggml base library provides the backend-independent API
 upon which specialized libraries or applications can be built.

Package: libggml0
Architecture: any
Multi-Arch: same
Depends: libggml-base0 (= ${binary:Version}),
         ${misc:Depends},
         ${shlibs:Depends}
Description: Tensor library for machine learning (loader)
 The ggml library is a thin high-level layer mostly
 responsible for loading the various ggml backends
 and connecting them to the API provided by the ggml base library.

Package: libggml-dev
Section: libdevel
Architecture: any
Multi-Arch: same
Depends: libggml0 (= ${binary:Version}),
         libggml-base0 (= ${binary:Version}),
         ${misc:Depends}
Description: Tensor library for machine learning (development files)
 This development package provides the files required to build
 software based on ggml.

Package: libggml-backend-cpu
Architecture: any
Multi-Arch: same
Depends: libggml-base0 (= ${binary:Version}),
         ${misc:Depends},
         ${shlibs:Depends}
Description: Tensor library for machine learning (CPU backend)
 The ggml CPU backend performs computations solely on the
 plain CPU, without software or hardware acceleration.
 It is available as a set of dynamically loaded libraries optimized
 for various CPU families, depending on their specific capabilities.
 The ggml library automatically selects the most appropriate one,
 allowing computations to run on older CPUs while still benefiting
 from the capabilities of recent ones.

Package: libggml-backend-rpc
Architecture: any
Multi-Arch: same
Depends: libggml-base0 (= ${binary:Version}),
         ${misc:Depends},
         ${shlibs:Depends}
Description: Tensor library for machine learning (RPC backend)
 The ggml RPC backend allows computations to be distributed
 over the network to remote ggml backends.

Package: libggml-backend-vulkan
Architecture: amd64
Multi-Arch: same
Depends: libggml-base0 (= ${binary:Version}),
         ${misc:Depends},
         ${shlibs:Depends}
Build-Profiles: <!pkg.ggml.novulkan>
Description: Tensor library for machine learning (Vulkan backend)
 The ggml Vulkan backend provides hardware acceleration of the
 computations based on the Vulkan API. This is typically used
 to leverage the parallel computation capabilities of GPUs.

Package: libggml-backend-cuda
Architecture: amd64
Multi-Arch: same
Depends: libggml-base0 (= ${binary:Version}),
         ${misc:Depends},
         ${shlibs:Depends}
Build-Profiles: <!pkg.ggml.nocuda>
Description: Tensor library for machine learning (CUDA backend)
 The ggml CUDA backend provides hardware acceleration of the
 computations based on the CUDA API. This is typically used
 to leverage the parallel computation capabilities of Nvidia GPUs.
