Dataset schema: repo_name (string, 2-29 chars), repo_link (string, 27-60 chars), category (string, 19 classes), github_about_section (string, 10-415 chars), homepage_link (string, 14-93 chars), github_topic_closest_fit (string, 3-23 chars). Each record below lists these six fields in order; "null" marks a missing value.
goose
https://github.com/block/goose
agent
an open source, extensible AI agent that goes beyond code suggestions - install, execute, edit, and test with any LLM
https://block.github.io/goose/
mcp
ray
https://github.com/ray-project/ray
ai compute engine
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
https://ray.io
machine-learning
flashinfer-bench
https://github.com/flashinfer-ai/flashinfer-bench
benchmark
Building the Virtuous Cycle for AI-driven LLM Systems
https://bench.flashinfer.ai
null
KernelBench
https://github.com/ScalingIntelligence/KernelBench
benchmark
KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems
https://scalingintelligence.stanford.edu/blogs/kernelbench/
benchmark
SWE-bench
https://github.com/SWE-bench/SWE-bench
benchmark
SWE-bench: Can Language Models Resolve Real-world GitHub Issues?
https://www.swebench.com
benchmark
terminal-bench
https://github.com/laude-institute/terminal-bench
benchmark
A benchmark for LLMs on complicated tasks in the terminal
https://www.tbench.ai
null
TritonBench
https://github.com/thunlp/TritonBench
benchmark
TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators
null
null
BitBLAS
https://github.com/microsoft/BitBLAS
BLAS
BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment.
null
null
hipBLAS
https://github.com/ROCm/hipBLAS
BLAS
[DEPRECATED] Moved to ROCm/rocm-libraries repo
https://github.com/ROCm/rocm-libraries
hip
hipBLASLt
https://github.com/AMD-AGI/hipBLASLt
BLAS
hipBLASLt is a library that provides general matrix-matrix operations with a flexible API and extends functionalities beyond a traditional BLAS library
https://rocm.docs.amd.com/projects/hipBLASLt/en/latest/index.html
null
AdaptiveCpp
https://github.com/AdaptiveCpp/AdaptiveCpp
compiler
Compiler for multiple programming models (SYCL, C++ standard parallelism, HIP/CUDA) for CPUs and GPUs from all vendors: The independent, community-driven compiler for C++-based heterogeneous programming models. Lets applications adapt themselves to all the hardware in the system - even at runtime!
https://adaptivecpp.github.io
compiler
llvm-project
https://github.com/llvm/llvm-project
compiler
The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.
http://llvm.org
compiler
numba
https://github.com/numba/numba
compiler
NumPy aware dynamic Python compiler using LLVM
https://numba.pydata.org
compiler
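A minimal usage sketch for numba, illustrating the NumPy-aware JIT described above; the function and data are made up for demonstration.

```python
import numpy as np
from numba import njit

@njit  # compiled to machine code via LLVM on first call
def pairwise_l2(x):
    n = x.shape[0]
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            d = x[i] - x[j]
            out[i, j] = np.sqrt(np.sum(d * d))
    return out

print(pairwise_l2(np.random.rand(100, 3)).shape)  # (100, 100)
```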
nvcc4jupyter
https://github.com/andreinechaev/nvcc4jupyter
compiler
A plugin for Jupyter Notebook to run CUDA C/C++ code
null
null
CU2CL
https://github.com/vtsynergy/CU2CL
CUDA / OpenCL
A prototype CUDA-to-OpenCL source-to-source translator, built on the Clang compiler framework
http://chrec.cs.vt.edu/cu2cl
opencl
cuda-python
https://github.com/NVIDIA/cuda-python
CUDA / OpenCL
CUDA Python: Performance meets Productivity
https://nvidia.github.io/cuda-python/
null
OpenCL-SDK
https://github.com/KhronosGroup/OpenCL-SDK
CUDA / OpenCL
OpenCL SDK
null
opencl
pocl
https://github.com/pocl/pocl
CUDA / OpenCL
pocl - Portable Computing Language
https://portablecl.org
opencl
SYCL-Docs
https://github.com/KhronosGroup/SYCL-Docs
CUDA / OpenCL
SYCL Open Source Specification
null
opencl
triSYCL
https://github.com/triSYCL/triSYCL
CUDA / OpenCL
Generic system-wide modern C++ for heterogeneous platforms with SYCL from Khronos Group
null
opencl
ZLUDA
https://github.com/vosen/ZLUDA
CUDA / OpenCL
CUDA on non-NVIDIA GPUs
https://vosen.github.io/ZLUDA/
cuda
llama.cpp
https://github.com/ggml-org/llama.cpp
inference engine
LLM inference in C/C++
https://ggml.ai
inference
mistral-inference
https://github.com/mistralai/mistral-inference
inference engine
Official inference library for Mistral models
https://mistral.ai/
llm-inference
ollama
https://github.com/ollama/ollama
inference engine
Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models.
https://ollama.com
inference
sglang
https://github.com/sgl-project/sglang
inference engine
SGLang is a fast serving framework for large language models and vision language models.
https://docs.sglang.ai
inference
TensorRT
https://github.com/NVIDIA/TensorRT
inference engine
NVIDIA TensorRT is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
https://developer.nvidia.com/tensorrt
inference
vllm
https://github.com/vllm-project/vllm
inference engine
A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
inference
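A minimal offline-inference sketch with vLLM; the model name is illustrative and is assumed to be downloadable from the Hugging Face Hub.

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")                      # any HF causal LM works here
params = SamplingParams(temperature=0.8, max_tokens=64)
for out in llm.generate(["The GPU is", "Triton kernels are"], params):
    print(out.outputs[0].text)
```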
kernels
https://github.com/huggingface/kernels
kernels
Load compute kernels from the Hub
null
null
kernels-community
https://github.com/huggingface/kernels-community
kernels
Kernel sources for https://huggingface.co/kernels-community
null
null
Liger-Kernel
https://github.com/linkedin/Liger-Kernel
kernels
Efficient Triton Kernels for LLM Training
https://openreview.net/pdf?id=36SjAIT42G
triton
quack
https://github.com/Dao-AILab/quack
kernels
A Quirky Assortment of CuTe Kernels
null
null
reference-kernels
https://github.com/gpu-mode/reference-kernels
kernels
Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard!
https://gpumode.com
gpu
pytorch
https://github.com/pytorch/pytorch
machine learning framework
Tensors and Dynamic neural networks in Python with strong GPU acceleration
https://pytorch.org
machine-learning
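A minimal PyTorch sketch showing the tensor/autograd core with optional GPU placement.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1024, 1024, device=device)
w = torch.randn(1024, 1024, device=device, requires_grad=True)

loss = (x @ w).pow(2).mean()
loss.backward()                       # autograd fills in w.grad
print(device, loss.item(), w.grad.shape)
```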
tensorflow
https://github.com/tensorflow/tensorflow
machine learning framework
An Open Source Machine Learning Framework for Everyone
https://tensorflow.org
machine-learning
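A minimal TensorFlow sketch: eager execution with gradients via GradientTape.

```python
import tensorflow as tf

w = tf.Variable(tf.random.normal((64, 64)))
x = tf.random.normal((8, 64))
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))
grad = tape.gradient(loss, w)         # dloss/dw, same shape as w
print(float(loss), grad.shape)
```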
torchdendrite
https://github.com/sandialabs/torchdendrite
machine learning framework
Dendrites for PyTorch and SNNTorch neural networks
null
null
onnx
https://github.com/onnx/onnx
machine learning interoperability
Open standard for machine learning interoperability
https://onnx.ai
onnx
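A small sketch of the interoperability flow: export a PyTorch module to ONNX, then load and validate it with the onnx package; the file name is arbitrary.

```python
import onnx
import torch

model = torch.nn.Linear(4, 2)
torch.onnx.export(model, torch.randn(1, 4), "linear.onnx")

graph = onnx.load("linear.onnx")
onnx.checker.check_model(graph)                      # raises if the model violates the ONNX spec
print([node.op_type for node in graph.graph.node])   # e.g. ['Gemm']
```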
executorch
https://github.com/pytorch/executorch
model compiler
On-device AI across mobile, embedded and edge for PyTorch
https://executorch.ai
compiler
cutlass
https://github.com/NVIDIA/cutlass
parallel computing
CUDA Templates and Python DSLs for High-Performance Linear Algebra
https://docs.nvidia.com/cutlass/index.html
parallel-programming
ThunderKittens
https://github.com/HazyResearch/ThunderKittens
parallel computing
Tile primitives for speedy kernels
https://hazyresearch.stanford.edu/blog/2024-10-29-tk2
parallel-programming
helion
https://github.com/pytorch/helion
parallel computing dsl
A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate.
https://helionlang.com
parallel-programming
TileIR
https://github.com/microsoft/TileIR
parallel computing dsl
TileIR (tile-ir) is a concise domain-specific IR designed to streamline the development of high-performance GPU/CPU kernels (e.g., GEMM, Dequant GEMM, FlashAttention, LinearAttention). By employing a Pythonic syntax with an underlying compiler infrastructure on top of TVM, TileIR allows developers to focus on productivity without sacrificing the low-level optimizations necessary for state-of-the-art performance.
null
parallel-programming
tilelang
https://github.com/tile-ai/tilelang
parallel computing dsl
Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels
https://tilelang.com
parallel-programming
triton
https://github.com/triton-lang/triton
parallel computing dsl
Development repository for the Triton language and compiler
https://triton-lang.org/
parallel-programming
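A minimal Triton kernel sketch (the canonical vector add), assuming a CUDA- or ROCm-capable GPU and PyTorch for tensor allocation.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK: tl.constexpr):
    pid = tl.program_id(axis=0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n                                # guard the tail block
    x = tl.load(x_ptr + offs, mask=mask)
    y = tl.load(y_ptr + offs, mask=mask)
    tl.store(out_ptr + offs, x + y, mask=mask)

x = torch.randn(4096, device="cuda")
y = torch.randn(4096, device="cuda")
out = torch.empty_like(x)
add_kernel[(triton.cdiv(x.numel(), 1024),)](x, y, out, x.numel(), BLOCK=1024)
assert torch.allclose(out, x + y)
```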
cupti
https://github.com/cwpearson/cupti
performance testing
Profile how CUDA applications create and modify data in memory.
null
profiling
hatchet
https://github.com/LLNL/hatchet
performance testing
Graph-indexed Pandas DataFrames for analyzing hierarchical performance data
https://llnl-hatchet.readthedocs.io
profiling
intelliperf
https://github.com/AMDResearch/intelliperf
performance testing
Automated bottleneck detection and solution orchestration
https://arxiv.org/html/2508.20258v1
profiling
omnitrace
https://github.com/ROCm/omnitrace
performance testing
Omnitrace: Application Profiling, Tracing, and Analysis
https://rocm.docs.amd.com/projects/omnitrace/en/docs-6.2.4
profiling
jax
https://github.com/jax-ml/jax
scientific computing
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
https://docs.jax.dev
scientific-computing
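A minimal JAX sketch composing the transformations named above: grad, jit, and vmap.

```python
import jax
import jax.numpy as jnp

def loss(w, x):
    return jnp.mean((x @ w) ** 2)

grad_fn = jax.jit(jax.grad(loss))             # XLA-compiled gradient w.r.t. w
batched = jax.vmap(loss, in_axes=(None, 0))   # map loss over a leading batch axis of x

w = jnp.ones((16, 16))
x = jnp.ones((4, 8, 16))
print(grad_fn(w, x[0]).shape, batched(w, x).shape)   # (16, 16) (4,)
```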
numpy
https://github.com/numpy/numpy
scientific computing
The fundamental package for scientific computing with Python.
https://numpy.org
scientific-computing
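A minimal NumPy sketch: vectorized arithmetic plus a linear solve.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(3, 3))
x = np.linalg.solve(a + 3 * np.eye(3), np.ones(3))   # solve (A + 3I) x = 1
print(x, np.allclose((a + 3 * np.eye(3)) @ x, np.ones(3)))
```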
scipy
https://github.com/scipy/scipy
scientific computing
SciPy library main repository
https://scipy.org
scientific-computing
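A minimal SciPy sketch: minimizing the built-in Rosenbrock test function.

```python
import numpy as np
from scipy.optimize import minimize, rosen

res = minimize(rosen, x0=np.zeros(5), method="Nelder-Mead")
print(res.x, res.fun)   # should approach the known minimum at [1, 1, 1, 1, 1]
```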
elasticsearch
https://github.com/elastic/elasticsearch
search engine
Free and Open Source, Distributed, RESTful Search Engine
https://www.elastic.co/products/elasticsearch
search-engine
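A hedged sketch with the official Python client, assuming a local single-node Elasticsearch without authentication; the index name and document are made up.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
es.index(index="repos", id="1", document={"name": "vllm", "category": "inference engine"})
es.indices.refresh(index="repos")
hits = es.search(index="repos", query={"match": {"category": "inference"}})
print(hits["hits"]["total"])
```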
jupyterlab
https://github.com/jupyterlab/jupyterlab
ui
JupyterLab computational environment.
https://jupyterlab.readthedocs.io/
jupyter
milvus
https://github.com/milvus-io/milvus
vector database
Milvus is a high-performance, cloud-native vector database built for scalable vector ANN search
https://milvus.io
vector-search
accelerate
https://github.com/huggingface/accelerate
training framework
A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support.
https://huggingface.co/docs/accelerate
gpu-acceleration
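A minimal Accelerate sketch of a device-agnostic training step; the model, optimizer, and data are stand-ins.

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()                       # device/precision picked from config
model = torch.nn.Linear(32, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
dataset = torch.utils.data.TensorDataset(torch.randn(256, 32), torch.randn(256, 1))
loader = torch.utils.data.DataLoader(dataset, batch_size=32)

model, optimizer, loader = accelerator.prepare(model, optimizer, loader)
for x, y in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)                    # replaces loss.backward()
    optimizer.step()
```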
airtable.js
https://github.com/Airtable/airtable.js
null
Airtable JavaScript client
null
null
aiter
https://github.com/ROCm/aiter
null
AI Tensor Engine for ROCm
null
null
ao
https://github.com/pytorch/ao
null
PyTorch native quantization and sparsity for training and inference
https://pytorch.org/ao/stable/index.html
quantization
burn
https://github.com/tracel-ai/burn
null
Burn is a next generation tensor library and Deep Learning Framework that doesn't compromise on flexibility, efficiency and portability.
https://burn.dev
machine-learning
ccache
https://github.com/ccache/ccache
null
ccache - a fast compiler cache
https://ccache.dev
null
ComfyUI
https://github.com/comfyanonymous/ComfyUI
null
The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
https://www.comfy.org/
stable-diffusion
composable_kernel
https://github.com/ROCm/composable_kernel
null
Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators
https://rocm.docs.amd.com/projects/composable_kernel/en/latest/
null
cudnn-frontend
https://github.com/NVIDIA/cudnn-frontend
null
cudnn_frontend provides a C++ wrapper for the cuDNN backend API and samples showing how to use it
null
null
cuJSON
https://github.com/AutomataLab/cuJSON
null
cuJSON: A Highly Parallel JSON Parser for GPUs
null
null
DeepSpeed
https://github.com/deepspeedai/DeepSpeed
null
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
https://www.deepspeed.ai/
gpu
dstack
https://github.com/dstackai/dstack
null
dstack is an open-source control plane for running development, training, and inference jobs on GPUs across hyperscalers, neoclouds, or on-prem.
https://dstack.ai
orchestration
flashinfer
https://github.com/flashinfer-ai/flashinfer
null
FlashInfer: Kernel Library for LLM Serving
https://flashinfer.ai
attention
FTorch
https://github.com/Cambridge-ICCS/FTorch
null
A library for directly calling PyTorch ML models from Fortran.
https://cambridge-iccs.github.io/FTorch/
machine-learning
GEAK-agent
https://github.com/AMD-AGI/GEAK-agent
null
An LLM-based AI agent that can automatically write correct and efficient GPU kernels.
null
null
hhvm
https://github.com/facebook/hhvm
null
A virtual machine for executing programs written in Hack.
https://hhvm.com
hack
hip
https://github.com/ROCm/hip
null
HIP: C++ Heterogeneous-Compute Interface for Portability
https://rocmdocs.amd.com/projects/HIP/
hip
hipCUB
https://github.com/ROCm/hipCUB
null
[DEPRECATED] Moved to ROCm/rocm-libraries repo
https://github.com/ROCm/rocm-libraries
null
IMO2025
https://github.com/harmonic-ai/IMO2025
null
null
null
null
kubernetes
https://github.com/kubernetes/kubernetes
null
Production-Grade Container Scheduling and Management
https://kubernetes.io
containers
lapack
https://github.com/Reference-LAPACK/lapack
null
LAPACK development repository
null
linear-algebra
lean4
https://github.com/leanprover/lean4
null
Lean 4 programming language and theorem prover
https://lean-lang.org
lean
letta
https://github.com/letta-ai/letta
null
Letta is the platform for building stateful agents: open AI with advanced memory that can learn and self-improve over time.
https://docs.letta.com/
ai-agents
lightning-thunder
https://github.com/Lightning-AI/lightning-thunder
null
PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, and parallelism, or easily write your own.
null
null
LMCache
https://github.com/LMCache/LMCache
null
Supercharge Your LLM with the Fastest KV Cache Layer
https://lmcache.ai/
inference
mcp-agent
https://github.com/lastmile-ai/mcp-agent
null
Build effective agents using Model Context Protocol and simple workflow patterns
null
ai-agents
Megakernels
https://github.com/HazyResearch/Megakernels
null
kernels, of the mega variety
null
null
metaflow
https://github.com/Netflix/metaflow
null
Build, Manage and Deploy AI/ML Systems
https://metaflow.org
machine-learning
MIOpen
https://github.com/ROCm/MIOpen
null
[DEPRECATED] Moved to ROCm/rocm-libraries repo
https://github.com/ROCm/rocm-libraries
null
modelcontextprotocol
https://github.com/modelcontextprotocol/modelcontextprotocol
null
Specification and documentation for the Model Context Protocol
https://modelcontextprotocol.io
null
modular
https://github.com/modular/modular
null
The Modular Platform (includes MAX & Mojo)
https://docs.modular.com/
mojo
monarch
https://github.com/meta-pytorch/monarch
null
PyTorch Single Controller
https://meta-pytorch.org/monarch
null
Mooncake
https://github.com/kvcache-ai/Mooncake
null
Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.
https://kvcache-ai.github.io/Mooncake/
inference
nccl
https://github.com/NVIDIA/nccl
null
Optimized primitives for collective multi-GPU communication
https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/index.html
null
neuronx-distributed-inference
https://github.com/aws-neuron/neuronx-distributed-inference
null
null
null
null
nixl
https://github.com/ai-dynamo/nixl
null
NVIDIA Inference Xfer Library (NIXL)
null
null
ome
https://github.com/sgl-project/ome
null
OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs)
http://docs.sglang.ai/ome/
k8s
ondemand
https://github.com/OSC/ondemand
null
Supercomputing. Seamlessly. Open, Interactive HPC Via the Web
https://openondemand.org/
hpc
oneDPL
https://github.com/uxlfoundation/oneDPL
null
oneAPI DPC++ Library (oneDPL)
https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/dpc-library.html
null
openevolve
https://github.com/codelion/openevolve
null
Open-source implementation of AlphaEvolve
null
genetic-algorithm
ort
https://github.com/pytorch/ort
null
Accelerate PyTorch models with ONNX Runtime
null
null
peft
https://github.com/huggingface/peft
null
PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
https://huggingface.co/docs/peft
lora
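A minimal PEFT sketch that wraps a causal LM with a LoRA adapter; the base model name is illustrative.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(base, config)
model.print_trainable_parameters()   # only the low-rank adapter weights train
```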
Primus-Turbo
https://github.com/AMD-AGI/Primus-Turbo
null
null
null
null
pybind11
https://github.com/pybind/pybind11
null
Seamless operability between C++11 and Python
https://pybind11.readthedocs.io/
bindings
RaBitQ
https://github.com/gaoj0017/RaBitQ
null
[SIGMOD 2024] RaBitQ: Quantizing High-Dimensional Vectors with a Theoretical Error Bound for Approximate Nearest Neighbor Search
https://github.com/VectorDB-NTU/RaBitQ-Library
nearest-neighbor-search
rdma-core
https://github.com/linux-rdma/rdma-core
null
RDMA core userspace libraries and daemons
null
linux-kernel
rocFFT
https://github.com/ROCm/rocFFT
null
[DEPRECATED] Moved to ROCm/rocm-libraries repo
https://github.com/ROCm/rocm-libraries
hip