| repo_name | github_repo_link | category | repo_description | homepage_link | github_topic_closest_fit |
|---|---|---|---|---|---|
| pytorch | https://github.com/pytorch/pytorch | machine learning framework | Tensors and Dynamic neural networks in Python with strong GPU acceleration | https://pytorch.org | machine-learning |
| triton | https://github.com/triton-lang/triton | parallel computing dsl | Development repository for the Triton language and compiler | https://triton-lang.org/ | parallel-programming |
| cutlass | https://github.com/NVIDIA/cutlass | parallel computing | CUDA Templates and Python DSLs for High-Performance Linear Algebra | https://docs.nvidia.com/cutlass/index.html | parallel-programming |
| tilelang | https://github.com/tile-ai/tilelang | parallel computing dsl | Domain-specific language designed to streamline the development of high-performance GPU/CPU/Accelerators kernels | https://tilelang.com | parallel-programming |
| ThunderKittens | https://github.com/HazyResearch/ThunderKittens | parallel computing | Tile primitives for speedy kernels | https://hazyresearch.stanford.edu/blog/2024-10-29-tk2 | parallel-programming |
| helion | https://github.com/pytorch/helion | parallel computing dsl | A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. | https://helionlang.com | parallel-programming |
| TileIR | https://github.com/microsoft/TileIR | parallel computing dsl | TileIR (tile-ir) is a concise domain-specific IR designed to streamline the development of high-performance GPU/CPU kernels (e.g., GEMM, Dequant GEMM, FlashAttention, LinearAttention). By employing a Pythonic syntax with an underlying compiler infrastructure on top of TVM, TileIR allows developers to focus on productivity without sacrificing the low-level optimizations necessary for state-of-the-art performance. | null | parallel-programming |
| BitBLAS | https://github.com/microsoft/BitBLAS | null | BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. | null | null |
| tensorflow | https://github.com/tensorflow/tensorflow | machine learning framework | An Open Source Machine Learning Framework for Everyone | https://tensorflow.org | machine-learning |
| vllm | https://github.com/vllm-project/vllm | inference engine | A high-throughput and memory-efficient inference and serving engine for LLMs | https://docs.vllm.ai | inference |
| ollama | https://github.com/ollama/ollama | inference engine | Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models. | https://ollama.com | inference |
| llama.cpp | https://github.com/ggml-org/llama.cpp | inference engine | LLM inference in C/C++ | https://ggml.ai | inference |
| sglang | https://github.com/sgl-project/sglang | inference engine | SGLang is a fast serving framework for large language models and vision language models. | https://docs.sglang.ai | inference |
| onnx | https://github.com/onnx/onnx | machine learning framework | Open standard for machine learning interoperability | https://onnx.ai/ | onnx |
| executorch | https://github.com/pytorch/executorch | model compiler | On-device AI across mobile, embedded and edge for PyTorch | https://executorch.ai | compiler |
| ray | https://github.com/ray-project/ray | null | Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads. | https://ray.io | machine-learning |
| jax | https://github.com/jax-ml/jax | null | Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more | https://docs.jax.dev | jax |
| llvm-project | https://github.com/llvm/llvm-project | compiler | The LLVM Project is a collection of modular and reusable compiler and toolchain technologies. | http://llvm.org | null |
| TensorRT | https://github.com/NVIDIA/TensorRT | null | NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT. | https://developer.nvidia.com/tensorrt | inference |
| ao | https://github.com/pytorch/ao | null | PyTorch native quantization and sparsity for training and inference | https://pytorch.org/ao/stable/index.html | quantization |
| GEAK-agent | https://github.com/AMD-AGI/GEAK-agent | null | It is an LLM-based AI agent, which can write correct and efficient gpu kernels automatically. | null | null |
| goose | https://github.com/block/goose | agent | an open source, extensible AI agent that goes beyond code suggestions - install, execute, edit, and test with any LLM | https://block.github.io/goose/ | mcp |
| openevolve | https://github.com/codelion/openevolve | null | Open-source implementation of AlphaEvolve | null | genetic-algorithm |
| verl | https://github.com/volcengine/verl | null | verl: Volcano Engine Reinforcement Learning for LLMs | https://verl.readthedocs.io/en/latest/index.html | null |
| peft | https://github.com/huggingface/peft | null | 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. | https://huggingface.co/docs/peft | lora |
| quack | https://github.com/Dao-AILab/quack | kernels | A Quirky Assortment of CuTe Kernels | null | null |
| intelliperf | https://github.com/AMDResearch/intelliperf | null | Automated bottleneck detection and solution orchestration | null | performance |
| letta | https://github.com/letta-ai/letta | null | Letta is the platform for building stateful agents: open AI with advanced memory that can learn and self-improve over time. | https://docs.letta.com/ | ai-agents |
| mcp-agent | https://github.com/lastmile-ai/mcp-agent | null | Build effective agents using Model Context Protocol and simple workflow patterns | null | ai-agents |
| modular | https://github.com/modular/modular | null | The Modular Platform (includes MAX & Mojo) | https://docs.modular.com/ | mojo |
| KernelBench | https://github.com/ScalingIntelligence/KernelBench | benchmark | KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems | https://scalingintelligence.stanford.edu/blogs/kernelbench/ | benchmark |
| TritonBench | https://github.com/thunlp/TritonBench | benchmark | TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators | null | null |
| flashinfer-bench | https://github.com/flashinfer-ai/flashinfer-bench | benchmark | Building the Virtuous Cycle for AI-driven LLM Systems | https://bench.flashinfer.ai | null |
| terminal-bench | https://github.com/laude-institute/terminal-bench | benchmark | A benchmark for LLMs on complicated tasks in the terminal | https://www.tbench.ai | null |
| SWE-bench | https://github.com/SWE-bench/SWE-bench | benchmark | SWE-bench: Can Language Models Resolve Real-world Github Issues? | https://www.swebench.com | benchmark |
| reference-kernels | https://github.com/gpu-mode/reference-kernels | kernels | Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! | https://gpumode.com | gpu |
| Liger-Kernel | https://github.com/linkedin/Liger-Kernel | kernels | Efficient Triton Kernels for LLM Training | https://openreview.net/pdf?id=36SjAIT42G | triton |
| kernels | https://github.com/huggingface/kernels | kernels | Load compute kernels from the Hub | null | null |
| kernels-community | https://github.com/huggingface/kernels-community | kernels | Kernel sources for https://huggingface.co/kernels-community | null | null |
| unsloth | https://github.com/unslothai/unsloth | null | Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM. | https://docs.unsloth.ai/ | unsloth |
| jupyterlab | https://github.com/jupyterlab/jupyterlab | ui | JupyterLab computational environment. | https://jupyterlab.readthedocs.io/ | jupyter |
| rocm-systems | https://github.com/ROCm/rocm-systems | null | super repo for rocm systems projects | null | null |
| hip | https://github.com/ROCm/hip | null | HIP: C++ Heterogeneous-Compute Interface for Portability | https://rocmdocs.amd.com/projects/HIP/ | hip |
| ROCm | https://github.com/ROCm/ROCm | null | AMD ROCm™ Software - GitHub Home | https://rocm.docs.amd.com | documentation |
| omnitrace | https://github.com/ROCm/omnitrace | null | Omnitrace: Application Profiling, Tracing, and Analysis | https://rocm.docs.amd.com/projects/omnitrace/en/docs-6.2.4/ | performance-analysis |
| ZLUDA | https://github.com/vosen/ZLUDA | null | CUDA on non-NVIDIA GPUs | https://vosen.github.io/ZLUDA/ | cuda |
| CU2CL | https://github.com/vtsynergy/CU2CL | null | A prototype CUDA-to-OpenCL source-to-source translator, built on the Clang compiler framework | http://chrec.cs.vt.edu/cu2cl | null |
| pocl | https://github.com/pocl/pocl | null | pocl - Portable Computing Language | https://portablecl.org | opencl |
| cupti | https://github.com/cwpearson/cupti | profiler | Profile how CUDA applications create and modify data in memory. | null | null |
| hatchet | https://github.com/LLNL/hatchet | profiler | Graph-indexed Pandas DataFrames for analyzing hierarchical performance data | https://llnl-hatchet.readthedocs.io | performance |
| triton-runner | https://github.com/toyaix/triton-runner | null | Multi-Level Triton Runner supporting Python, IR, PTX, and cubin. | https://triton-runner.org | triton |
| Triton-distributed | https://github.com/ByteDance-Seed/Triton-distributed | model compiler | Distributed Compiler based on Triton for Parallel Systems | https://triton-distributed.readthedocs.io/en/latest/ | null |
| tritonparse | https://github.com/meta-pytorch/tritonparse | null | TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels | https://meta-pytorch.org/tritonparse/ | triton |
| numpy | https://github.com/numpy/numpy | python library | The fundamental package for scientific computing with Python. | https://numpy.org | python |
| scipy | https://github.com/scipy/scipy | python library | SciPy library main repository | https://scipy.org | python |
| numba | https://github.com/numba/numba | null | NumPy aware dynamic Python compiler using LLVM | https://numba.pydata.org/ | compiler |
| lightning-thunder | https://github.com/Lightning-AI/lightning-thunder | null | PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily write your own. | null | null |
| torchdynamo | https://github.com/pytorch/torchdynamo | null | A Python-level JIT compiler designed to make unmodified PyTorch programs faster. | null | null |
| nccl | https://github.com/NVIDIA/nccl | null | Optimized primitives for collective multi-GPU communication | https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/index.html | cuda |
| nixl | https://github.com/ai-dynamo/nixl | null | NVIDIA Inference Xfer Library (NIXL) | null | null |
| Self-Forcing | https://github.com/guandeh17/Self-Forcing | null | Official codebase for "Self Forcing: Bridging Training and Inference in Autoregressive Video Diffusion" (NeurIPS 2025 Spotlight) | null | null |
| StreamDiffusion | https://github.com/cumulo-autumn/StreamDiffusion | null | StreamDiffusion: A Pipeline-Level Solution for Real-Time Interactive Generation | null | null |
| ComfyUI | https://github.com/comfyanonymous/ComfyUI | null | The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface. | https://www.comfy.org/ | stable-diffusion |
| streamv2v | https://github.com/Jeff-LiangF/streamv2v | null | Official Pytorch implementation of StreamV2V. | https://jeff-liangf.github.io/projects/streamv2v/ | null |
| DeepSpeed | https://github.com/deepspeedai/DeepSpeed | null | DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. | https://www.deepspeed.ai/ | gpu |
| server | https://github.com/triton-inference-server/server | null | The Triton Inference Server provides an optimized cloud and edge inferencing solution. | https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html | inference |
| elasticsearch | https://github.com/elastic/elasticsearch | search engine | Free and Open Source, Distributed, RESTful Search Engine | https://www.elastic.co/products/elasticsearch | search-engine |
| kubernetes | https://github.com/kubernetes/kubernetes | null | Production-Grade Container Scheduling and Management | https://kubernetes.io | containers |
| modelcontextprotocol | https://github.com/modelcontextprotocol/modelcontextprotocol | null | Specification and documentation for the Model Context Protocol | https://modelcontextprotocol.io | null |
| milvus | https://github.com/milvus-io/milvus | vector database | Milvus is a high-performance, cloud-native vector database built for scalable vector ANN search | https://milvus.io | vector-search |
| RaBitQ | https://github.com/gaoj0017/RaBitQ | null | [SIGMOD 2024] RaBitQ: Quantizing High-Dimensional Vectors with a Theoretical Error Bound for Approximate Nearest Neighbor Search | https://github.com/VectorDB-NTU/RaBitQ-Library | nearest-neighbor-search |
| airtable.js | https://github.com/Airtable/airtable.js | null | Airtable javascript client | null | null |
| mistral-inference | https://github.com/mistralai/mistral-inference | inference engine | Official inference library for Mistral models | https://mistral.ai/ | llm-inference |
| dstack | https://github.com/dstackai/dstack | null | dstack is an open-source control plane for running development, training, and inference jobs on GPUs—across hyperscalers, neoclouds, or on-prem. | https://dstack.ai | orchestration |
| torchdendrite | https://github.com/sandialabs/torchdendrite | machine learning framework | Dendrites for PyTorch and SNNTorch neural networks | null | null |
| torchtitan | https://github.com/pytorch/torchtitan | null | A PyTorch native platform for training generative AI models | null | null |
| cudnn-frontend | https://github.com/NVIDIA/cudnn-frontend | null | cudnn_frontend provides a c++ wrapper for the cudnn backend API and samples on how to use it | null | null |
| ort | https://github.com/pytorch/ort | null | Accelerate PyTorch models with ONNX Runtime | null | null |
| ome | https://github.com/sgl-project/ome | null | OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) | http://docs.sglang.ai/ome/ | k8s |
| neuronx-distributed-inference | https://github.com/aws-neuron/neuronx-distributed-inference | inference engine | null | null | null |
| monarch | https://github.com/meta-pytorch/monarch | null | PyTorch Single Controller | https://meta-pytorch.org/monarch | null |
| LMCache | https://github.com/LMCache/LMCache | null | Supercharge Your LLM with the Fastest KV Cache Layer | https://lmcache.ai/ | inference |
| rdma-core | https://github.com/linux-rdma/rdma-core | null | RDMA core userspace libraries and daemons | null | linux-kernel |
| FTorch | https://github.com/Cambridge-ICCS/FTorch | null | A library for directly calling PyTorch ML models from Fortran. | https://cambridge-iccs.github.io/FTorch/ | machine-learning |
| hhvm | https://github.com/facebook/hhvm | null | A virtual machine for executing programs written in Hack. | https://hhvm.com | hack |
| spark | https://github.com/apache/spark | null | Apache Spark - A unified analytics engine for large-scale data processing | https://spark.apache.org/ | big-data |
| composable_kernel | https://github.com/ROCm/composable_kernel | null | Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators | https://rocm.docs.amd.com/projects/composable_kernel/en/latest/ | null |
| aiter | https://github.com/ROCm/aiter | null | AI Tensor Engine for ROCm | null | null |
| torchtitan | https://github.com/AMD-AGI/torchtitan | null | A PyTorch native platform for training generative AI models | null | null |
| hipBLASLt | https://github.com/AMD-AGI/hipBLASLt | null | hipBLASLt is a library that provides general matrix-matrix operations with a flexible API and extends functionalities beyond a traditional BLAS library | https://rocm.docs.amd.com/projects/hipBLASLt/en/latest/index.html | null |
| rocm-torchtitan | https://github.com/AMD-AGI/rocm-torchtitan | null | null | null | null |
| Megakernels | https://github.com/HazyResearch/Megakernels | null | kernels, of the mega variety | null | null |
| opencv | https://github.com/opencv/opencv | null | Open Source Computer Vision Library | https://opencv.org | image-processing |
| burn | https://github.com/tracel-ai/burn | null | Burn is a next generation tensor library and Deep Learning Framework that doesn't compromise on flexibility, efficiency and portability. | https://burn.dev | machine-learning |
| ondemand | https://github.com/OSC/ondemand | null | Supercomputing. Seamlessly. Open, Interactive HPC Via the Web | https://openondemand.org/ | hpc |
| flashinfer | https://github.com/flashinfer-ai/flashinfer | null | FlashInfer: Kernel Library for LLM Serving | https://flashinfer.ai | attention |
| cuJSON | https://github.com/AutomataLab/cuJSON | null | cuJSON: A Highly Parallel JSON Parser for GPUs | null | null |
| metaflow | https://github.com/Netflix/metaflow | null | Build, Manage and Deploy AI/ML Systems | https://metaflow.org | machine-learning |
| IMO2025 | https://github.com/harmonic-ai/IMO2025 | null | null | null | null |
| lean4 | https://github.com/leanprover/lean4 | null | Lean 4 programming language and theorem prover | https://lean-lang.org | lean |
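
A minimal sketch of querying this table programmatically with the Hugging Face `datasets` library, assuming the dataset is hosted on the Hub. The dataset ID below is a placeholder (the actual ID is not shown on this page); the column names match the schema above.

```python
# Sketch: load the table and filter by the `category` column.
from datasets import load_dataset

# Placeholder dataset ID -- substitute the real one from the Hub page.
ds = load_dataset("your-namespace/your-dataset", split="train")

# Example: list every repo categorized as an inference engine.
for row in ds.filter(lambda r: r["category"] == "inference engine"):
    print(f'{row["repo_name"]}: {row["github_repo_link"]}')
```

Note that `category`, `repo_description`, `homepage_link`, and `github_topic_closest_fit` are nullable (⌀ in the schema), so filters on those columns should tolerate `None` values.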