| github_repo_link | repo_name | repo_description | homepage_link | closest_github_tag | category |
| --- | --- | --- | --- | --- | --- |
| https://github.com/pytorch/pytorch | pytorch | Tensors and Dynamic neural networks in Python with strong GPU acceleration | https://pytorch.org | machine-learning | machine learning framework |
| https://github.com/vllm-project/vllm | vllm | A high-throughput and memory-efficient inference and serving engine for LLMs | https://docs.vllm.ai | inference | inference engine |
| https://github.com/ollama/ollama | ollama | Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models. | https://ollama.com | llms | inference engine |
| https://github.com/sgl-project/sglang | sglang | SGLang is a fast serving framework for large language models and vision language models. | https://docs.sglang.ai/ | inference | inference engine |
| https://github.com/ggml-org/llama.cpp | llama.cpp | LLM inference in C/C++ | null | ggml | inference engine |
| https://github.com/triton-lang/triton | triton | Development repository for the Triton language and compiler | https://triton-lang.org/ | null | dsl |
| https://github.com/pytorch/helion | helion | A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. | null | null | dsl |
| https://github.com/microsoft/TileIR | TileIR | null | null | null | dsl |
| https://github.com/tile-ai/tilelang | tilelang | Domain-specific language designed to streamline the development of high-performance GPU/CPU/Accelerators kernels | https://tilelang.com/ | null | dsl |
| https://github.com/NVIDIA/cutlass | cutlass | CUDA Templates and Python DSLs for High-Performance Linear Algebra | https://docs.nvidia.com/cutlass/index.html | cuda | null |
| https://github.com/tensorflow/tensorflow | tensorflow | An Open Source Machine Learning Framework for Everyone | https://tensorflow.org | deep-learning | machine learning framework |
| https://github.com/HazyResearch/ThunderKittens | ThunderKittens | Tile primitives for speedy kernels | null | null | null |
| https://github.com/pytorch/executorch | executorch | On-device AI across mobile, embedded and edge for PyTorch | https://executorch.ai | mobile | model compiler |
| https://github.com/onnx/onnx | onnx | Open standard for machine learning interoperability | https://onnx.ai/ | deep-learning | null |
| https://github.com/ray-project/ray | ray | Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads. | https://ray.io | deep-learning | null |
| https://github.com/jax-ml/jax | jax | Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more | https://docs.jax.dev | jax | null |
| https://github.com/llvm/llvm-project | llvm-project | The LLVM Project is a collection of modular and reusable compiler and toolchain technologies. | http://llvm.org | null | compiler |
| https://github.com/NVIDIA/TensorRT | TensorRT | NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT. | https://developer.nvidia.com/tensorrt | inference | null |
| https://github.com/pytorch/ao | ao | PyTorch native quantization and sparsity for training and inference | https://pytorch.org/ao/stable/index.html | quantization | null |
| https://github.com/AMD-AGI/GEAK-agent | GEAK-agent | It is an LLM-based AI agent, which can write correct and efficient gpu kernels automatically. | null | null | null |
| https://github.com/block/goose | goose | an open source, extensible AI agent that goes beyond code suggestions - install, execute, edit, and test with any LLM | https://block.github.io/goose/ | mcp | agent |
| https://github.com/codelion/openevolve | openevolve | Open-source implementation of AlphaEvolve | null | genetic-algorithm | null |
| https://github.com/volcengine/verl | verl | verl: Volcano Engine Reinforcement Learning for LLMs | https://verl.readthedocs.io/en/latest/index.html | null | null |
| https://github.com/huggingface/peft | peft | 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. | https://huggingface.co/docs/peft | lora | null |
| https://github.com/Dao-AILab/quack | quack | A Quirky Assortment of CuTe Kernels | null | null | kernels |
| https://github.com/AMDResearch/intelliperf | intelliperf | Automated bottleneck detection and solution orchestration | null | performance | null |
| https://github.com/letta-ai/letta | letta | Letta is the platform for building stateful agents: open AI with advanced memory that can learn and self-improve over time. | https://docs.letta.com/ | ai-agents | null |
| https://github.com/lastmile-ai/mcp-agent | mcp-agent | Build effective agents using Model Context Protocol and simple workflow patterns | null | ai-agents | null |
| https://github.com/modular/modular | modular | The Modular Platform (includes MAX & Mojo) | https://docs.modular.com/ | mojo | null |
| https://github.com/ScalingIntelligence/KernelBench | KernelBench | KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems | https://scalingintelligence.stanford.edu/blogs/kernelbench/ | benchmark | benchmark |
| https://github.com/thunlp/TritonBench | TritonBench | TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators | null | null | benchmark |
| https://github.com/flashinfer-ai/flashinfer-bench | flashinfer-bench | Building the Virtuous Cycle for AI-driven LLM Systems | https://bench.flashinfer.ai | null | benchmark |
| https://github.com/laude-institute/terminal-bench | terminal-bench | A benchmark for LLMs on complicated tasks in the terminal | https://www.tbench.ai | null | benchmark |
| https://github.com/SWE-bench/SWE-bench | SWE-bench | SWE-bench: Can Language Models Resolve Real-world Github Issues? | https://www.swebench.com | benchmark | benchmark |
| https://github.com/gpu-mode/reference-kernels | reference-kernels | Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! | https://gpumode.com | gpu | kernels |
| https://github.com/linkedin/Liger-Kernel | Liger-Kernel | Efficient Triton Kernels for LLM Training | https://openreview.net/pdf?id=36SjAIT42G | triton | kernels |
| https://github.com/huggingface/kernels | kernels | Load compute kernels from the Hub | null | null | kernels |
| https://github.com/huggingface/kernels-community | kernels-community | Kernel sources for https://huggingface.co/kernels-community | null | null | kernels |
| https://github.com/unslothai/unsloth | unsloth | Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM. | https://docs.unsloth.ai/ | unsloth | null |
| https://github.com/jupyterlab/jupyterlab | jupyterlab | JupyterLab computational environment. | https://jupyterlab.readthedocs.io/ | jupyter | ui |
| https://github.com/ROCm/rocm-systems | rocm-systems | super repo for rocm systems projects | null | null | null |
| https://github.com/ROCm/hip | hip | HIP: C++ Heterogeneous-Compute Interface for Portability | https://rocmdocs.amd.com/projects/HIP/ | hip | null |
| https://github.com/ROCm/ROCm | ROCm | AMD ROCm™ Software - GitHub Home | https://rocm.docs.amd.com | documentation | null |
| https://github.com/ROCm/omnitrace | omnitrace | Omnitrace: Application Profiling, Tracing, and Analysis | https://rocm.docs.amd.com/projects/omnitrace/en/docs-6.2.4/ | performance-analysis | null |
| https://github.com/vosen/ZLUDA | ZLUDA | CUDA on non-NVIDIA GPUs | https://vosen.github.io/ZLUDA/ | cuda | null |
| https://github.com/vtsynergy/CU2CL | CU2CL | A prototype CUDA-to-OpenCL source-to-source translator, built on the Clang compiler framework | http://chrec.cs.vt.edu/cu2cl | null | null |
| https://github.com/pocl/pocl | pocl | pocl - Portable Computing Language | https://portablecl.org | opencl | null |
| https://github.com/cwpearson/cupti | cupti | Profile how CUDA applications create and modify data in memory. | null | null | profiler |
| https://github.com/LLNL/hatchet | hatchet | Graph-indexed Pandas DataFrames for analyzing hierarchical performance data | https://llnl-hatchet.readthedocs.io | performance | profiler |
| https://github.com/toyaix/triton-runner | triton-runner | Multi-Level Triton Runner supporting Python, IR, PTX, and cubin. | https://triton-runner.org | triton | null |
| https://github.com/ByteDance-Seed/Triton-distributed | Triton-distributed | Distributed Compiler based on Triton for Parallel Systems | https://triton-distributed.readthedocs.io/en/latest/ | null | model compiler |
| https://github.com/meta-pytorch/tritonparse | tritonparse | TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels | https://meta-pytorch.org/tritonparse/ | triton | null |
| https://github.com/numpy/numpy | numpy | The fundamental package for scientific computing with Python. | https://numpy.org | python | python library |
| https://github.com/scipy/scipy | scipy | SciPy library main repository | https://scipy.org | python | python library |
| https://github.com/numba/numba | numba | NumPy aware dynamic Python compiler using LLVM | https://numba.pydata.org/ | compiler | null |
| https://github.com/Lightning-AI/lightning-thunder | lightning-thunder | PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily write your own. | null | null | null |
| https://github.com/pytorch/torchdynamo | torchdynamo | A Python-level JIT compiler designed to make unmodified PyTorch programs faster. | null | null | null |
| https://github.com/NVIDIA/nccl | nccl | Optimized primitives for collective multi-GPU communication | https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/index.html | cuda | null |
| https://github.com/ai-dynamo/nixl | nixl | NVIDIA Inference Xfer Library (NIXL) | null | null | null |
| https://github.com/guandeh17/Self-Forcing | Self-Forcing | Official codebase for "Self Forcing: Bridging Training and Inference in Autoregressive Video Diffusion" (NeurIPS 2025 Spotlight) | null | null | null |
| https://github.com/cumulo-autumn/StreamDiffusion | StreamDiffusion | StreamDiffusion: A Pipeline-Level Solution for Real-Time Interactive Generation | null | null | null |
| https://github.com/comfyanonymous/ComfyUI | ComfyUI | The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface. | https://www.comfy.org/ | stable-diffusion | null |
| https://github.com/Jeff-LiangF/streamv2v | streamv2v | Official Pytorch implementation of StreamV2V. | https://jeff-liangf.github.io/projects/streamv2v/ | null | null |
| https://github.com/deepspeedai/DeepSpeed | DeepSpeed | DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. | https://www.deepspeed.ai/ | gpu | null |
| https://github.com/triton-inference-server/server | server | The Triton Inference Server provides an optimized cloud and edge inferencing solution. | https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html | inference | null |
| https://github.com/elastic/elasticsearch | elasticsearch | Free and Open Source, Distributed, RESTful Search Engine | https://www.elastic.co/products/elasticsearch | search-engine | search engine |
| https://github.com/kubernetes/kubernetes | kubernetes | Production-Grade Container Scheduling and Management | https://kubernetes.io | containers | null |
| https://github.com/modelcontextprotocol/modelcontextprotocol | modelcontextprotocol | Specification and documentation for the Model Context Protocol | https://modelcontextprotocol.io | null | null |
| https://github.com/milvus-io/milvus | milvus | Milvus is a high-performance, cloud-native vector database built for scalable vector ANN search | https://milvus.io | vector-search | vector database |
| https://github.com/gaoj0017/RaBitQ | RaBitQ | [SIGMOD 2024] RaBitQ: Quantizing High-Dimensional Vectors with a Theoretical Error Bound for Approximate Nearest Neighbor Search | https://github.com/VectorDB-NTU/RaBitQ-Library | nearest-neighbor-search | null |
| https://github.com/Airtable/airtable.js | airtable.js | Airtable javascript client | null | null | null |
| https://github.com/mistralai/mistral-inference | mistral-inference | Official inference library for Mistral models | https://mistral.ai/ | llm-inference | inference engine |
| https://github.com/dstackai/dstack | dstack | dstack is an open-source control plane for running development, training, and inference jobs on GPUs—across hyperscalers, neoclouds, or on-prem. | https://dstack.ai | orchestration | null |
| https://github.com/sandialabs/torchdendrite | torchdendrite | Dendrites for PyTorch and SNNTorch neural networks | null | scr-3078 | machine learning framework |
| https://github.com/pytorch/torchtitan | torchtitan | A PyTorch native platform for training generative AI models | null | null | null |
| https://github.com/NVIDIA/cudnn-frontend | cudnn-frontend | cudnn_frontend provides a c++ wrapper for the cudnn backend API and samples on how to use it | null | null | null |
| https://github.com/pytorch/ort | ort | Accelerate PyTorch models with ONNX Runtime | null | null | null |
| https://github.com/sgl-project/ome | ome | OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) | http://docs.sglang.ai/ome/ | k8s | null |
| https://github.com/aws-neuron/neuronx-distributed-inference | neuronx-distributed-inference | null | null | null | inference engine |
| https://github.com/meta-pytorch/monarch | monarch | PyTorch Single Controller | https://meta-pytorch.org/monarch | null | null |
| https://github.com/LMCache/LMCache | LMCache | Supercharge Your LLM with the Fastest KV Cache Layer | https://lmcache.ai/ | inference | null |
| https://github.com/linux-rdma/rdma-core | rdma-core | RDMA core userspace libraries and daemons | null | linux-kernel | null |
| https://github.com/Cambridge-ICCS/FTorch | FTorch | A library for directly calling PyTorch ML models from Fortran. | https://cambridge-iccs.github.io/FTorch/ | deep-learning | null |
| https://github.com/facebook/hhvm | hhvm | A virtual machine for executing programs written in Hack. | https://hhvm.com | hack | null |
| https://github.com/apache/spark | spark | Apache Spark - A unified analytics engine for large-scale data processing | https://spark.apache.org/ | big-data | null |
| https://github.com/ROCm/composable_kernel | composable_kernel | Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators | https://rocm.docs.amd.com/projects/composable_kernel/en/latest/ | null | null |
| https://github.com/ROCm/aiter | aiter | AI Tensor Engine for ROCm | null | null | null |
| https://github.com/AMD-AGI/torchtitan | torchtitan | A PyTorch native platform for training generative AI models | null | null | null |
| https://github.com/AMD-AGI/hipBLASLt | hipBLASLt | hipBLASLt is a library that provides general matrix-matrix operations with a flexible API and extends functionalities beyond a traditional BLAS library | https://rocm.docs.amd.com/projects/hipBLASLt/en/latest/index.html | null | null |
| https://github.com/AMD-AGI/rocm-torchtitan | rocm-torchtitan | null | null | null | null |
| https://github.com/HazyResearch/Megakernels | Megakernels | kernels, of the mega variety | null | null | null |
| https://github.com/opencv/opencv | opencv | Open Source Computer Vision Library | https://opencv.org | image-processing | null |
| https://github.com/Lightning-AI/lightning-thunder | lightning-thunder | PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily write your own. | null | null | null |
| https://github.com/tracel-ai/burn | burn | Burn is a next generation tensor library and Deep Learning Framework that doesn't compromise on flexibility, efficiency and portability. | https://burn.dev | machine-learning | null |
| https://github.com/OSC/ondemand | ondemand | Supercomputing. Seamlessly. Open, Interactive HPC Via the Web | https://openondemand.org/ | hpc | null |
| https://github.com/flashinfer-ai/flashinfer | flashinfer | FlashInfer: Kernel Library for LLM Serving | https://flashinfer.ai | attention | null |
| https://github.com/AutomataLab/cuJSON | cuJSON | cuJSON: A Highly Parallel JSON Parser for GPUs | null | null | null |
| https://github.com/Netflix/metaflow | metaflow | Build, Manage and Deploy AI/ML Systems | https://metaflow.org | machine-learning | null |
| https://github.com/harmonic-ai/IMO2025 | IMO2025 | null | null | null | null |
| https://github.com/leanprover/lean4 | lean4 | Lean 4 programming language and theorem prover | https://lean-lang.org | lean | null |

PyTorch Conference 2025 GitHub Repos

I created a list of every GitHub repo mentioned during PyTorch Conference 2025 and Open Source AI Week.
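Each row has the six columns shown in the table above: `github_repo_link`, `repo_name`, `repo_description`, `homepage_link`, `closest_github_tag`, and `category`. Below is a minimal sketch of loading and filtering the data with the 🤗 `datasets` library; the `train` split name is an assumption (auto-converted Parquet datasets usually expose a single default split).

```python
from datasets import load_dataset

# Load the repo list from the Hub (the "train" split name is assumed).
repos = load_dataset("TylerHilbert/PyTorchConference2025_GithubRepos", split="train")

# Example: print every repo annotated with the "inference engine" category.
for row in repos.filter(lambda r: r["category"] == "inference engine"):
    print(row["repo_name"], "->", row["github_repo_link"])
```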
