A Simple Heuristic Solution for Steepest Descent on Stiefel Manifold
Fast, numerically stable, and differentiable solution for steepest descent on the Stiefel manifold.
Towards a maximal update parameterization of n-simplicial attention
Why does Adam with aggressive gradient value/norm clipping have sparse updates and do well with higher learning rates? Here we show that it is essentially equivalent to a smoothed version of SignSGD/NormSGD.
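To make the equivalence concrete, here is a small numerical sketch of the intuition (my illustration, not the post's derivation), assuming elementwise value clipping with a threshold c far below typical gradient magnitudes: the clipped gradient is essentially c·sign(g), Adam's second moment flattens to roughly c², and the update collapses to an exponential moving average of sign(g), i.e. a smoothed SignSGD step.

    import numpy as np

    rng = np.random.default_rng(0)
    mu = rng.normal(0.0, 1.0, size=1000)              # per-coordinate "true" gradient
    c, b1, b2, eps, T = 1e-3, 0.9, 0.999, 1e-8, 2000  # aggressive clip + Adam hyperparameters

    m = np.zeros_like(mu)
    v = np.zeros_like(mu)
    s = np.zeros_like(mu)                             # explicit smoothed-SignSGD direction
    for t in range(1, T + 1):
        g = mu + rng.normal(0.0, 1.0, size=mu.shape)  # noisy stochastic gradient
        s = b1 * s + (1 - b1) * np.sign(g)            # EMA of sign(g)
        gc = np.clip(g, -c, c)                        # aggressive value clipping: ~ c * sign(g)
        m = b1 * m + (1 - b1) * gc                    # first moment  -> ~ c * EMA(sign(g))
        v = b2 * v + (1 - b2) * gc * gc               # second moment -> ~ c^2

    adam_step = (m / (1 - b1**T)) / (np.sqrt(v / (1 - b2**T)) + eps)
    print(np.abs(adam_step - s).max())                # small: clipped Adam tracks the smoothed sign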
A small step towards hardware-architecture-optimizer codesign in deep learning.
Muon from first principles, what makes it different from other optimizers, and why it works so well.
A possible reason why Muon converges faster & does better at higher learning rates than Adam.
The blocked matrix formulation of linear attention mechanisms, multi-step online gradient descent at inference time, and chunk-wise parallelism.
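As a quick illustration of the chunk-wise idea (a minimal sketch of my own, using the identity feature map and no normalizer, not the post's notation): keep a running state S = Σ k_t v_tᵀ across chunks, handle the tokens inside a chunk with one small causal matmul, and the result matches the token-by-token recurrence exactly.

    import numpy as np

    def linear_attn_naive(Q, K, V):
        # Causal linear attention (identity feature map, no normalizer), all tokens at once.
        return np.tril(Q @ K.T) @ V

    def linear_attn_chunked(Q, K, V, chunk=16):
        # Same computation, processed one chunk at a time with a carried state.
        T, d = Q.shape
        S = np.zeros((d, V.shape[1]))                        # running state: sum_t k_t v_t^T
        out = np.zeros_like(V)
        for s0 in range(0, T, chunk):
            q, k, v = Q[s0:s0 + chunk], K[s0:s0 + chunk], V[s0:s0 + chunk]
            out[s0:s0 + chunk] = q @ S + np.tril(q @ k.T) @ v   # inter-chunk + intra-chunk parts
            S = S + k.T @ v                                     # carry state to the next chunk
        return out

    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(64, 8)) for _ in range(3))
    print(np.allclose(linear_attn_naive(Q, K, V), linear_attn_chunked(Q, K, V)))  # True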
Why Muon still works despite not perfectly semi-orthogonalizing the gradients.
Simply switching to Muon can already get you 2x efficiency gains. But you can squeeze out an extra 1-2% by optimizing the Newton-Schulz coefficients.
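For context, the step being tuned is the odd-polynomial Newton-Schulz iteration Muon uses to approximately semi-orthogonalize the momentum/gradient matrix. Below is a minimal sketch (mine, not the reference code), using the quintic coefficients commonly quoted for Muon; treat the exact constants and step count as assumptions.

    import numpy as np

    def newton_schulz_orthogonalize(G, steps=5, coeffs=(3.4445, -4.7750, 2.0315)):
        # Odd polynomial iteration that pushes singular values toward 1 (approximate
        # semi-orthogonalization). The coefficients are the quintic commonly quoted
        # for Muon; tuning them is where the extra 1-2% comes from.
        a, b, c = coeffs
        X = G / (np.linalg.norm(G) + 1e-7)        # normalize so singular values are <= 1
        transposed = X.shape[0] > X.shape[1]
        if transposed:
            X = X.T                               # iterate on the wide orientation
        for _ in range(steps):
            A = X @ X.T
            X = a * X + (b * A + c * A @ A) @ X
        return X.T if transposed else X

    G = np.random.default_rng(0).normal(size=(32, 64))
    sv = np.linalg.svd(newton_schulz_orthogonalize(G), compute_uv=False)
    print(sv.min(), sv.max())                      # clustered near 1, but not exactly 1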
The CASPR optimizer, a variant of Shampoo, reduces to Muon when we remove the accumulation on the preconditioners.
GRPO may not be the best choice for training reasoning models. Here’s why.
A unifying framework for linear attention mechanisms as test-time regression and how to parallelize training and inference.
Instead of asking, ‘Which optimizer should I use?’ ask, ‘In which space do my features live?’
Generate interleaved text and image content in a structured format you can directly pass to downstream APIs.
[Technical Report for CVPR’s 2nd MMFM Challenge] This report presents Multimodal Structured Generation, a general framework which constrains the output logits of frozen Multimodal Foundation Models to force them to reason before responding with structured outputs that downstream APIs can parse and use. This approach achieved the second highest score in the hidden test set for Phase 2 and third highest overall in the 2nd Multimodal Foundation Models Challenge hosted by the Computer Vision and Pattern Recognition (CVPR) conference.
A minimal implementation of Flash Attention 1 & 2 in just ~350 lines of CUDA code. This is still a work in progress, but the ultimate goal is to implement the different variants of Hyperbolic Attention in CUDA.
[IEEE 7th International Conference on Multimedia Information Processing and Retrieval (MIPR) 2024] This paper presents Retrieval Augmented Structured Generation (RASG), a novel general framework for Business Document Information Extraction that achieves state-of-the-art (SOTA) results on both Key-Information Extraction (KIE) and Line Items Recognition (LIR).
Years of experience in building artificial minds led me to believe that these AIs may end up seeming more ‘human’ than we currently imagine them to be.
A C++ implementation of Meta’s Llama2 generative large-language model. I also optimized the original C implementation by Karpathy by parallelizing the multi-head attention layer.
Expedock Assistant is a chatbot that allows you to ask questions about your shipments and get answers in real time. It’s like having a personal assistant that knows everything about your business, shipments and industry.
Expedock’s AutoML Library – fit a model, run batch inference, and get explanations in one line of code each.
A thought dump on mRNA vaccines and the future of computational biology
Booking demand prediction for Grab’s Southeast Asia operations. The project involves spatio-temporal forecasting, anomaly detection, and econometric modeling.