Current

I've been teaching myself competitive programming and preparing for math, linguistics, and informatics olympiads. I also write essays on deep tech on Substack and stream development on Discord, Twitch, and YouTube.

I lead the Arcane Systems Reading Group, a collective of systems nerds who explore niche topics like parsers, virtual machines, IDEs, operating system kernels, formal methods, large-scale infrastructure, compute orchestration, and performance engineering.

Side Projects: I've been deep-diving into distributed workload runners in Python, writing my own Linux kernel modules in C, and implementing the Language Server Protocol in Rust to understand them at a fundamental level.

Books: I've been enjoying reading Database Internals by Alex Petrov, Operating Systems: Three Easy Pieces by Andrea and Remzi Arpaci-Dusseau, and Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein.

Ongoing Projects

I'm currently focused on hacking on the entire machine learning stack (models, kernels, compilers, and hardware) and database stack (data models, storage engines, query engines, and distributed systems) from scratch. Unfortunately, my other projects have been on hold, including my open-source project Memspect.

Future Projects include an experimental file system, a GPU-aware scheduler for serverless platforms, MapReduce from scratch, a context-sensitive search engine for metadata & logs, a tiny machine learning compiler, a library of state-of-the-art parallel algorithms for distributed deep learning, and a PyTorch-inspired deep learning framework from scratch.