Systems

I build systems from scratch—compilers, storage engines, kernels, distributed infrastructure—with an obsession for understanding every layer from transistors to syscalls. My background is in electrical engineering, which means I think about performance in terms of cache lines and memory bandwidth, not just Big-O. Currently doing my MS in CS at NYU.

I spend my free time grinding competitive programming and working through olympiad problem sets. The algorithmic intuition matters—when you've solved enough hard problems, you start seeing the structure underneath systems design decisions.

✦ Compilers & Static Analysis ✦

I built Memspect, a static analysis framework for production C codebases—not a toy lexer, but something that handles the pathological cases: pointer aliasing through casts, dataflow across compilation units, the undefined behavior that naive analyses miss. Presented at IICT 2024 and ERCICA. Working on it taught me more about LLVM internals than any course could.
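
For readers who haven't written one: the skeleton of a dataflow analysis is a lattice of facts, a transfer function per block, and a worklist iterated to a fixpoint. Here is a minimal liveness pass in Rust to show the shape; it's an illustrative sketch with a hardcoded three-block CFG, not Memspect code.

```rust
use std::collections::HashSet;

/// One basic block: variables it reads before writing (uses), variables it
/// writes (defs), and its successor block indices.
struct Block {
    uses: HashSet<&'static str>,
    defs: HashSet<&'static str>,
    succs: Vec<usize>,
}

/// Classic backward liveness: iterate live_in = uses union (live_out minus defs)
/// until nothing changes. Real analyses layer pointer and alias facts on top
/// of exactly this skeleton.
fn liveness(blocks: &[Block]) -> Vec<HashSet<&'static str>> {
    let mut live_in: Vec<HashSet<&'static str>> = vec![HashSet::new(); blocks.len()];
    let mut changed = true;
    while changed {
        changed = false;
        for (i, b) in blocks.iter().enumerate().rev() {
            // live_out is the union of live_in over all successors.
            let mut out: HashSet<&'static str> = HashSet::new();
            for &s in &b.succs {
                out.extend(live_in[s].iter().copied());
            }
            // Transfer function for the block.
            let mut new_in: HashSet<&'static str> =
                out.difference(&b.defs).copied().collect();
            new_in.extend(b.uses.iter().copied());
            if new_in != live_in[i] {
                live_in[i] = new_in;
                changed = true;
            }
        }
    }
    live_in
}

fn main() {
    // b0 defines x and branches to b1 or b2; b1 uses x; b2 does nothing.
    let blocks = vec![
        Block { uses: HashSet::new(), defs: HashSet::from(["x"]), succs: vec![1, 2] },
        Block { uses: HashSet::from(["x"]), defs: HashSet::new(), succs: vec![] },
        Block { uses: HashSet::new(), defs: HashSet::new(), succs: vec![] },
    ];
    println!("{:?}", liveness(&blocks));
}
```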

I did research at IIT Madras contributing to an OCaml-based hardware DSL for FPGA design. That's where OCaml got beaten into my head—not "learned functional programming," but hundreds of hours writing real code until I started thinking in algebraic data types and exhaustive pattern matching. Now I'm implementing an LSP server from scratch in Rust because I want to understand how language servers actually work at the protocol level.
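
The protocol layer is the least magical part: LSP messages are JSON-RPC payloads framed by a Content-Length header, usually over stdio. A minimal sketch of the read side of that framing (function names are mine, and a real server needs Content-Type handling and better error recovery):

```rust
use std::io::{self, BufRead, Read};

/// Read one LSP message: parse the "Content-Length: N" header block,
/// which ends at an empty line, then read exactly N bytes of JSON payload.
fn read_message<R: BufRead>(input: &mut R) -> io::Result<String> {
    let mut content_length: usize = 0;
    loop {
        let mut line = String::new();
        input.read_line(&mut line)?;
        let line = line.trim_end(); // strip the trailing \r\n
        if line.is_empty() {
            break; // blank line ends the header section
        }
        if let Some(value) = line.strip_prefix("Content-Length:") {
            content_length = value.trim().parse().map_err(|_| {
                io::Error::new(io::ErrorKind::InvalidData, "bad Content-Length")
            })?;
        }
        // Other headers (e.g. Content-Type) are ignored in this sketch.
    }
    let mut body = vec![0u8; content_length];
    input.read_exact(&mut body)?;
    String::from_utf8(body)
        .map_err(|_| io::Error::new(io::ErrorKind::InvalidData, "body is not UTF-8"))
}

fn main() -> io::Result<()> {
    // A real server loops over stdin and dispatches on the JSON-RPC "method" field.
    let stdin = io::stdin();
    let mut reader = stdin.lock();
    let msg = read_message(&mut reader)?;
    eprintln!("got {} bytes: {}", msg.len(), msg);
    Ok(())
}
```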

SSA form, graph-coloring register allocation, x86-64 codegen—I've implemented all of it. The interesting part is the optimizations that actually matter in practice: instruction selection trade-offs, scheduling for modern microarchitectures, the 20% of compiler work that gives you 80% of the performance.
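
To make the register-allocation piece concrete: once you have an interference graph, the core of a graph-coloring allocator is small. Here's a stripped-down sketch, with no coalescing or spill-cost heuristics and made-up virtual register names:

```rust
use std::collections::{HashMap, HashSet};

/// Assign each virtual register a physical register (a "color") so that no two
/// interfering registers share one. Returns None for nodes that must be spilled
/// because every color is taken by a neighbor. Real allocators (Chaitin-Briggs,
/// linear scan) add simplification order, spill costs, and coalescing.
fn color(
    interference: &HashMap<&'static str, HashSet<&'static str>>,
    num_regs: usize,
) -> HashMap<&'static str, Option<usize>> {
    let mut assignment: HashMap<&'static str, Option<usize>> = HashMap::new();
    // Color high-degree nodes first so the most constrained nodes get first pick.
    let mut order: Vec<_> = interference.keys().copied().collect();
    order.sort_by_key(|v| std::cmp::Reverse(interference[v].len()));

    for vreg in order {
        let taken: HashSet<usize> = interference[vreg]
            .iter()
            .filter_map(|n| assignment.get(n).copied().flatten())
            .collect();
        let choice = (0..num_regs).find(|c| !taken.contains(c));
        assignment.insert(vreg, choice); // None => spill to the stack
    }
    assignment
}

fn main() {
    // a interferes with b and c; b and c don't interfere. Two registers suffice.
    let graph = HashMap::from([
        ("a", HashSet::from(["b", "c"])),
        ("b", HashSet::from(["a"])),
        ("c", HashSet::from(["a"])),
    ]);
    println!("{:?}", color(&graph, 2));
}
```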

✦ Storage Engines & Databases ✦

I'm building Tachyon, an LSM storage engine in Rust. Memtables, SSTables with block indexing, write-ahead logging, bloom filters, configurable compaction strategies. The point isn't just implementing a spec—it's understanding the design space well enough to make my own trade-offs. I know when leveled compaction beats tiered, why some engines separate keys from values, and what happens to write amplification when you tune the level multiplier.
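
Boiled down to a toy, the write path looks roughly like this. To be clear, this is a sketch of the general LSM shape rather than Tachyon's code: the WAL is a plain in-memory buffer instead of an fsync'd file, and there's no compaction or block index.

```rust
use std::collections::BTreeMap;
use std::io::Write;

/// Toy LSM write path: append to the WAL first (durability), then insert into
/// the in-memory memtable, and flush the memtable to a sorted run (an
/// "SSTable") once it crosses a size threshold.
struct ToyLsm {
    wal: Vec<u8>,                       // stand-in for an fsync'd append-only log file
    memtable: BTreeMap<String, String>, // sorted in memory, cheap to flush in key order
    memtable_bytes: usize,
    flush_threshold: usize,
    sstables: Vec<Vec<(String, String)>>, // newest run last; real ones live on disk
}

impl ToyLsm {
    fn put(&mut self, key: &str, value: &str) {
        // 1. WAL append: if we crash after this point, replay recovers the write.
        writeln!(self.wal, "{}\t{}", key, value).unwrap();
        // 2. Memtable insert: reads see the newest value immediately.
        self.memtable_bytes += key.len() + value.len();
        self.memtable.insert(key.to_string(), value.to_string());
        // 3. Flush when the memtable is "full".
        if self.memtable_bytes >= self.flush_threshold {
            let run: Vec<_> = std::mem::take(&mut self.memtable).into_iter().collect();
            self.sstables.push(run); // already sorted, thanks to the BTreeMap
            self.memtable_bytes = 0;
            self.wal.clear(); // flushed data no longer needs the log
        }
    }

    fn get(&self, key: &str) -> Option<&str> {
        // Newest data first: memtable, then SSTables from newest to oldest.
        if let Some(v) = self.memtable.get(key) {
            return Some(v.as_str());
        }
        for run in self.sstables.iter().rev() {
            if let Ok(i) = run.binary_search_by(|(k, _)| k.as_str().cmp(key)) {
                return Some(run[i].1.as_str());
            }
        }
        None
    }
}

fn main() {
    let mut db = ToyLsm {
        wal: Vec::new(),
        memtable: BTreeMap::new(),
        memtable_bytes: 0,
        flush_threshold: 32,
        sstables: Vec::new(),
    };
    db.put("k1", "v1");
    db.put("k2", "v2");
    db.put("k3", "a much longer value that forces a flush");
    println!("{:?}", db.get("k1"));
}
```

The ordering is the point: WAL append before memtable insert is what makes the write durable, and flushing a sorted memtable is what makes the resulting run binary-searchable.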

I've implemented buffer pool managers with different eviction policies, B+ tree indexes with proper concurrency control, query execution with operator fusion. When I read database papers, I implement the core ideas myself to make sure I actually understand them. Reading about something and building it are completely different levels of understanding.
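
As one concrete example, here's the core of CLOCK (second-chance) eviction, the policy many buffer pools approximate. It's a sketch under simplifying assumptions: no pin counts, no dirty-page writeback, and disk reads are faked.

```rust
use std::collections::HashMap;

/// Toy buffer pool with CLOCK eviction: each frame has a reference bit that is
/// set on access; the clock hand sweeps frames, clearing set bits and evicting
/// the first frame whose bit is already clear.
struct BufferPool {
    capacity: usize,
    frames: Vec<Option<(u64, Vec<u8>, bool)>>, // (page_id, data, ref_bit)
    page_table: HashMap<u64, usize>,           // page_id -> frame index
    hand: usize,
}

impl BufferPool {
    fn new(capacity: usize) -> Self {
        BufferPool {
            capacity,
            frames: vec![None; capacity],
            page_table: HashMap::new(),
            hand: 0,
        }
    }

    fn fetch(&mut self, page_id: u64) -> &[u8] {
        if let Some(&idx) = self.page_table.get(&page_id) {
            // Hit: give the page a second chance on the next sweep.
            let frame = self.frames[idx].as_mut().unwrap();
            frame.2 = true;
            return &self.frames[idx].as_ref().unwrap().1;
        }
        // Miss: find a victim frame with the clock sweep.
        let idx = loop {
            let i = self.hand;
            self.hand = (self.hand + 1) % self.capacity;
            match &mut self.frames[i] {
                None => break i,                // empty frame, use it
                Some(f) if f.2 => f.2 = false,  // recently referenced: spare it once
                Some(_) => break i,             // ref bit clear: evict this one
            }
        };
        if let Some((old_id, _, _)) = self.frames[idx].take() {
            self.page_table.remove(&old_id); // a real pool would write back if dirty
        }
        let data = vec![0u8; 4096]; // stand-in for reading the page from disk
        self.frames[idx] = Some((page_id, data, true));
        self.page_table.insert(page_id, idx);
        &self.frames[idx].as_ref().unwrap().1
    }
}

fn main() {
    let mut pool = BufferPool::new(2);
    pool.fetch(1);
    pool.fetch(2);
    pool.fetch(3); // forces an eviction
    println!("cached pages: {:?}", pool.page_table.keys().collect::<Vec<_>>());
}
```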

✦ Operating Systems & Kernels ✦

I've written kernels from scratch: bootloader, page tables, preemptive scheduler, system calls, device drivers for keyboard and timer interrupts. Context switching in assembly. Debugging triple faults at 2am. The kind of work where you really understand what the hardware is doing because you're the one talking to it directly.
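
Much of that stops being mysterious once you've stared at the actual bit layouts. An x86-64 page table entry, for example, is just a u64: flag bits at the bottom, the physical frame address in bits 12 through 51, NX at bit 63. A small sketch (the constant names are mine):

```rust
/// x86-64 page table entry layout for 4 KiB pages. This is the value a
/// kernel's page-mapping code ends up writing into the page table.
const PTE_PRESENT: u64 = 1 << 0;   // entry is valid
const PTE_WRITABLE: u64 = 1 << 1;  // writes allowed
const PTE_USER: u64 = 1 << 2;      // accessible from ring 3
const PTE_NO_EXEC: u64 = 1 << 63;  // requires EFER.NXE
const ADDR_MASK: u64 = 0x000f_ffff_ffff_f000; // physical frame address, bits 12..51

fn make_pte(phys_addr: u64, flags: u64) -> u64 {
    assert_eq!(phys_addr & !ADDR_MASK, 0, "address must be 4 KiB aligned and < 2^52");
    phys_addr | flags
}

fn main() {
    // Map a user-accessible, writable, non-executable data page at physical 0x5000.
    let pte = make_pte(0x5000, PTE_PRESENT | PTE_WRITABLE | PTE_USER | PTE_NO_EXEC);
    println!("pte = {:#018x}", pte);
    println!("present: {}", pte & PTE_PRESENT != 0);
    println!("frame:   {:#x}", pte & ADDR_MASK);
}
```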

I write Linux kernel modules—not following tutorials, but actually reading kernel source and understanding the scheduler, the VFS layer, how io_uring works at the syscall level. My EE background helps here: I understand TLB shootdowns, cache coherence protocols, memory ordering constraints. When I think about performance, I'm thinking about what the silicon is actually doing, not just what the programming model promises.
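
Memory ordering is a good example of where the programming model and the silicon meet. Here's the classic release/acquire publication pattern sketched with std atomics; the reader is guaranteed to see the payload once it observes the flag:

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// The writer stores the payload and then sets READY with Release; a reader that
// observes READY == true with Acquire is guaranteed to also observe the payload.
// With Relaxed on both, the compiler and the memory model would allow the reader
// to see the flag set but stale data.
fn main() {
    let data = Arc::new(AtomicU64::new(0));
    let ready = Arc::new(AtomicBool::new(false));

    let writer = {
        let (data, ready) = (Arc::clone(&data), Arc::clone(&ready));
        thread::spawn(move || {
            data.store(42, Ordering::Relaxed);    // the payload
            ready.store(true, Ordering::Release); // publish: earlier writes can't sink below this
        })
    };

    let reader = {
        let (data, ready) = (Arc::clone(&data), Arc::clone(&ready));
        thread::spawn(move || {
            while !ready.load(Ordering::Acquire) {} // spin until published
            // Acquire pairs with the Release above, so this must be 42.
            assert_eq!(data.load(Ordering::Relaxed), 42);
        })
    };

    writer.join().unwrap();
    reader.join().unwrap();
    println!("ok");
}
```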

✦ Distributed Systems ✦

I've implemented MapReduce with fault tolerance, Raft consensus with leader election and log compaction, fault-tolerant key-value stores, sharded databases with dynamic reconfiguration. Not just getting them to pass tests—understanding them deeply enough to debug subtle liveness issues and reason about safety under network partitions.
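
A lot of Raft's safety lives in small checks like the AppendEntries consistency rule. Here it is as a standalone sketch following the receiver rules in the paper, with simplified types; a real follower also truncates conflicting entries and appends the new ones:

```rust
struct LogEntry {
    term: u64,
}

/// The consistency check a follower runs on AppendEntries: reject stale terms,
/// and reject if the local log doesn't contain an entry at prev_log_index
/// whose term matches prev_log_term.
fn accept_append_entries(
    current_term: u64,
    log: &[LogEntry], // log[0] is index 1 in the paper's 1-based numbering
    leader_term: u64,
    prev_log_index: u64,
    prev_log_term: u64,
) -> bool {
    // Rule 1: reply false if term < currentTerm.
    if leader_term < current_term {
        return false;
    }
    // Rule 2: prev_log_index == 0 means "before the start of the log", which
    // always matches; otherwise the entry must exist and its term must match.
    if prev_log_index == 0 {
        return true;
    }
    match log.get(prev_log_index as usize - 1) {
        Some(entry) => entry.term == prev_log_term,
        None => false, // our log is too short
    }
}

fn main() {
    let log = vec![LogEntry { term: 1 }, LogEntry { term: 2 }];
    // Leader says our entry at index 2 has term 2: matches, so we accept.
    assert!(accept_append_entries(2, &log, 2, 2, 2));
    // Leader says index 2 has term 3: conflict, reject so it backs up.
    assert!(!accept_append_entries(2, &log, 2, 2, 3));
    println!("ok");
}
```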

I lead a systems reading group where we go through papers and implement the core ideas: consensus protocols, distributed transactions, CRDTs. Teaching other people forces you to actually understand things. If you can't explain why linearizability matters to someone who's confused about it, you probably don't get it yourself.
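
The grow-only counter is the canonical first CRDT, and it shows why convergence falls out of the merge function: increments touch only your own slot, merge takes an element-wise max, and the value is the sum. A minimal sketch:

```rust
use std::collections::HashMap;

/// Grow-only counter (G-Counter). Merge is commutative, associative, and
/// idempotent, so replicas converge no matter how states are exchanged or
/// how often messages are repeated.
#[derive(Clone, Default)]
struct GCounter {
    counts: HashMap<&'static str, u64>, // replica id -> increments seen from it
}

impl GCounter {
    fn increment(&mut self, replica: &'static str) {
        *self.counts.entry(replica).or_insert(0) += 1;
    }

    fn merge(&mut self, other: &GCounter) {
        for (&replica, &count) in &other.counts {
            let slot = self.counts.entry(replica).or_insert(0);
            *slot = (*slot).max(count);
        }
    }

    fn value(&self) -> u64 {
        self.counts.values().sum()
    }
}

fn main() {
    let mut a = GCounter::default();
    let mut b = GCounter::default();
    a.increment("a");
    a.increment("a");
    b.increment("b");
    // Exchange state in both directions; both sides converge to 3.
    let snapshot = a.clone();
    a.merge(&b);
    b.merge(&snapshot);
    assert_eq!(a.value(), 3);
    assert_eq!(b.value(), 3);
    println!("ok");
}
```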

✦ Hardware & Performance ✦

My EE degree means I understand hardware at a level most software engineers don't. I've designed digital circuits, worked on FPGA implementations, and dealt with pipelining and timing constraints. When I think about performance, I think about roofline models, memory bandwidth, cache hierarchies—not just algorithmic complexity.
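
The roofline model is the cleanest way to put numbers on that: attainable throughput is the minimum of peak compute and memory bandwidth times arithmetic intensity. A back-of-the-envelope version with made-up machine numbers:

```rust
/// Roofline: attainable GFLOP/s = min(peak compute, bandwidth * arithmetic intensity).
/// The machine numbers in main() are placeholders; plug in your own from vendor
/// specs or a peak-FLOPS / STREAM-style microbenchmark.
fn roofline_gflops(peak_gflops: f64, bandwidth_gbs: f64, flops_per_byte: f64) -> f64 {
    (bandwidth_gbs * flops_per_byte).min(peak_gflops)
}

fn main() {
    let peak = 500.0;     // GFLOP/s, hypothetical machine
    let bandwidth = 50.0; // GB/s, hypothetical machine
    // Example kernel: daxpy-style streaming, y[i] += a * x[i].
    // 2 flops per element, 24 bytes moved (load x, load y, store y) => 1/12 flop/byte.
    let intensity = 2.0 / 24.0;
    println!("attainable: {:.1} GFLOP/s", roofline_gflops(peak, bandwidth, intensity));
    // 50 GB/s * 0.083 flop/byte is about 4.2 GFLOP/s: firmly memory-bound,
    // nowhere near the 500 GFLOP/s compute peak.
}
```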

Performance engineering is about measurement, not intuition. I profile before I optimize, I understand where cycles actually go, and I know when the bottleneck is memory vs compute. This matters more than most people realize—the difference between code that runs and code that runs fast is often understanding what the hardware is actually doing.
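
A five-minute experiment makes the point better than any argument: sum the same array sequentially and with a page-sized stride, and the same O(n) loop gets noticeably slower from cache and TLB misses. A sketch (absolute numbers depend entirely on the machine):

```rust
use std::hint::black_box;
use std::time::Instant;

// Both loops perform the same number of additions over the same 128 MiB array;
// the strided walk touches a new cache line (and often a new page) per access,
// so it typically runs several times slower. The difference shows up in
// measurement, not in Big-O.
fn main() {
    const N: usize = 1 << 24;       // 16M u64s = 128 MiB, bigger than the LLC
    const STRIDE: usize = 4096 / 8; // jump a page's worth of u64s each step
    let data = vec![1u64; N];

    let t = Instant::now();
    let mut sum = 0u64;
    for i in 0..N {
        sum = sum.wrapping_add(black_box(data[i]));
    }
    println!("sequential: sum={} in {:?}", sum, t.elapsed());

    let t = Instant::now();
    let mut sum = 0u64;
    for start in 0..STRIDE {
        let mut i = start;
        while i < N {
            sum = sum.wrapping_add(black_box(data[i]));
            i += STRIDE;
        }
    }
    println!("strided:    sum={} in {:?}", sum, t.elapsed());
}
```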