Run LLMs on Windows Using Ollama
Run ChatGPT-style models locally on Windows using Ollama—no cloud dependency.
Short, practical write-ups on hardware-aware AI, VLSI workflows, timing/power analysis, and signal processing. All posts are hosted on Medium for now.
High-signal tutorials and practical workflows.
Complete guide to installing CUDA/cuDNN and frameworks for deep learning workflows.
Static timing analysis (STA) theory + real reporting workflows using OpenSTA for chip design.
Chronological list of Medium articles.
Strategy + execution steps for building proof-of-concepts aligned to business goals.
A clean explanation of attention and multi-head attention for quick intuition.
Power + delay analysis workflow with practical examples and reporting structure.
Spin up OpenSTA quickly via Docker for repeatable, clean timing runs.
Use Yosys passes to optimize RTL and improve downstream synthesis quality.
Start-to-finish introduction to synthesis using an open-source flow.
Design + verify a synchronous counter using Icarus Verilog, with waveform inspection.
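The attention write-up listed above builds intuition for scaled dot-product attention; for readers who want the numbers, here is a minimal NumPy sketch of the core formula, softmax(QK^T / sqrt(d_k)) V. This is an illustration only, not code from the linked post, and the toy matrix shapes are arbitrary assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
    return weights @ V, weights

# Toy example: 2 queries attend over 3 key/value pairs of dimension 4.
rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
# Each row of w is a probability distribution over the 3 keys;
# out is a weighted mix of the value vectors, shape (2, 4).
```

Multi-head attention simply runs several such maps in parallel on learned projections of Q, K, and V, then concatenates the results.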
Profiles and primary writing hub.