Stanford CME 295 Lecture 1 Notes: Transformers and LLM Foundations

Lecture notes and key concepts from Stanford’s CME 295 Transformers & LLMs course. Summarizes NLP fundamentals, tokenization, embeddings, sequence models, attention, and the Transformer architecture.

Whisper Code Review: Complete Dissection of OpenAI STT Model Internals