If you don't have a question, don't subject yourself to an endless stream of answers
With LLMs, seeing the big picture and spotting the pros, cons, and white space around my line of inquiry falls entirely to me as the human.
No cloud required: local LLMs as a practical app dependency
TL;DR: Ollama on an M4 MacBook Air is great, and it's a taste of the future. Developers should try it now.
Gall's law
"A complex system that works is invariably found to have evolved from a simple system that worked."
Zero to Deploy with Claude.ai and Val Town
Today I streamed my first video on Twitch, showing how to use Anthropic's Claude.ai and Val Town to ship a small frontend web app.
Batch loading resources in EmbedJS for RAG
A look at dynamically batch loading RAG resources with EmbedJS.
AI-powered fun in p5.js
Using AI in p5.js by wiring up a sketch to a serverless proxy function on Val Town.