Dan Lyke 19:37:02+0000 (2025-06-17)— twitter (1/0) facebook (0/0) flutterby (1/1) — Lat,Lon: (38.225,-122.628)

Trying to reconcile my feeling that computers are like calculators, and that we should learn to use them as such, with my sense that LLMs are like automated religious experiences. Prompts are like prayers; bad output is "you're asking wrong" or "the Lord works in mysterious ways".

Dan Lyke 19:18:35+0000 (2025-06-17)— twitter (1/0) facebook (0/0) flutterby (1/1) — Lat,Lon: (38.2249,-122.628)

The hard part about working with AI believers is trying to come up with prompts that give outputs that don't look totally stupid, so that I can say "yeah, here's the process you wanted me to AI-ify". I think this may be related to my reading fairly fast: I've yet to have a "summary" or analysis of a text come out of an LLM that was generated faster than I could read the original and was in any way a meaningful representation of what I think the document's salient points were.