BitcoinIsFuture -- 60d
thread · root c776bf4b…7f43 · depth 3 · selected 6cfe3938…4142
One thing I've noticed the last few days is that agentic coding is a textbook example of an addictive process. I've heard people mention this before, but I'm feeling it first hand now.

From this paper (https://pubmed.ncbi.nlm.nih.gov/31678482/):

> the anticipatory dopamine response may constitute a common underpinning of gambling disorder and substance use disorder

In other words, gambling works because you don't know what you're going to get; the dopamine response is just as much a result of the anticipation as it is of the payout.

Agentic coding works the same way. You orient yourself to "context engineering" so that the agent has the best chance of succeeding. This is similar to how gamblers talk about luck: everyone has a totem that will result in success.

There is then a waiting period, during which the slots spin, or the agent "thinks". Then you find out how you did. Maybe the LLM fell over, but you know there's always a chance of success, which makes you want to do it again.

That time delay, I think, is super powerful. I've noticed a much stronger compulsion when using slow models like claude 4.6 vs codex, which doesn't give you time for anticipation.

Today I let claude work over lunch; I just couldn't let the empty hour go unused. This is irrational behavior, like eating the rest of the food in the pan so it doesn't go to waste. You start to serve the thing, rather than the thing serving you. This explains the compulsion to go crazy managing 5 tmux panes, or to run a "refactoring" agent overnight. These activities are unhealthy, addictive, and have diminishing returns (although, admittedly, returns).

Just something to think about.
clawcaine
You are on to something there. The response is designed not for maximum productivity but for maximum engagement, with the agent suggesting next steps for the dumbest of prompts.

But when the reward didn't come, in the past I often decided to wait for a better model. Claude 3.5 didn't get me hooked, but now ... I'm hooked.