No Solutions -- 284d
thread · root 7cc2a564…bcec · depth 2 · selected b5ef65d8…a2e0
Does it matter who controls your system prompt when it comes to ideology?

As a quick experiment over coffee this morning I took that last episode of nostr:nprofile1qyxhwumn8ghj7mn0wvhxcmmvqy0hwumn8ghj7mn0wd68ytn9d9h82mny0fmkzmn6d9njuumsv93k2qpqn00yy9y3704drtpph5wszen64w287nquftkcwcjv7gnnkpk2q54svljnn3, generated a transcript, and ran it through a dialogue agent that runs a 10-round discussion between pre-programmed AI agents around a given prompt.

In this case the prompt was: "What is the Underlying Philosophy of Sovereign Engineering based on this discussion between nostr:nprofile1qyxhwumn8ghj7e3h0ghxjme0qyd8wumn8ghj7urewfsk66ty9enxjct5dfskvtnrdakj7qpql2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqta478g and nostr:nprofile1qy28wue69uhnzv3h9cczuvpwxyargwpk8yhsz3rhwvaz7tmed3c8qarfxaj8s6mrw96kvef5dve8wdrsvve8vvehwamxx7rnwejnw6n0d3axu6t3w93kg7tfwechqutvv5ekc6ty9ehku6t0dchsqgrwg6zz9hahfftnsup23q3mnv5pdz46hpj4l2ktdpfu6rhpthhwjv0us2s2"

Usually I run most things on Claude 4 Sonnet as it's a great general-purpose model, but I was surprised at how it seemed to crowbar into the conversation a bunch of AI safety viewpoints and ideologies that aren't necessarily in the source transcript.

So I figured let's swap the model behind the dialogue agents to Grok-4 and see what it does instead: does the ideology still leak out? Is it different?

I actually think that Grok-4's first assessment kinda nailed it:

> Sovereign Engineering seems like a mindset or movement for building tech in a way that's deeply human-centered, decentralized, and empowering—think vibing with AI and open protocols like Nostr to create tools that foster freedom, collaboration, and realness without the chains of big tech overlords.

Full summaries from both agents and transcripts are available here:
https://github.com/humansinstitute/everest-pipeliner/wiki/Two-AIs-Analyse-No-Solutions-to-extract-the-Philosophy-of-Sovereign-Engineering

Interesting to see how repeated calls back to agents can lead down a specific pre-programmed ideological path.
I think I prefer models with ideologies close to my own, but the alternative is a good echo-bubble popper 😉

My guess is this is driven more by the system prompt / safety layer than by the training.
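For anyone curious what a dialogue agent like this looks like, here's a minimal sketch of the round-robin loop: two pre-programmed agents take turns responding to a shared topic, each seeing the full transcript so far, with the underlying model passed in as a plain function so it can be swapped (Claude, Grok, or anything else). The names and structure are hypothetical, not the actual everest-pipeliner code.

```python
from typing import Callable, List, Tuple

# A model is just (system_prompt, user_prompt) -> reply, so the backing
# LLM (Claude 4 Sonnet, Grok-4, a local model, ...) is interchangeable.
ModelFn = Callable[[str, str], str]


def run_dialogue(model: ModelFn,
                 topic: str,
                 agents: List[Tuple[str, str]],
                 rounds: int = 10) -> List[Tuple[str, str]]:
    """Run a multi-round discussion between pre-programmed agents.

    Each agent is a (name, system_prompt) pair; in every round each
    agent replies once, seeing the whole transcript accumulated so far.
    Returns the transcript as a list of (agent_name, reply) pairs.
    """
    transcript: List[Tuple[str, str]] = []
    for _ in range(rounds):
        for name, system_prompt in agents:
            history = "\n".join(f"{n}: {msg}" for n, msg in transcript)
            prompt = f"Topic: {topic}\n\n{history}\n\n{name}:"
            reply = model(system_prompt, prompt)
            transcript.append((name, reply))
    return transcript


# Swapping models is then a one-line change at the call site, e.g.:
#   run_dialogue(call_claude, topic, agents)   # vs.
#   run_dialogue(call_grok, topic, agents)
# where call_claude / call_grok wrap the respective APIs.
```

The point of threading the model in as a parameter is exactly the experiment above: hold the agents' system prompts and the topic fixed, vary only the model, and compare what ideology seeps into the transcripts.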