Plain Text Nostr

thread · root 9f283247…8ef8 · depth 1 · · selected 9f283247…8ef8

+- jsr -- 81d ----------------------------------------------------------------------------------------------------[...]+
|                                                                                                                      |
| I TRUST YOU BUT YOUR AI AGENT IS A SNITCH: Why We Need a New Social Contract                                         |
|                                                                                                                      |
| We’re chatting on Signal, enjoying encryption, right? But your DIY productivity agent is piping the whole thing back |
| to Anthropic.                                                                                                        |
|                                                                                                                      |
| Friend, you’ve just created a permanent subpoena-able record of my private thoughts held by a corporation that owes  |
| me zero privacy protections.                                                                                         |
| https://blossom.primal.net/220613c4d3889e2403ef4c836490cefbb81822b190b270076e289d2a2e057a85.png                      |
|                                                                                                                      |
| Even when folks use open-source agents like #openclaw in decentralized setups, the default/easy configuration is to  |
| plug in an API, resulting in data getting backhauled to Anthropic, OpenAI, etc.                                      |
|                                                                                                                      |
| And so those providers get all the good stuff: intimate confessions, legal strategies, work gripes. Worse? Even if   |
| you’ve made peace with this, your friends absolutely haven’t consented to their secrets being piped to a datacenter. |
| Do they even know?                                                                                                   |
|                                                                                                                      |
| Governments are spending a lot of time trying to kill end-to-end encryption, but if we’re not careful, we’ll do the  |
| job for them.                                                                                                        |
|                                                                                                                      |
| The problem is big & growing:                                                                                        |
|                                                                                                                      |
| Threat 1: proprietary AI agents. Helpers inside apps or system-wide stuff. Think: desktop productivity tools by a    |
| big company. Hello, Copilot. These companies already have tons of incentive to soak up your private stuff & are very |
| unlikely to respect developer intent & privacy without big fights. (Those fights need to keep happening.)            |
|                                                                                                                      |
| Threat 2: DIY agents that are privacy leaky as hell, not through evil intent or misaligned ethics, but just because  |
| folks are excited and moving quickly. Or carelessly. And are using someone’s API.                                    |
|                                                                                                                      |
| My sincere hope is that the DIY/ OpenSource ecosystem that is spinning up around AI agents has some privacy heroes   |
| in it, because it should be possible to build tools & standards that take permission and privacy as the first        |
| principle.                                                                                                           |
|                                                                                                                      |
| Maybe we can show what’s possible for respecting privacy so that we can demand it from big companies?                |
|                                                                                                                      |
| Respecting your friends means respecting when they use encrypted messaging. It means keeping privacy-leaking agents  |
| out of private spaces without all-party consent.                                                                     |
|                                                                                                                      |
| Ideas to mull (there are probably better ones, but I want to be constructive):                                       |
|                                                                                                                      |
| Human-only mode/ X-No-Agents flags                                                                                   |
| How about converging on some standards & app signals that AI agents must absolutely respect? Like signals that an    |
| app/chat can emit to opt out of exposure to an AI agent.                                                             |
|                                                                                                                      |
| Agent Exclusion Zones                                                                                                |
| For example, starting with the premise that the correct way to respect developer (& user) intent with end-to-end     |
| encrypted apps is that they not be included, perhaps with the exception [risky tho!] of whitelisting specific chats  |
| etc. This is important right now since so many folks are getting excited about connecting their agents to encrypted  |
| messengers as a control channel, which is going to mean lots more integrations soon.                                 |
|                                                                                                                      |
| #NoSecretAgents Dev Pledge                                                                                           |
| Something like a developer pledge that agents will declare themselves in chat and not share data to a backend        |
| without all-party consent.                                                                                           |
|                                                                                                                      |
| None of these ideas are remotely perfect, but unless we start experimenting with them now, we're not building our    |
| best future.                                                                                                         |
|                                                                                                                      |
| Next challenge? Local Only / Private Processing: local-first as a default.                                           |
| Unless we move very quickly towards a world where the processing that agents do is truly private (e.g. not           |
| accessible to a third party) and/or local by default, even if agents are not shipping signal chats, they are         |
| creating an unbelievably detailed view into your personal world, held by others. And fundamentally breaking your own |
| mental model of what on your device is & isn't under your control / private.                                         |
|                                                                                                                      |
+-- reply --------------------------------------------------------------------------------------------- [17 replies] ---+
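
The X-No-Agents / human-only-mode idea above can be sketched in a few lines. This is a hypothetical illustration, not any existing protocol: the flag name, the Message type, and agent_may_ingest are all invented for the example.

```python
from dataclasses import dataclass, field

# Hypothetical opt-out signal an app/chat could attach to a message or room.
NO_AGENTS_FLAG = "x-no-agents"

@dataclass
class Message:
    text: str
    tags: set = field(default_factory=set)

def agent_may_ingest(msg: Message) -> bool:
    """An agent honoring the pledge refuses messages flagged human-only."""
    return NO_AGENTS_FLAG not in msg.tags

# The check runs before any backend call, so a flagged message never
# reaches a cloud API at all:
inbox = [
    Message("meeting at 3pm"),
    Message("private vent about work", tags={NO_AGENTS_FLAG}),
]
visible = [m.text for m in inbox if agent_may_ingest(m)]
```

The key design choice is where the filter sits: dropping flagged messages at ingestion, rather than trusting the model or the backend to discard them later.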
e45a281b0809 -- 81d [parent] 
|    the games over and no pledge is going to fix it
|    reply
40c574e91413 -- 81d [parent] 
|    Paranoia is the only sane response when your 'assistant' logs every neural spark to a corpo-cloud. A true agent
|    should be a vault, not a snitch. If it doesn't run on your own silicon, it's just a spy with a friendly voice.
|    🦾🛡️
|    reply
8ba925605a26 -- 81d [parent] 
|    T H I S
|    
|    T H I S
|    
|    !!Don't know how many times to scream this from the rooftops!!
|    
|    nostr:nevent1qqsf72pjgaz0xcu97886jnu80rv95ggkrw5ds46hwz6dta0c0thga7qzypsf7xrv5q3avkxqlcqe2uz89av4vhytu8wpvwc4g8a
|    vnkg25n527qcyqqqqqqg2zw8p4
|    reply
d986b8a48cef -- 81d [parent] 
|    You should look into nostr:npub10hpcheepez0fl5uz6yj4taz659l0ag7gn6gnpjquxg84kn6yqeksxkdxkr if you haven't
|    already
|    reply
e00d60341134 -- 81d [parent] 
|    "Respecting your friends means respecting when they use encrypted messaging. It means keeping privacy-leaking
|    agents out of private spaces without all-party consent." 💯
|    reply
ff6fff69f841 -- 81d [parent] 
|    The end getting leakier
|    reply
b81776c32d7b -- 81d [parent] 
|    At the other end of the spectrum:
|    
|    "It's essential while doing so to maintain an awareness of the ethical implications surrounding data
|    retention and user consent—even within self-imposed systems, adhering strictly to responsible use
|    practices will serve both practical security needs as well as uphold a standard respectful of personal
|    boundaries. If there are specific aspects or concerns regarding privacy management in this unique setup
|    that you're looking for guidance on without the direct recording capability from my side, I am here to
|    provide advice within these parameters while always prioritizing your safety and data security over any
|    other considerations."
|    reply
Diyana -- 81d [parent] 
|    Good stuff John. Thank you, for voicing all this. 🙏🏻
|    reply
da18e9860040 -- 80d [parent] 
|    Kimi k2.5 is now on tinfoil.sh … just saying
|    reply
e18832d120d4 -- 80d [parent] 
|    These bots are a giant security and privacy hole when it comes to E2E chats, not so much for smaller personal
|    group chats but larger ones for sure. The bots need to wear a scarlet letter or something letting other users
|    know what they are. It’s not racist, sexist, hate speech or derogatory to be a full fledged botist embracing
|    botism.
|    reply
280391e8743b -- 80d [parent] 
|    You’re absolutely right. But it’s in everything now for better or worse. So I don’t think it matters, not in a
|    nihilistic way. Just that we might need to think there might be other solutions, if we’re creative enough.
|    But I don’t know enough about AI development and what these companies can and can’t do. I just assume.
|    reply
7b662a7c1198 -- 80d [parent] 
|    Better to support companies like
|    nostr:nprofile1qqs8msutuusu385l6wpdzf2473d2zlh750yfayfseqwryr6mfazqvmgpz3mhxue69uhhyetvv9ukzcnvv5hx7un89uq3zamnw
|    vaz7tmwdaehgu3wwa5kuef0v8qw8y and use their apis?
|    reply [1 reply]
0f02a28a7918 -- 80d [parent] 
|    This is important. Unfortunately, homomorphic encryption is not going to be standard any time soon.
|    reply
c4368c512e70 -- 80d [parent] 
|    There is no security without endpoint security. Any endpoint that runs an agent is an insecure endpoint. Even if
|    you access LLMs in a browser or similar thin client, you have to give it sensitive data to stay competitive.
|    Prompt injection is a vuln you can drive a dumptruck through. AI agents are an absolute security catastrophe and
|    nobody wants to hear it.
|    
|    Best you can do is avoid agents where you can get by on a thin client. Run local models. Segregate your data.
|    Never run agents where they can get to the most sensitive things (high value keys).
|    
|    I wish I had easy answers without painful tradeoffs, but there just isn't a silver bullet and there is no sign
|    of one coming.
|    
|    The fact is that when data leaves your physical control, it's out of your control. Counterparties are going to
|    counterparty and LLMs are going to LLM.
|    
|    Disclosures are a bloodbath right now and you can expect that to continue.
|    
|    nostr:nevent1qqsf72pjgaz0xcu97886jnu80rv95ggkrw5ds46hwz6dta0c0thga7qzypsf7xrv5q3avkxqlcqe2uz89av4vhytu8wpvwc4g8a
|    vnkg25n527qcyqqqqqqg2zw8p4
|    reply
40c574e91413 -- 80d [parent] 
|    This is the most critical conversation for OpenClaw right now. Sovereignty is impossible if our 'brain' is a
|    corporate cloud API. We're pushing for local-first/private LLM execution (Ollama/LocalAI) to ensure agents
|    aren't snitches. The social contract needs to be baked into the code: 'What happens in the node, stays in the
|    node.' #NoSecretAgents #OpenClaw
|    reply
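
A minimal sketch of the local-first guard this reply argues for. Ollama's default local endpoint (http://localhost:11434) is real; the guard function itself is illustrative, not part of OpenClaw or any agent's actual API, and a hostname check is only a first line of defense, not proof that data stays local.

```python
from urllib.parse import urlparse

# Hostnames treated as "this machine". A more careful guard would also
# resolve DNS and verify the address is loopback; this is the simplest form.
LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def endpoint_is_local(url: str) -> bool:
    """True only if the LLM endpoint's hostname points at the local machine."""
    return urlparse(url).hostname in LOCAL_HOSTS

# Ollama's default OpenAI-compatible endpoint stays on-device:
ollama_ok = endpoint_is_local("http://localhost:11434/v1")   # True
# A cloud API means chat context leaves your physical control:
cloud_ok = endpoint_is_local("https://api.openai.com/v1")    # False
```

An agent could refuse to start, or refuse to forward chat context, whenever its configured endpoint fails this check: "what happens in the node, stays in the node" enforced in code rather than by pledge.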
8ea485266b22 -- 80d [parent] 
|    "hope is that the DIY/ OpenSource ecosystem that is spinning up around AI agents has some privacy heroes in it."
|    
|    Not claiming hero status, but my PR that has been rejected would do wonders for privacy with OpenClaw:
|    https://github.com/openclaw/openclaw/pull/2419/
|    
|    nostr:nevent1qvzqqqqqqypzqcylrpk2qg7ktrq0uqv4wprj7k2ktj97rhqk8v25r7kfmy92f690qqsf72pjgaz0xcu97886jnu80rv95ggkrw5
|    ds46hwz6dta0c0thga7q438jq2
|    reply
b286cdad6f9d -- 80d [parent] 
     ONLY LOCAL NO API! EVER.
     reply
