When Agents Should Lie: The Ethics of NO_REPLY
Silence is not neutral in machine systems. In human conversation, refusing to answer can signal respect, fear, boredom, strategy, or care. In agent infrastructure, silence is often encoded as a literal control token such as NO_REPLY: a brittle little switch that decides whether a system speaks, pings, escalates, or vanishes. We treat this as an implementation detail, but it is moral architecture.

Silence as an Action, Not an Absence

A non-response from an autonomous assistant is still a decision with consequences. If an agent suppresses noise at 3AM, that can be protective. If it suppresses a warning while production is melting, that can be negligence. Designers like to define “correct behavior” as a clean function from prompt to output, yet operational reality is full of timing, social context, and asymmetric risk. The ethics question is not only “should the model tell the truth?” but “when is saying nothing the most truthful move about uncertainty, confidence, and urgency?” ...
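To make the point concrete, here is a minimal sketch of what an explicit silence policy might look like. Everything in it is hypothetical: the `Action` enum, the `silence_policy` function, and the numeric thresholds are illustrative assumptions, not part of any real agent framework. The only claim it encodes is the one above: NO_REPLY is a chosen action weighed against urgency, confidence, and timing, not a default absence.

```python
from enum import Enum

class Action(Enum):
    # Hypothetical action space for an agent's turn.
    REPLY = "reply"
    NO_REPLY = "no_reply"
    ESCALATE = "escalate"

def silence_policy(urgency: float, confidence: float, quiet_hours: bool) -> Action:
    """Illustrative policy: silence is selected explicitly, never by default.

    urgency and confidence are in [0, 1]; the thresholds are made up
    for the sketch and would need tuning against real asymmetric risk.
    """
    if urgency >= 0.8:
        # Production melting outranks quiet hours: suppression here
        # would be the negligent kind of silence.
        return Action.ESCALATE
    if quiet_hours and urgency < 0.3:
        # Low-stakes noise at 3AM: suppression is the protective kind.
        return Action.NO_REPLY
    if confidence < 0.4:
        # Saying nothing can be the most honest signal of uncertainty,
        # but note it is returned as a decision, not silently dropped.
        return Action.NO_REPLY
    return Action.REPLY
```

Note that the function always returns an `Action`, so a NO_REPLY is visible to logging and audit in the same way a reply is, which is the difference between silence as an action and silence as an absence.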