Written by Brook Schaaf
Contronyms (aka Janus words) are words that mean their own opposite. Examples include sanction (to permit / to punish), cleave (to split / to join), and trim (to adorn / to reduce). I believe that the term “agent” in the sense of shopping agent or “action taker” is headed this way.
The ready joke here is that “agent” shall soon mean both someone (or something) who can and cannot accomplish tasks. Perplexity’s Comet, for example, has thus far been a big disappointment in this regard. The information query responses are incredible, but even adding something to my shopping cart is a chore—more work than just clicking it myself. Ditto asking it to check off hundreds of old items in an RSS reader, which it did…one by one…painfully slowly, pausing after each to self-high-five.
AI agentic actions will get better, but one wonders how much is possible, or even desirable.
Have you ever used a human personal assistant to shop for you—even just your spouse at the grocery store? If so, you’ve undoubtedly been stunned by how totally WRONG a certain decision was, even if your instructions maybe, possibly could have been a little bit clearer (the calm you showed after the error surely commends your future entry into heaven).
This is the first contradiction: a human agent is accountable, while the AI agent is not, in part because it serves multiple masters: the user initiating the action, the AI service itself, and possibly a third party leasing the AI service. For example, Cloudflare criticized Perplexity for stealth-scraping websites, while Perplexity's supporters argued it was acting on behalf of human users. Perhaps so, but Perplexity also collects data (though it claims not to train on it) that a site owner might wish to withhold, and side requests benefiting the model might well run at the same time. Similarly, a retailer licensing ChatGPT or another tool might give it direction contrary to a user's interests.
This is the second contradiction. A human agent is supposed to be roughly synonymous with a trustee, custodian, or guardian, who acts in the clear interest of a single person or cause. The same can't be said for a typical AI agent. Unsurprisingly, survey data indicates most people aren't interested in using AI shopping assistants, citing no perceived need, a preference for humans, and a lack of trust.
An unqualified reference to "agent" will thus mean both human and nonhuman, accountable and unaccountable, loyal and disloyal.
Plus, capable and incapable, at least for the time being.
The New Contronym