Author: Brad Hutchings

  • When Trust is the Negative Space of Distrust

    One of my favorite connections on LinkedIn, Mr. Mahesh Shastry, posted this reminder of a guiding principle of the tech industry:

    In the 1990s, begging forgiveness was a secret superpower. Not everyone knew the approach. Those who applied it understood that to beg forgiveness, they were obligated to deliver something more amazing than they could have by asking permission first. Imagine a plumber you’ve called to your home to fix a low water pressure problem while you’re at work and unreachable that day.

    I’m sorry for digging up your lawn and leaving it all a mess. I had to fix the leaking portion of the pipe.

    That’s no good!

    We had to dig up your lawn to find and fix the broken water pipe. We replaced the old pipe from the meter at the street up to your home. We installed fresh sod to repair the grass area. Please give it some extra water the next couple of weeks.

He did what he had to do to fix it right and not leave a mess.

    When begging for forgiveness was a new thing, we did it with confidence, knowing that in the end, we would delight the customer. Sometimes we’d have a difficult customer, and this approach was the only way to get them on board. We risked disapproval because failure was not going to happen. Even if we did fail, the worst case outcome was that we lost a customer. There are plenty of customers.


This brings me to trust. When you called that plumber, you may not have had any experience with him. You needed a problem fixed, and you weren’t available to supervise. You needed to trust him to do the right thing. Trust here was not earned, and certainly not hard-earned. It was granted because there was no reason to distrust. Until he didn’t fix the whole problem and left your lawn all dug up.

    In transactional dealings, trust is assumed and distrust is earned. Trust becomes the lack of distrust, or the negative space of distrust. The concept of negative space comes from photography and other visual arts. It’s best explained with an example. Have you ever noticed the arrow between the E and the x in the FedEx logo? It’s no Bob Ross happy accident!

If you had never noticed that before, you’ll never not see it again. And this is what I want to point out about trust.

    When that plumber shows up, he shows up with a blank canvas. That blank canvas is the negative space of distrust. You have no reason to expect disaster, and every reason to expect he will fix the problem correctly and adequately. If he’s able to tell you what he is doing every step of the way and completes each subtask successfully, that canvas of distrust stays blank. But when you get home to see the mess in the yard and the problem not really fixed, the canvas comes to life, and those empty areas of trust are gone and forgotten.


    Let’s look at an example where trust is earned, where it is the positive space. Your teenage kid wants to borrow your car for an unchaperoned overnight road trip for a concert with his friends.

    Don’t you trust me?

No. You don’t. That’s the point. Trust is not the negative space here. Trust here is built from experience, preferably at a slower, more deliberate pace than going from driver’s license to unchaperoned road trips in less than a year. You can see the potential for a bad situation leading to a bad outcome.

    Be thankful your kid didn’t see his worst case outcome as losing a customer and just ask for forgiveness when he got home.


My hope for today is to identify these two ways of looking at trust and set the stage for an article on AI agents. If we insist that they ask permission, we may never be able to start automating. But if we let them beg forgiveness, we’re in for disasters in short order.

    The featured picture is a collaboration between Grok and me.

I make LLMs you can run privately on your well-enough-apportioned laptop. You can get them at no cost and without registering for anything. Get started here.

I offer one-on-one LLM Personal Sessions for individuals and LLM Pro Days for your company to help you understand the real vibe of AI. Get started here.



  • LLMs are Bad at Facts but Good at Knowledge

    Do you know the difference between facts and knowledge? Did you know there is a difference?

    A main criticism of large language models (LLMs) is that they “hallucinate”. That is, they make up facts that are not true. This bothers most people. It’s bad enough when people get facts wrong, but when computers, trained on the writings of people, manage to invent entirely new wrong facts, people get really concerned. Since LLMs answer with such confidence, they end up being extra untrustworthy.

In the two years since LLMs took the world by storm, we have seen airline chat systems give incorrect discount information to customers, and a court rule that the airline was liable for the bad information. We have seen legal LLMs cite court cases that don’t exist, with made-up URLs that don’t resolve to a valid website or page.

One way we deal with these wrong facts is to insist that people double-check any facts an LLM outputs. In practice, many users don’t know about the problem or just don’t do the work.


    What is knowledge, if not facts?

    This is the trap in which so many critics of LLMs get caught. The reasoning goes: Since it can’t get the facts right, and makes more work checking and fixing them, it’s not worth asking the LLM. But it turns out it is! Here is the LinkedIn post that changed my mind, dramatically.

If you’ve read this far, you’ll know what sets off any critic: The facts will be wrong, and you, Jc, will look like an idiot coming in prepped by this ChatGPT document. See the comments. I told him as much.

I left my comment there, and apologized for it a short time later. This was the example I needed to see that the knowledge he sought was not actual facts about the company. He sought the vibe. Factual errors can even add value.

    Be the sales guy for a moment. “I understand that ABC is a leader in LMNOP.”

    Your prospect replies, “Well, you’re too kind. DEF is the clear market leader at LMNOP, but we feel that our solution is better and we hope to overtake them this year.”

Now you have a conversation. Perhaps XYZ can help ABC improve or sell LMNOP. Perhaps that’s why, or adjacent to why, you called in the first place. The fact did not matter. The error of fact sparked discussion. The knowledge was the relevance of LMNOP. LLMs are really good at surfacing this knowledge.


At the end of 2023, when I was challenged by a friend and long-term business mentor to figure out my AI story, I spoke with a potential client about what they would expect. They expected that I could automate some important process with ChatGPT. I said that, based on my initial research, ChatGPT doesn’t really work for that.

But I asked ChatGPT to write a story about Eeyore and Paul Bunyan using their flagship backup product to save the forest by “backing it up”. The story was delightful and surfaced many features of their offering in a fictional context. I shared the story with that potential client as an example of what they might use ChatGPT for. They were quite delighted, and yet even more dismissive. I felt like the hero of this commercial.


My hope for today is that y’all don’t throw out easy sources of knowledge because they get creative with facts. I also hope that you’ll use private, local LLMs for this, as they are just as good in practice at general knowledge as any LLM in the public cloud. You’ll be pleasantly surprised if you haven’t discovered that yet.



Addendum: Despite being in a boxing ring, the Grok-generated dogs in the picture, Knowledge and Facts, are not fighting. There’s no need for them to fight. They’re just different animals.