Are We All Commodities?

Brian G Herbert
11 min read · Jul 20, 2023


Optimism is a choice I make to exert some control over an uncontrollable world. My optimism fuels my big ideas: restructuring advertising to be more effective in an online and streaming world, growing the value of content by respecting user rights, and building a truly personalized metaverse. But today, I want to apply that optimism to how the inevitable encroachment of AI can help us reconnect with the value that only humans can bring.

The Yin-Yang of AI

I’ve spent a lifetime thinking outside the box, which has led to my greatest successes but has also made it difficult to position myself in a status-quo job market. My best achievements in both professional and personal life have been passionate, multi-disciplinary adventures in bringing alignment to chaos. Being a misfit in a period of market uncertainty has its benefits; I feel that progressive companies need people like me who thrive on change and uncertainty.

So I came to this essay from the standpoint that I have not been commoditized, and neither have you, but I needed to work through some details to understand both the benefits and the threats presented by recent developments in AI…

Generative AI Can Reduce Misinformation

There is a dangerous increase in the spread of misinformation in our society. It has grown worse as the internet, and more specifically social media, has become the primary source of information for many. The result is “filter bubbles”, aka echo chambers, that do not subject false facts or flawed opinions to the editorial standards, academic review, or rigorous debate that have traditionally acted as societal gatekeepers. I think most people agree we need gatekeepers to keep society from devolving into a mass cluster-f***, but who those gatekeepers should be has been the point of contention.

There is also the risk of filter bubbles driven by homogeneity, aka selection bias.

Selection bias comes from the fact that our natural qualities and traits lead to connections and associations with others who are more like us than they are different. I grew up a white, middle-class, Anglo-Saxon Protestant with two married parents who both held post-college degrees. Typically, that would map to a certain set of assumptions about me, my preferred associations, and the content I will respond to favorably. Again, I am the ultimate outlier and I break many of those rules, but you can see how these embedded traits tend to segment and isolate us, particularly when an algorithm-driven entertainment platform like TikTok or Instagram is serving us content with the sole goal of keeping us as engaged as possible for as long as possible.

My belief is that what matters most about an opinion is the honesty and authenticity of the person voicing it. But it’s also natural to try to corroborate someone’s opinion with our own experiences, and our experiences are subject to our lifelong selection bias. This is a great reason to collect a wide range of diverse life experiences! It is also worth considering how, the older we get, the more the momentum of our selection bias can take us into territory we would not have considered when we were younger. Why do we become less tolerant? Most likely, it is the aggregate effect of selection bias.

LLM-based AI can help because of the sheer scope of the content it ingests. It can reduce filter bubbles and provide a more balanced viewpoint, at the risk of homogeneity, aka being a bit boring!

As an outlier who didn’t accept the role modeling of a dad who made his resentment of me clear, I learned to question everything. I learned to value one-on-one connections with others and to keep looking for insights from alternative sources. I grew up knowing I had no ready answers, only the ability to make objective, logical decisions and the personality to have interesting conversations with others and understand their points of view. What do we really need? We need the ability to turn our bias filters off, collect information, and apply the amazing inference generator that is the human mind.

Growing Filter Bubbles with Podcasts

I’m kind of a podcast addict. I sometimes listen to podcasts that go after each other simply to understand both sides of an argument. That is how I deconstruct my potential selection bias. But just by listening to the guest introductions on some of my favorite business and technology podcasts, I realized the troubling state of selection bias. We are hearing from a tiny slice of opinions, from people who define themselves as masters of the universe.

The typical biographical summary goes something like this: our guest today first invented the alphabet, then invented the English language, because they realized these were necessary before they could write the bestseller that landed them a spot on today’s podcast or TED talk!

I grew up in McLean, VA, graduated from the University of Colorado and then George Mason University, and continued with professional certifications at Emory University. So I probably already have an outlook skewed toward academic achievement, yet what I’m hearing from most podcasts is far more extreme than my own bias. I am sure there are many more people with insights, and the ability to communicate them succinctly, who are simply excluded from the dialogue due to the celebrity worship of hosts and/or producers.

I’m probably tilting at windmills, but we need to say these things EXPLICITLY to have any hope of effecting change.

The point I’ve been making is that societal risk is formulaic: the less diverse the voices we hear, the greater the risk. Including more minorities with Ivy League degrees is good, but it is still a massive selection bias. It is still giving the power of voice only to those who no longer need it.

Do you Add Value, or Should Your Work be Automated?

If I cannot add value to my work, I try to automate it or train someone else in it and move on to a more difficult challenge. I don’t know anybody else who treats work the way I do, but I’ve never been out to imitate anyone, so that doesn’t surprise me.

I got my start in the software industry by taking a job as a bookkeeper and automating my way out of it so impressively that the CEO of the startup offered to train me to consult with their clients on customizing our sales force automation software. That was my first job out of college, as a guy with a psychology degree, no family connections, and a strong passion for learning how to design solutions using computers.

It taught me a few lessons. One was that if I wanted a better job, I should automate myself out of my current one! Another was to boldly take initiative as long as I had the best interests of the business in mind. But taking bold initiative requires a lot of extra investment, aka homework, above and beyond the job description. That drove me to master lifelong learning. I recently proved that my aptitude is still strong by building analytics and machine learning solutions in Python (see my articles on apps I built at https://bgherbert.medium.com).

With some skills, repetition leads to mastery and a cognitive efficiency that has great benefits. In sports, we call this muscle memory. With most business or technical tasks, however, repetition sucks, and the fact that it is boring leads to more errors, not fewer. This gets to the argument for AI and its ability to automate mind-numbing tasks.

AI is not some huge discontinuity to me, but another milestone on the continuum of getting machines to automate repetitive tasks. It is the acceleration of a trend we’ve been on for 15 or 20 years: ever-larger training data sets and analytical tools, such as the Python ecosystem, that keep getting better and easier to use. All of these changes make automation more accurate and more capable of handling exception conditions.
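
To make that concrete, here is a minimal sketch of what automating a repetitive judgment call can look like with today’s Python tooling. The task, the sample data, and the library choice (scikit-learn) are my own illustrative assumptions, not something drawn from my projects:

```python
# A small sketch (illustrative assumptions, not a production system):
# automating a repetitive routing decision with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical historical notes that a person used to label by hand.
notes = [
    "invoice total does not match purchase order",
    "cannot log in after password reset",
    "payment posted twice on the account",
    "app crashes when opening the dashboard",
]
teams = ["billing", "support", "billing", "support"]

# Text features plus a simple classifier learn the routing rule from examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, teams)

# The boring, repetitive decision becomes a one-line prediction.
print(model.predict(["duplicate charge on last month's statement"]))
```

With more training data and better tools, the same pattern scales up and handles more of the exception conditions on its own.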

Memorization is a Dead End: Even for Doctors and Lawyers, it Adds No Value

Some articles have sought to make the point that AI is automating even doctors and lawyers out of jobs. That is not quite accurate. AI is automating well-documented, procedural, and data-intensive tasks. Where a combination of experience and critical problem-solving skill is required, or where connection and deeper knowledge of patients or clients make a difference, their skills are still very much in demand.

As I see our civilization go one way, I often challenge myself to pull source data, identify common biases and remove them from my thoughts, and consider other possible outcomes. I work at identifying new ideas that can break through existing biases. One idea I’ve had is a reality show where people who are unqualified for a particular job are given the opportunity to prove themselves: a line worker gets to be CEO or CMO for a month. We’ve seen the reverse with celebrity CEOs, but how many executives have the courage to learn from a line worker? The Total Quality Management approach that Japanese manufacturers used to dominate auto making starting in the 1970s was very much centered on learning from the expertise of very “ordinary” workers.

That leads me to another type of bias that we all have: confirmation bias. Confirmation bias stems from our emotional desire to be right: we tend to select or remember examples and stories from our lives that fit a particular opinion and conveniently forget the examples that contradict it. If the voices we hear are elite, we discount the voices of those without a systemic megaphone. Methodologies like Agile or TQM seek to balance that within an organization, but do we have any similar gating process for public voices?

Not Adding Value is Not an Option

I see plenty of evidence that many people are sleepwalking through their jobs and lives. The Quiet Quitting movement is a good example, but every interpersonal issue has two sides. I’ve seen many companies that don’t encourage or incentivize initiative, so why shouldn’t employees push back? Particularly employees from Gen Z, who may be more forceful in crafting a work environment that works for them than those of us who have lost battles and become more jaded.

Capitalism, like democracy, is the best system humans have created, but that doesn’t mean it doesn’t need tweaks and adjustments. The danger with capitalists is their tendency to try to remove the source of any unique or differentiable value other than their own. A good example is how big tech reacted to the job security of US-based software developers in the late 20th century.

They lobbied massively for an increase in H-1B visas, and they succeeded in reducing software development to commodity status. How have we paid for that? I mean not just myself as a US citizen but developers from other countries as well. On corporate IT projects, software developers are no longer partners in business outcomes.

When I look at the pressure exerted by big tech’s lobbyists versus the success of methodologies that have empowered self-organizing, self-managing teams, I have to side with the latter as presenting better clues about where we need to go. I don’t believe AI changes the value of the line workers who know their activity so well they can suggest game-changing innovations. There is plenty of manufacturing data from the last 75 years to support this view.

Wake up or we’ll ‘bot Your Ass!

You can be a warehouse driver or a surgeon, and if you are not pushing yourself to innovate, adapt, and collaborate, you can find yourself marginalized.

Of course, to do those things effectively, it is necessary to build a solid knowledge base and be able to access it and add to it efficiently. So lifelong learning and the ability to use one’s knowledge practically are both vital. This is how Moore’s Law is our friend and our nemesis, but mainly our friend for the tremendous access to information it has given us on the cheap.

The difference between nouveau billionaires and the rest of us has primarily been a willingness to conduct psychological experiments on other humans without their knowledge and with an understanding of the potential negative impact. That is not a business skill any MBA program would admit to teaching. Clearly, for our civilization to continue and prosper, we need better leaders with better ethics.

We can make flippant, caveat-emptor dismissals, but frankly, I think too many of us have been indoctrinated into this neo-libertarian capitalist ethos even though few of us truly share in its benefits.

Love, Creativity, and Collaboration Are Our Agency

Large Language Model (LLM) AI is typically trained on a massive pool of data pulled from the Internet. This is done by bots indexing and navigating public websites; the best-known such dataset is the Common Crawl. OpenAI’s first widely adopted LLM, for example, was trained on web data with a 2021 cutoff. At that scale, its power to predict and suggest text goes far beyond the predictive word suggestions you may have turned on for texting on your mobile phone. And competing builders of LLMs are training their models on ever more current crawls, so there is less and less lag behind the present.
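
If you want to see “predictive text at scale” for yourself, here is a minimal sketch using the open-source Hugging Face transformers library and the small GPT-2 model; both are my choices for illustration, not the specific model or tooling discussed above:

```python
# A minimal sketch of web-scale predictive text (illustrative choices:
# the Hugging Face transformers library and the small GPT-2 model).
from transformers import pipeline

# Load a small, publicly available language model for text generation.
generator = pipeline("text-generation", model="gpt2")

# Like a phone keyboard suggesting the next word, but trained on a web crawl,
# the model continues the prompt with statistically likely text.
prompt = "The most valuable thing a human brings to work is"
result = generator(prompt, max_new_tokens=20, num_return_sequences=1)
print(result[0]["generated_text"])
```

The output reads fluently only because the model has seen an enormous amount of human writing.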

However, we are still talking about a machine that is not sentient. It can model what we have said about the human experience and how we feel about the things we experience, but it has no firsthand knowledge of those things or of the deeper meaning they hold for us.

Whatever your spiritual beliefs, you must believe that we’ve been given some agency to carry out our lives and effect change. I believe our effectiveness at doing that comes from three sources: love, creativity, and collaboration, of which AI knows not.

To state these qualities in my own terms: Love is that special bond that causes us to go beyond reason in our care for another person; Creativity is that spark that brings synchronicity, the connection of previously unconnected things, to solve a problem; and Collaboration is a sense of loyalty and commitment to teammates that produces a whole greater than the sum of its parts.

Without these three things, my life feels empty. When I have outlets for all three, my life feels rich beyond my greatest dreams. I’m a hopeless optimist, so your mileage may vary, but those three qualities have clearly accounted for much of the positive progress of our civilization. In a way, AI is challenging us to leave the small stuff to it and to focus on the big stuff: you know, just those unimportant things that require human love, creativity, and teamwork!

I don’t mean to be insensitive to those who fear losing their job to automation; I’m just trying to clarify that there are still huge differences between the most competent machine and a human being.

How many people would say the strength of their family is not based on at least two of these three qualities? If that has no real value, then why do we care so much about family and significant others in our lives? Can we really put up a “Chinese wall” between our personal values and how we conduct business? How well has that ever worked?

If nothing else, AI at least helps us to bring to the surface what makes our experience uniquely human.

