Your Indispensable Value in the AI Era

Asking the right questions is an irreplaceable skill.

"In a world where the cost of answers is dropping to zero, the value of the question becomes everything." — Brit Cruise, The AI Paradox

LLMs and their adjacent AI tools have provided us with something truly novel: the ability to ask anything, at any time, and receive an answer. We might have thought Google was playing this role, until these tools showed us that what we had before was a ubiquitous digital encyclopedia, and what we have now resembles a librarian who will attempt to answer any question, regardless of the complexity or the absurdity.

When you have a tool that has all the answers, what value do you have to bring?

It turns out: a whole lot. More than you probably thought, too.

The Question Is the Value

If I were asked to describe what it's like to be a developer (or programmer, or coder, whatever you identify with), I would describe it as a state of being in which one is ceaselessly asking questions:

  • Why doesn't this work?
  • Why does this work?
  • Is there a better way to approach this?
  • How can I build this feature?
  • Should I refactor this code?
  • What happens if I change X?
  • How does it behave when I move Y?
  • What happens if I remove Z?

Turning Blue to Purple

Many moons ago, circa 2010, I was hired to build an eCommerce site using CS Cart, a platform that was still in its infancy (even eCommerce itself was not that old at that point). During the checkout process, I got an obscure MySQL error, and I wasn't very experienced with SQL at the time, let alone anything specific to CS Cart.

Off to Google I went, hoping to find an easy answer (spoiler: nope). I quickly turned every blue link I found to purple, with no clear resolution. So, I kept asking.

Each time I reformulated the question, I would get slightly different results, which led to refining the question further. I was piecing together a jigsaw puzzle without the picture, and each piece showed me the shape of the next piece I should begin sifting through the box for. Each reformulation of the question was another potential shape that could fit.

Eventually, after creating page after page of purple links, I had gathered enough pieces to formulate the actual question I needed to answer. And once I had that question, the answer was immediate and self-evident. After a solid day of searching (punctuated with many walks around the block), I found the answer!

Or, more accurately: I found the question.

The answer was the result of the labor; it was the outcome. As Brit Cruise also said in his scintillating video, finding the question itself is the work.

Nothing Is Becoming Simpler

From the introduction of GPT-3.5 to the latest models and tooling, answers are now abundant. Whether they are the correct answers is a problem that remains unchanged even with the latest frontier models. And it's a problem that is only solvable by those who know the power of asking the right questions.

I realize that with modern troubleshooting tooling and the assistance of AI models, that particular debugging session would likely have been resolved differently, and likely more quickly. This is a wonderful turn of events; this is how progress works. That is applying an old problem to new tooling, though. The problems we face now are very different, and the challenges we face at any point in time scale with the complexity of the industry.

Programming and development are far more complicated now than they were in 2010, if for no other reason than the sheer number of abstractions we've created for even the simplest features.

I recently found a great article by Paul Herbert that demonstrates this so effectively:
The Incredible Overcomplexity of the Shadcn Radio Button

For example, troubleshooting a Docker issue is orders of magnitude more complex than anything I encountered in 2010, because the state of technology needed to support something like Docker did not exist then, but it does now.

As the tools grow in capability, the complexity grows. And as things get more complex, we encounter novel problems. We'll need more people who are able to approach these novel problems to help solve them, who know how to ask questions, how to research, and how to formulate new questions to navigate these new problems.

AI Cannot Ask the Question for You

It's been a few years now, and it's quite apparent that LLMs and AI tools are not going to simplify anything about what it means to program and create software. Peruse OpenAI's Harness engineering write-up, and it's clear that the new way to approach programming still rests on the same fundamentals, but with more abstractions between you and the result, and with potentially higher complexity due to the sheer volume of code that can be generated in shorter amounts of time. That will lead to more complex software, which leads to more complex issues. Software is not a static industry, and we constantly scale the capabilities of the software to the capabilities of the tools.

What remains is the most valuable discipline you can cultivate: how to ask effective, productive questions. And once you receive an answer to that question, how to take that answer and reformulate the next question, performing this process recursively until clarity comes. This is the process of "critical thinking", but that phrase handwaves away the mechanics of what it actually takes. Being able to distill a general question into a highly specific one involves relentless scrutiny, ongoing experimentation, and being comfortable residing in a state of unknowing for an unspecified amount of time. This is the true job description of the programmer.

AI models cannot think for you, and they cannot formulate the question for you. They can certainly be a tool that helps you on your research path, but they are still bound by their training data, and they cannot escape the weighted dice that determine the paths to their outputs. Their ability to generate answers is unmatched, but those answers are highly sensitive to the original input and context, and fundamentally untrustworthy by their very nature. That's OK, because they still have tremendous value, but only when someone is present and capable of sifting through the noise, distilling the truth, and verifying the answer (or perhaps, the next question).

The Answers Are Cheap Now

As Brit Cruise stated in beautifully succinct fashion: in an era where AI tooling has dropped the cost of answers to near zero, the value resides in asking the right questions. And while AI tools have brought this situation into sharp relief, I would postulate that the value has always been in asking the right questions.

And when you have an infinite answer machine, your ability to ask good questions becomes infinitely more valuable.