Breaking Brains

21 Sept 2024

We are entering a Golden Age of Large Language Models being integrated into product development, and I’ve noticed it’s breaking a lot of brains – including ours!

It's not just that the tools are numerous and new – and therefore unfamiliar. It's that using them requires a change of mindset at each step of the process.

This is rooted in the fundamental difference between Probabilistic and Deterministic outputs. But it's worth remembering that you likely already apply the same probabilistic mindset to broader strategy – be it business planning, roadmap priorities, or career path.

Let’s drill down into three Paradox Pairs:

🔮 Probabilistic vs Deterministic

The biggest difference between how LLMs work and traditional "coding" is that the output is probabilistic. Put simply, this goes back to what LLMs are built for: generate the most likely ending to a paragraph given its beginning.

If you're embedding prompts in your code, you have to design for the probability of incorrect output. It's frustrating to add more QA, or chained "check your answer and try again" prompts. But the gains – generated code, interpretation of more complex data sets – far outweigh the cost of rethinking the chain of events that creates a product build. This will challenge your "communication" skills with these prompting tools in a good way!
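To make the "check your answer and try again" idea concrete, here's a minimal sketch of a validate-and-retry loop around an LLM call. The `call_llm` function is hypothetical – stubbed out so the sketch runs standalone – and in a real build it would hit whatever model API you're using:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call, stubbed so this sketch runs standalone.
    A real implementation would call your provider's chat API here."""
    return '{"sentiment": "positive", "score": 0.9}'

def get_validated_json(prompt: str, max_attempts: int = 3) -> dict:
    """Ask the model for JSON; if the reply doesn't parse, feed the error
    back into the next prompt and retry. This is what 'designing for the
    probability of incorrect output' looks like in practice."""
    last_error = None
    for attempt in range(max_attempts):
        if attempt == 0:
            reply = call_llm(prompt)
        else:
            # Chain the failure back to the model so it can self-correct.
            reply = call_llm(
                f"{prompt}\n\nYour last reply failed with: {last_error}. "
                "Return ONLY valid JSON."
            )
        try:
            return json.loads(reply)
        except json.JSONDecodeError as err:
            last_error = str(err)
    raise ValueError(f"No valid JSON after {max_attempts} attempts: {last_error}")

result = get_validated_json("Classify the sentiment of: 'Great product!' as JSON.")
```

The retry cap matters: because the output is probabilistic, you bound the loop and fail loudly rather than assuming the model will eventually get it right.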

⚡ Designing vs Generating

One underrated aspect of code generation tools is how quickly you can move from design to deploy. The latest crop of builders like V0, Cursor, and Replit (which I'll review in subsequent posts) are game changers for technical PMs, junior devs, and even senior architects who want to deliver faster without getting lost in boilerplate code that's irrelevant to the business logic they're trying to prove out.

But to use these tools effectively, you have to think carefully about how you ask the tool to generate code. Expect "no, try again" rewrites! But those can happen many times in an hour rather than many days over a week with the traditional way of implementing new code.


📈 Interpreting Data vs Parsing Data

This is an overlooked area where LLMs can make a difference, and it's a key aspect of the product I'm currently building. Files, APIs, and structured data come in all shapes and sizes, and it used to be infeasible to write custom code to handle every variance in format. LLMs can apply probabilistic analysis to broaden the reach of transformation code and build new data sets.
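One way this pattern can look in code: let the model interpret whatever shape a record arrives in, then verify its output against the canonical schema you actually need. Everything here is illustrative – `llm_normalize` is a hypothetical, stubbed-out model call, and the field names are made up:

```python
import json

def llm_normalize(raw_record: str) -> str:
    """Hypothetical LLM call, stubbed so the sketch runs standalone. In
    practice the prompt would ask the model to map an arbitrary record
    (CSV row, API payload, free text) into the canonical JSON schema."""
    return ('{"name": "Ada Lovelace", "email": "ada@example.com", '
            '"signup_date": "2024-09-21"}')

# The schema the rest of the pipeline depends on (illustrative fields).
CANONICAL_FIELDS = {"name", "email", "signup_date"}

def normalize(raw_record: str) -> dict:
    """Interpret probabilistically, then check deterministically: the model
    handles format variance, and code enforces the schema contract."""
    record = json.loads(llm_normalize(raw_record))
    missing = CANONICAL_FIELDS - record.keys()
    if missing:
        raise ValueError(f"Model output missing fields: {missing}")
    return record

# A source shape no hand-written parser anticipated, one canonical output:
messy_csv_row = "Lovelace, Ada; ada@example.com; 21/09/24"
row = normalize(messy_csv_row)
```

The division of labor is the point: the probabilistic step absorbs format variance, while a deterministic check keeps bad interpretations from leaking downstream.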

🎲 What all of these Probabilistic tactics have in common is that you're placing bets on the output you want in order to get the right outcome. Intellectually, this is no different from the bets you place on your roadmap for the best customer outcomes, or on the best role for the next step of your career. But the cycle of trying, failing, and learning is much, much faster – if you're patient and open-minded enough to reap the rewards!