We’re drowning in AI-generated words, a trend that will only accelerate as the technology improves and becomes more accessible. And yet LLMs have undeniably made it dramatically easier to express, explore, and iterate on ideas. The paradox of modern AI is that while content production has been commoditized in the aggregate, the act of creating content has never been more valuable to the individual.
This shift in the value of individual content creation mirrors a broader transformation in how we need to start thinking about personal data. For years, many have pursued “the quantified self” by collecting structured data about their lives, focusing on metrics like fitness, wealth, and other easily quantifiable measures. But the rise of modern AI has fundamentally changed the nature of valuable data. Today, words are data in a way that was never true before, at least not without a significant R&D budget. Words carry meaning and semantics beyond the bytes that represent each character, a property that’s always been true for humans and is finally available for machines.1
In the new data economy, words are the most valuable currency.
The production of this rich, qualitative data—our thoughts, ideas, and insights—results in the qualified self. Instead of merely tracking structured data, we’re pouring out our experiences in a form that LLMs can analyze and enhance, expanding our understanding of ourselves and our world. But this is about more than analysis. When we work with an LLM, we’re not just exchanging information; we’re engaging in a novel form of programming. I find it most helpful to think of LLMs as probability distributions, where every interaction conditions that distribution to produce more useful outputs.
With this framing, becoming “better” at working with LLMs is about accelerating their probability engines into useful regimes as quickly as possible.
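As a toy illustration of that framing (everything below is hypothetical, not a real model): a bigram model is the simplest conditional distribution over next words, and changing the conditioning context changes which outputs become likely.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for "data about me."
corpus = ("the model writes code . the model writes prose . "
          "the journal records context .").split()

# Count bigram transitions: P(next | prev) is proportional to count(prev, next).
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def next_word_distribution(context_word):
    """Return the conditional distribution over next words given one context word."""
    counts = transitions[context_word]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

# Different context conditions the distribution differently.
print(next_word_distribution("model"))   # {'writes': 1.0}
print(next_word_distribution("writes"))  # {'code': 0.5, 'prose': 0.5}
```

An LLM is this idea scaled up by many orders of magnitude: every token of context you supply shifts the distribution over everything it might say next.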
This realization has driven me to collect as much of my personal “data” (in this new, qualitative sense) as possible. In the last year, my workflows have evolved to accommodate this new paradigm. I’ve started transcribing every meeting and, for the first time, keeping journals.2 These aren’t just records; they’re fuel for collaboration with AI assistants. Today’s LLMs have short memories. The more relevant “data” I can feed them—my thoughts, decisions, and the context behind them—the more valuable our interactions become. I’ve found that my daily product journal for ControlFlow is as valuable for bringing an LLM up to speed on the present state of the library as it is for telling it about all the approaches we attempted that didn’t work out.
In a funny way, the qualified self may be the ultimate victory for the schema-on-read crowd. I’m not just hoarding information; I’m building the richest unstructured dataset I can. I believe that the most impactful innovations in the LLM space will come not from more powerful models, but from more capable context management. I’m preparing a dynamic, AI-ready knowledge base of… me.
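A sketch of what schema-on-read looks like in this setting (the notes and the `retrieve` helper below are hypothetical): the data stays as raw, unstructured text, and structure is imposed only at query time, with naive word overlap standing in for embedding similarity.

```python
# Raw, unstructured notes; no schema is imposed when they are written.
notes = [
    "ControlFlow journal: tried the agent retry loop, it deadlocked",
    "Meeting transcript: decided to ship the task decorator API first",
    "Journal: context management matters more than model size",
]

def retrieve(query, notes, k=2):
    """Schema-on-read: score notes against the query only at read time."""
    query_words = set(query.lower().split())
    def score(note):
        # Naive relevance: number of words the note shares with the query.
        return len(query_words & set(note.lower().split()))
    return sorted(notes, key=score, reverse=True)[:k]

print(retrieve("what did we decide about the API", notes))
```

The point is that nothing about the notes needs to anticipate the question; a better reader (an LLM with a context window instead of a word-overlap score) extracts more from exactly the same pile of text.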
And so, embracing this new paradigm, I’ve decided to start blogging for the first time in over a decade. I’ve always believed in open source and learning in public, and this blog represents a new commitment to doing so. This writing is as much for my future AI collaborators as it is for any readers who may stumble upon it.
This is my qualified self.3
Footnotes
1. Lately I’ve been thinking a lot about Neal Stephenson’s Snow Crash. The book is often credited with anticipating the metaverse, but it was equally prescient about the power of language in the digital age. It explores a world where language is more than just communication; it’s a fundamental tool of digital control and influence. This concept resonates strongly in our current reality, where the words we use to interact with AI systems can shape their outputs and, by extension, our environment.
2. I’ve started by keeping product journals: what I worked on, what I learned, what didn’t work, and what I want to try next.
3. …at least, this is a curated version of my qualified self. My drafts overfloweth; fortunately, my LLMs don’t seem to mind.