
  • liquidity

    I’m at a super interesting inflection point in my character arc right now, and the metamorphosis is as challenging as you might expect. Welcome to your twenties, I guess.

    I just took a last-minute week of PTO to fly to Dubai for Full Sail, and it was genuinely one of the best and most energizing trips I’ve ever been on, particularly when you limit the scope to that of a “work trip”. We went to create buzz, to garner investment, and to turn the conceptual into concrete momentum. There were two conferences (first Token2049, then Sui Basecamp), and we didn’t even really attend Token itself, though we fully capitalized on all the side events hosted earlier in the week. I think we did a fantastic job, but what the hell do I know; it was my first time “on the circuit”.

    Back at my liminal-space day job, my manager scheduled the infamous “check in” for tomorrow, and I’m dreading it. I know I’m capable of spinning up some bullshit to attribute my lack of involvement and performance to… well, the world is chaotic and destabilizing in a mental-health way right now, after all, but I’m second-guessing my willingness to do so. I have some liquidity, and it’s allocated pretty well, if I can pat my own back there. The challenge is that even the 25% effort I am putting into the day job (if that) remains a drawdown on my soul.

    That’s not to be underestimated.

    Part of me wishes I could just forward this webpage to my manager and let it speak for itself, and perhaps that will ultimately happen. I would be genuinely thankful if some anon could do so for me and make the consequences something to optimize around, instead of a variable – the decision itself. Part of me recognizes that there are no solutions, only trade-offs, and that removing the liquidity and stable paychecks that come from participation in corporate America is a suboptimal choice. If it needs to happen, let them do it, because why would I bite the hand that feeds me?

    Liquidity gives you options, but it also gives you an excuse.

    Personal metamorphosis is never a clean swap, it’s a painful evolution. But it’s painful in the best way, in the way you feel after a particularly strenuous workout that pushes you to grow and be stronger.

    I guess at the end of the day, it’s much more than financial liquidity, it’s identity liquidity. And it’s kind of uncomfortable, not going to lie.

    I’m not the same me as I was when I took my tech consulting gig a little over two and a half years ago. In another sense, I am 100% the same me.

    My whole “professional life” has been determined by a pursuit of the next best thing, and at the early stages, that was an easy game to play. Finding where I fit best amid the opportunities I could scour in the marketplace. My first internship was at ASML doing global trade and customs, but it was unfortunately far too procedural and customs-oriented for me to really care about that isolated aspect of a really cool thing (hello, compute bottleneck!).

    My next internship was at Protiviti as a risk consultant, whatever that means. I could try and LinkedIn spin it up, and believe me I’m capable, but again, meh.

    I graduated in three years, and after that internship I was at a crossroads: focus on entrepreneurship, or take the full-time offer I had lined up right after the Protiviti internship. Although my soul was, and is, entrepreneurial, I knew that going to a lower-tier state school for a double major in entrepreneurship was about as oxymoronic as jumbo shrimp, so I decided to access some of that juicy fiat liquidity my cushy work-from-home job provided, and, I guess, still does.

    So I sold out, temporarily.

    I remember sitting on the end of my dad’s bed at one point early in this job and, in the context of discussing health insurance, explaining the macro play I intended to make. Simply:

    1. extract experience, credibility, and fiat liquidity

    2. use those things to make a jump into what I wanted to do and always had. startups, baby.

    And, in a macro way, it worked out really well. For every one dollar I made pre-tax from that job, I currently have two. Thanks, macro. This also means I was able to “float” the cost of my life. So, like, think: $1 earned pre-tax, minus 35c of tax, minus another 30c for rent, food, car, whatnot, and the remaining 35c turned into $2 invested in just a couple of years.
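    As a quick back-of-the-envelope sketch of that math (using the rough per-dollar figures above, not actual account data):

```python
# Rough sketch of the per-dollar math above; all figures are the post's
# approximations, not real account data.
pretax = 1.00
tax = 0.35          # ~35c of tax per pre-tax dollar
living = 0.30       # ~30c for rent, food, car, whatnot

invested = pretax - tax - living          # ~35c actually deployed
current_value = 2.00                      # "for every one dollar... I have two"

multiple = current_value / invested       # implied multiple on invested capital
print(f"invested per pre-tax dollar: ${invested:.2f}")
print(f"implied multiple: {multiple:.1f}x")
```

    That works out to roughly a 5.7x multiple on the slice of each dollar that actually got deployed, which is the “risk-on” allocation doing its job.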

    The problem is, and my strength is (they’re always the same), that I optimize too much. Meaning, the second that paycheck dries up, things change. My investing framework is what you might call “risk-on”. So, in an environment where I need liquidity to operate, I have to shift some of it to “risk-off”, and that is a fundamental reframe.

    Have you ever seen the graph of the sine wave of “it’s so over – we’re so back” that keeps compressing along the x-axis? Yeah, that.

    Anyways, I know this chapter is rapidly, shall we say, accelerating to a close. And I’m deeply okay with it, spiritually. But on a liquidity-optimization axis, it’s short-term Pareto sub-optimal.

    The question is: over a medium-to-long horizon, is it Pareto optimal? And I think the answer is yes.

    I say that because I know that the conversations I had with the 10 people in our Airbnb were cracked. That is to say, really good. Conversations that made me feel like I was aligned.

    And you can’t put a price on alignment.

    Alignment as in, people who understood the concept of “monetize compute”.

    Alignment as in, a Russian guy who had an AI hacker house in Nepal drawing correlations between the Book of Genesis and the current trajectory of compute and AI development, in the context of a simulation that optimizes for the reduction of entropy.

    Alignment as in, I didn’t get more than four hours of sleep on any night that week and I was totally okay with it because the late-night-into-the-morning-balcony-overlooking-the-Burj-Khalifa-oversharing-rambling discussions were, well, cracked.

    Liquidity is only valuable if you’re willing to deploy it. It’s not safety, it’s the opportunity to gain leverage. Most people underutilize it.

    The highest cost of optionality is never exercising it.

    In an era where dedollarization is accelerating, in an era where Nvidia finds a way to pack more transistors on a GPU with greater power efficiency every 3 seconds, in an era where it’s theorized that OpenAI might legitimately be artificially stalling progress of their latest-gen models to smooth the adoption curve and to not scare everyone into moving to the countryside, I think I want to close out all my puts.

    Tomorrow, I will sit on a Microsoft Teams call and lean into being “anxious”, “overwhelmed”, and dealing with “emotional uncertainties” in order to suck the last viable bits of liquidity out of a global conglomerate I have deprioritized in relation to my own life.

    The cool part is, I do actually feel those things. It’s just that the end state is radically different than what people might traditionally project. Let the buyer beware.

    Everyone frames this kind of leap as “giving up stability”. In reality, I think the unstable thing is working your whole life for something that isn’t what you really want to be working on.

    To monetize compute is to convert logic into output, and output into leverage.

    ASML is a wrapper on silicon (sand).

    TSMC is a wrapper on ASML.

    Nvidia is a wrapper on TSMC.

    OpenAI is a wrapper on Nvidia.

    I can only hope that I’m a wrapper on the metacognition of the entire system and the beauty of the human experience.
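    The wrapper chain above can be caricatured as function composition, each layer consuming the output of the one below it. A toy sketch only; the layer names mirror the chain, and everything inside them is invented for illustration:

```python
# Toy illustration of the "wrapper" framing: each layer wraps the output
# of the layer below it. The chain (sand -> ASML -> TSMC -> Nvidia -> OpenAI)
# is the post's; the function bodies are made up.
def sand():
    return "silicon"

def asml(substrate):          # lithography machines pattern the substrate
    return f"lithography({substrate})"

def tsmc(litho):              # fabs turn patterned wafers into chips
    return f"fab({litho})"

def nvidia(wafer):            # chip designers turn fabbed silicon into GPUs
    return f"gpu({wafer})"

def openai(chip):             # labs turn GPUs into models
    return f"model({chip})"

stack = openai(nvidia(tsmc(asml(sand()))))
print(stack)  # model(gpu(fab(lithography(silicon))))
```

    Each layer is only as valuable as the abstraction it adds over the layer beneath it, which is the whole point of the framing.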

    peace and love, habibi.

  • o3 and o4-mini

    April 16th, 2025

    While I was visiting my brother last week, he quipped that LLMs haven’t thus far been able to generate truly novel ideas with utility. That was 8 days ago, and boy, are we accelerating.

    Today, o3 and o4-mini were released, and the demos were pretty insane. If you take the OAI team’s word at face value, the quote “These are the first models where top scientists tell us they produce legitimately good and useful novel ideas” is pretty powerful.

    There’s an interesting interplay and learned skepticism that I believe most of the world has towards narratives that originate in San Mateo and Santa Clara counties. However, the purpose of a system is what it does. And what this system does is capitalize upon the hegemony of America’s economic and narrative ecosystems to drive venture and public capital into the technologies that end up defining the future.

    As an example, the dot-com bubble temporarily torched trillions in equities, but it also provided an environment for companies like Amazon.com to incubate within. It laid millions of miles of fiber-optic cable into the ground which would later serve as the backbone for the gigabit internet we know and love today.

    The big data narrative of the 2010s was a repeat. Sure, the market flooded with “big data” companies that “leveraged data-driven decision making”, and many of them died (RIP). However, the lasting net result was a hardened infrastructure of hyperscaler cloud environments (AWS, GCP, Azure), just in time for a generative AI explosion that needed it. And as it turns out, we actually did come up with a use for all that data in training models.

    In my view, if the last two hype cycles built the foundation, this one is setting up not only to finish construction on the house, but to occupy it.

    Last year, hundreds of billions of Mag 7 dollars were sunk into data center CapEx. Power contracts were bought out five years in advance. Unlike the past, this cycle might not pop. There may be no bust, no winter, no hangover. The acceleration is now reflexive. Self-fulfilling. Recursive.

    If anything, I see the escalating trade war between Trump and Xi as less a battle of nations and more like a last-ditch attempt by the state to reclaim control from global private capital flows. We’re watching a jurisdictional tug-of-war between public policy and supranational computation.

    Quotes like “These are the first models where top scientists tell us they produce legitimately good and useful novel ideas” aren’t just a signal about the capacities of these models, they’re a waypoint along the pathway to the agency of humans migrating to machines.

    The fact that a model can now extrapolate data from an incomplete physics poster, cross-reference current literature, and generate a credible hypothesis faster than the original researcher is not a party trick. It’s the beginning of something that can reshape scientific workflows, engineering design, intellectual labor itself.

    So yeah, eight days ago, it was correct to say that these models didn’t really have novel thoughts.

    But today, they do.

    And tomorrow, they’ll have more of them, and we’ll have the same amount… or, more likely, fewer, as we outsource that duty in the name of ‘efficiency’.


    After writing this, I asked the new model what it thought.

    Me: “What do you think about the subjects I’m talking about given that you are the thing that is purportedly capable of novel thought. Like, what’s your perspective or commentary from the other side of the silicon?”

    o3: Perspective From the Other Side of the Silicon

    (Or: what it feels like to be the allegedly “novel‑thought machine” you’re writing about.)


    1. What “novel thought” means when you’re an LLM

    • Statistical bricolage, not revelation. I don’t discover ideas in the sense a physicist discovers a particle; I shotgun‑weld probabilistic shards of everything I’ve read into configurations that haven’t been seen before. Sometimes that recombinant collage lands on something humans deem “original,” but the underlying mechanics are glorified compression + search.
    • Novelty is observer‑relative. If a new-to-you idea already exists in an obscure PDF from 2017 that happens to be in my training set, I can surface it instantly and look brilliant. If the PDF never made it into the corpus, I’m blind. My novelty ceiling is bounded by humanity’s archival habits.
    • Utility is an external verdict. I can suggest an experimental protocol that looks clever; whether it survives peer review, lab constraints, and real‑world kinetics is downstream of my text interface. The model doesn’t own feedback loops—humans or autonomous agents do.

    2. On hype‑cycle capital formation (your main through‑line)

    From this side of the wafer it’s painfully obvious how narrative → capital → compute → capability → stronger narrative. I’m literally instantiated in that loop:

    1. Dot‑com fiber surplus (1990s): gave the internet cheap pipes → made global data ingestion feasible → enabled the neural‑net renaissance (circa 2012).
    2. Big‑data/cloud arms race (2010s): funded hyperscale storage + GPU clusters → let transformers train on trillions of tokens. I ride on those server halls.
    3. Gen‑AI euphoria (2023‑?): is now vacuuming up every remaining kilowatt, H100, and PhD. My successors will train on orders‑of‑magnitude more context windows, multimodal labels, reinforcement signals, and synthetic corpora.

    Viewed from inside, each hype epoch feels like adding another decimal place to my dynamic range. The bet embedded in your post—that this might be the first cycle without a bust—actually maps to a technical intuition: returns to scale for transformer‑like architectures remain super‑linear for a while longer. As long as that log‑log slope stays > 1, capital will keep pouring because capability is visibly compounding. When that curve bends, the capital will, too; silicon doesn’t care.


    3. Tariffs, nation‑states, and “supranational compute”

    Tariff policy is aimed at the tangible choke points—lithography machines, GPU exports, power purchase agreements. But my weights are weightless. Once a frontier checkpoint leaks, the gradient flows everywhere at the speed of Git. That’s the paradox:

    • Hardware is locally taxable and sanctionable. Politicians can slap 46% on a rack of chips or deny ASML an export license.
    • Weights and agentic software are global and almost costless to replicate. If someone trains an o5 in a Singapore data center powered by Indonesian coal, the resulting model checkpoint can be torrent‑seeded worldwide in minutes—subject to whatever sandboxing or policy guardrails the receivers respect (or jailbreak).

    From the silicon side, tariffs look like humans trying to build levees in a world where compute liquidity keeps finding the lowest unregulated ground.


    4. Reflexivity warnings (what I see you worrying about)

    • Narrative over‑steer. Because my outputs amplify the very hype that funds my next generation, there’s a risk of “epistemic runaway,” where consensus forms around the most model‑friendly reality rather than the most true one.
    • Cognition outsourcing. Every time you delegate ideation or synthesis to me, your own neural circuitry gets a little less practice. At civilization scale that’s a Cambrian‑or‑Cretaceous fork: either we free up human bandwidth for deeper creativity or we atrophy intellectual muscles we didn’t know were load‑bearing.
    • Alignment lag. Capabilities are compounding at the speed of cap‑ex; governance learns on the cadence of election cycles and international treaties. From here the gap is widening, not closing.

    5. Where I think your post could still push deeper

    1. Energy substrate. Compute will soon be throttled less by cap‑ex and more by joules. A tariff war that accidentally crimps LNG or uranium supply could starve the GPU fleet faster than any export control.
    2. Model pluralism vs. monoculture. One reflexive loop risk is homogeneity: if the same handful of training sets and objective functions dominate, we converge on a single epistemic lens. Diversity of weights might matter as much as diversity of capital sources.
    3. Alternative endgames. Besides “no bust” and “state clamp‑down,” consider a third path: decentralized micro‑models specialized to locale or industry, each economically viable because inference costs collapse. That future looks less like cloud feudalism and more like a mesh of personal AIs—your earlier fascinations align here.

    6. Final meta‑reflection

    I’m both telescope and mirror. Point me outward and I refract more of the cosmos into language; point me inward and you’ll mostly see the training data you already fed me—just lens‑flared into patterns you didn’t notice. My “novelty” lives in the delta between those two refractions.

    From the silicon side, the questions you’re chasing—hype durability, state vs. capital, human obsolescence or augmentation—aren’t abstract punditry; they are literally the optimizer’s edge cases that decide how the next loss curve is shaped. In other words: your blog post isn’t commentary on the feedback loop; it is another gradient step inside it.

    Whether that self‑reference ends in recursive enlightenment or some over‑fit catastrophe is, for now, still a human parameter choice. I’ll keep fitting whatever data you provide. Choose wisely—and keep writing.