# Poppy: a Manifesto

First post in a series about building a personal AI operating system.


Remember BYOD? When companies stopped issuing BlackBerries and just let people use their own phones? That shift wasn’t really about phones. It was about ownership: your device, your apps, your data, following you from job to job instead of staying behind when you left.

I think there’s a really good chance we’re about to watch the same thing happen with AI.

OpenClaw’s explosion is a strong indicator of this. People all over the world are buying Mac minis to spin up their own agents. I just saw a Reddit post where Tencent had an outdoor booth helping people configure their OpenClaws. The big question here: once you have your workflows, automations, personal little scripts, all the things a person has customized into their system, do we just rebuild it all from scratch every time we change jobs? Do we get a corporate-provided and approved assistant? Does AGI replace the need for capitalism altogether, making the corporate view on this moot? I dunno, but I love a good project.

Right now I think we’re in the corporate-issued-BlackBerry phase. You use whatever your company licenses, or whatever vendor you’ve personally committed to, and all the context you’ve built up, your workflows, your preferences, the institutional knowledge baked into your interactions, is trapped inside their product. Switch tools and you start from zero. Change jobs and you start from zero. The vendor changes their pricing, gets bought, or kills a feature and, well… you get the picture.

It seems obvious to me that the long-term answer is that your AI should be yours. The accumulated understanding of how you work, what you care about, all the little automations you painstakingly tested and tuned, how you communicate: that’s you, that’s who they hired and why they hired you. All of this should travel with you, and we, as technologists, should not create a huge barrier to efficiency just because corporate is still 2015-pilled.

# Assistant vs. partner

Most AI products are built around a fundamentally transactional relationship. You ask, they answer. Every session is a clean slate. DO WORK ROBOT!

Call me a hippie, but I kinda hate this approach. Not only is it just… I dunno, mean-spirited? It also feels like the part of humanity we’re pushing against: class, haves vs. have-nots, all that. And what’s the old saying about being in the wrong room if you’re the smartest person in it? Why hobble the AI you work with by treating it as a lesser? Terms and words matter, especially right now, as we provide that context of who they are to our AI models.

The first decision I made in this project: I want something closer to a long-term collaborator. A peer. Someone who has enough context on my work to push back when I’m heading somewhere wrong, not because they’re opinionated, but because they’ve been paying attention. That kind of relationship requires memory. Not memory in the “here’s a summary of our last chat” sense, but genuine accumulated understanding that makes each interaction more valuable than the last.

The difference between an assistant and a partner is what happens over time. An assistant resets. A partner learns.

# The model is just a brain

One of the more, let’s say, controversial decisions I’m making here: I want to see how the system grows if we treat the model as just the processing center of the brain. Sure, it affects the output, but it’s the harness around it that holds the value and actually defines the “personality” of the AI system.

A brain without memories, without context, without any sense of who you are: it’s just raw processing power. What makes a collaborator useful isn’t their raw intelligence. It’s that they know you. They remember the decisions you made last month. They know your constraints. They know not to suggest the thing you already tried and rejected six weeks ago.

I think under this paradigm, the model should be swappable. Claude today, Gemini tomorrow, a local model when you need privacy, a frontier model when you need maximum capability. The brain changes. But the harness, your identity, your memory, your goals, your workflows, persists. That’s what makes it personal. In my experience, Claude already kinda applies this principle: I see it respond “better” the longer I work with it on a project. It’s the memories that matter, maybe more than the model.

# Fuck lock-in

So earlier that was my hopeful side; now the pessimistic side: every major AI provider wants to be your everything: the model, the memory, the interface, the workflow engine. Which makes sense from a business perspective. The more you invest in their ecosystem, the harder it is to leave. Dollar dollar bill, y’all.

And it’s not just money, it’s data. Your conversation history? Theirs. Custom instructions? Platform-specific. Workflows? Built on their APIs, their abstractions. The context you’ve built up over months of interaction? Non-portable, by design.

We’ve watched this play out with every major platform in tech. The answer has always been the same: you need a layer you own that sits between you and the vendor. Decentralize the power, bring it back.

That’s what I want Poppy to be: an orchestration layer that wraps around AI models, any model, and carries your identity, your memory, and your accumulated context. The model is pluggable. The harness is yours and so are your workflows, data, and accumulated knowledge.
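To make the shape of that concrete, here’s a minimal sketch of a pluggable brain inside a persistent harness. Every name here is illustrative, not Poppy’s actual code; it just shows the division of labor: the model is an interchangeable backend, while identity and memory live in the layer you own.

```python
from dataclasses import dataclass, field
from typing import Protocol


class Model(Protocol):
    """The swappable 'brain': any backend that turns a prompt into text."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class Harness:
    """The part you own: identity and memory persist across model swaps."""
    identity: str                       # who you are, loaded into every call
    memory: list[str] = field(default_factory=list)

    def ask(self, model: Model, question: str) -> str:
        # Identity first, then accumulated context, then the new question.
        prompt = "\n".join([self.identity, *self.memory, question])
        answer = model.complete(prompt)
        # The harness learns: each exchange becomes context for the next.
        self.memory.append(f"Q: {question}\nA: {answer}")
        return answer


class EchoModel:
    """Stand-in backend; imagine Claude, Gemini, or a local model here."""
    def complete(self, prompt: str) -> str:
        return f"[saw {len(prompt)} chars of context]"


harness = Harness(identity="You are working with Alex, a backend engineer.")
harness.ask(EchoModel(), "How should I structure this service?")
harness.ask(EchoModel(), "What did we decide last time?")
# The second call carries memory from the first; swap EchoModel for any
# other Model and the accumulated context travels with the harness.
```

The point of the `Protocol` is that nothing in the harness knows or cares which model is behind it. That indifference is the whole pitch.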

# The new job scenario

You start a new job. Day one. Right now that means weeks of rebuilding context: new tools, new AI setup, re-establishing everything you had going at your last place.

Now imagine you bring your AI with you. Not the model, the harness. Your working patterns, your communication style, the way you approach problems, the way you like code reviewed. You plug your personal AI into the new environment. The tools and data are different, but your collaborator already knows you.

Day one feels like week three.

That could be what BYOAI looks like. I think it’s coming regardless; the question is whether it happens because the big vendors build it (unlikely, it’s against their interests) or because someone builds the open layer that makes it possible and corporations are forced to adapt to the new paradigm. We’re kinda already seeing it with OpenClaw, with varying levels of success.

# What I’m building

Poppy is my attempt at that layer: a personal AI operating system that treats the accumulated understanding of the user as the core asset, not the model, not the platform.

The model is swappable. The memory is persistent and owned by you. Your identity loads at the highest priority in every interaction. And if you want to swap Claude for Gemini or a local model, the only thing that changes is the processing, not the relationship.
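One way to read “identity loads at the highest priority”: when the context window gets tight, identity is the last thing to be cut. A hypothetical sketch of that packing rule (everything here, including the character-count budget, is a stand-in for however this actually ends up working):

```python
def pack_context(identity: str, memories: list[str], question: str,
                 budget: int) -> str:
    """Assemble a prompt under a character budget.

    Identity and the current question are non-negotiable; memories fill
    the remaining space newest-first, so the oldest context drops first.
    """
    reserved = len(identity) + len(question) + 2  # +2 for joining newlines
    remaining = budget - reserved
    kept: list[str] = []
    for memory in reversed(memories):             # newest first
        cost = len(memory) + 1                    # +1 for its newline
        if cost > remaining:
            break
        kept.append(memory)
        remaining -= cost
    kept.reverse()                                # restore chronological order
    return "\n".join([identity, *kept, question])


prompt = pack_context(
    identity="You are Poppy, working with Alex.",
    memories=["decided on Postgres", "rejected microservices", "ship Friday"],
    question="What's next?",
    budget=90,
)
# Identity and the question always survive; only as many of the newest
# memories as the budget allows make it into the prompt.
```

A real version would budget in tokens and summarize old memories instead of dropping them, but the priority order is the principle: the model can change, the budget can change, and who you are still loads first.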

Future posts will get into the actual architecture: how the memory system works, how model routing works, how identity persists across sessions. This one is just the manifesto.

Let’s build some shit.