Parenting wisdom for product managers, powered by Lenny's Podcast

Transparency

The AI Builder's Manifesto

This entire product was built by one person with AI. Here's what that means for you — the good, the honest, and the principles behind it.


What “built with AI” means

Tiny Stakeholders is what some people call a “vibe-coded” project. Every line of code, every database query, every UI component was written in collaboration with AI — specifically, Anthropic's Claude. I describe what I want to build, the AI writes the code, and I review, test, and iterate until it works the way it should.

I'm not a software engineer by training. I'm a former UX designer turned product manager with a builder's heart. AI gave me the power to turn ideas into a real, working product — but with that power comes a responsibility to be transparent about how it was made and what that means for you.

This page exists because I believe you deserve to know. And because I think more non-technical builders will follow this path — and they should do it responsibly.

What we did to keep it safe

No user accounts, no stored data

Tiny Stakeholders doesn't require sign-up and doesn't store personal data. All tips are public. There's nothing to leak because we don't collect anything.

Responsible AI usage

AI features are powered by Anthropic's Claude, which does not use your data for training. All AI-generated content is clearly labeled as such — it's parenting humor, not parenting advice.

No exposed secrets

API keys, encryption keys, and credentials are stored as environment variables — never in the codebase. The repository is audited to ensure nothing sensitive is committed.
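As a minimal sketch of what this looks like in practice (assumed setup, not the product's actual code — the variable name `ANTHROPIC_API_KEY` is illustrative), secrets are read from the environment at startup and the process fails fast if one is missing, so a key never needs to appear in the repository:

```typescript
// Read a required secret from the environment; throw immediately if it is
// missing so a misconfigured deploy fails loudly instead of silently.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Hypothetical usage; the real app's variable names may differ:
// const anthropicKey = requireEnv("ANTHROPIC_API_KEY");
```

Failing at startup rather than at first use means a leaked-or-missing key is caught before any request is served.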

Open about the source

Every tip links back to the actual Lenny's Podcast episode it came from, with real quotes and timestamps. You can verify anything we say.

Not anonymous

This project is built by Ondrej Machart — a real person you can reach anytime. I'm accountable for what I ship. Find me on LinkedIn.

What you should know

Transparency means naming the risks, not just the safeguards.

AI-generated parenting tips are meant to entertain and prompt reflection, not replace real parenting advice. The parallels between PM work and parenting are played for laughs — don't take them literally.

This is a solo project built with AI assistance. It has less peer review, less QA, and fewer eyes on it than software built by a traditional team. Bugs and edge cases will exist.

The content depends on third-party sources (Lenny's Podcast transcripts) and services (Anthropic for AI, Vercel for hosting). If any of them change, the product is affected.

AI-assisted code can introduce subtle issues that are hard to catch without deep expertise in every domain. I mitigate this with testing, code review with AI, and iterating — but it's not the same as a dedicated engineering team.

10 principles I follow as an AI builder

These are the commitments I make to myself and to you. If you're building with AI too, maybe they'll resonate.

1

Ship something real, not a toy

AI makes it easy to generate code. That doesn't mean it's easy to build something useful. Every feature should solve a real problem for a real person — not exist just because the code was cheap to produce.

2

Understand what you're building, even if you didn't write every line

I may not write every function by hand, but I understand what every part of the system does and why. If I can't explain it, I don't ship it.

3

Security is not optional just because you're not a security expert

Encryption, authentication, data isolation, input validation — these aren't nice-to-haves. If you're storing people's data, you owe them the same diligence a traditional engineering team would provide.
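To make "input validation" concrete, here is an illustrative sketch only (the function name `sanitizeFeedback` and the 2000-character limit are hypothetical, not taken from the product): untrusted input is checked and cleaned before it ever touches storage or a query.

```typescript
// Validate and clean untrusted user text before storing or querying it.
// Illustrative example; limits and naming are assumptions.
function sanitizeFeedback(raw: unknown): string {
  if (typeof raw !== "string") {
    throw new Error("Expected a string");
  }
  const trimmed = raw.trim();
  if (trimmed.length === 0 || trimmed.length > 2000) {
    throw new Error("Feedback must be between 1 and 2000 characters");
  }
  // Strip control characters that have no place in user-supplied text.
  return trimmed.replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F]/g, "");
}
```

The point is the habit, not this particular function: every boundary where outside data enters the system gets an explicit check, whether or not a framework already does some of it for you.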

4

Be honest about what AI generates

When AI produces insights, recommendations, or content for users, make it clear. Never present AI output as human expertise or clinical truth. It's a mirror, not an oracle.

5

Treat user data like it's your own diary

People share vulnerable things — what drains them, what they're struggling with, what they dream about. Encrypt it. Don't sell it. Don't train on it. Don't peek at it.

6

Don't hide behind anonymity

Put your name on it. If something goes wrong, people should be able to reach you. Accountability is the minimum bar for trust.

7

Test like a user, not like a developer

Click every button. Try the weird flows. Break it on your phone. AI can write tests, but only a human can feel when something is off.

8

Admit what you don't know

I'm not a security engineer, a data scientist, or a therapist. This product is built with care and best practices, but it's one person with AI — not a team of 50. Transparency about that is more trustworthy than pretending otherwise.

9

Optimize for insight, not engagement

It would be easy to gamify this. Send notifications. Add streaks. But the goal is genuine insight, not screen time. Respect the user's attention.

10

Keep it free as long as you can

Not everyone who needs a laugh and a moment of recognition can afford another subscription. If the tool is useful, make it accessible. Find other ways to sustain it.

Use this manifesto for your own project

Building with AI too? Feel free to adapt these principles for your own product. Copy the manifesto below — it's a generic version without product-specific references.

Something feels off? Let me know.

If anything doesn't work, seems wrong, or makes you uncomfortable — I want to hear about it. Reach me directly:

Ondrej Machart on LinkedIn

Built with care, AI, and a lot of nap-time iteration.

Explore parenting tips