St. Petersburg, FL · Solo Founder
Frank Anthony Caruso, Jr.
Building AI systems that give creators and solo founders an unfair advantage. Currently: vyns.ai.
New here? Read how I got here →
About
Builder. Thinker. Occasionally a writer.
Always an entrepreneur.
I'm a solo founder based in St. Petersburg, Florida. I build AI-powered systems for markets where complexity and opacity work against ordinary people. I publish the ideas I can't build fast enough so someone else can.
I've worked inside startups, worn a lot of different hats, and spent years accumulating a clear picture of what I wanted to build and how I wanted to build it. AI gave me the tools to finally do it on my own terms. That combination, hard-won perspective plus a genuinely new set of capabilities, is what drives everything here.
I'm an infrastructure thinker by instinct. The ideas I keep returning to (verification systems, trust layers, reputation infrastructure) all trace back to the same observation: most markets would work better if the people in them had better information and fairer access. That thread runs through most of my ideas, and it's the motivation behind how I execute. VYNS is the current expression of it.
Outside of building: I've written and self-published a mystery novel, and I share everything I'm learning in public because ideas that stay private don't do anyone any good.
Follow the build
Follow along as I build. I'll write when something's worth sharing.
How I Build
Morning
Think + catch up
Every morning starts with AI. Catching up on what happened overnight, reviewing where things stand, getting my head right for the day. Some mornings I take a walk and deep dive into the build. Those conversations get published here.
Midday
Build + ship
Afternoons are for building. I work with multiple AI tools as collaborators, each for different jobs. I don't write every line of code myself. I direct the build, make the judgment calls, and ship something every day.
Evening
Review + write
Evenings are for reviewing what shipped, writing observations, and updating the build log. If something surprised me or changed my thinking, it becomes content here.
The relationship between me and AI isn't one-directional. I give it context, constraints, and taste. It gives me speed, breadth, and a second perspective I can push back against. We build together. VYNS is what comes out of that process.
Active Work
vyns.ai
The AI Operating System for Solo Founders and Creators
VYNS analyzes how a creator or solo founder runs their business, generates a personalized AI implementation roadmap, and tracks whether they actually execute it. The gap between knowing what to do and doing it is where most advice fails. VYNS closes that gap. Free roadmap, $97 full report, $99/mo living dashboard that gets smarter the longer you use it.
vyns.ai →
Current focus: creator enrichment pipeline and subscriber-scale intelligence. Follow the build in real time on the build log.
Traction
Building in public
Real numbers from a real build. Updated live from the VYNS database. Admin-generated data is excluded. What you see is what real users created.
Ideas
Published to establish origin · Free to build
I publish ideas the way some people keep a private notebook, except I leave the notebook open. These aren't pitches. They're observations about broken systems, combined with a sketch of what fixing them might look like. If one of these solves a problem you have, build it. I just ask that if you start building, you let me know. Otherwise I might eventually come back to this list and duplicate your efforts. Plus I'd love to hear how it's going for you.
Voice Studio
/ AI Creative Partner That Writes in Your Voice
Upload your past writing. The AI learns your sentence structure, vocabulary, pacing, tone, and thematic tendencies, then helps you develop stories, draft chapters in your voice, design covers, format ebooks, and market your work. Not AI writing books. AI becoming your creative partner so one person can run an entire publishing studio.
RIAninja
/ Compliance SaaS for Registered Investment Advisors
A compliance tracking platform for RIAs: automated annual review scheduling, weighted risk scoring, client compliance dashboards, 90/60/30-day reminders, crypto custody integration, and PDF generation. The commercial wedge into a larger post-quantum custody infrastructure play.
Startup Ranking Platform
/ LLM-Evaluated Ideas, Surfaced for Investors
Founders opt in to have their startup ideas and development progress ranked by LLMs by sector and stage. Investors, incubators, and accelerators pay for access to the top-ranked ideas and teams as they emerge, before they raise, before they're visible.
Influencer/KOL Intelligence Market
/ Bridging Crypto Influencers and Verifiable Performance
A marketplace connecting crypto projects with Key Opinion Leaders, built around verified performance data rather than follower counts. Influencers ranked by actual outcomes: did the projects they promoted perform? Did their audience act? Trust infrastructure for the influencer economy.
SphereOS
/ The Spatial Operating System for Your Digital Life
A reimagined computing interface that replaces app grids, tabs, and folder hierarchies with a zoomable 3D space of interactive orbs. You don't open apps. You navigate a living map of your digital world, guided by an AI consensus engine that routes queries to multiple models simultaneously.
The Seller-Finance Marketplace
/ Matching Buyers and Sellers Who Don't Need a Bank
With mortgage rates elevated and conventional financing friction at a decade high, seller financing has re-emerged as a viable path for many transactions. A marketplace to surface, structure, and close seller-financed deals with AI-assisted underwriting and document generation.
PopCity
/ The Operating System for Real-World Cities
Not a dating app. Not an events app. The real-world connection layer for cities, connecting people to things to do, people to do them with, local businesses, services, creators, and economic opportunities. AI-orchestrated, city by city. The MVP wedge: what's happening in my city and who can I go with?
AetherNet
/ Post-Quantum Crypto Custody Infrastructure
A proposed protocol layer for post-quantum secure crypto custody, designed for institutions and RIAs managing digital assets as quantum computing threatens current encryption standards. Positioned as a Fireblocks competitor for the post-quantum era, entered via a SaaS compliance wedge.
Purple Dog Listings
/ AI-Assisted FSBO for Sellers Who Don't Need an Agent
Use AI to help homeowners declutter, stage, and photograph their listings without a traditional agent. The roadmap extended into full FSBO support: offer management, disclosure automation, seller-finance structuring. The idea that occupied nine months of my mental energy while I was nominally working as a real estate agent.
Cities Powered by Their Own Trash
/ Municipal Plastic as Resilience Infrastructure
A systems-level proposal for municipal plastic-to-fuel resilience grids: continuous pyrolysis plants, strategic buried fuel reserves, and neighborhood-scale generation networks that keep power on when the grid fails. A city of 100,000 generates enough plastic waste for 200+ days of emergency generation runtime. The engineering exists. The will doesn't yet.
Home-Scale Solar Pyrolysis
/ Turning Household Plastic into Fuel
A detailed engineering breakdown of how the solar thermal concentration principles behind large-scale plastic-to-fuel systems can be miniaturized for household use: parabolic dish concentrator, sealed pyrolysis reactor, condensation train, zeolite catalyst upgrade, and realistic fuel yield. Designed to be built for under $1,500 in components.
The Prediction Reputation Score
/ Accountability for Analysts and AI Models
A standardized, public reputation layer for anyone making falsifiable predictions: analysts, influencers, AI models. Think credit scores, but for forecast accuracy. The original idea behind VYNS, and a long-term data infrastructure play in its own right.
Groundwire
/ Verified Presence News
A live news network where only people physically present at an event can report what's happening. Traditional news seeds a story. Verified eyewitnesses on the ground provide the real-time updates. Location-verified, credibility-scored, proximity-ranked.
Observations
Not ideas, findings
Ideas are hypotheses about what could exist. Observations are what I've actually found while building: gaps, unexpected capabilities, design patterns that work, limitations that don't get talked about.
When you're deliberating, actively researching, and pivoting strategy inside a single LLM chat, you end up with code or CLI prompts generated earlier in the conversation that are no longer relevant: further down, you changed direction, which generated new code going a different way. The difficulty is compounded when you work the way I do, strategizing from my phone and then moving to my computer to implement whatever came out of those sessions. You either run all the code sequentially to make sure you don't miss anything, or try to remember which parts of the conversation are obsolete after a mid-session pivot and which are still applicable. It would be a genuinely useful feature for builders if these models automatically detected code or prompts that no longer match the current direction of the build and struck them through, so you don't have to track this manually or waste credits and time running prompts you no longer need.
Implication: The gap between mobile strategy sessions and desktop implementation is real friction that gets worse the longer and more iterative the conversation gets. A model that could visually mark obsolete code blocks based on detected pivots would meaningfully cut down on implementation errors and wasted effort for anyone building across devices.
During the process of building my personal website and fleshing out content to publish there and use on social media (for both my personal accounts and VYNS), I realized something. Most of it came from conversations with AI, and none of it could be easily exported to an agent I could run for discovery, to find which thoughts and ideas across the different large language models I use are worth publishing as content. The only way to do it is to manually copy-paste, which is time-consuming. OpenAI has an export function and I could route that to an agent, but there would be a lot of context drift and training required to get it right, and that's only one model. This is another major limitation of every major LLM platform being a closed system. As a solo founder bootstrapping with basically zero funding, I'm always looking for practical, cost-effective ways to save time while building. For most solo founders, I imagine creative thinking and problem-solving happens inside these closed systems. I've seen a move toward more open-source models lately, but I haven't done much investigative work there yet. What I do know is that under the current landscape of closed models, a layer that sits above everything and treats your conversations across every platform as a unified stream of thought is easy to imagine but currently impossible to build, because the APIs don't expose read access to your own private data. For now the workaround is obvious and a little more time-consuming: I monitor my chats, and when something good surfaces I copy-paste it into my personal branding chat and evaluate it as content for my website or socials. Hopefully an option opens up soon that allows for automating some of this. It would free up time and, as the system gets optimized, make sure nothing gets missed.
Implication: When the portability layer for AI conversations does get built, it changes everything about how personal knowledge management works. The person who builds it owns something significant. Until then, the best system is the simplest one: notice when something valuable surfaces, capture it immediately, process it later.
Native LLM platforms that have built-in voice chat functionality don't offer transcription as a built-in feature or add-on. This could be useful for power users or anyone generating specific outputs from their conversations for media and marketing. This morning I spoke with ChatGPT on my morning walk. I'm turning that into a podcast and have to use an external transcription service just to get the text for my site. A tool that hooks into the API, captures voice chat in real time, and parses it into publishable content (blog posts, podcast transcripts, build logs) would be valuable. I might eventually build this into VYNS product offerings down the road if I don't see it emerge on its own within native LLM offerings.
Implication: The raw material exists. The infrastructure to capture it doesn't. Every power user having hour-long voice conversations with AI is generating valuable thinking that disappears the moment the session ends. Whoever builds native voice-to-publishable-content tooling, whether the LLM platforms themselves or a third party, captures a workflow that's already happening manually at significant friction cost.
Anthropic markets the 1M token context window as a capability milestone. And it is one. Being able to feed an entire codebase, a full document collection, or months of conversation history into a single API call is genuinely useful, especially when your whole value proposition depends on deep context, as mine does with VYNS. But for most of the last year, that capability came with a hidden trip wire: once a request crossed 200K tokens, the entire call shifted into a premium pricing tier, a 2x multiplier applied retroactively to the whole request, not just the overage. Anthropic removed that surcharge in March 2026. That's a good move. But the way it worked before, and the way the change was communicated (a pricing page update, not an announcement), says something. Around the same time, they quietly adjusted how usage limits burn during peak hours. If you're building on weekday mornings Pacific time, your session capacity is used up faster than at other times, while your weekly total stays the same on paper. There's no real-time visibility into this; a lot of users are frustrated and hoping Anthropic provides a dashboard or some real-time metric for token burn and actual usage. The practical effect was hitting a wall much earlier than the documentation implied. Anthropic has described the opacity of usage limits as a 'deliberate product decision.' I'm not sure why they went this route. The practical move is to build like the pricing will change again, because it will. Instrument every API call. Build cost ceilings by operation type. Never let a 1M context window become the default just because it's technically available. The capability is real. The bill is also real. Make sure you know which one you're actually using.
Implication: AI providers are incentivized to headline the capability and manage cost through pricing complexity. AI-native companies building on top of them, like VYNS, need the inverse: simple, predictable pricing and explicit communication when the rules change. The more capable the models get, the more expensive the edge cases become, and the more important transparency is. Hopefully as infrastructure, technology, and energy factors improve we will see more efficient and transparent methods that translate into reduced costs and less opacity around usage limits.
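The "instrument every API call, build cost ceilings by operation type" advice can be sketched concretely. Below is a minimal, hypothetical Python sketch: the per-token prices are illustrative placeholders (not Anthropic's actual rates), and `CostGuard` is a made-up helper, not any SDK's API.

```python
# Minimal sketch of per-operation cost ceilings for LLM API calls.
# The rates below are illustrative placeholders -- check your provider's
# current pricing page before relying on numbers like these.

from collections import defaultdict

PRICE_PER_1K_INPUT = 0.003   # assumed example rate, USD per 1K input tokens
PRICE_PER_1K_OUTPUT = 0.015  # assumed example rate, USD per 1K output tokens

class CostGuard:
    def __init__(self, ceilings):
        # ceilings: dict mapping operation name -> max spend in USD
        self.ceilings = ceilings
        self.spent = defaultdict(float)

    def record(self, operation, input_tokens, output_tokens):
        # Called after every API response, using the token counts the
        # provider returns in its usage metadata.
        cost = (input_tokens / 1000) * PRICE_PER_1K_INPUT \
             + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT
        self.spent[operation] += cost
        return cost

    def allow(self, operation):
        # Checked before every call; refuse once an operation hits its ceiling.
        return self.spent[operation] < self.ceilings.get(operation, float("inf"))

guard = CostGuard({"report_generation": 5.00, "crawl_enrichment": 2.00})
guard.record("report_generation", input_tokens=120_000, output_tokens=8_000)
print(guard.allow("report_generation"))  # True until the $5 ceiling is hit
```

The point of the sketch isn't the arithmetic; it's that spend is tracked per operation type, so one runaway long-context workflow trips its own ceiling without freezing the rest of the product.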
Claude Mythos leaked this week. Anthropic confirmed it's real, described it as a 'step change,' and said it's currently being tested with a small group of early access customers. That group isn't me. It probably isn't you either. This is new. A year ago, every model Anthropic shipped was effectively available to everyone with an API key on release day. The delta between what a funded enterprise customer could access and what a solo bootstrapped builder could access was close to zero. Mythos changes that. The most capable model ever built is being distributed on an invitation basis, tiered by relationship and use case, with general availability deferred while the cybersecurity implications get worked through. I don't think this is wrong. A model that can find and exploit software vulnerabilities faster than human defenders probably shouldn't be available to everyone immediately. The deliberate rollout makes sense. But I want to name what it means for the builder layer: the gap between what well-resourced teams can build and what solo founders can build just got wider, not because of money, but because of access. The most powerful reasoning and coding capabilities are going to the companies already in the room. The rest of us build with last quarter's model. One more thing: Anthropic left the announcement of a model with unprecedented cybersecurity capabilities in an unsecured, publicly searchable data store. The irony needs no elaboration.
Implication: The implication isn't to complain about it. It's to build fast enough that by the time Mythos reaches general API availability, you're already far enough along that the capability jump accelerates what you're already doing instead of being the thing that gets you started. The window is real. The urgency is real.
Everything I am building for VYNS involves multiple parallel workstreams across multiple chats. Product, infrastructure, brand, legal, GTM, marketing, build and deployment prompting, and more. Each lives in a different chat thread, often across different AI tooling. The problem is that there's no native mechanism to hand off state between sessions. You can't chain outputs from one conversation into another without significant manual effort and time to get the context right. I currently manage this with a combination of hand-written notes, copy-paste, and session summaries that I paste back in to start again. That friction is real and I estimate it is roughly a 20-30% tax on reconstruction time. It compounds daily.
Implication: The missing primitive is project-scoped session chaining: a way to define a project, attach relevant sessions to it, and have the AI walk into any new session with the accumulated state of every prior one. Until that exists, solo founders building complex products are spending 20-30% of their AI interaction time on context reconstruction rather than actual work.
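The manual handoff workflow (summarize the old session, paste it into the new one) can be partially scripted today. This is a minimal sketch under my own assumed conventions, a local `project_state/` directory and a made-up JSON schema; it is not a feature of any LLM platform:

```python
# Sketch of a manual session-handoff helper: keep one JSON state file per
# project, and generate an opening prompt from it so a fresh chat starts
# with accumulated context instead of cold. The directory name and fields
# are hypothetical conventions, not any platform's API.

import json
from pathlib import Path

STATE_DIR = Path("project_state")  # assumed local directory

def save_handoff(project, decisions, open_questions, next_steps):
    # Run at the end of a session, from a summary the model wrote for you.
    STATE_DIR.mkdir(exist_ok=True)
    state = {
        "project": project,
        "decisions": decisions,          # settled choices the AI must not relitigate
        "open_questions": open_questions,
        "next_steps": next_steps,
    }
    (STATE_DIR / f"{project}.json").write_text(json.dumps(state, indent=2))

def opening_prompt(project):
    # Paste the returned string at the top of a fresh session.
    state = json.loads((STATE_DIR / f"{project}.json").read_text())
    return (
        f"Resuming project '{state['project']}'.\n"
        f"Settled decisions: {'; '.join(state['decisions'])}\n"
        f"Open questions: {'; '.join(state['open_questions'])}\n"
        f"Next steps: {'; '.join(state['next_steps'])}"
    )

save_handoff("vyns", ["report/dashboard split"], ["credit pricing"], ["ship SEO pages"])
print(opening_prompt("vyns"))
```

It doesn't eliminate the reconstruction tax, but it turns an ad hoc copy-paste ritual into a fixed format the model can be told to emit at the end of every session.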
I use Claude as my main technical co-founder for the VYNS build, along with tools from OpenAI, Gemini, and xAI for various reasons I'll likely get into on my blog one day. The collaboration is genuine and unique. The systems hold architecture decisions, debate tradeoffs, and generate production code. I feed different channels and chats live updates and make real-time decisions based on the current AI landscape from multiple angles, including policy and regulation, new versioning, and tech releases. But as sessions grow, context fills and conversations end. Especially in Claude, to be honest, but that's because I'm most reliant on it currently. Each new chat starts cold, and the discontinuity compounds over a long build. You rebuild context constantly, and the AI's knowledge of your project either resets or pulls outdated info from related chats instead of accumulating properly, syncing with current decisions rather than obsolete or pivoted ideas. Memory and past chat search help at the edges, but the fundamental architecture is still per-session.
Implication: The next competitive advantage in AI tooling isn't smarter models. It's durable, persistent, project-scoped memory that accumulates across every session. The founder who builds inside an AI that actually remembers everything will move faster than anyone working in isolation.
I structure my chats in Claude as departments for my startup, VYNS. I have a few chats that persist for personal use, and most other one-off chats I end up deleting after I gather the intel I need. But for VYNS, I keep a few persistent chats that are critical for my work as a solo founder. The most important of these are my Build logs, sequential chats where the previous one summarizes all key findings and stages as a prompt for the next to open and resume our work once the current chat reaches capacity. I also keep dedicated chats for Marketing, Philosophical, Competitor Research, Partnership Opps, and Local Resources. What I eventually want to build is an agent for each chat and an Executive Assistant agent that reads across and internally interacts with all of these based on my activity throughout the day, synthesizes findings, and reports back with a daily briefing highlighting my top priorities across the VYNS build and my personal brand. The limitation: no major LLM that I know of exposes read access to existing chat history. The threads are locked inside the web interface. This means I have to manually copy and paste references between chats and to my EA agent, as I largely do now. This isn't a small gap. It's a missing architectural primitive that would unlock an entirely new category of AI-assisted organizational design.
Implication: Once persistent, named, queryable chat context is exposed via API, solo founders and small teams can build AI-native organizational structures that simulate the operating layer of a full company without the overhead. One day you may only need one hire between yourself and your agent layer to compete with large organizations. Fixing this data access via the API would be the first step toward creating that operating environment.
Morning Walks
View all walks →
Voice conversations recorded on morning walks, covering the VYNS build, AI news, strategy, and whatever is on my mind that day. Published when the walk happens.
3 min read
The Biggest Challenge Isn't Building Smart AI. It's Building AI People Trust Enough to Act On.
Five hard strategic questions about VYNS. Feedback loops, consensus vs. challenger AI, data moats, execution trust, and the founding story. The sharpest insight from this walk: analytical trust and execution trust are different things. Most AI companies have the first. Almost none have the second.
2 min read
I'm Not Building an AI Tool
The framing I've been using for VYNS is wrong. After an hour of thinking out loud this morning, I realized I'm not building a tool. I'm building a system that helps people decide what to do, and then helps them do it. That's a different category entirely. Session notes from the first morning walk.
Build Log
vyns.ai / Active
A running record of what's been shipped.
| Date | Project | What shipped |
|---|---|---|
| Apr 4–6, 2026 | vyns.ai | Collapsed the two-step loading flow into one — one animation, one destination straight to the roadmap. Rewrote recommendations to adjust around what a creator already has, not just what they're missing in the abstract. Added subscriber-scale segmentation so advice lands differently for a creator starting out versus one mid-growth. Replaced the labor cost citation with an opportunity cost rate and added a methodology paragraph to every roadmap so the math is legible. Shipped a shared business header used across the free roadmap, the paid report, and the dashboard so branding and business identity stay consistent everywhere. Fixed a failure where the report didn't auto-render after payment — the waiting screen now flips to the finished report the moment generation completes. Added a personalized 'What This Means For You' section above the metric cards on the free roadmap. Email-list gap detection: if a creator has no newsletter path visible, that becomes the first recommendation. Hours saved now shows as a range derived from the actual recommendations generated, not a preset. Added a 30-day full dashboard trial to every $97 report purchase. |
| Apr 3, 2026 | vyns.ai | Confirmed the full funnel working end-to-end on a real user: URL submission → roadmap → $97 report → dashboard. It had been broken at multiple points. Now clean. Upgraded the core model and widened output depth. Shipped programmatic SEO: 50 industry-specific landing pages across target SMB verticals, each funneling back to the URL submission flow. Submitted the sitemap to Google Search Console. The standard dashboard is being built as a foundation, not a ceiling — there's a pattern-recognition layer that will sit above it later, but that comes after the base product is stable. |
| Mar 26–30, 2026 | vyns.ai | Shipped the diagnostic-free free roadmap. URL in, roadmap out, no friction. The original 10-question gate moved to post-purchase and got reframed as report personalization. Platform detection live for major social channels. Widened company enrichment so the analysis starts with more context on the business before the model ever sees it. Removed brand color transformation — extraction kept producing more disappointment than delight. Converted the homepage from a vanilla HTML prototype to a production build with dark/light mode persisted across the full funnel. |
| Mar 27–28, 2026 | vyns.ai | Shipped consultant prospecting mode. Same product, different entry frame. A consultant drops a client's URL and gets a cold outreach audit instead of an owner roadmap. One mode parameter, one prompt variant. The GTM unlock: AI consultants need to audit 10–20 prospects a week and VYNS already does that analysis. Also shipped the credits system: three pack tiers (Starter 3cr/$49, Standard 10cr/$129, Pro 25cr/$249), full transaction history, and subscription grant logic that drops 2 free credits monthly to dashboard subscribers. |
| Mar 26, 2026 | vyns.ai | Architect decision: removed the consultant marketplace from the live product. The 7-step onboarding, proposal system, milestone escrow, bid review, and dispute management all came off the surface. The replacement is a recommendation engine that indexes external specialist marketplaces and surfaces the best match for each roadmap. The moat isn't the marketplace. It's the AI-context-aware routing. Also shipped the report/dashboard split: the $97 report is a static branded snapshot, the $99/month subscription unlocks the living dashboard as a persistent product. |
| Mar 25–26, 2026 | vyns.ai | First successful full report generation on a real business. The report surfaced competitor landscape, regulatory signals, technology trends, a 90-day roadmap, and a revenue projection grounded in the owner's salary and time cost. The pipeline had timed out three times before this run. Report now generates in 20–30 seconds. |
| Mar 24–25, 2026 | vyns.ai | Upgraded the crawl pipeline to discover a site's full structure, prioritize the most diagnostic pages, and handle larger sites cleanly. Hardened the platform in parallel before onboarding real users — no details there on purpose. |
| Mar 15–22, 2026 | vyns.ai | A week of infrastructure work I'm intentionally not detailing. The outcome that matters outside the codebase: the engineering standards the model reads before every task went from loose preferences to a hard contract. Output quality went up immediately and hasn't come back down. |
| Mar 1–14, 2026 | vyns.ai | Core funnel complete: homepage URL input → analysis → free AI roadmap → $97 full report → $99/month living dashboard. Payment flow live, subscription billing working end-to-end, concierge booking at three price tiers. The product decision that shaped everything: diagnostic questions moved from pre-roadmap gate to post-purchase personalization. Friction before value is a conversion killer. Friction after value, when the user has already committed, is welcomed. |
Essays
Long-form thinking on AI, building, and what it means to start something from nothing.
11 min read
From Direct Mail to AI: How I Ended Up Building VYNS
I was 12 when I printed 1,000 flyers for a landscaping business and put them in mailboxes around my neighborhood. That instinct never went away. Here is the full story of how I ended up building VYNS.
Reading + Watching
What's shaping my thinking
Articles, papers, podcasts, and tools I keep coming back to. Not a curated list for clout. Just what's actually informing how I build.
Fighting for Democracy Amid the AI Race
Audrey Stienon & Yoshioka
Published in Competition Policy International / TechREG. This kicked off my thinking about AI concentration risk and the regulatory landscape around scraping infrastructure. Shaped how I think about what happens when a few companies control the models everyone else builds on.
NxCode
Referenced during my X vs. LinkedIn marketing strategy sessions. Practical framing for how solo founders should think about distribution when the product is AI-native.
WebProNews
Validated YouTube as a secondary platform for evergreen SEO. The thesis: video content is getting preferential treatment in AI-generated search results. Worth watching as a distribution channel.
ServiceTitan
Industry report on AI adoption among small businesses and trades. Confirmed what I was seeing anecdotally: most small businesses know they should be using AI but haven't figured out how. That gap is exactly where VYNS lives.
arXiv, Dec 2025
Research on how AI agents can maintain identity and memory across sessions. Interesting implications for any product that needs AI to remember context over time rather than starting fresh every conversation.
ResearchGate
How do you know when an AI model is confident vs. guessing? This paper surveys the current state of the research. Relevant to anyone building products where users need to trust AI recommendations enough to act on them.
arXiv:2504.19413
Practical architecture patterns for giving AI agents memory that persists and scales. The gap between demo agents and production agents is mostly about memory. This paper addresses that directly.
Anthropic
Claude Code can auto-generate reusable skill files from observed patterns. I adopted this for my own build workflow. If you use Claude Code, this is worth exploring.
Open Questions
Questions I'm sitting with. No answers yet. That's the point.
What is the one metric that proves VYNS is working?
What is the biggest reason it could fail?
Is the real product the roadmap, the dashboard, the agents, or the data?
What would make this a $10M company versus a $1B company?
Do founders actually need consensus from multiple AI models, or do they need a model that challenges them and argues back?
Follow the build
I write when something ships, breaks, or changes my thinking. No schedule, no spam. Just the work.