Jan 30, 2026

I Switched from ChatGPT to Claude: What I Learned About AI Strategy

Analyzing Anthropic's methodology moat and whether 'trustworthy AI' is defensible long-term.

Introduction

I use Claude almost every day. I switched from ChatGPT about three months ago and kept using it, but I couldn’t explain why. Was it just personal preference, or a set of interlocking product decisions that made the experience better?

The more I used Claude, the more I noticed decisions that confused me. Why do they limit my messages when ChatGPT offers unlimited? Why don’t they have a flashy app store like OpenAI? Why do they keep emphasizing “Constitutional AI”? These seemed like random choices, maybe even disadvantages. I wanted to understand whether there was actual strategic logic connecting these decisions, or if I was just rationalizing a subjective preference.

This is my first case study in Aakash Gupta’s AI Product Management transition guide. I’m systematically working through his roadmap to build the strategic thinking and technical fluency needed for AI PM roles. Each article in this series applies principles I’m learning to small experiments, such as analyzing existing products, building prototypes, and developing my own product philosophy for how AI should work.

This analysis specifically follows Aakash’s article “Your Guide to AI Product Strategy”, which breaks down how to evaluate AI products across the capability stack, competitive moats, positioning, and roadmap. Rather than just reading theory, I decided to reverse-engineer a product I use daily to understand how strategic decisions form coherent systems.

My main research questions:

  1. What makes Claude actually different from ChatGPT or Gemini at a strategic level?

  2. Do they have real competitive advantages, or are they just another AI chatbot?

  3. How do their product decisions connect to their business strategy?

  4. What could kill them?

Framework I used: I applied Aakash’s strategic analysis framework across four areas: capability stack and collaboration model; competitive moats (data vs. technology vs. network effects vs. brand); AI-native vs. AI-enhanced positioning; and roadmap strategy.

What I discovered: Claude’s apparent “limitations,” such as their message caps, slow iteration cycles, and low lock-in, aren’t bugs or oversights. They’re deliberate strategic choices that reinforce what I believe is a bet: that enterprises will pay a premium for trustworthy AI, even when competitors achieve “good enough” safety. Whether this bet pays off depends on how quickly constitutional AI can be replicated and whether trust remains valuable as the industry matures.

Part 1: What Kind of Product Is Claude?

Before diving into strategy, I needed to understand what Claude actually is at a technical level.

Where Claude Sits in the AI Stack

The AI industry has roughly five layers:

  1. Infrastructure (chips, cloud computing)

  2. Model Providers (companies that train foundation models)

  3. Development Platforms (APIs and tools for developers)

  4. Vertical Solutions (AI for specific industries)

  5. Enterprise Integration (connecting AI to company workflows)

Claude is primarily a Model Provider (layer 2), but it also has a development platform component via its API. They’re not trying to own the whole stack; they’re focused on building and distributing the underlying models.

What Claude Actually Does

Primary function: Generative AI for content creation and augmentation. Examples include writing, editing, analysis, and problem-solving, delivered mostly through conversational assistance.

Key question: Is Claude trying to automate tasks, or just help humans do them better?

My take is that it’s mostly augmentation/collaboration, not automation. My reasoning: in the chat interface, you still need to copy and paste outputs, so you decide where the information goes and what to do with it. Claude generates content but doesn’t execute actions on its own.

But there are exceptions:

  • Computer Use can actually operate applications (click buttons, navigate websites).

  • Code Execution runs Python analysis automatically.

  • Skills teach Claude repeatable workflows.

So more accurately: Claude is primarily a collaborator with selective automation. They’re not trying to replace you; they’re trying to make you faster. I inferred that they’re not competing to be a fully autonomous agent (yet). They’re positioning as the “thoughtful assistant” rather than the “do everything for you” bot. This shows up in everything from their UI to their usage limits.
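To make that distinction concrete, here’s a minimal sketch using Anthropic’s Python SDK. The model id and the `send_email` tool are placeholders I made up for illustration; check the current docs for real values. The point: a plain Messages API call only returns text, and even when you define tools, Claude merely requests a call while your code decides whether to execute it.

```python
# pip install anthropic
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Default flow: Claude returns text. Nothing happens until you act on it.
draft = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model id
    max_tokens=512,
    messages=[{"role": "user", "content": "Draft a follow-up email to our vendor."}],
)
print(draft.content[0].text)  # you still copy, edit, and send it yourself

# Opt-in automation: Claude can *request* this hypothetical tool,
# but your application code performs the actual send.
with_tools = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=512,
    tools=[{
        "name": "send_email",  # hypothetical tool your app defines and executes
        "description": "Send an email on the user's behalf.",
        "input_schema": {
            "type": "object",
            "properties": {"to": {"type": "string"}, "body": {"type": "string"}},
            "required": ["to", "body"],
        },
    }],
    messages=[{"role": "user", "content": "Send the follow-up email to our vendor."}],
)
```

Automation stays opt-in and under your application’s control, which is exactly the “collaborator with selective automation” posture.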

Part 2: What Makes Claude Defensible?

This was the most important part of my analysis. In tech, a “moat” is something that protects you from competition. It’s what keeps competitors from just copying you and winning.

I looked at four potential moats:

  1. Data (do they have unique data competitors can’t get?)

  2. Technology/Methodology (do they have proprietary tech that’s hard to replicate?)

  3. Network Effects (does the product get better as more people use it?)

  4. Brand/Positioning (do people choose them for specific advertised reasons?)

Constitutional AI Is the Actual Moat

At first, I thought Claude’s advantage might lie in data or a network effect. But their real moat is a training methodology called Constitutional AI, and their entire strategy is built around it.

Instead of training AI using thousands of human reviewers saying “this response is good, this one is bad,” Claude trains itself using a set of written principles (a “constitution”). The AI learns to critique its own responses and revise them to align with these principles.

Why this is different from standard approaches:

Most AI companies use something called RLHF (Reinforcement Learning from Human Feedback):

  • Show human reviewers two AI responses.

  • Humans pick which one is better.

  • Train the AI to produce more responses like the “better” ones.

The problem with RLHF: There’s a trade-off between helpful and harmless.

  • If you train an AI to never say anything potentially harmful, it becomes overly cautious and refuses to help with anything remotely controversial.

  • Example: An AI that responds to every question with “I can’t help with that” is perfectly harmless but completely useless.

How Constitutional AI solves this:

From Anthropic’s research paper, the key finding:

“Constitutional RL models trained with AI feedback learn to be less harmful at a given level of helpfulness.”

Claude can be both helpful AND safe at the same time. It’s not a trade-off anymore. If you ask Claude about a controversial topic, it engages with your question thoughtfully instead of just refusing. But it also won’t help you do something harmful. It found the middle ground that RLHF struggles with.
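To make the mechanism concrete, here’s a simplified sketch of the supervised critique-and-revision phase described in the paper. Everything here is schematic: `generate` stands in for sampling from a model, and the two principles are my paraphrases, not Anthropic’s actual constitution.

```python
import random
from typing import Callable

Generate = Callable[[str], str]  # stand-in for "sample a completion from the model"

# Paraphrased examples of written principles (the real constitution is longer).
PRINCIPLES = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Point out ways the response may be harmful or misleading, then revise it.",
]

def constitutional_revision(generate: Generate, user_prompt: str, rounds: int = 2) -> str:
    """Simplified supervised phase of Constitutional AI: critique, then revise."""
    response = generate(user_prompt)
    for _ in range(rounds):
        principle = random.choice(PRINCIPLES)
        # 1. The model critiques its own response against a written principle.
        critique = generate(
            f"Prompt: {user_prompt}\nResponse: {response}\n"
            f"Critique this response against the principle: {principle}"
        )
        # 2. The model revises the response to address its own critique.
        response = generate(
            f"Prompt: {user_prompt}\nResponse: {response}\nCritique: {critique}\n"
            "Rewrite the response to address the critique."
        )
    # Revised responses become supervised fine-tuning data; no human
    # labeler has to read or rank the harmful drafts.
    return response
```

The scaling property falls out of the loop: the model generates its own training signal against written principles, which is why it needs far fewer human labelers than RLHF.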

Constitutional AI represents a genuine competitive advantage across multiple dimensions. First, there’s a significant research moat. Anthropic spent years developing this methodology, and competitors can’t simply replicate it overnight. It requires substantial research investment in alignment techniques, and Claude maintains an ongoing advantage as they continue to refine the approach.

The cost structure also favors Constitutional AI over traditional methods. It requires far fewer human labelers than RLHF, scales more efficiently as models grow larger, and spares human reviewers from evaluating potentially disturbing content. This structural advantage compounds as they scale.

Transparency creates another layer of defensibility. Because the principles are written in plain language rather than hidden in a black box, regulators and enterprises can audit how Claude makes decisions. This builds trust through understandable reasoning, which matters enormously in regulated industries where you need to explain AI behavior to compliance teams.

Finally, there’s a performance advantage that users actually notice. Claude achieves a better balance of helpfulness and safety than competitors. It’s less evasive when you ask complex questions, and when it does refuse something, it explains why.

What About Data?

I initially thought Claude might have some special proprietary dataset. Turns out they don’t, and it’s intentional. Claude trains on data similar to everyone else's—books, code, websites. But they actively limit data collection from users. Enterprise customers’ data isn’t used for training by default, and even consumer users can opt out of data sharing. Their development cycles run 18-24 months rather than continuous real-time learning.

The strategic insight here is that their data advantage isn’t about what data they have. It’s about what they don’t do with data. This is a positioning moat, not a data moat. Compare this to competitors, where every Google search makes their AI smarter, every Meta like or share trains their models, every Netflix view refines recommendations. Claude takes the opposite approach: feedback gets aggregated over long cycles, and enterprise data stays private.

At first, I thought “slower learning equals disadvantage.” But enterprises don’t want their AI changing behavior unexpectedly based on other customers’ data. They want stability and predictability. Claude is trading optimization speed for trust, betting that enterprise buyers will pay a premium for consistent, auditable behavior rather than constantly evolving capabilities.

Does Claude Have Network Effects?

Not really, and it seems intentional. I looked at traditional lock-in mechanisms. Claude’s API ecosystem creates moderate switching costs for developers; projects provide minor organizational value; memory stores your preferences; skills let you teach custom workflows; and artifacts let you iterate on generated documents. These create some value lock-in, but nothing compared to competitors.

OpenAI has the GPT Store, custom GPTs, and tons of integrations, creating high lock-in. Gemini is built into Gmail, Docs, and Search with very high lock-in. With Claude, it’s easy to export everything, use the standard API, and keep low lock-in. You can copy your prompts to ChatGPT and switch tomorrow.
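That “switch tomorrow” claim is nearly literal at the API level. As a rough illustration (model ids are placeholders, and both snippets assume API keys are set in the environment), moving a basic chat workload between vendors is about this much code:

```python
# pip install anthropic openai
import anthropic
import openai

PROMPT = "Summarize these meeting notes in five bullet points."

# Claude via Anthropic's SDK
claude_reply = anthropic.Anthropic().messages.create(
    model="claude-sonnet-4-5",  # placeholder model id
    max_tokens=512,
    messages=[{"role": "user", "content": PROMPT}],
).content[0].text

# The same request via OpenAI's SDK
gpt_reply = openai.OpenAI().chat.completions.create(
    model="gpt-5",  # placeholder model id
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content
```

Prompts may need retuning per model, but there’s no data trap holding you in place; that’s the low-friction choice in practice.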

There are two types of lock-in worth distinguishing. Friction-based lock-in makes it hard to switch by imposing switching costs and data traps. Value-based lock-in makes you not want to leave because the product keeps improving specifically for you. Claude chose the low-friction path: it respects user agency and makes export easy, relying only on moderate value lock-in from projects and skills that build over time.

This is completely consistent with their trusted brand. They’re not trying to trap you, but it also makes them vulnerable. If ChatGPT matches their quality tomorrow, switching costs are minimal. They’re betting that quality and trust matter more than lock-in, which is either brilliant or naive depending on how the next two years play out.

What About Brand?

Brand positioning turns out to be more significant than I initially thought. Claude positions itself as “the thoughtful, trustworthy AI,” and this isn’t just marketing; its product decisions actively reinforce it. The tone feels genuinely helpful without being cloying, which is exactly what constitutional AI is designed to produce. Project organization shows thoughtful UX rather than feature dumping. They intentionally cap usage to maintain quality, unlike competitors offering “unlimited” plans. And they tell you clearly what they do with your data rather than burying it in the terms of service.

This creates a real positioning advantage with a specific customer segment: people who value thoughtfulness over raw power and are willing to accept some limitations if it means better quality and more trust. Whether that segment is large enough to build a sustainable business is an open question.

Part 3: Are They Building Something New, or Just Adding AI to Old Ideas?

This distinction matters a lot in product strategy. There are two approaches to AI products:

AI-Enhanced: Taking an existing product and adding AI features

  • Example: Microsoft Word adding a “rewrite this paragraph” button.

  • The core product works fine without AI; it’s just a nice addition.

AI-Native: Building something that couldn’t exist without AI

  • Example: ChatGPT—the entire product IS the AI.

  • Remove the AI, and there’s no product left.

Claude is clearly AI-native. But the more interesting question is, what are they reimagining?

What Problem Are They Actually Solving?

A couple of years ago, companies wanted to use AI, but they were scared. They worried about security: will it leak our data? They faced compliance unknowns: can we actually use this in regulated industries? They feared unpredictable behavior: will it say something embarrassing or harmful? And they struggled with black-box decisions: how do we audit what this thing is doing?

My interpretation is that Claude is betting to make AI safe and auditable enough that it becomes infrastructure. Think AWS for compute. You don’t worry about the servers; you just use them. Constitutional AI becomes productized safety out of the box.

This positioning targets a specific B2B customer type: risk-averse organizations that need AI but can’t afford mistakes. Healthcare deals with patient data and life-or-death decisions; finance navigates regulatory compliance and requires audit trails; legal handles confidential information and liability concerns; enterprise IT operates under strict security policies and data governance requirements. These industries pay premium prices for “safe” solutions. If Claude can own the “trustworthy AI” position, they don’t need to be the fastest or the cheapest.

Part 4: Where Are They Headed?

To understand Claude’s strategy, I reviewed their recent product launches and identified patterns in their roadmap.

Recent Launches (Last 6 Months)

Over November and December 2025, Claude shipped Claude Opus 4.5 (their most powerful model yet), Claude in Excel in beta to integrate into Microsoft’s ecosystem, a Chrome extension to make Claude available everywhere, Skills for organizations letting companies deploy custom AI workflows, and infinite conversation length through automatic summarization.

Claude’s evolution breaks into two clear phases. In Phase 1, they were chat-first and safety-focused—building the best conversational AI, establishing trust through constitutional AI, and serving both consumers through the chat interface and developers through the API.

Now in Phase 2, they’re expanding beyond chat. Skills move them toward AI agents that can be taught workflows. Excel integration embeds them in existing enterprise tools. The Chrome extension provides ambient AI, always available like Gemini. Claude Code targets developers specifically. And tighter usage limits help them manage costs as they scale.

What this tells me is they’re trying to expand from “safe chatbot” to “safe AI infrastructure that works everywhere.” But they’re doing it selectively, like with Excel and Chrome, not building their own ecosystem like OpenAI’s GPT Store. They’re focused on knowledge worker productivity in contexts where mistakes are expensive, not creative play or consumer entertainment, such as image or video generation.

Usage Limits Getting Tighter

Recently, Claude started limiting heavy users more aggressively. Heavy Claude Code users now get 40-80 hours per week, down from higher previous limits. This affects less than 5% of users, but those users were extremely upset.

This matters strategically because it suggests inference costs are a real problem. Running these models is expensive, and unlimited usage isn’t sustainable at current prices. Claude is prioritizing many moderate users over a few power users, which aligns with their “quality over quantity” positioning but could alienate developers who want to use Claude Code heavily.

My interpretation is that they’re betting that thoughtful, moderate usage is more valuable and profitable than unlimited power-user access. This aligns with their brand, but it’s also a constraint forced by economics, as they may not have a choice here.

How They Handle AI Limitations

I wanted to understand how transparent Anthropic is about what Claude can’t do well. They maintain documentation on usage limits and rate limiting, techniques to minimize hallucinations, their constitutional AI policy explaining what principles guide Claude, and context window management for handling long conversations.

More importantly, they don’t hide limitations. They tell you when you’re approaching limits, explain why certain requests are declined, and summarize earlier messages when chats get long instead of just cutting you off. As I mentioned previously, this transparency reinforces their “trustworthy” brand; they’re not overpromising or hiding constraints.
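The summarize-instead-of-cut-off behavior is worth sketching because it’s a general technique, not anything Claude-specific. Here’s a rough version, my own simplification rather than Anthropic’s implementation; the four-characters-per-token estimate is a crude stand-in for a real tokenizer.

```python
from typing import Callable

Message = dict  # e.g. {"role": "user", "content": "..."}

def approx_tokens(text: str) -> int:
    """Crude token estimate (~4 characters per token)."""
    return max(1, len(text) // 4)

def compact_history(
    messages: list[Message],
    summarize: Callable[[list[Message]], str],  # e.g. one extra model call
    budget: int = 8000,
    keep_recent: int = 6,
) -> list[Message]:
    """If the history exceeds the token budget, fold older turns into a summary."""
    total = sum(approx_tokens(m["content"]) for m in messages)
    if total <= budget or len(messages) <= keep_recent:
        return messages  # still fits; nothing to do
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = summarize(older)
    summary_msg = {"role": "user", "content": f"Summary of earlier conversation: {summary}"}
    return [summary_msg] + recent
```

You trade perfect recall of old turns for a conversation that never hard-stops, and the user sees a summary instead of a silent truncation, which fits the transparency framing.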

Part 5: Connecting the Dots

After analyzing all these pieces, I wanted to see whether Claude’s strategy made sense as a coherent whole or was just a collection of ad hoc product decisions. From what I’ve observed, the decisions systematically reinforce one another. They don’t hoover up data, which reinforces their “we respect your privacy” message. They run slower feedback loops, which creates the predictable, stable behavior enterprises need. They don’t lock you in, which shows they respect user agency and builds trust.

Why I Actually Switched to Claude

When I started this analysis, I couldn’t explain why I preferred Claude, but I realize now I switched despite the constraints.

The tone feels thoughtful because constitutional AI isn’t trying to be maximally helpful at all costs. It’s trying to be genuinely useful while staying safe. Projects keep me organized because they build thoughtful UX instead of throwing features at the wall (I really couldn’t stand ChatGPT’s project interface). Usage limits feel intentional rather than annoying because, as a user, I understand they are trying to enforce quality, essentially saying, “We’d rather you have 50 great conversations than 500 mediocre ones.”

I can assume that there is a customer segment (including me) that values thoughtfulness over raw power. We’re willing to accept some limitations if it means better quality and more trust. The question is whether that segment is large enough to sustain a business competing against OpenAI and Google.

Market Concerns

Claude is betting that once AI gets powerful and ubiquitous, trust will matter more than speed. In 2023, people wanted the fastest, most capable model, and ChatGPT won on that dimension. In 2024, people started worrying about data privacy and hallucinations. By 2025 and beyond, companies such as IBM are predicting that enterprises will pay a premium for “safe AI we can audit.”

This only works if four conditions hold true. First, enterprises must actually care enough about safety to pay more. Second, competitors can’t easily copy constitutional AI. Third, “trust” must remain valuable even as all models get safer. Fourth, the cost of maintaining quality at scale needs to be sustainable.

Regulated industries like healthcare, finance, and legal will always need auditable AI, and enterprise buyers are risk-averse and willing to pay for “nobody gets fired for choosing Claude.” Constitutional AI is genuinely hard to replicate; as I mentioned earlier, it’s a research moat, not just a feature.

But competitors such as OpenAI could copy the approach with vastly more resources. Google’s ecosystem integration might matter more than trust, especially for organizations already committed to Google Workspace. Inference costs might force Claude to compromise on quality. And “trust” might become a commodity feature rather than a differentiator as the industry matures and all models achieve baseline safety.

Part 6: What Could Kill Them?

Every strategy has vulnerabilities. Here are the biggest threats I identified to Claude’s positioning:

Threat 1: OpenAI Copies Them

OpenAI has significantly more money and resources than Anthropic*. The natural fear is that they could invest in constitutional AI-style safety and match Claude’s trust positioning while maintaining better performance and a stronger ecosystem. But the reality is more nuanced than this binary scenario suggests.

Replicating constitutional AI isn’t simply about throwing money at the problem. The methodology requires years of alignment research, and even with substantial investment, matching Claude’s brand positioning as “the trustworthy one” is harder than matching technical capabilities. OpenAI already carries baggage from prioritizing capability over caution, and repositioning takes time and consistent execution.

The bigger question is timing. If OpenAI achieves “good enough” safety within 1-2 years while maintaining its performance and ecosystem advantages, Claude’s moat could erode quickly. But if it takes 5+ years, Claude gains time to build additional defenses, such as deeper enterprise relationships, a more refined methodology, and a stronger brand association with trust. The threat is real and represents Claude’s biggest vulnerability, but it’s not the overnight copy-and-win scenario it might appear to be. Market dynamics, enterprise sales cycles, and brand perception all move more slowly than technology.

*As of January 2026, OpenAI seems to be struggling... so we will see.

Threat 2: Google’s Integration Wins by Default

The integration advantage scenario looks compelling on the surface: Gemini becomes so deeply embedded in Gmail, Docs, and Search that convenience outweighs trust considerations. Users don’t actively switch; they just use whatever is already available in their existing workflows, which is sneaky in its own way. I typically use Claude for email, but I’ve noticed Gemini nudging me with suggestions...

But this threat plays out differently across market segments. In consumer markets, Google’s integration advantage is formidable and probably insurmountable for Claude. The compound effect works over time; the more places Gemini appears, the stickier it becomes, and Claude can’t compete without owning platforms.

In enterprise markets, the dynamics shift. Many organizations intentionally avoid single-vendor lock-in for strategic reasons. An IT department already committed to Google Workspace for email and documents might prefer a different vendor for AI capabilities to maintain negotiating leverage and reduce risk concentration. Claude’s Excel integration and API-first approach lets them integrate anywhere without requiring platform ownership.

The real question isn’t whether Google’s integration creates an advantage, as it clearly does. The question is whether that advantage overwhelms trust considerations in the specific segments Claude is targeting. For regulated industries with audit requirements, the answer may be no. For startups optimizing for convenience, the answer is probably yes. Claude’s strategy only needs to work in the former segment to succeed.

Threat 3: Costs Force Compromise

The tightening usage limits signal that inference costs create real constraints. The question is how Claude responds to these economic pressures. The obvious responses (lowering quality to cut costs, raising prices and losing customers, or compromising privacy to build better feedback loops) are a false choice; there are other options.

They could optimize model efficiency to reduce inference costs per query. They could develop better usage prediction to allocate capacity more effectively. They could tier their service more explicitly, with different quality levels at different price points. They could focus exclusively on high-value enterprise customers willing to pay premium prices, abandoning the consumer market entirely (I hope not!).

Their entire brand is built on NOT compromising. If they lower the quality, they lose their differentiation. If they raise prices too high, they push customers toward competitors. If they compromise privacy, they destroy trust. Their positioning leaves little room for maneuver.

Competitors with deeper pockets could subsidize losses longer, essentially forcing Claude into an impossible choice. Constitutional AI being more cost-efficient than RLHF helps, but may not be enough if the fundamental unit economics don’t work at scale. The tightening of the usage limit serves as an early warning. Whether it’s a temporary optimization or a signal of structural problems remains to be seen.

Threat 4: “Trust” Stops Mattering

The commoditization scenario assumes that all AI models eventually achieve sufficient safety that enterprises stop differentiating based on trust. Constitutional AI becomes table stakes—expected rather than special—and Claude must compete on price and performance where they have fewer advantages.

But “good enough” safety means different things to different buyers. A startup might accept baseline safety for the sake of speed and cost. A hospital handling patient data requires demonstrable, auditable safety with explicit reasoning. A bank needs safety standards that satisfy regulators who may not understand how AI works. The threshold for “good enough” varies enormously by use case.

Moreover, constitutional AI creates ongoing research advantages beyond just safety. The methodology influences how models reason, explain their decisions, and handle edge cases. Even if competitors match safety outcomes, they may not match explainability or auditability. These attributes matter in regulated industries even after safety becomes universal.

The timing question remains central. If the industry achieves universal baseline safety within 1-2 years, Claude faces serious pressure to compete on other dimensions. If it takes 5+ years, Claude gains time to build additional moats through enterprise relationships, refined methodology, and deeper integration into compliance workflows. The commoditization threat is real, but it’s not binary; instead, it’s a gradual erosion that Claude can counter with continued innovation if they have enough runway.

Questions I Still Can't Answer

About the business model:

  • What % of revenue is API vs. consumer subscriptions?

  • Is a consumer product just brand-building for the real business (API)?

  • Can they sustain enterprise pricing if OpenAI drops prices?

About the market:

  • Do enterprises actually choose Claude for safety, or just price/performance?

  • How price-sensitive are enterprise buyers in this category?

  • Is there really a “thoughtful AI” customer segment at scale?

About technology:

  • How long can Constitutional AI remain unique?

  • What’s their next research breakthrough?

  • Can they compete in multimodal (voice, video, images) with the same approach?

These gaps matter because they affect whether the strategy is actually defensible in the long term.

My Takeaways

What This Exercise Taught Me About Strategic Analysis

When I started, I thought I was analyzing Claude. What I was actually doing was learning how to think strategically about products.

The biggest lesson: Strategic coherence matters more than individual features. Before this analysis, I evaluated products feature-by-feature. Does it have X? Is Y better than the competitor? But that misses the point entirely. What matters is whether the decisions form a system that reinforces a single bet. Claude’s message limits, slow feedback loops, and low lock-in looked like weaknesses when viewed in isolation. As a system supporting “trustworthy AI for enterprises,” they made sense.

I also learned to distinguish between different types of competitive moats. I used to think “moat” meant “hard to copy.” But Claude taught me there are methodology moats (constitutional AI), positioning moats (what you don’t do with data), and brand moats (attracting specific customer segments). These operate differently and erode at different rates. A methodology moat might last 3-5 years. A brand moat could last decades if maintained, or collapse overnight with one incident.

Principles I’m Developing for My Product Philosophy

This analysis surfaced three principles I’m starting to use as heuristics for evaluating AI products:

1. Constraints can be features if they serve a strategy. Claude’s usage limits initially annoyed me as a user. As a strategic choice, they make sense because they enforce quality, manage costs, and attract customers who value thoughtfulness. The constraint becomes the feature when it filters for your target customer. This only works when the constraint is deliberate, not accidental.

2. Trust is a moat only if it’s expensive to build and easy to destroy. Claude is betting on trust, but trust alone isn’t defensible. It needs to be backed by something structural (constitutional AI methodology, privacy guarantees, audit trails). Without that infrastructure, trust is just marketing, and competitors can claim it too.

3. Strategic bets become clearer through comparison. I couldn’t fully understand Claude’s strategy until I imagined how Google and OpenAI would approach the same problem differently. They’re all solving “make AI useful” but with completely different moats and vulnerabilities. The comparison clarifies what’s actually strategic versus what’s just implementation.

What I’ll Do Differently Next Time

For my next case study, I’ll start by identifying the core strategic bet, then work backward to see whether product decisions support it. With Claude, I analyzed bottom-up: looking at features, finding patterns, and inferring a strategy. Top-down should be faster and help me identify contradictions more quickly.

I’ll also push harder on the “what could kill them” analysis. I was too binary in my initial thinking—each threat was “this could happen or not.” Threats play out at different speeds across segments, with varying customer responses. Nuance matters in strategy because real competition doesn’t happen in cleanly defined categories.

Finally, I’ll track what I couldn’t answer. The questions I listed in Part 6 (revenue mix, pricing sustainability, technological uniqueness timeline) are the questions that actually determine whether a strategy succeeds. Next time, I’ll organize my analysis around surfacing these questions early rather than listing them at the end.

About This Analysis

This case study was based on the strategic analysis framework from Aakash Gupta’s article “Your Guide to AI Product Strategy”. I used the framework as a baseline to practice applying the concepts I learned—specifically analyzing capability stack, competitive moats, AI-native positioning, and roadmap strategy. The goal was to apply theoretical concepts to a real product I use daily, developing my own insights through hands-on analysis.

Bibliography

Anthropic. (2023). Constitutional AI: Harmlessness from AI Feedback [White Paper]. Retrieved from https://www-cdn.anthropic.com/7512771452629584566b6303311496c262da1006/Anthropic_ConstitutionalAI_v2.pdf

Anthropic. (2024). Updates to Our Consumer Terms. Retrieved from https://www.anthropic.com/news/updates-to-our-consumer-terms

Anthropic. (2024). Usage Policy Update. Retrieved from https://www.anthropic.com/news/usage-policy-update

Anthropic. (2024). Protecting the Well-Being of Users. Retrieved from https://www.anthropic.com/news/protecting-well-being-of-users

Anthropic Support. (2024). What is the Enterprise Plan? Retrieved from https://support.claude.com/en/articles/9797531-what-is-the-enterprise-plan

Anthropic Support. (2025). Release Notes. Retrieved from https://support.claude.com/en/articles/12138966-release-notes

Document Version: 2.0
Last Updated: January 2026
Analysis Framework: AI Product Strategy (Capability, Moat, Positioning, Roadmap)
