The world is an array of edits

Most of my days are filled with numbers: ARR, DAUs, MAUs, J curves, funnel drop-offs. As a product designer at a data analytics company, I live in charts and tables. Founders, too, often find themselves staring at charts, trying to make sense of a spike or a dip, tying the what to the why. And vice versa: Once they make a decision, they might look to the numbers for proof of success — or signs of failure.
The numbers work tremendously well for some purposes. They can connect a button’s color change to a sudden spike in users, or a new CTA to a drop in growth. But lately, I’ve been drawn to a narrative that I don’t think exists in the numbers. Namely, that of breakthrough ideas and where they come from — the story of how these numbers came to exist at all, before product-market fit, the launch, the metrics dashboard. Before something becomes inevitable.
There are details in this pre-history that are the secret to making anything significant. I’ve come to believe that the best founders understand this pre-history too, even if only intuitively. They can read the moment when tech, human behavior, and market conditions align to pave the way for something new — and, of course, luck plays a big part.

Take, for example, the history of the internet. The internet is the latest iteration of our species’ preoccupation with communication. Language extended our ability to share information beyond immediate experience. Writing allowed information to persist across time. Some form of global information network does seem inevitable, at least to me. The invention of the internet suggests a tempting story: technology as the natural extension of human desire.
But the actual history of the internet is messier and more accidental, and its market opportunity was not always clear along the way. In 1969, ARPANET, the network that would become the internet’s foundation, emerged from Cold War military concerns about surviving nuclear attacks. The U.S. Department of Defense wanted a communication system that could route around damaged infrastructure if parts of the country’s communication grid were destroyed. What they got were four computers scattered across research sites (UCLA, the Stanford Research Institute, UC Santa Barbara, and the University of Utah), exchanging data.
J.C.R. Licklider, a psychologist turned computer scientist at ARPA, had articulated an even more radical vision back in 1963: an “Intergalactic Computer Network,” in which computers around the world shared information and resources. People could access programs and data from anywhere, collaborating across vast distances through machines that could think alongside humans. To most of his colleagues, it sounded like science fiction — too outlandish to take seriously, let alone build toward.
In 1989, Tim Berners-Lee invented the World Wide Web, a way to navigate and link information across the internet, because he was frustrated with document management at CERN, the European physics research lab where scientists from dozens of countries were collaborating on particle accelerator experiments — not because the world was crying out for hyperlinked information.
All of these inventions were tremendously important, leading to robust architectures and breakthrough ideas. But it wasn’t until founders began building on that groundwork — spotting opportunities shaped by decades of open standards, public investment, and technical collaboration — that the internet’s value as a platform for global communication truly accelerated. Consider Dropbox co-founder and CEO Drew Houston, who in 2007 saw that cloud computing was emerging. Broadband was becoming ubiquitous, and people were starting to own multiple devices: laptops, phones, work computers. Houston noticed that people were still emailing files to themselves, carrying USB drives, and losing work when devices crashed, despite all this connectivity.
The gap wasn’t technical — the infrastructure to store files in the cloud existed. It was behavioral. People were inching into Licklider’s networked world, but they were still thinking as if their files lived in isolated boxes.
Dropbox saw that people’s relationship with their data was becoming fundamentally more fluid and device-agnostic, a shift already visible in services like Gmail and Flickr. Houston read the conditions: If the internet could sync so much else in real time, why not apply that to files? Why not make saving to a specific computer feel as antiquated as keeping documents in physical filing cabinets?
What made Dropbox feel inevitable once it existed was that the internet had already changed how people think about access and availability. Seamless UX, instant syncing, and a viral referral model made that shift feel frictionless. People expected their photos, messages, and social connections to follow them everywhere. The conditions were right for that same expectation to extend to their work, their creative projects, and their digital lives. Houston wasn’t solving the problem people were complaining about. He was solving a problem he’d personally run into, one that turned out to be surprisingly universal. His bet aligned with shifting behavior, emerging infrastructure, and a dose of luck — enough to make Dropbox feel inevitable in hindsight.
Windows and bets
One way to think about progress is to see inevitability as hinging on two conditions:
- Opportunity: Something has changed in the world. A window has opened — and with it, the conditions shift in a way that makes a certain kind of idea feel not just possible, but increasingly necessary. New expectations begin to form. The market starts leaning toward something new, even if it doesn’t know what that is yet.
- Agency: The inevitable won’t happen automatically — people need to act on the opening and make bets on what will manifest.
If something feels inevitable, and you’re early enough to notice and act on it, and you can marshal the resources to make it real, then your odds of building something valuable go way up. Luck is important, but so is a particular kind of mindset: being comfortable with uncertainty about the specifics while staying confident about the general direction. As Steve Jobs said:
“You can’t connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future.”
The window opens (condition 1), but it still takes individuals with the right combination of vision and risk tolerance to walk through it (condition 2). The delta between the two conditions is where you can create a lot of value.
Figma: The collaborative edit
Figma is widely recognized today as a design tool, but it has had another major impact on internet infrastructure — helping push forward how state, presence, and computation are coordinated in the browser.
By the mid-2010s, multiple technological threads were converging. WebGL, the JavaScript API that handles 2D and 3D graphics, had matured enough to handle the rendering of complex graphics in browsers. Cloud infrastructure could support real-time synchronization at scale, and JavaScript engines had become powerful enough to handle real-time operations with minimal lag. A window had opened, but most design tool companies were thinking incrementally: better export options, cloud storage, maybe some commenting features.
Figma’s co-founders Dylan Field and Evan Wallace made a counterintuitive leap. Incumbents like Adobe and Sketch had vast resources and design expertise, but were working with established mental models of what design software should look like. Before 2015, that model was fundamentally solitary: download massive applications to your computer, work on files locally, export static images or PDFs to share with others. Collaboration required emailing Photoshop files back and forth, or uploading designs to separate platforms for feedback. Designers would spend hours managing file versions — “FINAL_REAL_REAL_v27.psd” is a running joke that has its roots in this reality.
The creative process was fragmented across multiple tools: design in one app, prototype in another, hand off specifications through yet another system. Teams worked in parallel rather than together, with designers essentially throwing designs over the wall to developers who had to guess at spacing, colors, and interactions.
Instead of building another desktop app with online features bolted on, Field and Wallace bet everything on the browser as a platform for professional creative work. They enabled real-time multiplayer editing, infinite canvas architecture, and component systems that, while echoing trends in productivity software, felt novel in the context of creative tools.
As Wallace puts it on the Figma blog: “Pulling this off was really hard; we’ve basically ended up building a browser inside a browser.” They weren’t just making design collaborative. They were betting that multiplayer editing, web-native rendering, and real-time feedback would unlock a fundamentally new creative workflow, one that the browser was just beginning to make possible. It was a window most incumbents couldn’t see clearly, but Field and Wallace recognized it and placed a bold bet.
On December 3, 2015, Field published his goal (also on the Figma blog): “Ever since Writely (now called Google Docs) launched ten years ago, I’ve believed that all software should be online, real-time, and collaborative. Creative tools haven’t made the leap because the browser has not been powerful enough. Now, with WebGL, everything has changed.”
Within five years, Figma grew from a scrappy startup to a design platform used by teams at Google, Microsoft, and Twitter. By 2022, when Adobe announced its plan to acquire Figma for $20 billion, it had become clear that the browser-based approach had created more than a successful product. (While the proposed acquisition fell through, Figma ultimately IPO’d in July 2025.) The incumbent that had dominated design software for decades was offering a rare premium for the company that had made its desktop-first approach feel antiquated. What started as a counterintuitive bet on web technology had redefined expectations around collaboration and workflow within interface design.
Netflix: The streaming pivot
By the late 1990s, condition 1 was obvious to anyone paying attention: The internet was poised to disrupt video rental. Bandwidth was improving, storage was getting cheaper, and compression formats like DivX were making video files dramatically smaller. Platforms like RealPlayer, and soon Kazaa, were quietly training users to expect digital access. Meanwhile, frustrations with Blockbuster were mounting: late fees, limited selection, inconvenient return windows. The window was opening for digital video distribution.
The go-to response seemed to be a better version of Blockbuster. Digital kiosks, faster delivery, maybe some kind of download service. Even Netflix initially followed this logic with their DVD-by-mail service — they were just making rental more convenient.
Over the next decade, Netflix’s co-founder Reed Hastings made two non-obvious leaps that, in my view, define the difference between knowing something will happen and knowing how it will reshape everything. First, he realized that digital distribution wouldn’t just make rental better — it would make the entire concept of “rental” feel increasingly outdated. Why rent individual movies when you can have unlimited access? Second, he saw that unlimited access would create entirely new consumption patterns that would require entirely new types of content.
Multiple shifts were creating the foundation for these leaps. People were getting comfortable with browsing endless newsfeeds on social media and expected information to be available instantly and infinitely. The iPod had already proven that people would abandon physical media for digital libraries, and subscription models were conditioning consumers to pay for access rather than ownership. Meanwhile, YouTube’s explosive growth demonstrated that people were hungry for video content in new formats and would watch it on their computers rather than just their televisions. In just four years, YouTube’s daily views surged from eight million in 2005 to over a billion by 2009, a sign of just how quickly user behavior was shifting.
In 2007, Netflix launched its streaming service as a free add-on to DVD subscriptions. The company trained customers to enjoy unlimited access while streaming technology caught up to the vision. While others like Amazon and Apple were experimenting with digital rentals, Hastings was quietly reframing the behavior itself, bundling streaming into the DVD plan, not as a separate offering. By the time broadband became ubiquitous and streaming quality had improved, Netflix had already conditioned millions of customers to expect unlimited access for a flat monthly fee. They had solved the behavioral transition before the technical one was complete.
Next, Hastings had to make an even bigger bet. First, he spent hundreds of millions licensing existing content from studios, convincing skeptical executives to put their movies and shows on a platform most people still associated with DVD delivery. Then came the truly audacious leap: Hastings had to convince investors and employees that a streaming company should spend hundreds of millions producing House of Cards, which would launch in 2013. The traditional media companies — HBO, NBC, Disney, Warner Bros — had vastly more experience and resources for content creation, but they were trapped by their existing business models. Hastings writes about this conundrum in his book No Rules Rules:
“If this year the target is to increase operating profit by 5 percent, the way to get your bonus — often a quarter of annual pay — is to focus doggedly on increasing operational profit. But what if, in order to be competitive five years down the line, a division needs to change course? Changing course involves investment and risk that may reduce this year’s profit margin. The stock price might go down with it. What executive would do that? That’s why a company like WarnerMedia or NBC may not be able to change dramatically with the times, the way we’ve often done at Netflix.”
Hastings kept updating his understanding of the inevitability as it unfolded. First it was “the internet will disrupt video rental,” then “streaming will replace physical media,” then “global platforms will replace regional content companies.” Each phase required new contrarian bets about the specific form the inevitable future would take. While many companies recognized the shifts, few were able to pivot with the same clarity or conviction.
Before the metrics
I used to think the secret to building significant products was hidden in better data, clearer metrics, smarter analyses. But the more time I spend looking at dashboards, the more convinced I become that the real leverage happens before any of that exists. It happens in the quiet moments when someone decides that the way things are isn’t the way things have to be.
We’re living through one of these moments right now. As foundational AI models reach a baseline of competence, there’s a young Drew Houston somewhere, staring at their laptop screen, noticing that we still manually copy-and-paste between applications when AI could translate context across every tool we use.
The public conversation is focused on the models, but the models are only the substrate. They’re impressive but also interchangeable, increasingly commoditized. What isn’t commoditized is experience: the tools that translate raw capability into something intuitive, accessible, and quietly transformative.
That’s where the future is forming. In the toolbar that removes a dozen steps without anyone noticing. In the autocomplete that understands context instead of syntax. In the interface that makes you feel like the system just knows you. Value is quietly accumulating in the application layer, and you can already hear whispers of it in OpenAI’s bid for Windsurf and Jony Ive’s new venture. A new platform is unfolding right in front of our eyes.
The pattern is always the same: The infrastructure gets built first, then comes the wave of watchful and curious founders. We saw it play out with the internet, with mobile, with cloud computing. Now we’re seeing it with AI. The companies that will define this era won’t be the ones with the biggest models — they’ll be the ones who close the gap between what’s possible and what people actually experience. The future belongs to whoever notices the detail that everyone else has learned to ignore.
About the author
Keshav Chauhan is a product designer at Sundial obsessed with the fundamentals of value creation. His writing explores the dynamics of innovation: the fleeting windows when multiple technological capabilities mature simultaneously, the psychology of recognizing opportunity before it's validated by data, and the space between seeing something and deciding to act on it.