The Future of Space, Part I: The Setup

Expansion of life on Earth across our solar system and beyond has been a dream of mine since childhood. Of course, this dream isn’t uncommon among sci-fi enthusiasts, or anyone who grew up seeing footage of humans walking on the Moon only to never return.

Space is fascinating for many reasons. It’s a frontier in every sense: physically, technically, even socially. It’s at the bleeding edge of what humanity is capable of. “Looking to the stars” and “shooting for the moon” are common idioms because space has defined our limits for generations.

Now (finally!) the technical and business tailwinds are coming together to make it possible. The cost and ease of getting to space are about to improve by many orders of magnitude. This will drive the space industry to be one of the biggest sources of growth over the next 10-20 years.[1] It will make existing technologies cheaper and more ubiquitous, like allowing worldwide high-speed internet in even the most remote, rural areas. It will also open up a host of new possibilities previously only imagined in science fiction.

This is the first of a two-part essay on the upcoming future of the space industry. I’ve been closely following SpaceX’s progress in particular since their first launch of the Falcon 9 in 2010, so I’m excited to finally write about it.

Why now?

TLDR: SpaceX has pushed the cost to orbit down by 10x, and will push it down by another 10x within 5 years. Along with further commercialization and government funding, a threshold has been crossed.

The success of commercial launch services puts the space industry in the same place as the early days of railroads in the 1800s or commercial ocean shippers in the 1600s. The key phrase here is “early days”: things are really just getting started.

The “why now?” can be reduced to one chart — the average cost to get 1 kilogram to orbit:

In the next section I’ll go over the reasons why this makes such a big difference. But first, how did it happen? As should be evident by the chart, this is essentially the story of one company — SpaceX.

The driving ambition for Elon Musk when he founded SpaceX in 2002 was to drastically reduce the cost of escaping Earth’s gravity. Their “MVP” was the Falcon 1, a single-engine rocket that could launch small satellites. Falcon 1 launched only 5 times, with only the last 2 succeeding. Having proven viability, SpaceX quickly moved on to production of the Falcon 9, a scaled-up version with nine Merlin engines eventually capable of delivering over 22,000 kg to Low Earth Orbit (LEO). Here’s the price progression of each SpaceX rocket, starting from the base of what a conventional rocket costs:

From a conventional rocket price of $10k per kg to LEO, to a price of $60/kg for a Starship with 50 launches, over 100x lower. See the Google Sheet here to check my math.
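For a rough sanity check of that math: cost per kilogram is just launch price divided by payload mass. A minimal sketch, using approximate public figures (the Starship launch cost and payload here are assumptions, not official numbers):

```python
# Rough cost-to-LEO comparison. All figures are ballpark public estimates,
# not official prices; the Starship numbers in particular are assumptions.

def cost_per_kg(launch_cost: float, payload_kg: float) -> float:
    """Price of delivering one kilogram to LEO on a given launch."""
    return launch_cost / payload_kg

conventional = cost_per_kg(launch_cost=100e6, payload_kg=10_000)   # ~$10k/kg
falcon_9     = cost_per_kg(launch_cost=62e6,  payload_kg=22_800)   # ~$2.7k/kg
# Assumed Starship figures: ~150 t to LEO, ~$9M per amortized launch.
starship     = cost_per_kg(launch_cost=9e6,   payload_kg=150_000)  # $60/kg

print(f"reduction: {conventional / starship:.0f}x")  # → reduction: 167x
```

The exact ratio depends heavily on the assumed Starship figures, but any plausible inputs land well past the 100x mark.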

Driving the first order-of-magnitude reduction in cost are the following:

  • Engineering from first principles. SpaceX designed and engineered its rockets from the ground up, famously using first principles to rethink standard industry practices. This led to seemingly trivial savings like using ethernet cables rather than serial cable bundles, but added up they make a huge difference.
  • Better incentives. Traditional government contracts were cost-plus, which incentivizes contractors to increase their costs, both to make more profit and to justify more admin overhead for tracking expenses. With fixed-price contracts, companies are incentivized to drive costs down as much as possible.
  • Standardization of launch config. Rather than customized configurations for each launch and customer, SpaceX “productized” the Falcon 9, allowing for cheaper setups and repeated processes.
  • Reusability. Why is air travel cheaper than space travel? It’s obvious, right? Aircraft are reusable while rockets are destroyed after a single use. But not anymore, as anyone not living under a rock now knows that SpaceX can land and reuse the first stage(s) of their rockets.

And the next 10x reduction with Starship:

  • Bigger rocket. There are economies of size in rocketry: The bigger the rocket, the higher the payload-to-fuel ratio can be.
  • Full flow combustion cycle engine. This higher-complexity engine design makes it more efficient and capable of being reused many times with very little maintenance.
  • Lower-cost methane as fuel. Methane is cheaper than the previously used RP-1 (refined kerosene), and SpaceX is planning on literally drilling for methane gas on their Texas property and synthesizing it on their converted oil rigs. (It can also be synthesized on Mars…)
  • Full reusability. 100% of Starship will be reusable, allowing dozens (or hundreds?) of uses for each stage and engine.
  • More launches. The more launches you can sell in a year, the less markup you need to charge to cover admin costs. Economies of scale and purchasing power are also achieved in raw materials and fuel production.
  • Refuel in orbit. Starship can park in orbit while it’s refueled by up to 8 other launches. This makes payload capacity to orbit the same as payload capacity to nearly anywhere in the solar system. Imagine what we can do with the ability to send over 100 tons to the Moon, Mars, or Europa.
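The physics behind the refueling point comes down to the Tsiolkovsky rocket equation: delta-v depends on the ratio of fueled mass to empty mass, so topping off the tanks in orbit resets the entire delta-v budget. A rough sketch with assumed round numbers (the Isp, dry mass, and propellant load below are ballpark figures for illustration, not SpaceX specs):

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def delta_v(isp_s: float, m_full: float, m_empty: float) -> float:
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m_full / m_empty)."""
    return isp_s * G0 * math.log(m_full / m_empty)

# Assumed round numbers: ~120 t dry ship + 100 t payload, ~1200 t propellant.
dry_plus_payload = 220.0   # tonnes
propellant = 1200.0        # tonnes, restored to full by orbital refueling
dv = delta_v(370, dry_plus_payload + propellant, dry_plus_payload)
print(f"{dv / 1000:.1f} km/s")  # → 6.8 km/s with these assumed inputs
```

A fully refueled ship in LEO gets this entire budget again, which is comfortably more than a trans-Mars injection requires.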

Government funding, particularly from NASA, has been a key enabler. Without these contracts it would have been very difficult for SpaceX to fund R&D. And they’ll continue to play a key role for SpaceX and other commercial space providers. In recent years NASA has stepped up their commercial contracts significantly, and with further falling costs this is likely to continue. (See footnote [2] for a list of recent milestones.)

This moment for space companies is the equivalent of 1995 when the NSF dropped all restrictions on Internet commerce, which let private companies take over the backbone. The breaking of the dam that releases a tidal wave of activity.

The cost-driven industry flywheel

Expensive launches aren’t just costly in their own right — they lead to cost inflation of everything else. If it costs $100M to get a satellite to orbit, cutting the satellite’s development cost from $10M to $5M saves only ~5% of the total mission cost. So why not over-engineer, paying up for components and testing to ensure everything is perfect? But if a launch costs $10M, that same cut saves a quarter of the total, so there’s far more incentive to cut costs. And even if there’s an issue, a second launch is much cheaper. Order-of-magnitude-lower launch costs will lead to similar decreases in payload costs.
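The incentive shift is easy to see numerically. A quick sketch using the hypothetical figures from the paragraph above:

```python
# How launch price changes the payoff of cutting payload-development cost.
# All dollar figures are the hypothetical examples from the text.

def savings_fraction(launch_cost: float, dev_before: float, dev_after: float) -> float:
    """Savings as a share of total mission cost (launch + development)."""
    return (dev_before - dev_after) / (launch_cost + dev_before)

print(f"{savings_fraction(100e6, 10e6, 5e6):.1%}")  # → 4.5% — barely worth it
print(f"{savings_fraction(10e6, 10e6, 5e6):.1%}")   # → 25.0% — worth optimizing
```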

From a Morgan Stanley report:

Currently, the cost to launch a satellite has declined to about $60 million, from $200 million, via reusable rockets, with a potential drop to as low as $5 million. And satellite mass production could decrease that cost from $500 million per satellite to $500,000.

More launches will lead to even cheaper costs, which will lead to cheaper payloads, which… see where I’m going here?

There are 4 distinct feedback loops here, all driving more launches. Not shown in this diagram are balancing (negative) loops involving things like launch failures or excessive regulations.

SpaceX started the flywheel that got the industry to this inflection point.[3] But it won’t be the only one turning it. Ultimately, to truly take advantage of space transportation, we’ll need many competing service providers at all different levels of payload size and capability.

The flywheel is already turning and has led to a higher volume of launches:

At some point in the near future we’ll be seeing a launch per day, with spaceports treated more like shipping ports: hubs of travel and commercial activity.

Current state of the industry

Before moving on to Part II, I want to quickly review the two main categories of payload currently being launched:

  1. Government research and exploration.
    1. International Space Station cargo. In the U.S. this encompasses missions for Commercial Resupply (sending equipment and supplies) and Commercial Crew (sending people).
    2. Other research and exploratory efforts. This includes missions like the recently landed Mars Perseverance rover, the James Webb Space Telescope (set to launch, after much delay, later this year on an Ariane 5 rocket), and the Europa Clipper (set to launch in 2024).
  2. Satellites. Communication and imaging satellites account for the vast majority of the space industry. Exploratory missions get all the publicity, but they currently make up a tiny fraction of payloads. Satellites’ dominance will continue, especially with broadband internet constellations.

    The use of satellites in particular is already a ubiquitous part of everyday life: from GPS navigation[4] to phone calls, TV signals, internet, and more. Satellite imagery as well: what was once a tool only for the militaries and intelligence agencies of large governments is now used by anyone with a smartphone.

    Satellites come in a range of sizes, from tiny CubeSats the size of a shoebox launched 100s at a time; to huge geostationary satellites that take up the entire payload of a rocket.[5] Most of this hardware — particularly for the larger ones — requires costly, sophisticated engineering and infrastructure. The full stack can include satellite manufacturers, operators, suppliers, and ground equipment. As costs come down, satellite sizes will shrink while launch frequency rises.

What’s to come

I hope I’ve convinced you that getting to space is about to get a whole lot easier.

In Part II, I’ll talk about the progress we will potentially see in space in the upcoming 10 to 20 years: commercial space stations, tourism, manufacturing, mining, exploration and more.


  1. The same is true for biotech in the upcoming decades. Areas like AI and Crypto will play big roles as well, but they’re not the thing. They’re the “thing that gets us to the thing”.
  2. Here’s a timeline of a few milestones:
    • 2008-12 — Commercial Resupply Services (CRS) contract of $1.6B to SpaceX and $1.9B to Orbital Sciences to deliver supplies to ISS. This helps fund Falcon 9 development.
    • 2012-05 — SpaceX Dragon capsule launches “empty” to perform tests and dock with the ISS, the first commercial spacecraft ever to do so.
    • 2012-10 — SpaceX CRS-1 mission sends Dragon with supplies to ISS. Dragon is the only cargo vehicle at the time capable of returning supplies to Earth.
    • 2014-09 — NASA awards final Commercial Crew Program (CCP) contract to SpaceX ($2.6B) and Boeing ($4.2B) for the capability to send 4-5 astronauts to the ISS. First flights for both initially planned in 2017.
    • 2020-01 — NASA awards Axiom Space the first ever contract to build a commercial module for the ISS.
    • 2020-04 — NASA awards lunar lander contracts to Blue Origin, Dynetics, and SpaceX under the Artemis program. The goal is to land “the first woman and the next man” on the Moon by 2024.
    • 2020-05 — Commercial Crew Demo-2 mission sends 2 astronauts to the ISS. These are the first astronauts on a commercial mission, and the first launched from US soil since the retirement of the Space Shuttle in 2011. 10 million people worldwide watched it live.
    • 2020-11 — Crew 1, the first operational flight, sends 4 astronauts to ISS. Due to delays and other issues, Boeing’s Starliner isn’t set to fly for another year.
    • 2020-12 — NASA awards Blue Origin a Launch Services contract to transport planetary, Earth observation, exploration and scientific satellites.
  3. Elon Musk is a master at many things, but one of the greatest is his ability to get massive, company- or industry-wide flywheels moving.
  4. Global Positioning System (GPS) was developed by the military in the 1960s but not made public until 1996. GPS is an extremely critical part of our current technical infrastructure. Every time you use your phone to navigate, order food, or track your run, it is pinging multiple GPS satellites to triangulate your exact location.
  5. Here’s a good visual size comparison of satellites:

Tech Stack Trees

Every product is built on and enabled by one or more technologies.

Understanding where a product fits on its higher-level tech stack is an important part of any long-term strategy or investment thesis.

The following is an exploration of tech stacks: what they are, how to model them, and what roles their components play. I also discuss what insights can be gained, such as potential opportunities to expand the business.

Stack Trees

Typically, a tech stack shows what infrastructure a technology is directly built on or requires. A SaaS startup for example could have a front- and back-end software stack with a cloud provider like AWS under it. The tech in focus is on top of the stack, with the supporting layers below it.

A tech stack tree is a higher-level version, branching both above and below the “layer” in focus. It shows both what the technology directly relies on and what relies on it. Stacks are fractal in nature, just like trees. An innovation spawns many others that use it, which further enable others, and so on.

A stack tree shows the relevant “slice” of the full dependency graph, going only a few nodes out. It looks something like this:

How to model a stack tree

Step 1: Determine the core tech. The first step is to decide what the actual technology in focus is. A technology in this case is a physical tool, process or system of other technologies that combine to do a job. It does not include businesses or social innovations. (A “business” is just a group of physical and social technologies united under a strategy — but we’re only concerned with the physical part here.[1])

Examples can range from the simple: hammers, glass, newspapers, or an assembly line process; to the more complex: CPUs, streaming video services, blockchains, smartphones, or nuclear reactors.

Step 2: Layers below. What are the primary technologies and processes needed to create and maintain the core tech? What does it need not only to function but to be sustainable? Clues can be found in:

  • Why now: What enabled the tech in the first place? Why wasn’t it widely used 20 or 50 years earlier?
  • Suppliers & major cost centers of businesses producing the tech. (Infrastructure, manufacturing tech, service networks…)
  • Supply chain & logistics: What gets the product/service to customers? (Transportation, shipping, internet…)
  • Distribution tech: What gets the customers to the product? (Retailers, advertising, search engines…)

Step 3: Layers above. What does the tech directly enable? It’s possible there are no layers here; many well-known innovations, like Netflix, don’t directly enable other tech.

  • What do other businesses use it for? Who is it a supplier to?
  • Is there anything in-between the technology and the ultimate end-user?
  • Is it a marketplace or multi-sided network that connects many groups of users?
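The three steps above can be sketched as a minimal data structure: a node with the layers below it (what it relies on) and above it (what relies on it). The example layer names here are hypothetical:

```python
# Minimal sketch of a stack tree node. The example layers are hypothetical,
# chosen to mirror the SaaS example from earlier in the post.
from dataclasses import dataclass, field

@dataclass
class TechNode:
    name: str
    below: list["TechNode"] = field(default_factory=list)  # dependencies
    above: list["TechNode"] = field(default_factory=list)  # dependents

def build_stack(core: str, below: list[str], above: list[str]) -> TechNode:
    """Step 1: pick the core tech. Steps 2 & 3: attach layers below and above."""
    node = TechNode(core)
    node.below = [TechNode(n, above=[node]) for n in below]
    node.above = [TechNode(n, below=[node]) for n in above]
    return node

saas = build_stack(
    "SaaS product",
    below=["cloud provider", "payment processor", "open-source frameworks"],
    above=["customer workflows built on its API"],
)
print([n.name for n in saas.below])
```

A real stack tree would recurse a few layers in each direction; the fractal nature of stacks means every node here could itself be the root of its own tree.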

Stack tree examples

Here are a few examples of stack trees from the tech industry, although they can be drawn for products in any industry:

(The Amazon “Vampire Squid” is the best example I can think of for traversing the stack, starting as an online marketplace and expanding outward in all directions: up, down, and sideways. I left out Prime, Music, Video, etc.)

What insights can be gained?

Companies are embedded in value networks because their products generally are embedded, or nested hierarchically, as components within other products and eventually within end systems of use. — Clayton Christensen

A tech stack tree is one way of looking at a company’s value network. This can lead to insights into where value flows, who captures it, and potential opportunities to expand the business.

What layers in the stack capture the most value?

Which technologies accrue most of the value depends on many things: how much value (productivity) is created relative to alternatives, the availability of potential substitutes, the size of the overall need, and other competitive advantages inherent to the business model.

One of the models Clayton Christensen uses makes sense to apply here: Where is the bottleneck in demand? In other words, where in the stack is the biggest difference between performance demanded and performance supplied? What needs to be better?

Nvidia is a good example here. They keep churning out GPUs with higher capabilities and the market keeps needing more. Supply hasn’t kept up with demand and that’s likely to continue for some time. This bottleneck (along with other factors) allows the GPU layer to capture a lot of value.

Are there opportunities to expand into adjacent technologies?

Amazon (see stack above) is the prototypical example here. They started as an online marketplace with some fulfillment operations, and over time have expanded in all directions.

In more traditional business thinking, you consider expanding vertically into suppliers and customers, or horizontally across industries. Traversing a tech stack is similar, but to me it’s more focused on the true technological and needs-based relationships. Traditional business thinking would never have led Amazon to expand into internet infrastructure via AWS.

Of course, expanding for the sake of it is never a good strategy. You have to ask:

  • Do our current products or processes give us an advantage here?
  • How much value does the layer capture? (Is it a bottleneck in demand?)
  • Are there existing barriers to entry, and if so, does our position in other stack layers help overcome them?
  • Does this improve the outcomes for our current customers?
  • Will expansion endanger our relationships with partners or customers?

Short case study: Food delivery apps

The core tech here is a mobile or desktop app where you can order food from many local restaurants and get it delivered within ~1 hour. DoorDash, UberEats, Postmates, etc.

Layers below: What are their major cost centers? Restaurants and delivery drivers. What enabled delivery apps? Primarily ubiquitous smartphones and access to GPS-based navigation. Restaurants also need some way to communicate, whether by phone or Wi-Fi-based tablets, and must be able to package food in proper take-out containers (plus potentially many other tools to manage operations).

Layers above: What needs delivery apps to run? Cloud kitchens, which operate large, strategically located kitchens that can make food for many different branded “restaurants”. Recently a further layer was added with the concept of pop-up branded chains, which use the cloud kitchen and delivery infrastructure.

What captures the value? In the stack above, smartphones capture far more value than any other tech — but they’re a platform with thousands of other use cases. In this case we just want to focus on value flow within the food delivery market. It may not be clear at first who captures more value: the delivery apps or the restaurants, given companies like DoorDash are losing so much money. But it’s clear that restaurants are not a bottleneck in demand — so it’s likely the apps that capture more value. And it seems their unit economics bear this out.

Opportunities for expansion? The clearest opportunity within the tech stack is cloud kitchens. This could potentially alienate some restaurant partners, but restaurants are so fragmented it shouldn’t matter. I think this has a lot of potential given captive customers, synergies with the delivery app, and lower costs from economies of scale and not having to run normal restaurant operations.

Functions in the stack

How would you classify technologies in the stack? I think it’s most informative to categorize by what pattern (or archetype) they match in the greater ecosystem. These are functions that can exist in any industry or stack tree: Platforms, protocols, etc.

I’ll follow up with another post including examples of different tech functions and stack patterns.

To be continued…

Thanks to Leo Polovets and Jake Singer for helpful feedback on this post. Header photo from veeterzy on Unsplash.


  1. Physical technologies are “methods and designs for transforming matter, energy, and information from one state into another in pursuit of [goals]“. There are also social technologies (organizational processes, culture, values, incentive systems, memes, etc.) that evolve and build off of each other over time. (Definitions from The Origin of Wealth, by Eric Beinhocker.) ↩︎

Managing Modes of Effort

This is the second essay in the Build Series.
The first was Wayfinding Through the Web of Efforts.

Making progress — in society, a team, or life — isn’t straightforward most of the time. Knowing where you want to go is generally the first step, but the destination can be very broad. And even if there’s a specific goal, the path to get there may be very indirect.

As strategy transitions into execution, it’s important to understand how these attributes affect progress. If an effort is managed or guided the wrong way, it may be doomed to failure no matter how difficult it is.

Project management as a discipline works great, but not with everything. Managing large, more uncertain endeavors in particular has been problematic recently. The wide-scale ongoing effort to fight COVID-19 and return the world to normalcy brings this challenge front and center.

Why are certain efforts harder than others, and how do we navigate them? How were we able to accomplish large-scale collaborative efforts like the Apollo program and the Manhattan Project, but can’t do the same for curing cancer?

If we want to build, we need to understand the answers to these questions. The following is a framework for classifying efforts by the certainty of both their objectives and the paths to achieve them. Knowing which “mode” an effort is in is critical to understanding and managing its progress.

Classifying efforts

How do we classify efforts into modes? The best paradigm I’ve come across is the how/what quadrants.

In his 1994 book “All Change!”, Eddie Obeng described 4 different types of projects along with the difficulties and peculiarities of each: quests, movies, painting by numbers, and fog. It turns out putting a project on both the know how and the know what scales tells you a lot about how it should be managed.

Venkatesh Rao explored the concept much further in his essay on the “Grand Unified Theory of Striving”, pulling in other ideas like convergent thinking, critical paths, and lean methodology. Venkat’s visualization of the critical paths and point frontiers of each quadrant is a particularly insightful way to think about the concept.

Defining the dimensions and modes

Here’s how I’d describe the axes of the 2×2:

  • “Why”-axis: Know what vs. don’t know what. Do you know what the goal is? How specific is the desired outcome? Not knowing the goal (or having a very broad idea) is in the realm of divergent thinking: there are many potential solutions and progress can be non-linear. It’s the exploration phase of the explore vs. exploit tradeoff, searching for goals or areas of value.

    Knowing what and why is in the realm of convergent thinking: there is a single “correct” solution or destination. It squarely aligns with Peter Thiel’s deterministic approach of viewing the future: “There is just one thing—the best thing—that you should do.”

  • Horizontal-axis: Know how vs. don’t know how. Is it known how to accomplish the goal? Are the bottlenecks or resource-sensitive parts generally understood? When you know exactly how to accomplish something, there is a clear critical path[1] (the red lines in the diagram below). Other paths of effort may still be required, but they are oblique with more slack, running parallel to the critical path.

    Knowing how allows you to operate lean because you can—in theory—use the least amount of resources necessary to get the job done. In the fat mode of operation, you don’t really know how to reach your goal. You can’t be efficient because you don’t know how to be, and there will be a lot of slack in the system. The path is determined opportunistically as you go, with critical paths only in smaller subsections.

Each quadrant can be described as follows:

Continue reading “Managing Modes of Effort”

Build Series: Frameworks for Effort

In April, Marc Andreessen put out the call to build. It was in response to our failure to control and mitigate the effects of Covid-19 — institutions on every level were unprepared for the pandemic, and have continued to show their inability to quickly find and scale solutions.

But more than anything it was in response to our failure to build in general. We chose not to build, he claims. “You see it throughout Western life, and specifically throughout American life.” The problem isn’t a lack of resources or technical ability — it’s with supply and demand of desire. Demand is limited by our ambition and will to build. Supply is limited by the people and organizations holding it back.

Andreessen is generally an optimist, which is why I see his essay as positive in overall tone. But it was also somewhat of a mea culpa. Andreessen has for years been on the other side of Peter Thiel’s view of modern technical stagnation.

Thiel’s view may be too pessimistic, but there’s a kernel of truth to it. If you’re familiar with the history of tech and innovation, something feels different. The late-1800s to mid-1900s had explosions of innovation in fields from medicine to consumer products, transportation, energy, communication, computing, food, and more.[1]

This is the introduction to a series of ongoing essays centered around the question:

What frameworks can help us build more, better?

And further attempting to investigate the answers to the following:

  • What are the best ways to approach solving big, complex problems?
  • Why are certain efforts so much harder to achieve than others?
  • How are these efforts best managed at every level?
  • How do we build things faster? (Without sacrificing quality or safety.)
  • What is holding us back from building more?
  • How do we overcome these barriers?

Many of these lessons apply not just to “building” in the physical sense, but for solving problems, scientific discoveries, improving systems, and making progress overall. Building in a way is symbolic. It represents making big, necessary changes to move humanity and our planet forward. This can be building something physical or digital, pushing the boundaries of fundamental research, or trying new uncertain ways to solve problems.

It doesn’t even have to be anything new or innovative per se. Andreessen gives many examples of expanding existing tech: housing, infrastructure, education, manufacturing. Even preservation and restoration — in many ways opposites of building — can still apply. In the early 1900s, President Teddy Roosevelt established over 230 million acres of public lands and parks. This added an incalculable amount of value to future generations. I would love to see E.O. Wilson’s Half-Earth Project executed at scale. This is in the spirit of building: making progress and pushing humanity toward a better future.

Here’s a preview of some of the specific topics I want to explore in the series: Ladders of Abstraction (why/how chains), Oblique vs. direct approaches, Modes of effort (why/how quadrants), traversing fitness landscapes, the explore vs. exploit tradeoff, the role of trust in building things fast, forcing functions, and the specific methods we used to accomplish large-scale collaborative efforts such as the Apollo program, the Manhattan Project, etc.

Table of Contents

  • Intro — Build Series: Frameworks for Effort
  • Part I: Lay of the Land
    • Wayfinding Through the Web of Efforts [8 minutes] — Putting goals on a ladder or hierarchy of abstraction. Defining efforts and their multi-scale nature. Determining the hierarchy of efforts using a why/how chain. The difference between making progress directly and obliquely, and the consequences of misplaced directness.
    • Managing Modes of Effort [10 minutes] — A framework for understanding how managing progress differs across scales of effort. Classifying efforts into four modes on the how/what quadrants. Defining the modes and how they fit on the hierarchy of abstraction. A Covid-19 case study. How to manage an effort based on its mode.


  1. What was different about this era? The following is a good rundown from Vaclav Smil’s book “Creating the Twentieth Century” on the remarkable attributes of the pre-WWI technical era:
    • The impact of the late 19th and early 20th century advances was almost instantaneous, as their commercial adoption and widespread diffusion were very rapid. A great deal of useful scientific input that could be used to open some remarkable innovation gates was accumulating during the first half of the 19th century. But it was only after the mid-1860s when so many input parameters began to come together that a flow of new instructions surged through Western society.
    • The extraordinary concentration of a large number of scientific and technical advances.
    • The rate with which all kinds of innovations were promptly improved after their introduction—made more efficient, more convenient to use, less expensive, and hence available on truly mass scales.
    • The imagination and boldness of new proposals. So many of its inventors were eager to bring to life practical applications of devices and processes that seemed utterly impractical, even impossible, to so many of their contemporaries.
    • The epoch-making nature of these technical advances. Most of them are still with us not just as inconsequential survivors or marginal accoutrements from a bygone age but as the very foundation of modern civilizations. ↩︎

The Scale of Large Projects

$100 million +

  • Midsize commercial airplane — $120m
  • Big budget video game — $150m
  • F-22 Raptor jet — $157m
  • iPhone R&D (2007) — $185m
  • Titanic (1912) — $190m
  • Big budget movie — $250m
  • SpaceX Falcon 9 v1 R&D — $350m
  • Empire State Building (1931) — $400m
  • Modern cruise ship — $750m
  • Hoover Dam (1936) — $863m

$1 billion +

  • Modern sports stadium — $1.3b
  • Modern skyscraper — $1.5b
  • Space Shuttle launch — $1.5b
  • Erie Canal (1825) — $4b
  • Human Genome Project (2003) — $5b
  • Panama Canal (1912) — $9b
  • Hubble Space Telescope (1990) — $9b

$10 billion +

  • Global Positioning System (1989) — $10b
  • Large Hadron Collider (2009) — $13b
  • Great Pyramid of Giza (~2500 BCE) — $20b
  • Three Gorges Dam (2009) — $25b
  • Transcontinental railroad (1863) — $30b
  • Manhattan Project (1945) — $30b
  • F-22 Raptor development (1997) — $42b
  • Great Wall of China (220 BCE) — $50b
  • SR-71 Blackbird development (1964) — $90b

$100 billion +

  • International Space Station — $150b
  • Apollo program (1969) — $200b
  • U.S. Interstate Highway System (~1980) — $500b

Many of these numbers are rough estimates. Post-1900 figures were adjusted for inflation where they weren’t already; pre-1900 figures were adjusted via per capita GDP to more accurately reflect the scale of the undertaking.

If it were possible, the best metric for comparing the scale of projects would be something like “man-years + value of raw materials (possibly in ounces of gold)”. This is especially true for projects like the Great Pyramid, the Suez Canal, the Great Wall of China, or the Manhattan Project, which used mostly unpaid or low-paid labor.
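The two adjustment rules described above can be sketched as follows; the input figures are illustrative, not the exact data behind the list:

```python
# Sketch of the two adjustment rules used for the list above.
# The canal example inputs are illustrative, not the list's actual source data.

def adjust_for_inflation(nominal: float, cpi_then: float, cpi_now: float) -> float:
    """Post-1900 projects: scale by the change in the price level."""
    return nominal * cpi_now / cpi_then

def adjust_by_gdp_per_capita(nominal: float, gdppc_then: float, gdppc_now: float) -> float:
    """Pre-1900 projects: scale by GDP per capita to reflect relative scale."""
    return nominal * gdppc_now / gdppc_then

# Illustrative: a $7M canal built in 1825, with GDP per capita of ~$110 then
# vs ~$65,000 today, scales to roughly $4 billion in modern terms.
canal = adjust_by_gdp_per_capita(7e6, 110, 65_000)
print(f"${canal / 1e9:.1f}b")  # → $4.1b
```

The GDP-per-capita method produces much larger figures than pure inflation adjustment for old projects, which is the point: it better captures what share of a society’s effort the project consumed.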

Related: The Tallest Skyscrapers in the World, Pyramids vs. Skyscrapers

Product Study: Falcon 9

Last week I was outside of Vandenberg Air Force Base to watch the launch of SpaceX’s Falcon 9 rocket. (It was perfect weather and an amazing experience for my first launch!) To commemorate it, this is another one of a handful of product case studies I wrote to help understand successful product launches.

Falcon 9 was finished in early 2010, and had been in development since 2005. Its first flight occurred on June 4, 2010, a demonstration flight to orbit where it circled Earth over 300 times before reentry.

  • 1st flight to ISS: May 22, 2012
  • 1st cargo resupply (CRS-1): October 7, 2012
  • 1st successful commercial flight: September 29, 2013

Development costs for v1.0 were estimated at $300M; NASA estimated that under traditional cost-plus contracting they would have exceeded $3.6B. Total combined costs for F9 and Dragon through 2014 were ~$850M, with $400M of that provided by NASA.

By September 2013, the SpaceX production line was manufacturing 1 F9 every month.

(1) Value created — Simply describe the innovation. How did it create value? 

The Falcon 9 is a two-stage rocket that delivers payloads to Earth orbit or beyond. It’s a transportation vehicle to space. F9 drastically reduced launch costs, allowing NASA and small satellite companies to send payloads at a fraction of the cost.

(2) Value captured — Competitive advantages, barriers to entry. Why didn’t incumbents have a reason to fight them?

  • Ahead on the learning curve — highly advanced, experiential, expert knowledge
  • Capital and time barriers — lots of money and time needed to get to scale
  • F9 was a disruptive innovation, built from the ground up at low cost. Incumbent launch providers had no reason to start from scratch and lower their margins when they held strong (mainly cost-plus) contracts with existing customers. The industry was viewed as highly inelastic, with little demand at the low end.

Continue reading “Product Study: Falcon 9”

Product Study: iPhone

One of a handful of product case studies I wrote last year to help understand successful product launches.

Apple’s iPhone was announced January 9, 2007 and released June 29, 2007. It was $499 for the 4GB version and $599 for 8GB. After 8 years it had captured 50% of the U.S. smartphone market and >66% of sales, with 100 million users.

(1) Value created — Simply describe the innovation. How did it create value?

The iPhone is a pocket computer. It has typical phone capabilities, including calls and text messaging, along with cellular internet connectivity. Differences from other smartphones at the time:

  • Large multi-touch screen with no tactile keyboard or stylus, allowing full use of the screen when a keyboard isn’t needed
  • Ability to browse normal, non-WAP websites (with easy zooming via multi-touch)
  • Ability to run desktop-class applications
  • Multiple sensor inputs — proximity, light, accelerometer

(2) Value captured — Competitive advantages, barriers to entry. Why didn’t incumbents have a reason to fight them?

  • Distribution:
    • Extension from existing Apple network — iTunes, Mac OS, iPod.
    • Brand attachment to Apple.
    • Economies of scale exist with integration and complexity of engineering.
  • Switching costs once owning an iPhone.
  • Strong habit attached to usage many times / day — strong attachment to UX.
  • Phone makers at first saw it as a toy for rich people. Computer makers didn’t see it as a computer (low-end disruption).

Continue reading “Product Study: iPhone”

Mashgin: The Future of Computer Vision

About a year ago I invested in and joined a startup called Mashgin. In this post I want to talk a little about what we’re working on.

Mashgin is building a self-checkout kiosk that uses cameras to recognize multiple items at once without needing barcodes.

The current version of the kiosk is designed for cafeterias, where customers can slide their tray up and everything on it is recognized instantly. Excluding payment, the process takes around 2 seconds. No more waiting for a single line held up by a price check!

But retail checkout is just a package around Mashgin’s core technology. We believe there is an opportunity to apply recent technical advancements to many fields. Advancements such as:

  • Smartphone dividends — cheap sensors and ubiquitous, miniaturized electronic components
  • Cheap parallel processing power including low-cost GPUs
  • An explosion in collaborative, open-source software tools
  • Machine learning methods, in particular convolutional neural networks (a byproduct of the 2 preceding trends)
  • Cheap cloud infrastructure

Chris Dixon talks more about some of these trends in his post What’s Next in Computing?

So how is Mashgin applying this technology?

Adaptive Visual Automation

Face swap: billionaire edition

Computer vision transforms images into usable data (descriptions) using software. If cameras are the “eyes” of a machine, computer vision is the brain’s visual cortex, processing and making sense of what it sees.

When computers know what they’re looking at, it opens up a world of potential. You can see it in existing use cases from facial recognition in Facebook photos (…or face swap apps) to Google Image Search and OCR. Newer, much more sophisticated applications include driverless cars, autonomous drones, and augmented reality.

Gradient Descent
A visual example of using gradient descent (the reverse of hill climbing in a fitness landscape) as part of the learning process of a neural network

These recent applications tend to be more complex, and as a result use machine learning in addition to traditional image processing methods. Machine learning, and in particular deep learning through neural networks, has changed the game in many areas of computer science, and we are just beginning to see its potential. ML can simplify a large amount of data into a single algorithm. As the name implies, it can learn and adapt to new information over time with little or no “teaching” from engineers.
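The gradient-descent idea illustrated above can be sketched in a few lines of Python. This is a deliberately toy example — a one-dimensional quadratic loss and a hand-picked learning rate, not an actual neural network — but the downhill-stepping mechanic is the same one networks use to learn:

```python
# Minimize a loss by repeatedly stepping opposite its gradient
# (the reverse of hill climbing in a fitness landscape).

def gradient_descent(grad, x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step downhill, proportional to the slope
    return x

# Toy loss f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
# The minimum is at x = 3, and the iterates converge toward it.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # 3.0
```

In a real network, `x` becomes millions of weights and `grad` is computed by backpropagation, but the update rule is this simple.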

Both CV and ML can be applied to many fields, but one of the biggest immediate needs is in automation. There are a surprising number of visual tasks, simple for humans, that are ripe for automation. These include industrial use cases in manufacturing and distribution, and consumer use cases in household robotics and the relief of everyday bottlenecks.

I call the above combination adaptive visual automation: using machine learning to automate vision-based tasks. Although relatively new, this combination covers a large and quickly growing class of real-world problems. Autonomous cars (and especially trucks) are a good up-and-coming example that will have huge ramifications.

Mashgin’s future

Mashgin uses adaptive visual automation to improve the speed, accuracy and cost of applications in recognition, measurement, and counting in a closed environment. That was a bit of a mouthful, so here’s the short version: Mashgin wants to make visual automation intelligent.

There’s a broader category of AI vision companies whose purpose is giving computers the ability to understand what they see. Mashgin is a subset of this group, focusing on automating well defined real-world problems.

There are further subsets such as eliminating bottlenecks in everyday circumstances — speeding up checkout lines being one example. In many of the activities you do on a daily basis, intelligent automation has the ability to save a huge amount of time and money.

Retail checkout is a big market (even for just cafeterias) but it only scratches the surface of the value Mashgin will eventually be capable of. We have already established a foundation for applying recent advancements to these problems and it will only get better from here.

Atlastory: Mapping the history of the world

Certain ideas are “inevitable” over time. Paul Graham calls them “[squares] in the periodic table” — if they don’t exist now, they’ll be created shortly. It’s only a matter of when, not if.

I believe that Atlastory is one of those ideas. The following is a long post about a project I’ve been passionate about for some time now and am currently in the process of winding down.

The Idea

Atlastory is an open source project to create an interactive map that chronicles the history of life on earth. It’s a “Google Maps” for history. The ultimate goal is the ability to see what the world looked like 50, 200, 1000+ years ago. It was inspired by OpenStreetMap & Wikipedia: combining historic maps with cultural & statistical data.

Atlastory map in action

I started Atlastory at first because I’m a fan of both history and good data visualizations. I was surprised something like this didn’t already exist and thought that it would be an amazing educational tool.

Maps are one of the best ways to clearly show an enormous amount of information. Since everything in the past took place at a certain time and location, maps are an obvious choice to visualize that knowledge. Understanding history requires seeing changes and interactions over time, and a four-dimensional map allows this.

“To envision information—and what bright and splendid visions can result—is to work at the intersection of image, word, number, art.” — Edward Tufte

Good design will be a key aspect of the final product. Good information design can communicate a huge amount of knowledge in a small window of time or space. Great information design has a high amount of density and complexity while remaining completely understandable.

The Vision (version ∞)

Atlastory’s purpose is to improve understanding of the past by organizing and visualizing historic knowledge.

My vision for Atlastory was that one day it would become a tool like Wikipedia that’s used regularly around the world. A journalist could use it to go back 20 years to see the geography and timeline of a major world event. A student could use it to go back 20,000 years to see the expansion of human culture across the globe. A climatologist could use it to visualize the historic overlap of population growth with changes in global climate patterns.

Wikipedia organizes information by creating a searchable network of interconnected articles that combine text and other multimedia. Atlastory can be the first medium that allows completely visual navigation, displaying information at a much higher density and level of interactivity.


Imagine students in a classroom learning about World War II. You’d be able to see the country borders of Europe as they existed in 1942. Drag the timeline, and see the borders change as the years go on. Turn on an overlay of population density or GDP per capita and see the flow of activity throughout the war. Zoom in and see the troop movements of a pivotal battle.

The visual interactivity would make it much more enticing for people, young and old. Almost game-like in terms of exploration and discovery.

Eventually, the timeline could go back far enough that you’re able to see continental drift and other pre-historic geographic or environmental changes.

Map content

Maps can be broken down into a few different types:

  • Physical — shows the physical landscape including mountains, rivers, lakes.
  • Political — sovereign, national and state boundaries, with cities of all sizes. The typical world map you see will be political with some physical features.
  • Road — shows roads of various sizes along with destinations and points of interest. Google Maps & other navigation apps fall into this category.
  • Statistical — shows statistics about human populations such as economic stats, population density, etc.
  • Scientific — thematic maps that can show climate, ecological regions, etc. (see the climate map below)
  • Events — shows how a specific event played out geographically, like WWII or Alexander the Great’s conquests.

Climate patterns

Any map type that has enough data to span long periods could eventually go into the Atlastory system. Event, thematic, statistical, and scientific maps could all seamlessly layer on top of the main “base map”.

Base map

The Atlastory base map should be an elegant combination of three map types: physical (basic landscape features), political (sovereign and administrative boundaries), and cultural (see below). Major roads and infrastructure would be added only after a worldwide “structure” of the base map was created.

Importantly, map creation should be top down, from global to local. The purpose of an Atlastory map is not navigation, it is understanding of history. Creating a global structure will also provide context and make it easier to interest other users/contributors.

Cultural cartography

Most world maps made today (of the present time, or of the last few hundred years or so) are of the political variety. But what happens when you go back a few thousand years? And what about areas of the world that, even now, aren’t necessarily defined by geopolitical boundaries?

The solution is mapping cultural regions. Culture, in this case, means human societies with a common language, belief system, and norms. “A cultural boundary (also cultural border) in ethnology is a geographical boundary between two identifiable ethnic or ethno-linguistic cultures.”

A cultural map would have different levels, just like political maps: from dominant cultural macroregions to local divisions between subcultures or classes within a society (blue collar vs. white collar, etc.).

Combining cultural cartography with typical map types allows for a much better understanding of both modern and ancient history. Culture plays a major role in world events, and limiting the map to only defined borders paints an inaccurate view of history.

Cultural regions

(Notice any overlap between cultural regions and the climate regions in the map above it?)

The Tech

The technical infrastructure behind Atlastory has a few basic components:

  1. A database of nodes (latitude/longitude points) organized into shapes, layers, types, and time periods.
  2. An API that manages, imports and exports data from the database.
  3. A crowdsourced map editor interface (like iD for OpenStreetMap, but designed specifically for top-down, time-based editing).
  4. A map rendering service that turns raw map data from the database into vector tiles that can be styled for viewing.
  5. The map itself: a web interface to view and navigate the maps.
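As a rough sketch of component 1, the node/shape/layer/time-period model might look like this in Python. The class and field names here are my own illustration of the idea, not Atlastory’s actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    lat: float  # latitude of the point
    lon: float  # longitude of the point

@dataclass
class Shape:
    kind: str        # shape type, e.g. "boundary" or "river"
    layer: str       # map layer, e.g. "political" or "physical"
    start_year: int  # first year the shape is valid
    end_year: int    # last year the shape is valid
    nodes: List[Node] = field(default_factory=list)

def shapes_at(shapes, year):
    """Return only the shapes that existed in a given year —
    the query behind dragging the timeline."""
    return [s for s in shapes if s.start_year <= year <= s.end_year]

# A boundary that exists from 1701 to 1918 shows up at 1850 but not 1950.
prussia = Shape("boundary", "political", 1701, 1918,
                [Node(52.5, 13.4), Node(54.7, 20.5)])
print(len(shapes_at([prussia], 1850)))  # 1
print(len(shapes_at([prussia], 1950)))  # 0
```

Attaching the validity interval to every shape is what makes the map four-dimensional: rendering a year is just filtering the database by that year before tiling.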

Most of the components would be built from existing open-source tools created by organizations like OpenStreetMap, MapBox, and CartoDB. There has been a lot of technical innovation in this field over the past few years, which is one of the main reasons something like Atlastory is now possible to build. (Although given what I know about the requirements, it’s still very challenging.)

Read more about the technical requirements…

The current status and future of Atlastory

I’ve been working on this as a side project for more than 3 years now. Originally I imagined being able to quickly find a way to profit from the service. But as development dragged on and other commitments began taking up more of my time, I realized I’d never be able to finish it alone.

Earlier this year I joined Mashgin, a startup in the Bay Area, as a full-time “Generalist.” My spare time completely dried up and I decided everything needed to be completely open sourced and distributed to anyone interested in the project.

Due to personal time constraints, I can’t continue with it so I’m looking for others who are interested. This could mean taking over / adapting the codebase or using other means to pursue the idea. See below for more details on what’s currently done. Although many of the back-end components are functional, the infrastructure is in a rather unusable state right now.

Please contact me or leave a comment below if this strikes your curiosity or you know anyone else who would be interested. I’m happy to answer any questions.


The dawn of immersive storytelling

From a previous #tweetstorm:

Immersive storytelling will be a big industry in the near future: movies viewed on Oculus Rift, dome-like cinemas, or interactive games. We have companies like Jaunt, Condition One & (consumer) making 360° cameras that will be used for filming.

A new visual “grammar” will have to be discovered by filmmakers through trial and error (i.e. no fast cuts, super close-ups, etc.). Parts of the legacy film industry will rebel at first, as they have over the last 100 years since storytelling evolved from live performances to filmed, pre-recorded stories.

Just like audiences were frightened at the sight of a train barreling towards them in early theaters, there will be a learning curve for immersive experiences. Early players of demo games for the Oculus Rift have been scared to the point of ripping their headsets off. Dome cinemas could be the social alternative to VR headsets. (If you’ve ever been on Disney’s Soarin’ Over California ride, that’s an example.)

Technology-wise, I feel a complete 360-degree field-of-view (FOV) like this Jaunt setup won’t be the way to go. There has to be some direction to the audience’s attention. A complete FOV is too immersive and incompatible with viewers’ prior experiences. Maybe at some point down the road. Something like a 180-220 degree horizontal FOV plus 180 degrees vertically would allow some freedom of motion (immersion) while still directing the view, with surround sound.

There is lots of experimentation ahead in the near future in both technology and storytelling grammar. I look forward to both observing and participating.