I’ve been wanting to explore doing something in the audio/podcasting area for a while now. There’s plenty of good interview-focused shows out there so I didn’t want to go that route. Taking inspiration from Stratechery, I settled on doing an audio version of selected blog posts. It’s just an experiment at this point but I’ve been enjoying the creation of the first few episodes so I’ll see how things go.
The first full episode is already up: “The Future of Space, Part I: The Setup”. I’ll follow with the audio version of Part II in the next week. (Though similar to the blog, new episodes will be sporadic.)
Although the podcast is available, it’s not yet discoverable in the Apple Podcasts app. [Update: It is now available in search and in the link below.] You can subscribe using one of the buttons below, or click the “RSS Feed” button, copy the feed URL, and paste it directly into your podcast app. (IMO, Overcast is the best player out there, so I’d highly recommend it.)
A 10x reduction in cost to orbit has already started to change things. The next 10x reduction will lead to outcomes and use cases much harder to comprehend or predict. It would have been hard for anyone in the late 1800s to predict what drastically lower costs of energy and electricity would eventually bring. Or for anyone in the 1970s to predict the consequences of abundant computing power and ubiquitous global communication (Reddit? NFTs? Protein folding?).
But we can try.
This summary is focused on some of the changes we’re likely to see in the next 5 to 20 years. A lot can happen in that time frame. For reference, it’s taken SpaceX only 19 years to accomplish what they have. But progress compounds and is exponential — especially so once a tipping point like this has been crossed. The change we’ll see in the next 20 years will dwarf that of the last 20.
(Quick note: This isn’t meant to be comprehensive. It’s a highlight of the new areas I find most interesting, and doesn’t include anything on the two biggest space segments: communication and Earth observation. Although there are plenty of interesting potentials here — like globally available high-speed internet [Starlink] or ubiquitous, near-real-time worldwide monitoring [Planet].)
Transportation & Launch Services
The progress of SpaceX, the current leader here, was detailed in Part I. Given the Falcon 9’s low costs, it’s likely to be the preferred choice for medium-sized payloads, and even smaller payloads with rideshares.
Until now, SpaceX has self-funded their Starship super-heavy launch vehicle. That changed a few weeks ago when NASA announced that Starship had won the contract to land humans on the Moon again. This is huge. The contract will fund $2.9 billion of development costs and speed up the timeline for Starship to become human-rated. At the pace of their current development, Starship is on track to become fully operational within 3 years. This should keep SpaceX the leader for heavy and super-heavy launches for some time.
When it comes to delivering humans, the other Commercial Crew competitor, Boeing, is more than a year behind after testing mishaps. Blue Origin may be the next best bet for heavy-launch vehicles, and is a dark horse candidate given its potential funding from Jeff Bezos. There are multiple smaller upstarts like Rocket Lab, Relativity Space, and Astra at the low end of the market, potentially moving disruptively upward. SpaceX blazed the path for these rocket companies, showing how far costs can come down and proving that lower prices can expand market size.
Also included in this category are spaceports. Most spaceports are currently owned and operated by governments — like Kennedy Space Center at Cape Canaveral, Florida, or Vandenberg Air Force Base in California. This will start to change in tandem with the growth of commercial space.
Spaceport America in New Mexico is an example of an all-commercial spaceport, similar to most airports in that it’s owned and operated by the state. Rocket Lab built their own spaceport, Rocket Lab Launch Complex 1, in New Zealand. SpaceX’s R&D facilities in Boca Chica, Texas are now being converted into not only a spaceport, but a township to support Starship launches. Given Starship’s eventual importance, there’s no doubt this will become a hub of activity. Launches, and more importantly landings, will also take place on converted offshore oil rigs.
Most current space activity takes place in Earth orbit. As it becomes cheaper to leave the influence of Earth’s gravity, we’ll start expanding further out into the Solar System. The best staging point for this expansion isn’t spaceports on Earth — it’s the Moon and lunar orbit.
The Moon has one-sixth the gravity of Earth and no atmosphere. This means the energy (or delta-v) required to launch from its surface is much lower. The Moon also contains an estimated 600 million tons of ice, and its soil is 40-45% oxygen by mass. These raw materials can be used to produce propellants for launch, along with water and breathable oxygen; practical extraction methods could recover nearly 100 grams of oxygen for every kilogram of soil. A Moon base is not far off in our future.
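The energy advantage is easy to quantify: escape velocity is √(2GM/r), so plugging in rough published values for the Earth and Moon shows why launching from the lunar surface is so much cheaper energetically. A quick sketch (all values approximate):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(mass_kg, radius_m):
    """v = sqrt(2GM/r): the speed needed to escape a body's gravity from its surface."""
    return math.sqrt(2 * G * mass_kg / radius_m)

v_earth = escape_velocity(5.972e24, 6.371e6)  # ~11.2 km/s
v_moon = escape_velocity(7.342e22, 1.737e6)   # ~2.4 km/s
```

Since the energy required scales with velocity squared, that roughly 4.7x velocity gap means escaping the Moon takes on the order of 20x less energy per kilogram than escaping Earth.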
On the Moon, concepts like space elevators or skyhooks also become possible. Imagine a structure — similar to piers going into the ocean — extending from the lunar surface into orbit. Satellites can be sent up the elevator into orbit and ships can “dock” at the top, where supplies can be loaded with much less energy cost.
Once other infrastructure like commercial space stations and lunar bases get set up, I think we’ll start seeing regularly scheduled launches to specific destinations. From quarterly launches to monthly, weekly, and eventually daily. (Questions to ponder: Can rockets fit in the existing intermodal shipping system? What would a new space-specific intermodal container look like?)
Expansion of life across our solar system and beyond has been a dream of mine since childhood. Of course, this isn’t uncommon among other sci-fi enthusiasts, or anyone who grew up knowing we’ve sent humans to the Moon but haven’t sent them back in nearly 50 years.
Space is fascinating for many reasons. It’s a frontier in every sense: physically, technically, even socially. It’s at the bleeding edge of what humanity is capable of. “Looking to the stars” and “shooting for the moon” are common idioms because space has defined our limits for generations.
Now (finally!) the technical and business tailwinds are coming together to make it possible. The cost and ease of getting to space are about to improve by many orders of magnitude. This will drive the space industry to be one of the biggest sources of growth over the next 10-20 years. It will make existing technologies cheaper and more ubiquitous, like allowing worldwide high-speed internet in even the most remote, rural areas. It will also open up a host of new possibilities previously only imagined in science fiction.
This is the first of a two-part essay on the upcoming future of the space industry. I’ve been closely following SpaceX’s progress in particular since their first launch of the Falcon 9 in 2010, so I’m excited to finally write about it.
TLDR: SpaceX has pushed the cost to orbit down by 10x, and will push it down by another 10x within 5 years. Along with further commercialization and government funding, a threshold has been crossed.
The success of commercial launch services puts the space industry in the same place as the early days of railroads in the 1800s or commercial ocean shipping in the 1600s. The key phrase here is early days: things are really just getting started.
The “why now?” can be reduced to one chart — the average cost to get 1 kilogram to orbit:
In the next section I’ll go over the reasons why this makes such a big difference. But first, how did it happen? As should be evident by the chart, this is essentially the story of one company — SpaceX.
The driving ambition for Elon Musk when he founded SpaceX in 2002 was to drastically reduce the cost of escaping Earth’s gravity. Their “MVP” was the Falcon 1, a single-engine rocket that could launch small satellites. The Falcon 1 launched only 5 times, with just the last 2 succeeding. Having proven viability, SpaceX quickly moved on to production of the Falcon 9, a scaled-up version with nine Merlin engines, eventually capable of delivering over 22,000 kg to Low Earth Orbit (LEO). Here’s the price progression of each SpaceX rocket, starting from the base of what a conventional rocket costs:
Driving the first order-of-magnitude reduction in cost are the following:
Better incentives. Traditional government contracts were cost-plus, which incentivizes contractors to increase their costs: higher costs mean more profit, plus more admin overhead to track expenses. With fixed-price contracts, companies are incentivized to drive costs down as much as possible.
Standardization of launch config. Rather than customized configurations for each launch and customer, SpaceX “productized” the Falcon 9, allowing for cheaper setups and repeated processes.
Full reusability. 100% of Starship will be reusable, allowing dozens (or hundreds?) of uses for each stage and engine.
More launches. The more launches you can sell in a year, the less markup you need to charge to cover admin costs. Economies of scale and purchasing power are also achieved in raw materials and fuel production.
Refuel in orbit. Starship can park in orbit while it’s refueled by up to 8 other launches. This makes payload capacity to orbit the same as payload capacity to nearly anywhere in the solar system. Imagine what we can do with the ability to send over 100 tons to the Moon, Mars, or Europa.
Government funding, particularly from NASA, has been a key enabler. Without these contracts it would have been very difficult for SpaceX to fund R&D, and they’ll continue to play a key role for SpaceX and other commercial space providers. In recent years NASA has stepped up its commercial contracts significantly, and with further falling costs this is likely to continue. (See the footnote for a list of recent milestones.)
Expensive launches aren’t just costly in their own right: they lead to cost inflation of everything else. If it costs $100M to get a satellite to orbit, cutting the satellite’s development cost from $10M to $5M shaves less than 5% off the total mission. So why not over-engineer, paying up for components and testing to ensure everything is perfect? If a launch instead costs $10M, there’s far more incentive to cut payload costs. Even if there’s an issue, a second launch is much cheaper. Order-of-magnitude-lower launch costs will lead to similar decreases in payload costs.
Thanks to reusable rockets, the cost of a satellite launch has already declined from about $200 million to about $60 million, with a potential drop to as low as $5 million. And satellite mass production could decrease the cost of the satellite itself from $500 million to $500,000.
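A rough back-of-envelope on those figures (all numbers approximate, in millions of USD, taken from the paragraph above):

```python
# Launch costs: expendable -> reusable -> potential floor
launch_old, launch_new, launch_future = 200, 60, 5
# Satellite costs: one-off engineering -> mass production
sat_old, sat_new = 500, 0.5

mission_today = launch_new + sat_old        # reusable launch, one-off satellite
mission_future = launch_future + sat_new    # cheap launch, mass-produced satellite
reduction = (launch_old + sat_old) / mission_future  # old-world vs. future cost
```

The combined effect is striking: a mission that once cost $700M could run about $5.5M, a reduction of over 100x once both launch and payload costs fall together.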
More launches will lead to even cheaper costs, which will lead to cheaper payloads, which… see where I’m going here?
SpaceX started the flywheel that got the industry to this inflection point, but it won’t be the only one turning it. To truly take advantage of space transportation, we’ll ultimately see many competing service providers at all levels of payload size and capability.
The flywheel is already turning and has led to a higher volume of launches:
At some point in the near future we’ll be seeing a launch per day, with spaceports treated more like shipping ports: hubs of travel and commercial activity.
Current state of the industry
Before moving on to Part II, I want to quickly review the two main categories of payload currently being launched:
Government research and exploration.
International Space Station cargo. In the U.S. this encompasses missions for Commercial Resupply (sending equipment and supplies) and Commercial Crew (sending people).
Satellites. Communication and imaging satellites account for the vast majority of the space industry. Exploratory missions get all the publicity, but they’re currently a tiny fraction of the market. Satellites’ dominance will continue, especially with broadband internet constellations.
The use of communication satellites in particular is already a ubiquitous part of everyday life: from GPS navigation to phone calls, TV signals, internet, and more. Satellite imagery as well: what once was a tool for only the military and intelligence agencies of large governments is now used by anyone with a smartphone.
Satellites come in a range of sizes, from tiny CubeSats the size of a shoebox, launched hundreds at a time, to huge geostationary satellites that take up the entire payload of a rocket. Most of this hardware, particularly the larger satellites, requires costly, sophisticated engineering and infrastructure. The full stack can include satellite manufacturers, operators, suppliers, and ground equipment. As costs come down, satellite sizes will shrink and launch frequency will rise.
What’s to come
I hope I’ve convinced you that getting to space is about to get a whole lot easier.
In Part II, I’ll talk about the progress we will potentially see in space in the upcoming 10 to 20 years: commercial space stations, tourism, manufacturing, mining, exploration and more.
The same is true for biotech in the upcoming decades. Areas like AI and Crypto will play big roles as well, but they’re not the thing. They’re the “thing that gets us to the thing“.
Here’s a timeline of a few milestones:
2008-12 — Commercial Resupply Services (CRS) contract of $1.6B to SpaceX and $1.9B to Orbital Sciences to deliver supplies to ISS. This helps fund Falcon 9 development.
2012-05 — SpaceX Dragon capsule launches “empty” to perform tests and dock with the ISS, the first commercial spacecraft ever to do so.
2012-10 — SpaceX CRS-1 mission sends Dragon with supplies to ISS. Dragon is the only cargo vehicle at the time capable of returning supplies to Earth.
2014-09 — NASA awards final Commercial Crew Program (CCP) contract to SpaceX ($2.6B) and Boeing ($4.2B) for the capability to send 4-5 astronauts to the ISS. First flights for both initially planned in 2017.
2020-04 — NASA awards lunar lander contracts to Blue Origin, Dynetics, and SpaceX under the Artemis program. The goal is to land “the first woman and the next man” on the Moon by 2024.
2020-05 — Commercial Crew Demo mission sends 2 astronauts to ISS. These are the first astronauts on a commercial mission, and the first from US soil since retirement of the Space Shuttle in 2011. 10 million people worldwide watched it live.
2020-11 — Crew 1, the first operational flight, sends 4 astronauts to ISS. Due to delays and other issues, Boeing’s Starliner isn’t set to fly for another year.
2020-12 — NASA awards Blue Origin a Launch Services contract to transport planetary, Earth observation, exploration and scientific satellites.
Elon Musk is a master at many things, but one of the greatest is his ability to get massive, company- or industry-wide flywheels moving.
Global Positioning System (GPS) was developed by the U.S. military starting in the 1970s and wasn’t fully opened to civilian use until the 1990s. GPS is now an extremely critical part of our technical infrastructure. Every time you use your phone to navigate, order food, or track your run, it is pinging multiple GPS satellites to triangulate your exact location.
Here’s a good visual size comparison of satellites:
Every product is built on and enabled by one or more technologies.
Understanding where a product fits on its higher-level tech stack is an important part of any long-term strategy or investment thesis.
The following is an exploration of tech stacks: what they are, how to model them, and what roles their components play. I also discuss what insights can be gained, such as potential opportunities to expand the business.
Typically, a tech stack shows what infrastructure a technology is directly built on or requires. A SaaS startup for example could have a front- and back-end software stack with a cloud provider like AWS under it. The tech in focus is on top of the stack, with the supporting layers below it.
A tech stack tree is a higher-level version, branching both above and below the “layer” in focus. It shows both what the technology directly relies on and what relies on it. Stacks are fractal in nature, just like trees. An innovation spawns many others that use it, which further enable others, and so on.
A stack tree shows the relevant “slice” of the full dependency graph, going only a few nodes out. It looks something like this:
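One loose way to picture a stack tree in code is an adjacency map, where each technology lists the layers directly below and above it. The technologies here are purely illustrative:

```python
# A toy stack tree: each technology lists what it's built on ("below")
# and what builds on it ("above"). Names are made up for illustration.
stack = {
    "SaaS app":      {"below": ["web framework", "cloud hosting"], "above": []},
    "web framework": {"below": ["programming language"], "above": ["SaaS app"]},
    "cloud hosting": {"below": ["data centers"], "above": ["SaaS app"]},
}

def layers_below(tech, depth=0):
    """Walk downward through the stack, yielding (depth, dependency) pairs."""
    for dep in stack.get(tech, {}).get("below", []):
        yield depth + 1, dep
        yield from layers_below(dep, depth + 1)

deps = list(layers_below("SaaS app"))
```

Walking upward (the "above" links) gives the other half of the slice: the technologies the layer in focus enables.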
How to model a stack tree
Step 1: Determine the core tech. The first step is to decide what the actual technology in focus is. A technology in this case is a physical tool, process or system of other technologies that combine to do a job. It does not include businesses or social innovations. (A “business” is just a group of physical and social technologies united under a strategy — but we’re only concerned with the physical part here.)
Examples can range from the simple: hammers, glass, newspapers, or an assembly line process; to the more complex: CPUs, streaming video services, blockchains, smartphones, or nuclear reactors.
Step 2: Layers below. What are the primary technologies and processes needed to create and maintain the core tech? What does it need not only to function but to be sustainable? Clues can be found in:
Why now: What enabled the tech in the first place? Why wasn’t it widely used 20 or 50 years earlier?
Suppliers & major cost centers of businesses producing the tech. (Infrastructure, manufacturing tech, service networks…)
Supply chain & logistics: What gets the product/service to customers? (Transportation, shipping, internet…)
Distribution tech: What gets the customers to the product? (Retailers, advertising, search engines…)
Step 3: Layers above. What does the tech directly enable? It’s possible there are no layers here; many well-known innovations, like Netflix, don’t directly enable other tech.
What do other businesses use it for? Who is it a supplier to?
Is there anything in-between the technology and the ultimate end-user?
Is it a marketplace or multi-sided network that connects many groups of users?
Stack tree examples
Here are a few examples of stack trees from the tech industry, although they can be drawn for products in any industry:
(The Amazon “Vampire Squid” is the best example I can think of for traversing the stack, starting as an online marketplace and expanding outward in all directions: up, down, and sideways. I left out Prime, Music, Video, etc.)
What insights can be gained?
Companies are embedded in value networks because their products generally are embedded, or nested hierarchically, as components within other products and eventually within end systems of use. — Clayton Christensen
A tech stack tree is one way of looking at a company’s value network. This can lead to insights into where value flows, who captures it, and potential opportunities to expand the business.
What layers in the stack capture the most value?
Which technologies accrue most of the value depends on many things: how much value (productivity) is created relative to alternatives, the availability of potential substitutes, the size of the overall need, and other competitive advantages inherent to the business model.
One of the models Clayton Christensen uses makes sense to apply here: Where is the bottleneck in demand? In other words, where in the stack is the biggest difference between performance demanded and performance supplied? What needs to be better?
Nvidia is a good example here. They keep churning out GPUs with higher capabilities, and the market keeps needing more. Supply hasn’t kept up with demand and that’s likely to continue for some time. This bottleneck (along with other factors) allows the GPU layer to capture a lot of value.
Are there opportunities to expand into adjacent technologies?
Amazon (see stack above) is the prototypical example here. They started as an online marketplace with some fulfillment operations, and over time have expanded in all directions.
In more traditional business thinking, you consider expanding vertically into suppliers and customers or horizontally across industries. Traversing a tech stack is similar, but to me more focused on the true technological and needs-based relationships. Traditional business thinking would have never led to Amazon expanding into internet infrastructure via AWS.
Of course, expanding for the sake of it is never a good strategy. You have to ask:
Do our current products or processes give us an advantage here?
How much value does the layer capture? (Is it a bottleneck in demand?)
Are there existing barriers to entry, and if so, does our position in other stack layers help overcome them?
Does this improve the outcomes for our current customers?
Will expansion endanger our relationships with partners or customers?
Short case study: Food delivery apps
The core tech here is a mobile or desktop app where you can order food from many local restaurants and get it delivered within ~1 hour. DoorDash, UberEats, Postmates, etc.
Layers below: What are their major cost centers? Restaurants and delivery drivers. What enabled delivery apps? Primarily ubiquitous smartphones and access to GPS-based navigation. Restaurants also need some way to communicate with the platform, whether by phone or Wi-Fi-based tablets, and must be able to package food in proper take-out containers (plus potentially many other tools to manage operations).
What captures the value? In the stack above, smartphones capture far more value than any other tech — but they’re a platform with thousands of other use cases. In this case we just want to focus on value flow within the food delivery market. It may not be clear at first who captures more value: the delivery apps or the restaurants, given companies like DoorDash are losing so much money. But it’s clear that restaurants are not a bottleneck in demand — so it’s likely the apps that capture more value. And it seems their unit economics bear this out.
Opportunities for expansion? The clearest opportunity to expand within the tech stack is into cloud kitchens. This could alienate some restaurant partners, but restaurants are so fragmented it shouldn’t matter much. I think this has a lot of potential given captive customers, synergies with the delivery app, and lower costs from economies of scale and not having to run normal restaurant operations.
Functions in the stack
How would you classify technologies in the stack? I think it’s most informative to categorize by what pattern (or archetype) they match in the greater ecosystem. These are functions that can exist in any industry or stack tree: Platforms, protocols, etc.
I’ll follow up with another post including examples of different tech functions and stack patterns.
Physical technologies are “methods and designs for transforming matter, energy, and information from one state into another in pursuit of [goals]”. There are also social technologies (organizational processes, culture, values, incentive systems, memes, etc.) that evolve and build off of each other over time. (Definitions from The Origin of Wealth, by Eric Beinhocker.)
The more complex the world gets, the more we need models to simplify it. One of the models I return to often is fitness landscapes, which can help solve problems, design better experiences, and explain the world around us.
Imagine you and a group of friends are on a team playing a game.
The game takes place on a huge playing field with rough, mountainous terrain, like the Himalayas or Alps. The only goal is to increase your team’s average altitude. This seems easy enough, but there are a few catches: (1) any player can only see a few feet ahead of them, (2) the terrain slowly changes over time, and (3) if a player drops below a certain altitude for long enough, they’re eliminated. Given these rules, what strategies would your team use to find the highest peaks?
This is a metaphor for the “game” that species must play to survive in an ecosystem.
The terrain is a fitness landscape representing a library or design space of every possible variation of organism, spread out over a nearly infinite surface. The closer two points are on the surface, the more similar the genotypes. This means a single species would be clustered together: dogs would be near wolves, far from fish, and even farther from fungi.
Altitude indicates the fitness of the organism — or how likely it is to survive in a particular environment. The higher it is on the landscape, the better the design and more fit the organism. Below a certain threshold, organisms can’t survive and species go extinct.
As a model, landscapes can help show us visually and mathematically how to find the best designs. The original concept was developed by evolutionary theorist Sewall Wright in 1932, and focused only on biological entities. But a design space could represent almost any set of possibilities, as long as it has building blocks or variables that combine into many variations, each of which can be assigned a value (or fitness level). This means it could apply to design spaces of problems, equations, technologies, strategies, memes, or even sets of LEGOs.
Features of landscapes
A vast majority of the variations on a typical landscape are bad designs. These are oceans of low fitness, below the surface of which organisms are incapable of survival or reproduction.
But certain regions — springing out of the oceans like islands or continents — are full of a range of potential variations, all with some usable level of fitness. The basic features of these regions of terrain are:
Local peaks or plateaus — A point or area of high fitness where all surrounding paths go down.
Global peak — The highest peak in the region. The fittest entity in the area. The best design of all similar variations.
Valleys — Flatter areas of low fitness adjacent to hills and mountain ranges.
Pits or Crevasses — Deep holes of low fitness below the “sea-level” of survival.
Peaks are good. Pits are bad. And crossing valleys is very risky: you could find higher fitness, but likely not.
The unconscious process of evolution drives genotypes uphill over time, finding and settling on peaks of fitness until the landscape shifts or some other factor forces a move. More on this later.
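A toy illustration of why the uphill process settles on local peaks: a greedy climber on a made-up one-dimensional landscape stops at the first peak it finds, even when a higher one exists elsewhere.

```python
# Greedy uphill search on a 1-D "fitness landscape" (heights are invented).
landscape = [1, 3, 2, 5, 9, 4, 8, 12, 6]

def hill_climb(start):
    """Move to a higher neighbor until no neighbor is higher (a local peak)."""
    pos = start
    while True:
        neighbors = [i for i in (pos - 1, pos + 1) if 0 <= i < len(landscape)]
        best = max(neighbors, key=lambda i: landscape[i])
        if landscape[best] <= landscape[pos]:
            return pos  # stuck: every step from here goes down
        pos = best
```

Starting at the left edge, the climber settles on the local peak of height 3 at index 1; only a climber that happens to start near index 7 finds the global peak of height 12. This is exactly the "can only see a few feet ahead" constraint from the game above.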
Making progress — in society, a team, or life — isn’t straightforward most of the time. Knowing where you want to go is generally the first step, but the destination can be very broad. And even if there’s a specific goal, the path to get there may be very indirect.
As strategy transitions into execution, it’s important to understand how these attributes affect progress. If an effort is managed or guided the wrong way, it may be doomed to failure no matter how much talent or money is behind it.
Project management as a discipline works great, but not for everything. Managing large, highly uncertain endeavors in particular has been problematic recently. The wide-scale ongoing effort to fight COVID-19 and return the world to normalcy brings this challenge front and center.
Why are certain efforts harder than others, and how do we navigate them? How were we able to accomplish such large-scale collaborative efforts as the Apollo program or the Manhattan Project, but can’t do the same for curing cancer?
If we want to build, we need to understand the answers to these questions. The following is a framework for classifying efforts by the certainty of both their objectives and the paths to achieve them. Knowing which “mode” an effort is in is critical to understanding and managing its progress.
How do we classify efforts into modes? The best paradigm I’ve come across is the how/what quadrants.
In his 1994 book All Change!, Eddie Obeng described four different types of projects, along with the difficulties and peculiarities of each: quests, movies, painting by numbers, and fog. It turns out that placing a project on both the know-how and the know-what scales tells you a lot about how it should be managed.
Venkatesh Rao explored the concept much further in his essay on the “Grand Unified Theory of Striving”, pulling in other ideas like convergent thinking, critical paths, and lean methodology. Venkat’s visualization of the critical paths and point frontiers of each quadrant is a particularly insightful way to think about the concept.
Defining the dimensions and modes
Here’s how I’d describe the axes of the 2×2:
Vertical axis — Know what vs. don’t know what. Do you know what the goal is? How specific is the desired outcome? Not knowing the goal (or having only a very broad idea) is the realm of divergent thinking: there are many potential solutions and progress can be non-linear. It’s the exploration phase of the explore-vs-exploit tradeoff, searching for goals or areas of value.
Knowing what and why is in the realm of convergent thinking: there is a single “correct” solution or destination. It squarely aligns with Peter Thiel’s deterministic approach of viewing the future: “There is just one thing—the best thing—that you should do.”
Horizontal axis — Know how vs. don’t know how. Is it known how to accomplish the goal? Are the bottlenecks or resource-sensitive parts generally understood? When you know exactly how to accomplish something, there is a clear critical path (the red lines in the diagram below). Other paths of effort may still be required, but they are oblique, with more slack, running parallel to the critical path.
Knowing how allows you to operate lean because you can—in theory—use the least amount of resources necessary to get the job done. In the fat mode of operation, you don’t really know how to reach your goal. You can’t be efficient because you don’t know how to be, and there will be a lot of slack in the system. The path is determined opportunistically as you go, with critical paths only in smaller subsections.
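The critical path can be sketched with a small task graph: a project's minimum duration is its longest dependency chain, and everything off that chain has slack. The tasks and durations below are invented for illustration:

```python
from functools import lru_cache

# Hypothetical tasks: duration (in days) and prerequisite tasks.
durations = {"design": 3, "order_parts": 5, "build": 4, "docs": 1, "test": 2}
depends_on = {
    "design": [],
    "order_parts": ["design"],
    "build": ["order_parts"],
    "docs": ["design"],        # parallel path with slack
    "test": ["build", "docs"],
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Earliest a task can finish: its duration after all prerequisites finish."""
    start = max((earliest_finish(d) for d in depends_on[task]), default=0)
    return start + durations[task]

# The critical-path length is the latest finish time across all tasks.
project_length = max(earliest_finish(t) for t in durations)
```

Here design → order_parts → build → test is the critical path (14 days), while docs runs parallel with plenty of slack; it only becomes critical if it grows past 9 days.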
“An old story tells of a visitor who encounters three stonemasons working on a medieval cathedral and asks each what he is doing. ‘I am cutting this stone to shape,’ says the first, describing his basic actions. ‘I am building a great cathedral,’ says the second, describing his intermediate goal. ‘And I am working for the glory of God,’ says the third, describing his high-level objective. The construction of architectural masterpieces required that high objectives be pursued through lesser, but nonetheless fulfilling, goals and actions.”
John Kay, Obliquity
All efforts, from daily personal projects to global collaborative endeavors, fit in a webbed hierarchy of abstraction.
Understanding the full hierarchy of an effort is critical to accomplishing it, along with its higher-level objectives in the long-term. Not understanding it can result in bad planning, mismanagement, and failed expectations.
Ladder of Abstraction
“…the most powerful way to gain insight into a system is by moving between levels of abstraction.” — Bret Victor
The ladder represents a top-level concept or domain, with each rung a subset of the one above it. The rungs move from abstract at the top, to concrete at the bottom. The lower down, the more detailed and specific. The higher up, the broader and more abstract the concept.
The model is very simple, and can be applied to almost any discipline with a hierarchy of nested groups. This includes applying it to efforts.
First of all — what do I mean by effort?
An effort is the active search for the best outcome of an objective. It encompasses both the objective and the pursuit of that objective — both of which are not fixed and can evolve over time. The objective always has some boundaries, but otherwise can be very broad (“solving climate change”) or very narrow (“double next-month’s sales volume”).
All efforts are multi-scale and nested. This means we can put them on a ladder of abstraction, each rung with an objective or method that’s a prerequisite of the one above it. Lower-level goals are nested in higher-level purposes. Good project managers do this intuitively when breaking an objective down into tasks and sub-goals, mapping their dependencies.
Because efforts can have many dependencies and relationships aren’t just one-to-one, they exist in more of a webbed hierarchy of abstraction than a ladder. A simple one-dimensional ladder of abstraction is just a slice of the larger hierarchy.
Here’s an example of a hierarchy of abstraction for the efforts relating to Covid-19:
What’s the best way to determine the hierarchy of abstraction for an effort?
A simple way to move up and down the ladder is the Why/How Chain. To move up, ask “Why?”; to move down, ask “How?”. Many know this technique from the Toyota Production System’s method of asking Five Whys to find the root cause of an issue.
You can start by finishing the phrase: In what ways might we ___? This method can work on almost everything, from large-scale efforts to small-scale jobs-to-be-done:
⬆️ Why? To make my home look good.
⬆️ Why? To hang a picture.
❇️ In what ways might we drill a hole?
⬇️ How? Use a drill.
In the Covid-19 example, you could start at whatever level is most relevant to you.
❇️ In what ways might we provide better medical care for COVID patients?
⬆️ Why? To stop people from dying and reopen the economy.
⬇️ How? Protect medical workers from getting sick.
⬇️ How? Source and distribute PPE.
⬇️ How? Contact regional manufacturers.
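The why/how chain above can be modeled as a simple tree: a node's parent answers “Why?”, and its children answer “How?”. A minimal sketch in Python (the `Effort` class and its method names are my own illustration, not from the original):

```python
# Minimal sketch of a why/how chain as a tree of efforts.
# A node's parent answers "Why?"; its children answer "How?".
class Effort:
    def __init__(self, objective, parent=None):
        self.objective = objective
        self.parent = parent    # moving up the ladder: "Why?"
        self.children = []      # moving down the ladder: "How?"
        if parent:
            parent.children.append(self)

    def why(self):
        return self.parent.objective if self.parent else None

    def how(self):
        return [child.objective for child in self.children]

# Build the Covid-19 slice from the example above.
root = Effort("Stop people from dying and reopen the economy")
care = Effort("Provide better medical care for COVID patients", root)
protect = Effort("Protect medical workers from getting sick", care)
ppe = Effort("Source and distribute PPE", protect)
Effort("Contact regional manufacturers", ppe)

print(care.why())  # "Stop people from dying and reopen the economy"
print(care.how())  # ['Protect medical workers from getting sick']
```

Because real efforts have many-to-many dependencies, a fuller model would allow multiple parents per node — the “webbed hierarchy” rather than a single ladder.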
There will always be multiple “how”s, which is the essence of breaking a goal down into sub-goals. There can be multiple “why”s as well, especially the further you go down the ladder. But high up in the hierarchy the whys and hows become more and more vague. This means you have to approach them in a completely different way.
Abstraction = Obliquity
Knowing where an effort fits on the hierarchy is the first step. Now we need to understand how the different levels of scale need to be treated.
This is where John Kay’s concept of obliquity, from his book of the same title, comes in.
To solve a problem obliquely is to solve it through experiment and adaptation. In general, the bigger the scope and complexity of an objective, the more indirect the path to achieving it.
The ladder of abstraction is a proxy for obliquity. The higher on the ladder, the more adaptively the problem should be solved. John Kay: “High-level objectives — live a fulfilling life, create a successful business, produce a distinguished work of art, glorify God — are almost always too imprecise for us to have any clear idea how to achieve them.” In the process of making progress on these objectives, we don’t only learn how to improve, but “about the nature of the objectives themselves.” You’re wayfinding, rather than following a prescribed path.
The lower on the ladder, the more direct. “Directness is only appropriate when the environment is stable, objectives are one-dimensional and transparent and it is possible to determine when and whether goals have been achieved.”
The following table compares the two ends of the abstraction scale.

| | Direct (low on the ladder) | Oblique (high on the ladder) |
| --- | --- | --- |
| Objectives | Clear and simple | Loosely defined and multidimensional |
| Outcomes | Most outcomes are intended | Outcomes arise through complex processes with no simple cause and effect |
| Interactions with others | Limited and predictable | Dependent on many variables, including interpretation of them |
| Options | Range of available options is fixed and known | Only a subset of options is available through successive limited comparison |
| Uncertainty | Can be described probabilistically | Range of what might happen is not known |
| Consistency | Insists on consistency: always treating the same problem the same way | Consistency is minor and possibly dangerous; the same problem is rarely encountered twice |
| Approach | Conscious maximization of objectives | Continuous adaptation to changing circumstances |
Consistency is vital when you’re low on the ladder, not so much higher up. “The oblique decision maker, the fox,” John Kay remarks, “is not hung up on consistency and frequently holds contradictory ideas simultaneously.”
But the real power of solving an oblique problem lies in adaptation: “If the environment is uncertain, imperfectly understood and constantly changing, the product of a process of adaptation and evolution may be better adapted to that environment than the product of conscious design. It generally will be.” There is no map, so instead you have to wayfind and look for clues in front of you, making your way with the tools you have at hand.
Keep in mind again that this is a scale — it’s rare that an effort would completely check all the above boxes for either Direct or Oblique. The point is that efforts always fall somewhere on the scale and that this determines the best methods to pursue them.
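As a toy illustration of the adaptation point (my own sketch, not from Kay): a direct approach commits to a destination chosen from a map, while an oblique approach takes small steps and keeps only the ones that help. When the map is wrong, the adaptive search still finds its way toward the true peak:

```python
import random

def fitness(x):
    # The true landscape: the peak is at x = 7, unknown to the planner.
    return -(x - 7) ** 2

def direct(plan=3.0):
    # Direct: commit to the destination the (wrong) map suggested.
    return fitness(plan)

def oblique(start=3.0, steps=200, step_size=0.5, seed=0):
    # Oblique: try small random moves, keep only the improvements.
    rng = random.Random(seed)
    x = start
    for _ in range(steps):
        candidate = x + rng.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):
            x = candidate
    return fitness(x)

print(direct())   # -16.0: stuck at the planned destination
print(oblique())  # near 0: adapted its way close to the true peak
```

The direct planner is only as good as its map; the oblique search needs no map at all, just feedback on whether each step helped — which is exactly why it tolerates an “uncertain, imperfectly understood and constantly changing” environment.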
Consequences of Misplaced Directness
I’ll try to keep this section short, as whole books have been written on the consequences of misplaced directness. See Nassim Taleb’s Incerto for example.
Attempting to approach a large, complex effort too directly almost always leads to failure — or at the very least a failure to meet initial expectations. Directness is only appropriate when the objective is one-dimensional and the path to achieve it is known.
Brasília is a classic example. The intention was to create a new Brazilian capital from scratch, one truly unique and modern, designed around cars and traffic flow. (This was the same period the U.S. began building out its interstate highway system.)
As time went on, unforeseen circumstances in the messiness of the real world intervened. Overpopulation drove traffic congestion, slums, and general inequality. The focus on form over function in the top-down design caused alienation and poor quality of life. This is exactly why any such effort can’t be planned with precision. Not only are the details of the true goal not understood, but the methods to achieve it involve unpredictable complexity. They’re in the world of “extremistan”, as Taleb would say.
“The structures in this artificial capital are impressive,” read an FT article on one of the architects, “yet few want to walk its barren streets. Politicians leave as soon as possible to return to grittier, but livelier, Brazilian cities.”
The Brasilia master plan was partly based on architect Le Corbusier’s misplaced utopian vision of creating the ideal city. Corbusier’s work also included the Indian city of Chandigarh with similar consequences. This was, in the words of John Kay, “the hope that rational design by an omniscient planner could supersede practical knowledge derived from a process of adaptation and discovery.”
Many overfunded startups suffer from the same fate. When you have access to seemingly unlimited resources, it’s easy to be fooled into thinking you can build your exact vision into reality. But these visions generally exist in a complex world of human culture, desires, and economic feedback loops.
Quibi — the $1.8 billion funded short-form media startup — is a case in point. Ultimate success or failure remains to be seen, but results have yet to come close to expectations. They had the resources to build a particular product and business model into reality — a reality where most customers don’t seem to want what they’re selling. A destination was chosen on a map that couldn’t be seen.
Magic Leap has an amazing vision of seamless AR glasses to enable a digital layer on top of the real world. The actual objective in this case might actually be the right destination. But the complexity of the problem means the approach still can’t be direct. Currently their product has seen minimal success as they struggle to find a sustainable business model.
WeWork seemed to start at the right level of abstraction. Then an ambitious “visionary” founder was given unlimited funds, and a direct approach was applied to a still-oblique problem.
Summary: The Right Strategy for the Right Level
All efforts, and the efforts within them, can be placed on a ladder of abstraction. The higher up you go, the less concrete the objectives and less straightforward the methods to achieve them.
Where an effort falls on the scale is critical to choosing the strategies for making progress on it. Direct, methodical approaches are only appropriate at lower, smaller-scale levels. This is where it’s good to plan details, use processes, and keep things consistent.
You can still have a grand, abstract vision. You just need to wayfind to get there: working from the bottom up, adapting and evolving the path while shaping and refining the details of the goal. Keep things flexible, adaptive, and opportunistic at the top.
Organizations: Given the definition of an effort above, what about coordinated groups of efforts or goals? ↩︎
In the book The Origin of Wealth, Eric Beinhocker describes organizations as “goal directed, boundary-maintaining, and socially constructed systems of human activity. . . . There is a boundary distinguishing the inside world from the outside world, and the goals of the organization drive activities that lower entropy inside the organizational system.” This is the best description I’ve come across, given its abstract nature. But I’d like to propose a simpler, yet still compatible, definition.
An organization is a group of people pursuing one or more ongoing efforts, generally with the same high-level objective. This means an organization can be anything from a seven-person hunting party, to a fleet of exploratory vessels, to a philanthropy, to a multinational conglomerate.
In April, Marc Andreessen put out the call to build. It was in response to our failure to control and mitigate the effects of Covid-19 — institutions on every level were unprepared for the pandemic, and have continued to show their inability to quickly find and scale solutions.
But more than anything it was in response to our failure to build in general. We chose not to build, he claims. “You see it throughout Western life, and specifically throughout American life.” The problem isn’t a lack of resources or technical ability; it’s a problem of desire, on both sides of the market. Demand is limited by our ambition and will to build. Supply is limited by the people and organizations holding it back.
Andreessen is generally an optimist, which is why I see his essay as positive in overall tone. But it was also somewhat of a mea culpa. Andreessen has for years been on the other side of Peter Thiel’s view of modern technical stagnation.
Thiel’s view may be too pessimistic, but there’s a kernel of truth to it. If you’re familiar with the history of tech and innovation, something feels different. The late-1800s to mid-1900s had explosions of innovation in fields from medicine to consumer products, transportation, energy, communication, computing, food, and more.1
This is the introduction to a series of ongoing essays centered around the question:
What frameworks can help us build more, better?
And further attempting to investigate the answers to the following:
What are the best ways to approach solving big, complex problems?
Why are certain efforts so much harder to achieve than others?
How are these efforts best managed at every level?
How do we build things faster? (Without sacrificing quality or safety.)
What is holding us back from building more?
How do we overcome these barriers?
Many of these lessons apply not just to “building” in the physical sense, but to solving problems, making scientific discoveries, improving systems, and making progress overall. Building, in a way, is symbolic: it represents making big, necessary changes to move humanity and our planet forward. That can mean building something physical or digital, pushing the boundaries of fundamental research, or trying new, uncertain ways to solve problems.
It doesn’t even have to be anything new or innovative per se. Andreessen gives many examples of expanding existing tech: housing, infrastructure, education, manufacturing. Even preservation and restoration — in many ways opposites of building — can still apply. In the early 1900s, President Teddy Roosevelt established over 230 million acres of public lands and parks. This added an incalculable amount of value to future generations. I would love to see E.O. Wilson’s Half-Earth Project executed at scale. This is in the spirit of building: making progress and pushing humanity toward a better future.
Here’s a preview of some of the specific topics I want to explore in the series: Ladders of Abstraction (why/how chains), Oblique vs. direct approaches, Modes of effort (why/how quadrants), traversing fitness landscapes, the explore vs. exploit tradeoff, the role of trust in building things fast, forcing functions, and the specific methods we used to accomplish large-scale collaborative efforts such as the Apollo program, the Manhattan Project, etc.
Wayfinding Through the Web of Efforts [8 minutes] — Putting goals on a ladder or hierarchy of abstraction. Defining efforts and their multi-scale nature. Determining the hierarchy of efforts using a why/how chain. The difference between making progress directly and obliquely, and the consequences of misplaced directness.
Managing Modes of Effort [10 minutes] — A framework for understanding how managing progress differs across scales of effort. Classifying efforts into four modes on the how/what quadrants. Defining the modes and how they fit on the hierarchy of abstraction. A Covid-19 case study. How to manage an effort based on its mode.
What was different about this era? The following is a good rundown from Vaclav Smil’s book “Creating the Twentieth Century” on the remarkable attributes of the pre-WWI technical era:
The impact of the late 19th and early 20th century advances was almost instantaneous, as their commercial adoption and widespread diffusion were very rapid. A great deal of useful scientific input that could be used to open some remarkable innovation gates was accumulating during the first half of the 19th century. But it was only after the mid-1860s when so many input parameters began to come together that a flow of new instructions surged through Western society.
The extraordinary concentration of a large number of scientific and technical advances.
The rate with which all kinds of innovations were promptly improved after their introduction—made more efficient, more convenient to use, less expensive, and hence available on truly mass scales.
The imagination and boldness of new proposals. So many of its inventors were eager to bring to life practical applications of devices and processes that seemed utterly impractical, even impossible, to so many of their contemporaries.
The epoch-making nature of these technical advances. Most of them are still with us not just as inconsequential survivors or marginal accoutrements from a bygone age but as the very foundation of modern civilizations. ↩︎
Whole Earth Discipline: An Ecopragmatist Manifesto
Ecological balance is too important for sentiment. It requires science. The health of natural infrastructure is too compromised for passivity. It requires engineering. What we call natural and what we call human are inseparable. We live one life.
We are forced to learn planet craft — in both senses of the word: craft as a skill and craft as cunning. The forces in play in the Earth system are astronomically massive and unimaginably complex. Our participation has to be subtle and tentative, and then cumulative in a stabilizing direction. If we make the right moves at the right time, all may yet be well.
“Find (a) simple solutions (b) to overlooked problems (c) that actually need to be solved, and (d) deliver them as informally as possible, (e) starting with a very crude version 1.0, then (f) iterating rapidly.” — Paul Graham
For sensitive ecosystem engineering at planet scale, what we need most is better knowledge of how the Earth system works. We are model-rich and data-poor. We need to monitor in detail and map in detail what’s really going on, and the measuring has to be sustained and consistent. Donella Meadows laid down the commandment: “Thou shalt not distort, delay, or sequester information.” You can drive a system crazy by muddying its information streams. You can make a system work better with surprising ease if you can give it more timely, accurate, and complete information. We must build a digital Gaia.
“A project is sustainable if it is cheap enough to be the first of a series continuing indefinitely into the future. A project is unsustainable if it is so expensive that it cannot be repeated without major political battles. A sustainable project marks the beginning of a new era. An unsustainable project marks the end of an old era.” — Freeman Dyson
One important negative feedback may be operative. The world’s land areas are absorbing more carbon dioxide than they’re releasing lately. “Believe it or not, plant life is growing faster than it’s dying. This means land is a net sink for carbon dioxide, rather than a net source.” This might be due to simple CO2 fertilization: additional CO2 stimulates plant growth.
In Jim Lovelock’s worst-case climate scenario, Earth stabilizes at 9°F warmer; a fraction of the present human population survives. But the exact outcome in such a complex system is unpredictable. Threshold effects are sneaky. At some point, though, a threshold is reached. Then in an unstoppable cascade the rain forests melt like Arctic ice, leaving savannah, scrub, and desert in their place.
Humanity currently runs on about 16 terawatts of power. We have to cut our fossil fuel use to around 3 terawatts, and we have to do it in about 25 years.
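To put the 16-to-3-terawatt target in perspective, a constant annual decline gets there at roughly 6–7% per year (my own arithmetic, not a figure from the book):

```python
# Required constant annual reduction to go from 16 TW of fossil power
# to 3 TW over 25 years: solve 16 * r**25 = 3 for the yearly ratio r.
start_tw, target_tw, years = 16, 3, 25
r = (target_tw / start_tw) ** (1 / years)
annual_cut = 1 - r
print(f"{annual_cut:.1%} reduction per year")  # about 6.5%
```

For comparison, global fossil fuel use has historically grown most years, so sustaining a cut of this size every year for a quarter century is a drastic reversal, not a continuation of any trend.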
On the old astronomical schedule, a new ice age should have begun a couple thousand years ago. “A glaciation is now overdue, and we are the reason.”
Our terraforming thus far has been unintentional. Now that we have the curse and blessing of knowing what’s going on, unintentional is no longer an option. We finesse climate, or climate finesses us.
The following are my thoughts taken from a memo to family office investors I sent out today regarding the pandemic.
These are unprecedented times in modern history. Not since World War II has there been such a large disruption in daily lives across the world at such a quick pace.
The pandemic we’ve entered is a classic Black Swan: an unpredicted event with extreme consequences. Of course, Black Swan events are relative. A surprise to you or me may have been wholly anticipated by others. And in this case, it very much was.
To epidemiologists and people who had seriously thought it through, a global pandemic quickly sweeping humanity was an inevitability. It was a matter of when, not if. In 2015 Bill Gates gave a short TED Talk about the dangers of a global flu-like pandemic and the measures we could take to help prevent or reduce it. As we’re now aware, the advice went unheeded.
The human lives lost from the virus will be a tragedy of epic proportions. The current and upcoming economic malaise may be nearly as bad — particularly affecting those without the means to ride it out. Recent wide-ranging government stimulus and intervention can soften the blow, but ultimately the only solution is getting rid of the virus.
We will get through this, as humanity has always done in the past. When the entire world has a common enemy, people get creative. Everyone should expect the world to look different afterward, especially in areas like healthcare, biotech, and government.
These differences will all be for the better. Humanity is always searching for higher peaks of “fitness”, and on the rough landscape of possibilities sometimes you have to go down to eventually go up. Life getting worse before it gets better has always been a common theme. From the shift to agricultural societies, to world wars, to global pandemics.
We just need to work together to get through it first.