Creating Creator

The following is a short case study on “Creator”, a cloud-based content management system I built at Mashgin, where we make visual self-checkout kiosks that use computer vision to see items so you don’t have to scan barcodes.

In the years since launch, it has given location managers the ability to customize their menus in ways they were unable to in the past. This empowers them to make frequent changes, tailoring the menu to customer needs rather than just “using the default”.


Mashgin Creator is a tool for operators to build and manage their menus, from items to discounts, schedules, and more.

Mashgin customers have been able to easily edit their checkout items in the cloud since we first launched in 2016. But when we began to design our mobile and in-person ordering app, we realized customers would need an easy way to design more complex menus, with custom item options, photos, nested categories, scheduling, and more. This is where the idea for Creator came in.

Creator is what the industry calls a “CMS”, or content management system: broadly, any software tool used to manage content of any kind.

In the food service industry, a CMS is used to manage menu items, pricing, discounts, taxes, and so on. Its scope can range from an individual cafe to a nationwide chain of stores.

Most existing CMS software for food service was cumbersome to use and poorly designed. It was really just a simple layer on top of a database, allowing users to edit basic item information. Some software didn’t even allow for real-time syncing of data: any changes were “submitted”, and someone behind the scenes had to deploy them to the menu.

The output of these menus is very simple: it’s just items in some nested menus, each with its own data like price, type, options, etc. But the work and consideration that has to go into building each menu is anything but simple.
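To make the shape of that output concrete, here’s a minimal sketch of the kind of nested menu data a food-service CMS manages. The field and class names are purely illustrative assumptions, not Creator’s actual schema.

```python
# Hypothetical menu data model: nested categories of items, each with a
# price and optional modifiers. Names are illustrative, not Creator's schema.
from dataclasses import dataclass, field

@dataclass
class Option:
    name: str            # e.g. "Oat milk"
    price_delta: float   # surcharge added to the item's base price

@dataclass
class Item:
    name: str
    price: float
    options: list[Option] = field(default_factory=list)

@dataclass
class Category:
    name: str
    items: list[Item] = field(default_factory=list)
    subcategories: list["Category"] = field(default_factory=list)

# A latte with an oat-milk option, inside a "Drinks" category:
latte = Item("Latte", 4.50, [Option("Oat milk", 0.75)])
menu = Category("Drinks", items=[latte])
```

Even a toy model like this shows why the work is “anything but simple”: pricing, options, and nesting all interact, and scheduling and discounts layer on top of it.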

It was clear that our customers needed something much better.

Designing the app

Believing that all the existing tools weren’t very good, we chose not to base the core design on any other examples or prior work. Creator would be rethought from the ground up based on the needs and jobs of its users.

Continue reading “Creating Creator”

Take the Iterative Path

How SpaceX innovates by moving fast and blowing things up.



One of the greatest business successes of the last 20 years has been SpaceX’s rise to dominance. SpaceX now launches more rockets to orbit than any other company (or nation) in the world. They seem to move fast on every level, out-executing and out-innovating everyone in the industry.

Their story has been rightfully told as one of engineering brilliance and determination.

But at its core, the key to their success is much simpler.

There’s a clue in this NASA report on the Commercial Crew Program:

SpaceX and Boeing have very different philosophies in terms of how they develop hardware. SpaceX focuses on rapidly iterating through a build-test-learn approach that drives modifications toward design maturity. Boeing utilizes a well-established systems engineering methodology targeted at an initial investment in engineering studies and analysis to mature the system design prior to building and testing the hardware. Each approach has advantages and disadvantages.

This is the heart of why SpaceX won. They take an iterative path.

Taking the determinate path

Let’s talk about the Boeing philosophy first, which is the most common approach taken by other traditional aerospace companies. “There are basically two approaches to building complex systems like rockets: linear and iterative design,” Eric Berger writes in the book “Liftoff” about the early history of SpaceX:

The linear method begins with an initial goal, and moves through developing requirements to meet that goal, followed by numerous qualification tests of subsystems before assembling them into the major pieces of the rocket, such as its structures, propulsion, and avionics. With linear design, years are spent engineering a project before development begins. This is because it is difficult, time-consuming, and expensive to modify a design and requirements after beginning to build hardware.

I call this the “determinate path” — in trying to accomplish a goal, the path to get there is planned and fixed in advance.

Continue reading “Take the Iterative Path”

To Increase Progress, Increase Desire

The key to faster progress is increased desire for more. That’s my theory, at least.

In all the commentary on the “Great Stagnation”, much is written about the lack of progress in tech areas like transportation. Commercial airplane speeds, for example, have decreased on average since the ‘70s:

Since 1973, airplane manufacturers have innovated on margins other than speed, and as a result, commercial flight is safer and cheaper than it was 40 years ago. But commercial flight isn’t any faster—in fact, today’s flights travel at less than half the Concorde’s speed. (Airplane Speeds Have Stagnated for 40 Years, by Eli Dourado and Michael Kotrous.)

There are clearly many contributors to this. Regulation is cited in the above post and seems to be the most common reason mentioned. Rising energy costs are another major one. The less-talked-about contributor is consumer demand.

Most things are “good enough”

Clayton Christensen’s theory on disruptive innovation shows that as average performance demanded goes up, the performance level supplied by products generally goes up faster, eventually surpassing the majority of the market.

As a technology improves, its performance surpasses most market demand, and things become “good enough” over time. Customers aren’t willing to pay more for better performance. This leaves the market open for disruptors — either on the low-end (good enough performance but cheaper), or by having better performance on a completely different metric.

Back to airline travel. Flying from NYC to LAX in 6 hours became good enough for most people. Sure, a shorter flight would be better, but not at much higher cost. Only high-end, wealthier customers truly needed more. So airplane makers moved on to other attributes that weren’t yet good enough: safety, flexibility, price.

Continue reading “To Increase Progress, Increase Desire”

The new wave of science and research models

There has been an increasing amount of experimentation in the philanthropic and scientific funding space over the past few years. This is good news — as I mentioned in my last post, we need better ways to fund crazy ideas.

Here’s a sampling of some of the recent efforts:

  • The Astera Institute — Pursuing new tech areas through multiple models including FROs, PARPA (based on the DARPA model).
  • Fast Grants — An effort by Tyler Cowen, Patrick Collison and others to quickly disburse grant money to COVID-related ideas. Funded by many wealthy donors and philanthropies. Impetus Grants for longevity research was recently launched and inspired by Fast Grants.
  • New Science — Funding life science labs outside of academia. Partly funded by Vitalik Buterin.
  • Arcadia Science — Bio research institute.
  • Arc Institute — Funds individuals similar to HHMI, in partnership with Stanford, Berkeley, and UCSF. Founded by Fast Grants “alumni” Silvana Konermann, Patrick Hsu, and Patrick Collison.
  • Convergent Research — Uses focused research organizations (FROs) to solve specific scientific or technological problems. Funded by Eric Schmidt’s philanthropy.
  • Altos Labs — Biotech lab, another “academia outside of academia” model.
  • VitaDAO — A DAO-based longevity funding org where holders get a cut of IP proceeds.
  • Actuate — Also using the DARPA approach to fund and implement R&D.
  • FTX Future Fund — A non-profit fund from the FTX crypto exchange, aiming to allocate at least $100M this year to a wide variety of long-term focused areas.

In “Illegible Medicis and Hunting for Outliers” Rohit observes that:

There are two common themes here. That’s speed and autonomy. They mostly act under the assumption (the correct assumption it would seem from a betting lens) that they can identify talent, not bug them excessively, and leave them to do their thing. Instead of imposing rules and strictures and guidelines, they focus on letting the innate megalomania do the work of focusing their research.

The academic and government-driven funding models have come up against their limits in recent years (decades?). These experiments provide new methods to allocate capital to research, development, and implementation efforts that for whatever reason aren’t amenable to the startup funding ecosystem.

Prior to World War II, support from outside government and academia was the norm. Patrons like Alfred Loomis ran a lab at Tuxedo Park that hosted scientists and engineers from around the world and was integral to the creation of radar. Funding was provided by philanthropies from the likes of Carnegie and Rockefeller, or by private R&D at Edison’s labs, Bell Labs, or Cold Spring Harbor Lab.

These past models are still doing well of course — HHMI, the Gates Foundation, Google X, etc. — but much more is needed to expand experimentation. The government can continue to play a valuable role, particularly as a buyer of first resort.

I’m super excited to see what comes from these orgs. A few like Fast Grants have already had some impact.

For more on the topic, see:

Cover photo by The National Cancer Institute, Unsplash.

Let’s jumpstart the new industrial revolution

There is as much headroom in physics and engineering for energy as there is in computation; what is stopping us is not lack of technology but lack of will and good sense. — J. Storrs Hall

There have been three industrial revolutions. The first two spanned from the late 1700s to the early 1900s and essentially created the technological world we know today. Energy, transportation, housing, and most “core” infrastructure is pretty similar now to what it was at the end of this period — especially if you extend it into the 1970s. The third revolution, the “Digital Revolution”, started around this time and, as anyone reading this knows, has made computing and communication ubiquitous.

There were bad things that came from these revolutions: pollution, environmental destruction, war, child labor, etc. But the good overwhelmed the bad, leading GDP per capita (“resources per person”) to rise dramatically, which we can use as a proxy for progress in a host of other areas like longer/healthier lifespan, lower child mortality, less violence, lower poverty, and more.

Wikipedia describes the potential Fourth Industrial Revolution as “…the joining of technologies like artificial intelligence, gene editing, and advanced robotics that blur the lines between the physical, digital, and biological worlds.”

These things are great, but we need more. Much more.

As just one example, the importance of energy independence has become abundantly clear over the past few weeks. But why don’t we already have it?

The cost of PV cells has collapsed over the past few decades. We also know it’s possible to build nuclear reactors far safer and more productive than any in the past. There should be solar panels on every home, geothermal wells in every town, and multiple nuclear fission (possibly fusion?) reactors in every state. A setup like this would lead to redundant energy at every scale, not reliant on geopolitics or over-centralization.

We should want to consume more energy, not less. (And unlike the second industrial revolution, it can be clean energy with minimal externalities.)

What else could a new industrial revolution bring? Just imagine what you’d see in a typical sci-fi movie:

Space parks/hotels/colonies, limb regeneration, flying cars, supersonic jets, same-day shipping to anywhere on Earth, self-replicating nanobots, new animal species, plants everywhere, infrastructure made out of GM trees, universal vaccines for all viruses, mobile robotic surgeons that can save lives on-location, convoys of self-driving cars, batteries with 50x current power, etc. etc.

To build these things — or even to see if they’re possible — a lot needs to change. Here’s just a few I’ve been thinking about:

  • Create a pro-progress culture. Pro-progress means anti-stasis. We’ve come a long way, and things are pretty good now. But they could be better. Far more people should be optimistic about the future and what they can do now to make it better.
  • Find more ways to celebrate and fund scientists and inventors like we do founders, celebrities, executives and sports stars. More crazy ideas should be funded, and even if they don’t succeed, the culture should be accepting of it.
  • Take more risks as a society. Incremental progress is great but even over long periods it can lead to a local optimum. To get to a higher peak, we need more exploration, experimentation, and invention. With this comes risk. We should do whatever we can to be conscious of and mitigate these risks, but in the end if the precautionary principle is applied to everything, we’ll be stuck in stasis until a global catastrophe forces our hand.
  • Allocate more resources to efforts that have high expected return to life on Earth. Nuclear fusion, for example, may have only a small probability of succeeding in the next 10 years. But if it does, it could bring enormous benefits to the world (to humans, animals, plants, you name it). The probability-weighted return to life on Earth is thus very large, and yet minimal resources are being devoted to it. The industrialization of space is another example. Concerned about depleting Earth’s resources or peak “X”? You wouldn’t be if we could mine asteroids and move potentially harmful processes off-planet.

If you agree with any of the above or are interested in similar ideas, here’s a few good resources I’ve enjoyed recently:

Singularities

The “Singularity” in artificial intelligence is the future moment when generalized AI becomes smarter than humans. In theory this starts a feedback loop of runaway intelligence that radically changes our world in ways that are hard to predict.

Similar points exist in other industries as well. These are ultra tipping points that would lead to drastic changes in the industry and our world — changes so great we could only make very rough guesses as to what they’d be.

What are some potential examples?

  • Highly reliable level 5 autonomous cars.
  • Rockets able to sustainably send a kilogram to orbit for under $100.
  • Abundant renewable energy under $20 per MWh.
  • Near perfect protein folding algorithm available via API call.
  • Low-cost ability to manufacture any protein at scale.
  • Battery cost below $100 per kWh at scale.
  • Battery energy density over 500 Watt-hours per kilogram.
  • Plant- or cell-based meat cheaper than animal meat with ~same nutritional profile.
  • Affordable VR/AR glasses with variable depth of focus and up to 60 pixels per degree of resolution (~matching the human eye).
  • A definitive method for stopping cellular senescence without noticeable side effects.
  • Cost of aerospace-grade carbon fiber comparable to aluminum. (Currently ~10x more.)
  • Cost of carbon nanotubes comparable to current carbon fiber. (Currently 5-10x more.)

Some of these tipping points look like they’re in our near future, and there’s no reason to believe any of them aren’t possible. A few of them would likely make others on the list easier. Every one of them has downsides but the upsides are massive. How exciting!

What else can be added to the above list that I forgot?

(Tipping points in brain-machine interfaces, construction and building, healthcare, etc.)

Roundup: Space updates, Progress studies, New World’s Fair, Web3, DAOs, and “The First Tycoon”

Greetings FutureBlind readers!

It’s been a while. Although I have 3 or so posts outlined and in various states of completion, life has gotten in the way. My wife and I are expecting our first child in a few months (are we in the thick of a post-Covid baby boom?), and in an act of complete lunacy we started a major home renovation this summer. This has, to put it mildly, put a damper on my free time.

Nonetheless I really wanted to write a bit and put something out there. So instead of the typical focused post, I’m doing it roundup style. Each section below is an area I follow or find interesting.

Here’s an outline of the roundup so you can jump to whichever section sounds interesting:

  • 🚀 Space updates
  • Progress Storytelling & a New World’s Fair
  • Web3, tokens, and the future of governance
  • Solving big problems
  • What I’ve been reading
  • Quotes from “The First Tycoon”
Continue reading “Roundup: Space updates, Progress studies, New World’s Fair, Web3, DAOs, and “The First Tycoon””

The Future of Space, Part II: The Potential


Getting to space is about to get a lot easier. I reviewed the reasons why in Part I. Now for the fun part: what it will lead to.

A 10x reduction in cost to orbit has already started to change things. The next 10x reduction will lead to outcomes and use cases much harder to comprehend or predict. It would have been hard for anyone in the late 1800s to predict what drastically lower costs of energy and electricity would eventually bring. Or for anyone in the 1970s to predict the consequences of abundant computing power and ubiquitous global communication (Reddit? NFTs? Protein folding?).

But we can try.

This summary is focused on some of the changes we’re likely to see in the next 5 to 20 years. A lot can happen in that time frame. For reference, it’s taken SpaceX only 19 years to accomplish what they have. But progress compounds and is exponential — especially so once a tipping point like this has been crossed. The change we’ll see in the next 20 years will dwarf that of the last 20.

(Quick note: This isn’t meant to be comprehensive. It’s a highlight of the new areas I find most interesting, and doesn’t include anything on the two biggest space segments: communication and Earth observation. Although there are plenty of interesting potentials here — like globally available high-speed internet [Starlink] or ubiquitous, near-real-time worldwide monitoring [Planet].)

Infrastructure

Transportation & Launch Services

The progress of SpaceX, the current leader here, was detailed in Part I. Given the Falcon 9’s low costs, it’s likely to be the preferred choice for medium-sized payloads, and even smaller payloads with rideshares.

Until now, SpaceX has self-funded their Starship super-heavy launch vehicle. That changed a few weeks ago when NASA announced that Starship had won the contract to land humans on the Moon again. This is huge. The contract will fund $2.9 billion of development costs and speed up the timeline for Starship to become human rated. With the pace of their current development, Starship is on track to become fully operational within 3 years. This should keep SpaceX the leader for heavy and super-heavy launches for some time.

When it comes to delivering humans, the other Commercial Crew competitor, Boeing, is more than a year behind after testing mishaps. Blue Origin may be the next best bet for heavy-launch vehicles, and a dark horse candidate given its potential funding from Jeff Bezos. There are multiple smaller upstarts like Rocket Lab, Relativity Space, or Astra at the low end of the market, potentially moving disruptively upward. SpaceX blazed the path for these rocket companies, showing how far costs can come down and proving that lower prices can expand market size.

Also included in this category are spaceports. Most spaceports are currently owned and operated by governments — like Kennedy Space Center at Cape Canaveral, Florida, or Vandenberg Air Force Base in California. This will start to change in tandem with the growth of commercial space.

Spaceport America in New Mexico is an example of an all-commercial spaceport, similar to most airports in that it’s owned and operated by the state. Rocket Lab built their own spaceport, Rocket Lab Launch Complex 1, in New Zealand. SpaceX’s R&D facilities in Boca Chica, Texas are now being converted into not only a spaceport, but a township to support Starship launches. Given Starship’s eventual importance, there’s no doubt this will become a hub of activity. Launches, and more importantly landings, will also take place on converted offshore oil rigs.

Most current space activity takes place in Earth orbit. As it becomes cheaper to leave the influence of Earth’s gravity, we’ll start expanding further out into the Solar System. The best staging point for this expansion isn’t spaceports on Earth — it’s the Moon and lunar orbit.

The Moon has one-sixth the gravity of Earth and no atmosphere. This means the energy (or delta-v) required to launch from its surface is much lower. The Moon also contains 600 million tons of ice, and its soil is 40-45% oxygen by mass. These raw materials can be used to produce propellants for launch, along with water and breathable oxygen — nearly 100 grams for every kilogram of soil. A Moon base is not far off in our future.
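A quick back-of-envelope calculation shows what that ~100 grams of oxygen per kilogram of soil buys you. The extraction yield comes from the figure above; the crew consumption rate (~0.84 kg of oxygen per person per day) is an assumed round number based on common life-support estimates, not from the text.

```python
# Back-of-envelope: lunar regolith as an oxygen source.
O2_PER_KG_SOIL = 0.100    # kg O2 extractable per kg of soil (figure from text)
O2_PER_PERSON_DAY = 0.84  # kg O2 per person per day (assumed estimate)

def soil_needed(crew: int, days: int) -> float:
    """Kilograms of regolith to process to keep a crew breathing."""
    return crew * days * O2_PER_PERSON_DAY / O2_PER_KG_SOIL

# A 4-person crew for 30 days needs roughly a tonne of processed soil:
print(soil_needed(4, 30))  # ≈ 1008 kg
```

On the order of a tonne of soil per crew-month is well within the reach of a small automated processor, which is why in-situ resource use features in nearly every Moon base concept.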

On the Moon, concepts like space elevators or skyhooks also become possible[1]. Imagine a structure — similar to piers going into the ocean — extending from the lunar surface into orbit. Satellites can be sent up the elevator into orbit and ships can “dock” at the top, where supplies can be loaded with much less energy cost.

Once other infrastructure like commercial space stations and lunar bases get set up, I think we’ll start seeing regularly scheduled launches to specific destinations. From quarterly launches to monthly, weekly, and eventually daily. (Questions to ponder: Can rockets fit in the existing intermodal shipping system? What would a new space-specific intermodal container look like?)

Continue reading “The Future of Space, Part II: The Potential”

The Future of Space, Part I: The Setup


Expansion of life across our solar system and beyond has been a dream of mine since childhood. Of course, this isn’t uncommon among other sci-fi enthusiasts, or anyone who grew up knowing we’ve sent humans to the Moon but haven’t sent them back in nearly 50 years.

Space is fascinating for many reasons. It’s a frontier in every sense: physically, technically, even socially. It’s at the bleeding edge of what humanity is capable of. “Looking to the stars” and “shooting for the moon” are common idioms because space has defined our limits for generations.

Now (finally!) the technical and business tailwinds are coming together to make it possible. The cost and ease of getting to space are about to improve by many orders of magnitude. This will drive the space industry to be one of the biggest sources of growth over the next 10-20 years.[1] It will make existing technologies cheaper and more ubiquitous, like allowing worldwide high-speed internet in even the most remote, rural areas. It will also open up a host of new possibilities previously only imagined in science fiction.

This is the first of a two-part essay on the upcoming future of the space industry. I’ve been closely following SpaceX’s progress in particular since their first launch of the Falcon 9 in 2010, so I’m excited to finally write about it.

Why now?

TLDR: SpaceX has pushed cost to orbit down by 10x, and will by another 10x in 5 years. Along with further commercialization and government funding, a threshold has been crossed.

The success of commercial launch services puts the space industry in the same place as the early days of railroads in the 1800s or commercial ocean shippers in the 1600s. The key phrase here is “early days”: things are really just getting started.

The “why now?” can be reduced to one chart — the average cost to get 1 kilogram to orbit:

Data from https://aerospace.csis.org/data/space-launch-to-low-earth-orbit-how-much-does-it-cost/

In the next section I’ll go over the reasons why this makes such a big difference. But first, how did it happen? As should be evident from the chart, this is essentially the story of one company — SpaceX.

The driving ambition for Elon Musk when he founded SpaceX in 2002 was to drastically reduce the cost of escaping Earth’s gravity. Their “MVP” was the Falcon 1, a single-engine rocket that could launch small satellites. Falcon 1 launched only 5 times, with only the last 3 succeeding. Having proven viability, SpaceX quickly moved on to production of the Falcon 9, a scaled-up version with nine Merlin engines eventually capable of delivering over 22,000 kg to Low Earth Orbit (LEO). Here’s the price progression of each SpaceX rocket, starting from the base of what a conventional rocket costs:

From a conventional rocket price of $10k per kg to LEO, to a price of $60/kg for a Starship with 50 launches, over 100x lower. See the Google Sheet here to check my math.
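The amortization logic behind numbers like these can be sketched in a few lines. All inputs below are illustrative placeholders I chose for the example, not actual SpaceX figures: the point is the structure of the math, where reusability spreads the vehicle build cost over many flights.

```python
# Rough sketch of launch-cost amortization. Build cost, operations cost,
# and payload mass are illustrative assumptions, not real SpaceX numbers.

def cost_per_kg(build_cost, ops_cost_per_launch, launches, payload_kg):
    """Average cost to put one kilogram in orbit, amortizing the
    vehicle's build cost over its number of launches."""
    total = build_cost + ops_cost_per_launch * launches
    return total / (launches * payload_kg)

# Expendable rocket: the full build cost is paid on every launch.
expendable = cost_per_kg(build_cost=60e6, ops_cost_per_launch=2e6,
                         launches=1, payload_kg=10_000)

# Fully reusable rocket: a similar build cost spread over 50 launches,
# with a larger (Starship-class) payload.
reusable = cost_per_kg(build_cost=100e6, ops_cost_per_launch=2e6,
                       launches=50, payload_kg=100_000)

print(f"expendable: ${expendable:,.0f}/kg")  # thousands of dollars per kg
print(f"reusable:   ${reusable:,.0f}/kg")    # tens of dollars per kg
```

Even with made-up inputs, the two-orders-of-magnitude gap falls out of the structure: divide a fixed cost by 50 flights and 10x the payload, and $/kg collapses.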

Driving the first order-of-magnitude reduction in cost are the following:

  • Engineering from first principles. Designed and engineered from the ground up, famously using first principles to rethink standard industry practices. This led to seemingly trivial savings like using ethernet cables rather than serial cable bundles. But added up they make a huge difference.
  • Better incentives. Traditional government contracts were cost-plus. This incentivizes contractors to increase their costs both to make more profit and for more admin overhead to track expenses. With fixed-prices, companies are incentivized to drive costs down as much as possible.
  • Standardization of launch config. Rather than customized configurations for each launch and customer, SpaceX “productized” the Falcon 9, allowing for cheaper setups and repeated processes.
  • Reusability. Why is air travel cheaper than space travel? It’s obvious, right? Aircraft are reusable while rockets are destroyed after a single use. But not anymore: as anyone not living under a rock now knows, SpaceX can land and reuse the first stage of their rockets.

And the next 10x reduction with Starship:

  • Bigger rocket. There are economies of size in rocketry: The bigger the rocket, the higher the payload-to-fuel ratio can be.
  • Full-flow combustion cycle engine. This higher-complexity engine design is more efficient and can be reused many times with very little maintenance.
  • Lower-cost methane as fuel. Methane is cheaper than the previously used RP-1 (a kerosene-based rocket fuel), and SpaceX is planning on literally drilling for methane gas on their Texas property and synthesizing it on their converted oil rigs. (It can also be synthesized on Mars…)
  • Full reusability. 100% of Starship will be reusable, allowing dozens (or hundreds?) of uses for each stage and engine.
  • More launches. The more launches you can sell in a year, the less markup you need to charge to cover admin costs. Economies of scale and purchasing power are also achieved in raw materials and fuel production.
  • Refuel in orbit. Starship can park in orbit while it’s refueled by up to 8 other launches. This makes payload capacity to orbit the same as payload capacity to nearly anywhere in the solar system. Imagine what we can do with the ability to send over 100 tons to the Moon, Mars, or Europa.

Government funding, particularly from NASA, has been a key enabler. Without these contracts it would have been very difficult for SpaceX to fund R&D. And they’ll continue to play a key role for SpaceX and other commercial space providers. In recent years NASA has stepped up their commercial contracts significantly, and with further falling costs this is likely to continue. (See footnote [2] for a list of recent milestones.)

This moment for space companies is the equivalent of 1995 when the NSF dropped all restrictions on Internet commerce, which let private companies take over the backbone. The breaking of the dam that releases a tidal wave of activity.

The cost-driven industry flywheel

Expensive launches aren’t just costly in their own right — they lead to cost inflation of everything else. If it costs $100M to get a satellite to orbit, reducing the cost of development from $10M to $5M is only a 5% difference. So why not over-engineer, paying up for components and testing to ensure everything is perfect? Now if a launch costs $10M, there’s more incentive to cut costs. Even if there’s an issue, a second launch is much cheaper. Order-of-magnitude-lower launch costs will lead to similar decreases in payload costs.
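The incentive shift in that paragraph is just a ratio, and it’s worth seeing the arithmetic. The numbers are the illustrative ones from the text.

```python
# Why cheap launches change payload engineering incentives: the same
# $5M of development savings matters far more when launch is cheap.

def savings_share(launch_cost, dev_before, dev_after):
    """Fraction of total mission cost saved by cheaper development."""
    return (dev_before - dev_after) / (launch_cost + dev_before)

expensive_launch = savings_share(100e6, 10e6, 5e6)  # ~4.5% of the mission
cheap_launch     = savings_share(10e6, 10e6, 5e6)   # 25% of the mission
```

At a $100M launch price, halving development costs barely moves the total, so over-engineering is rational. At $10M, the same halving saves a quarter of the mission budget, and leaner payload engineering suddenly pays.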

From a Morgan Stanley report:

Currently, the cost to launch a satellite has declined to about $60 million, from $200 million, via reusable rockets, with a potential drop to as low as $5 million. And satellite mass production could decrease that cost from $500 million per satellite to $500,000.

More launches will lead to even cheaper costs, which will lead to cheaper payloads, which… see where I’m going here?

There are 4 distinct feedback loops here, all driving more launches. Not shown in this diagram are balancing (negative) loops involving things like launch failures or excessive regulations.

SpaceX initially started the flywheel that got the industry to this inflection point.[3] But it won’t be the only one turning it. Ultimately, to truly take advantage of space transportation, we’ll be seeing many competing service providers at all different levels of payload size and capability.

The flywheel is already turning and has led to a higher volume of launches:

https://en.wikipedia.org/wiki/Timeline_of_spaceflight

At some point in the near future we’ll be seeing a launch per day, with spaceports treated more like shipping ports: hubs of travel and commercial activity.

Current state of the industry

Before moving on to Part II, I want to quickly review the two main categories of payload currently being launched:

  1. Government research and exploration.
    1. International Space Station cargo. In the U.S. this encompasses missions for Commercial Resupply (sending equipment and supplies) and Commercial Crew (sending people).
    2. Other research and exploratory efforts. This includes missions like the recently landed Mars Perseverance rover, the James Webb Telescope set to launch after much delay later this year on an Ariane 5 rocket, and the Europa Clipper set to launch in 2024.
  2. Satellites. Communication and imaging satellites account for the vast majority of the space industry. Exploratory missions get all the publicity, but they are currently a tiny fraction of activity. This will continue, especially with broadband internet constellations.

    The use of communication satellites in particular is already a ubiquitous part of everyday life: from GPS navigation[4] to phone calls, TV signals, internet, and more. Satellite imagery as well: what once was a tool for only the military and intelligence agencies of large governments is now used by anyone with a smartphone.

    Satellites come in a range of sizes, from tiny CubeSats the size of a shoebox launched 100s at a time; to huge geostationary satellites that take up the entire payload of a rocket.[5] Most of this hardware — particularly for the larger ones — requires costly, sophisticated engineering and infrastructure. The full stack can include satellite manufacturers, operators, suppliers, and ground equipment. As costs come down, so will satellite size and launch frequency.

What’s to come

I hope I’ve convinced you that getting to space is about to get a whole lot easier.

In Part II, I’ll talk about the progress we will potentially see in space in the upcoming 10 to 20 years: commercial space stations, tourism, manufacturing, mining, exploration and more.


Footnotes

  1. The same is true for biotech in the upcoming decades. Areas like AI and Crypto will play big roles as well, but they’re not the thing. They’re the “thing that gets us to the thing“.
  2. Here’s a timeline of a few milestones:
    • 2008-12 — Commercial Resupply Services (CRS) contract of $1.6B to SpaceX and $1.9B to Orbital Sciences to deliver supplies to ISS. This helps fund Falcon 9 development.
    • 2012-05 — SpaceX Dragon capsule launches “empty” to perform tests and dock with the ISS, the first commercial spacecraft ever to do so.
    • 2012-10 — SpaceX CRS-1 mission sends Dragon with supplies to ISS. Dragon is the only cargo vehicle at the time capable of returning supplies to Earth.
    • 2014-09 — NASA awards final Commercial Crew Program (CCP) contracts to SpaceX ($2.6B) and Boeing ($4.2B) for the capability to send 4-5 astronauts to the ISS. First flights for both were initially planned for 2017.
    • 2020-01 — NASA awards Axiom Space the first ever contract to build a commercial module for the ISS.
    • 2020-04 — NASA awards lunar lander contracts to Blue Origin, Dynetics, and SpaceX under the Artemis program. The goal is to land “the first woman and the next man” on the Moon by 2024.
    • 2020-05 — The Commercial Crew Demo-2 mission sends 2 astronauts to the ISS. These are the first astronauts on a commercial mission, and the first launched from US soil since the retirement of the Space Shuttle in 2011. 10 million people worldwide watched it live.
    • 2020-11 — Crew 1, the first operational flight, sends 4 astronauts to ISS. Due to delays and other issues, Boeing’s Starliner isn’t set to fly for another year.
    • 2020-12 — NASA awards Blue Origin a Launch Services contract to transport planetary, Earth observation, exploration and scientific satellites.
  3. Elon Musk is a master at many things, but one of the greatest is his ability to get massive, company- or industry-wide flywheels moving.
  4. The Global Positioning System (GPS) was developed by the U.S. military starting in the 1970s; full-accuracy civilian use only arrived in 2000, when Selective Availability was switched off. GPS is an extremely critical part of our current technical infrastructure. Every time you use your phone to navigate, order food, or track your run, it is listening to signals from multiple GPS satellites to pinpoint your exact location.
  5. Here’s a good visual size comparison of satellites:

Tech Stack Trees

Every product is built on and enabled by one or more technologies.

Understanding where a product fits on its higher-level tech stack is an important part of any long-term strategy or investment thesis.

The following is an exploration of tech stacks: what they are, how to model them, and what roles their components play. I also discuss what insights can be gained, such as potential opportunities to expand the business.

Stack Trees

Typically, a tech stack shows what infrastructure a technology is directly built on or requires. A SaaS startup for example could have a front- and back-end software stack with a cloud provider like AWS under it. The tech in focus is on top of the stack, with the supporting layers below it.

A tech stack tree is a higher-level version, branching both above and below the “layer” in focus. It shows both what the technology directly relies on and what relies on it. Stacks are fractal in nature, just like trees. An innovation spawns many others that use it, which further enable others, and so on.

A stack tree shows the relevant “slice” of the full dependency graph, going only a few nodes out. It looks something like this:

How to model a stack tree

Step 1: Determine the core tech. The first step is to decide what the actual technology in focus is. A technology in this case is a physical tool, process or system of other technologies that combine to do a job. It does not include businesses or social innovations. (A “business” is just a group of physical and social technologies united under a strategy — but we’re only concerned with the physical part here.[1])

Examples can range from the simple: hammers, glass, newspapers, or an assembly line process; to the more complex: CPUs, streaming video services, blockchains, smartphones, or nuclear reactors.

Step 2: Layers below. What are the primary technologies and processes needed to create and maintain the core tech? What does it need not only to function but to be sustainable? Clues can be found in:

  • Why now: What enabled the tech in the first place? Why wasn’t it widely used 20 or 50 years earlier?
  • Suppliers & major cost centers of businesses producing the tech. (Infrastructure, manufacturing tech, service networks…)
  • Supply chain & logistics: What gets the product/service to customers? (Transportation, shipping, internet…)
  • Distribution tech: What gets the customers to the product? (Retailers, advertising, search engines…)

Step 3: Layers above. What does the tech directly enable? It’s possible there are no layers here; many well-known innovations, like Netflix, don’t directly enable other tech.

  • What do other businesses use it for? Who is it a supplier to?
  • Is there anything in-between the technology and the ultimate end-user?
  • Is it a marketplace or multi-sided network that connects many groups of users?
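The three steps above amount to building a small dependency graph around the core tech. Here’s a minimal sketch of that idea in Python (the node names and the streaming-video example are illustrative, not from the post):

```python
from dataclasses import dataclass, field

# eq=False keeps comparisons identity-based, which avoids infinite
# recursion when the tree contains cycles of references.
@dataclass(eq=False)
class TechNode:
    """One technology in a stack tree."""
    name: str
    below: list["TechNode"] = field(default_factory=list)  # what it relies on
    above: list["TechNode"] = field(default_factory=list)  # what relies on it

def add_layer_below(node: TechNode, dep: TechNode) -> None:
    """Link a supporting technology under `node`, keeping both sides in sync."""
    node.below.append(dep)
    dep.above.append(node)

# Step 1: pick the core tech in focus.
core = TechNode("Streaming video service")

# Step 2: layers below — enablers, suppliers, distribution.
for name in ["Broadband internet", "Cloud compute/CDN", "Video codecs"]:
    add_layer_below(core, TechNode(name))

# Step 3: layers above — what the core tech directly enables (possibly none).
smart_tv_apps = TechNode("Smart TV app ecosystems")
add_layer_below(smart_tv_apps, core)
```

This is just the “slice” of the full dependency graph the post describes: each node records only its immediate neighbors, a few hops out from the core.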

Stack tree examples

Here are a few examples of stack trees from the tech industry, though they can be drawn for products in any industry:

The Amazon “Vampire Squid” is the best example I can think of of a company traversing the stack: starting as an online marketplace and expanding outward in every direction: up, down, and sideways. (I left out Prime, Music, Video, etc.)

What insights can be gained?

“Companies are embedded in value networks because their products generally are embedded, or nested hierarchically, as components within other products and eventually within end systems of use.” — Clayton Christensen

A tech stack tree is one way of looking at a company’s value network. This can lead to insights into where value flows, who captures it, and potential opportunities to expand the business.

What layers in the stack capture the most value?

Which technologies accrue most of the value depends on many things: how much value (productivity) is created relative to alternatives, the availability of potential substitutes, the size of the overall need, and other competitive advantages inherent to the business model.

One of the models Clayton Christensen uses makes sense to apply here: Where is the bottleneck in demand? In other words, where in the stack is the biggest difference between performance demanded and performance supplied? What needs to be better?

Nvidia is a good example here. They keep churning out GPUs with higher capabilities and the market keeps needing more. Supply hasn’t kept up with demand and that’s likely to continue for some time. This bottleneck (along with other factors) allows the GPU layer to capture a lot of value.

Are there opportunities to expand into adjacent technologies?

Amazon (see stack above) is the prototypical example here. They started as an online marketplace with some fulfillment operations, and over time have expanded in all directions.

In more traditional business thinking, you consider expanding vertically into suppliers and customers or horizontally across industries. Traversing a tech stack is similar, but to me it’s more focused on the true technological and needs-based relationships. Traditional business thinking would have never led to Amazon expanding into internet infrastructure via AWS.

Of course, expanding for the sake of it is never a good strategy. You have to ask:

  • Do our current products or processes give us an advantage here?
  • How much value does the layer capture? (Is it a bottleneck in demand?)
  • Are there existing barriers to entry, and if so, does our position in other stack layers help overcome them?
  • Does this improve the outcomes for our current customers?
  • Will expansion endanger our relationships with partners or customers?

Short case study: Food delivery apps

The core tech here is a mobile or desktop app where you can order food from many local restaurants and get it delivered within ~1 hour. DoorDash, UberEats, Postmates, etc.

Layers below: What are their major cost centers? Restaurants and delivery drivers. What enabled delivery apps? Primarily ubiquitous smartphones and access to GPS-based navigation. Restaurants also need some way to communicate, whether by phone or Wi-Fi-based tablets, and must be able to package food in proper take-out containers (plus potentially many other tools to manage operations).

Layers above: What relies on delivery apps to run? Cloud kitchens, which operate large, strategically located kitchens that cook food for many different branded “restaurants”. Recently a further layer was added with the concept of pop-up branded chains, which use the cloud kitchen and delivery infrastructure.
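To make the case study concrete, the layers can be jotted down as a simple data sketch (the groupings below are my own reading of the case study, not an official taxonomy):

```python
# The food-delivery slice of the stack, as described in the case study.
# Keys are the three modeling steps; values list the member technologies.
food_delivery_stack = {
    "core": "Food delivery apps (DoorDash, UberEats, Postmates)",
    "layers_below": [
        "Ubiquitous smartphones",
        "GPS-based navigation",
        "Restaurants",
        "Delivery drivers",
        "Order communication (phone / Wi-Fi tablets)",
        "Take-out packaging",
    ],
    "layers_above": [
        "Cloud kitchens",
        "Pop-up branded chains",
    ],
}
```

Note the asymmetry: the layers below are numerous and fragmented, while the layers above are few and new, which is part of why the apps themselves sit at the bottleneck.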

What captures the value? In the stack above, smartphones capture far more value than any other tech — but they’re a platform with thousands of other use cases. In this case we just want to focus on value flow within the food delivery market. It may not be clear at first who captures more value: the delivery apps or the restaurants, given companies like DoorDash are losing so much money. But it’s clear that restaurants are not a bottleneck in demand — so it’s likely the apps that capture more value. And it seems their unit economics bear this out.

Opportunities for expansion? The clearest opportunity to expand within the tech stack is into cloud kitchens. This could potentially alienate some restaurant partners, but restaurants are so fragmented it shouldn’t matter. I think this has a lot of potential given: captive customers, synergies with the delivery apps, and lower costs from economies of scale and not having to run full restaurant operations.

Functions in the stack

How would you classify technologies in the stack? I think it’s most informative to categorize by what pattern (or archetype) they match in the greater ecosystem. These are functions that can exist in any industry or stack tree: Platforms, protocols, etc.

I’ll follow up with another post including examples of different tech functions and stack patterns.

To be continued…


Thanks to Leo Polovets and Jake Singer for helpful feedback on this post. Header photo from veeterzy on Unsplash.


Footnotes

  1. Physical technologies are “methods and designs for transforming matter, energy, and information from one state into another in pursuit of [goals]”. There are also social technologies (organizational processes, culture, values, incentive systems, memes, etc.) that evolve and build off of each other over time. (Definitions from The Origin of Wealth, by Eric Beinhocker.) ↩︎