2 Comments

Cool metaphor!

I wonder if the better metaphor for the difference between the intuitive/inferential phases is the difference between sea and air, not the difference between different phases of the water column.

Consider: Moravec's Sea is a compelling metaphor because it helps explain the phenomenon of "AI getting really good at everything all at once"; all the things it's getting good at sit at the same height above sea level, so as the water rises, they are all flooded together.

Is Inference (or the post-Inference steps you describe) like this? It seems more like you need a different series of explicit steps to achieve the desired result for each application; it's a bit like the AI has taken its first steps onto land, and is now trying to scale new heights. It wouldn't reach all peaks simultaneously; instead, it would plan its ascent, peak by peak, determining the right path to the top.


That's an evocative alternate metaphor. I can see the appeal of keeping the surface of the sea as the baseline of capabilities, and imagining the capabilities gained from greater inference spending as something exiting the sea and traversing up the landmasses. That framing implies the capabilities from post-training enhancements are not equally level across the physical landscape, the way that water is.

But I do actually contend that post-training enhancement raises capabilities across the entire landscape! Spending inference compute to extract greater capabilities seems to work across the whole topography, and you could raise or lower the top stratum of the ocean layer according to how much compute you're willing to employ at inference, whether for extended thinking or for multiple agents collaborating.

It also seems like a less mixed metaphor to stratify the sea, rather than keeping the sea as one layer and then introducing a being (an amphibian evolving onto land? an adventurer or hiker scaling a peak) to represent the atmospheric layer of capabilities.
