Lumpiness
Lumpiness is a term often used by Robin Hanson to describe things like innovation, secrets, AI progress, and citations.
The general idea is something like: if something is "lumpy", then a few big things really matter, rather than a bunch of small things that add up.
"Lumpy: Nope, the CEO isn’t talking about poorly cooked oatmeal whenever he or she says revenues or orders were lumpy. This term means that sales were uneven during the quarter, with some weeks having low order rates and others having high order rates. The key is finding out why sales were lumpy and whether lumpy sales are normal for the company."[1]
"Operating revenues and expenses can be smooth or lumpy. A smooth revenue or expense is evenly and reliably spaced out over time. Smooth revenues include interest earned on investments and perhaps fee income, while smooth expenses include wages, rent, insurance, utilities, and so on. In contrast a lumpy revenue or expense is not evenly spaced out over time. Lumpy revenues include government grants and large donations while lumpy expenses include property taxes. If one were to classify all non-profit revenues and expenses as smooth or lumpy one would quickly find that most revenues are lumpy while most expenses are smooth."[2]
"The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams."[3] -- I don't understand why Hanson is saying that citations are not lumpy. Aren't power laws lumpy?
It's not clear to me how uneven a distribution must get before it gets called "lumpy". For example, is a normal distribution lumpy, or must the distribution be thick-tailed? Is any non-uniform distribution lumpy? etc.
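One way to make the question concrete (this is my own operationalization, not Hanson's) is to ask what fraction of the total is held by the top few percent of items. Under a normal distribution the top 1% holds barely more than 1% of the total, while under a thick-tailed power law it can hold a large share. A minimal sketch, with made-up parameters:

```python
import random

def top_share(values, frac=0.01):
    """Fraction of the total held by the top `frac` of items."""
    values = sorted(values, reverse=True)
    k = max(1, int(len(values) * frac))
    return sum(values[:k]) / sum(values)

random.seed(0)
n = 100_000
# Normal distribution: values cluster tightly around the mean.
normal = [abs(random.gauss(100, 15)) for _ in range(n)]
# Power law (Pareto, tail index 1.2): a few huge values dominate.
pareto = [random.paretovariate(1.2) for _ in range(n)]

print(f"top-1% share, normal:    {top_share(normal):.2f}")
print(f"top-1% share, power law: {top_share(pareto):.2f}")
```

On this measure the power-law sample is clearly "lumpy" (the top 1% holds a large fraction of the total) and the normal sample is not, which suggests that "lumpy" tracks thick tails rather than mere non-uniformity.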
I don't think lack of lumpiness implies a continuous takeoff. I can imagine two kinds of "lack of lumpiness":
- you need lots of small insights/"content"/improvements, and each one makes your AI a little bit better
- you need lots of small insights/"content"/improvements, but your AI doesn't really get much better most of the time. A small number of times (maybe once), there are "thresholds" and when you cross a threshold, it's like adding a missing gear and your AI suddenly gets much better.
The first case seems to be what Robin Hanson imagines. Have people talked about the second case? In the second case, I think you do get a discontinuity even if insights are not lumpy.
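The difference between the two cases can be made vivid with a toy model (my own illustration, with arbitrary numbers): both cases consume the same stream of equally sized insights, but in the second, capability stays low until a threshold is crossed and then jumps.

```python
def smooth_capability(n_insights):
    # Case 1: each small insight adds a little capability.
    return n_insights * 0.01

def threshold_capability(n_insights, threshold=500):
    # Case 2: insights accumulate without helping much; crossing the
    # threshold is like adding the missing gear, and capability jumps.
    return 1.0 if n_insights >= threshold else n_insights * 0.0001

# Same non-lumpy stream of insights in both cases, but only the
# second produces a discontinuity at the threshold.
for n in (499, 500):
    print(n, smooth_capability(n), threshold_capability(n))
```

In the sketch, going from 499 to 500 insights barely changes smooth capability but multiplies threshold capability roughly twentyfold, which is the sense in which a discontinuity can occur even though no individual insight is a "lump".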
In Robin Hanson's writings
See also
- Secret sauce for intelligence -- talking about "lumpiness" is not quite the same thing as saying there's a secret sauce for intelligence, but it's pretty similar. Lumpiness seems more general (it's talking about the distribution of insights/progress).
- Simple core of consequentialist reasoning -- lumpiness is also similar to this, but consequentialist reasoning is a subset of all AI capabilities.