Forward-looking types are currently spending significant time on the idea that the technological singularity will happen in the next five years. I'm not interested in defending that notion so much as in describing and contextualizing the thought process fueling the disagreements, so to that end I'd like to supply five scenarios that mathematically instantiate faster and slower versions of the AI progress curve. For a numerical scale, say that an artificial general intelligence that can conduct unsupervised tech development is worth zero competence points and an artificial super intelligence that can solve any solvable problem faster than the whole of the human race is worth one million competence points. The variable x is the time in days since machines began learning on their own, and the function f(x) is how many competence points the system has acquired at that time.

Scenario #1 : Literal Singularity : Hyperbolic growth : f(x) = 1/(1-x)

AI gains the capacity to do AI development. AI creates a better AI, which creates an even better AI. Effective capacity doubles in twelve hours, doubles again in six hours, doubles again in three hours, and doubles a fourth time in ninety minutes as the system becomes increasingly intelligent for the purpose of becoming increasingly intelligent. The growth curve suggests infinite gain in finite time, but it will hit the limits of hardware or physics and stall out before that. If that doesn't happen before the system achieves super intelligence, then it will experience a spike of capacity, in a sliver of time, that exceeds any ability to measure it.

Time from general intelligence to super intelligence: Less than twenty-four hours. This is the nightmare scenario if you haven't solved AI goal/value alignment, since not only does the super intelligence appear quickly but the most extreme growth comes in the last hour. Even with the best possible human oversight we should expect to be blindsided.
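A quick sketch in Python (my own illustration, not part of the original argument) of the doubling schedule implied by f(x) = 1/(1-x): each doubling takes half as long as the one before, and the million-point mark arrives just shy of day one.

```python
# Hyperbolic growth: f(x) = 1/(1 - x), with x in days since self-improvement began.
def f(x):
    return 1 / (1 - x)

# Time at which capability reaches level c: solve 1/(1 - x) = c for x.
def time_to_reach(c):
    return 1 - 1 / c

# Successive doublings from the starting capacity of 1 point.
levels = [2, 4, 8, 16]
times = [time_to_reach(c) for c in levels]                    # 0.5, 0.75, 0.875, 0.9375 days
gaps_hours = [24 * (b - a) for a, b in zip([0] + times, times)]
print(gaps_hours)           # [12.0, 6.0, 3.0, 1.5] -- hours between doublings
print(time_to_reach(1e6))   # 0.999999 days: super intelligence in under 24 hours
```

Each doubling interval is exactly half the previous one, which is why the curve blows up before the first day is out.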

Scenario #2 : Hard Take Off : Exponential growth : f(x) = 2ˣ

Doubling of capability occurs at a constant rate for as long as there is sufficient computation. Capacity for self-improvement and difficulty of self-improvement are exactly balanced, which allows for a short trip from barely sufficient for independent development to vastly superhuman capacity. This is really fast but still comprehensible, in a way that folks with STEM degrees can kinda, sorta, barely follow. Changing the base from two to, say, one point one stretches the doubling time from a day to about a week, but the doubling (tripling, quadrupling, or quintupling) time remains a constant.

Time from general intelligence to super intelligence: Twenty days. Gains are visible day by day but almost certainly exceed the capacity of human researchers to measure and comprehend them in the second half of the development period. Course corrections are going to be haphazard and half-blind but could still make for a better final outcome. This is probably the most common notion among the techno-futurist set and basically treats AI self-development as an extension of Moore's Law.
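The twenty-day figure falls out of solving 2ˣ = 1,000,000 for x. A minimal check in Python (`days_to_reach` is my own hypothetical helper, not anything from the scenario itself):

```python
import math

# Exponential growth: f(x) = base**x.
# Days to reach c competence points: x = log_base(c).
def days_to_reach(c, base=2.0):
    return math.log(c, base)

print(round(days_to_reach(1e6), 2))      # 19.93 -- about twenty days at daily doubling
print(round(days_to_reach(2, 1.1), 2))   # 7.27 -- doubling time when the base is 1.1
```

The second line confirms the aside in the scenario: dropping the base from two to one point one stretches the doubling time from a day to about a week.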

Scenario #3 : Soft Take Off : Quadratic growth : f(x) = x²

AI development happens at a gently accelerating pace once automated. The amount of competence gained per week increases with time, but so does the span between doublings. Progress is mostly comprehensible, and the consequences of new abilities can be briefly considered even if they aren't fully understood. This is the scenario where big computing resources are likely to be most consequential if two or more sides are racing to artificial super intelligence.

Time from general intelligence to super intelligence: One thousand days, which is about two years and nine months. Cycles of releases still occur and new models can be compared to older models. People on the street talk about the technological singularity and the intelligence explosion as the number of patents attributable to an AI increases and things that people didn't even consider problems are addressed by the plummeting price of intellectual labor.
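Solving x² = 1,000,000 gives the thousand-day figure directly; a small Python check (the conversion to years is my own addition):

```python
import math

# Quadratic growth: f(x) = x**2.
# Days to reach one million competence points: x = sqrt(1e6).
days = math.sqrt(1e6)    # 1000.0 days
years = days / 365.25    # ~2.74 years, i.e. about two years and nine months
print(days, round(years, 2))
```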

Scenario #4 : Slow and Steady : Linear growth : f(x) = 10x

AI can improve itself, and it can do it faster than humans can, but the process isn't accelerating. There is no intelligence explosion, but the rapid and disruptive technological and economic shifts from the drop in the cost of intellectual labor probably generate as much change as any technology in living memory. A person living through this might still call it the technological singularity.

Time from general intelligence to super intelligence: Almost two hundred seventy-four years. AI outpaces human capacity in most fields economically, and AI remains the technological topic of the century, but with a sense of continuity that makes it feel only slightly more disruptive than the internet. This is the scenario that most people probably imagine for the future: continual disruption without discontinuity. AI does most of the mental grunt work and can be trusted to work unsupervised in many contexts, if only because it knows when to ask for help or clarification. While this may look like an anticlimax, it probably won't feel like one.
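The two-hundred-seventy-four-year figure is just 1,000,000 / 10 days converted to years; a one-line Python sanity check:

```python
# Linear growth: f(x) = 10 * x.
# Days to reach one million competence points: x = 1e6 / 10.
days = 1e6 / 10          # 100,000 days
years = days / 365.25    # ~273.8 years
print(round(years, 1))
```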

Scenario #5 : Stall Out : Sub-linear growth : f(x) = 100√x

What appears to be exponential or geometric progress now is just us pushing all human knowledge into predictive systems. Once all of the low-hanging fruit gets picked, development slows to a crawl. AI can do things that humans have done, but it struggles with originality and the truly novel (just like the average human). Increasing intelligence turns out to be a hard problem, and while it can be developed it can't be done quickly, and the higher you go the harder it gets, as each new paradigm requires even more cognitive scaffolding.

Time from general intelligence to super intelligence: More than twenty thousand years. Space colonies are doing archeology by the time an AI reaches demigod status. The human race may still have been outmaneuvered and exterminated or subjugated by a coalition of merely genius-level AIs, it could become the Borg, or we might be the masters of all, with talking toasters to boot. Whatever happens in this case takes time.
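Reading the scenario's function as f(x) = 100√x (an assumption on my part, since the exponent notation in the heading is ambiguous), the timeline works out to well over twenty thousand years; a sketch under that assumption:

```python
# Sub-linear growth (assumed form): f(x) = 100 * sqrt(x).
# Days to reach one million competence points: solve 100 * sqrt(x) = 1e6 for x.
days = (1e6 / 100) ** 2    # 1e8 days
years = days / 365.25      # ~274,000 years -- comfortably "more than twenty thousand"
print(round(years))
```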

Scenario #0 : You are here : AI is here : f(x) = ???

Plans to have AI do AI research are underway, and not in a vague, theoretical sense either. Papers have been published. Dates are set. We are likely to have eliminated at least three of the five cases by 2030; I don't know which three. In the first four cases AI is set to be the biggest economic, technological, and societal game changer since the introduction of agriculture. In the first three cases human labor is likely to be mostly redundant within a decade. In the first two it will happen practically overnight. It slices, it dices, it does medical research, it files for patents, it understands you personally, it comprehends everything generally. A factory that builds robots that build factories that build robots is coming to a planet near you . . . Or not.

Hydrogen fuel cells never really became a thing. The internet was much more important than the fax machine. Predicting technological trends is really hard unless you're Gordon Moore. Right now a lot of predictions are flying around, and no matter what happens a lot of people are going to be wrong. Everything I wrote above is just extrapolation from trend lines, and fitting to different functions predicts wildly different futures. We are going to end up in a future that fits some growth curve, and I hope this provides some clarity on the different things that could be called a technological singularity and why people have wildly diverse takes on what the term implies.
