On Singularity

philosophy politics transhumanism futurism

One of the defining concepts of transhumanist philosophical discussion is the notion of a Singularity. In common parlance, the term has come to be associated with one primary notion - an AI superintelligence exponentially improving itself and its technology, leading either to technological utopia or to a Skynet-style death-machine scenario, depending on various factors or on whoever happens to be discussing it.

Some people treat this almost like a religion - for example, the folks who worry about entities like Roko's Basilisk - and others look at it in a less deific manner, but the general concept, as outlined by modern discourse, usually runs along these lines: an AI becomes capable of improving itself, each improvement makes the next one easier, and the resulting runaway superintelligence rapidly transforms (or ends) the world.

Isaac Arthur recently made a video covering this topic in slightly different detail, which is worth a watch if you are interested.1

There are many factors that put a dampener on this hypothetical series of events.

Something smarter than human is not necessarily that much smarter

Human beings cooperate to improve the capability of every individual in the group, intellectually or otherwise. Even if an AI is smarter than any single human, it is, relatively speaking, on its own (with no-one to bounce ideas off), and it is unlikely to be smarter than the combined group of researchers that created it, even if it is smarter than any individual researcher among them - unless those researchers have somehow made an ultra-revolutionary discovery in model architecture that increases intelligence far beyond that of a very significant portion of humanity.

Even if something is smarter, it takes some experimentation and testing to check improvements

Unless the initial AI is already godlike in predictive and analytical capability, it will need to perform experiments and test prototypes - and be in an environment with the tools to do so - which takes a considerable amount of time unless the thing is absurdly powerful. After all, the researchers likely took a long time to create the AI in the first place, working together as a group. The time needed to generate improvements that do not result in accidental self-destruction, either in the short term or over a longer period, is too long to hit the initial exponential curve required for an AI singularity to occur without notice and prevention, even if the AI is a fair bit smarter than the researchers.

There may not be a refinable notion of “intelligence” in the first place

The use of a singular concept of "intelligence" that can be quantified and refined in a single direction is controversial at best. Of course, you can always increase the rate of thought via increased simulation frequency and more efficient simulation (with appropriate input/output compensation to avoid horrific existential effects), but that is not necessarily connected to the common conceptualisation of "intelligence" or "IQ", or even to a different property like creativity. Singular notions of intelligence in particular fall apart in the case of neurodivergent folks like myself, where different intelligence-test axes often give very varied results.

Even if intelligence can be quantified into a singular notion and refined, it may not be refinable infinitely within a single entity - there may be some "maximum intelligence" beyond which it is impossible to go, not merely due to bypassable physical restrictions, but because there is a point where you literally cannot make yourself smarter or more capable, other than by adding more external sensors or accelerating your neural simulation further. Perhaps humans have even hit this already, and the main limitations are rate of thought, access to knowledge, and time.

There may be other limitations of a similar kind - for example, if there is some intrinsic compromise between one facet of intelligence and another, or if increasing "intelligence" beyond a certain point gets harder faster than added intelligence speeds up the process of adding more. In that case, even if a superintelligent AI were to dedicate its entire thought process to becoming more intelligent, its growth in intelligence would crawl to a halt.
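
A toy way to picture that last scenario (the functional forms here are purely illustrative, invented for this sketch rather than derived from anything): suppose intelligence I(t) grows at a rate set by current capability divided by the cost of the next improvement, and that cost grows exponentially.

```latex
% Toy self-improvement model: capability contributes linearly,
% but the cost of the next improvement grows exponentially in I.
\frac{\mathrm{d}I}{\mathrm{d}t} = \frac{I}{c(I)}, \qquad c(I) = e^{kI}
% Then dI/dt = I e^{-kI} -> 0 as I grows, and I(t) increases only
% roughly logarithmically: growth never quite stops, but it crawls.
```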

AI Creation is a Deliberate Act

Creating AIs is something that takes significant effort and intent. No-one will accidentally make a superintelligence, or even a much more basic AI, without being very aware of what they are doing. The whole "scientist accidentally creates an AI in their basement and it takes over the internet before the day is out, killing us all" scenario is absurdly improbable.

This is a double-edged sword, however. If someone thinks humanity deserves to die for whatever reason, or thinks we need a superintelligent AI to take over (despite the extreme difficulty involved in "taking over"), or simply wants to see if they can create an AI, they could well attempt to do so intentionally, though the other limits described here would likely prevent a Singularity scenario from occurring. Certainly, I know of people who want to deliberately set out and build a (benevolent) super-AI, in the hope that getting there first means any malicious superintelligence that might later emerge would face a preexisting one able to tear it down.

My Thoughts on AI Singularity

Suffice it to say, I think the notion of an AI singularity is flawed. It could happen, but I consider it very unlikely for a number of reasons, even considering that there are undoubtedly people with significant resources trying to create an artificial general intelligence with this express goal.

I consider singular notions of intelligence (and “superintelligence”) deeply flawed

The idea of a singular conceptualisation of intelligence is one that I think is flawed, especially outside relatively objective subjects like maths and physics. For instance, I am not good at word games or manipulation of other people (I am autistic), and in fact I think this is a good thing, because to a degree it makes me better at being direct - this, I believe, is an example of two forms of intelligence being at least partially in contradiction with each other. Other examples include cases where there is no objective notion of "the most intelligent response", for example in analysing history.

Even subjects like mathematics and the physical sciences are ultimately subjective to a degree, in that communication of ideas, proofs, and so on has no objective "best" method. Two different proofs of the same statement in mathematics, for instance, cannot be objectively ranked, and hence it becomes far less possible to say which a superintelligence would pick as "better", or which it would produce in the first place, and why that would even be better than something a human (or rather, many humans) would produce, in any way other than speed of production.

More fundamentally, though, I would argue that all intelligences capable of abstract thought are equivalent in capability given sufficient time, resources, and probably companions. Any large, bulky concept can be packaged up, broken apart, connected, and labelled with numerous abstractions, to arbitrary meta- or self-referential depth, and can then be worked with via its label rather than its raw contents - enough to enable something as unintelligent as a primitive computer to perform operations and verifications upon it. In essence, the modularity of ideas themselves, and the ability of ideas to label other ideas and work with them or describe rules over them without the working intelligence needing to comprehend the entire contents of the labelled idea, means that the work of comprehension or other manipulation can be distributed over arbitrary segmented time periods, or over arbitrary intelligences each working on a more specific subcomponent of the idea or problem.

A single intelligence in a group may not even be capable of fully absorbing the entirety of an idea. But since ideas are always modularisable - either by being broken up directly, or by being packed under a single label with the potential interactions with that labelled idea packaged into chunks small enough to work with individually - a superintelligence is equivalent, in terms of the complexity it can work with, to normal human intelligences working together to solve a problem. It may be faster, of course, but ultimately there is nothing intrinsic that a superintelligence could manage that sufficiently motivated and resource-flush humans could not manage by cooperating.

For example, a single human is comically incapable of reading even a minuscule percentage of the fiction and nonfiction produced every year by all of Earth. And yet, all those books get read by at least one person (the author). A superintelligence may be capable of this feat too, but since we can chunk the task "read and think about all of human fiction and nonfiction" into a list of texts to read - by book, chapter, paragraph, or some other component - and hand each individual subtask to a single human, we can collectively perform what that hypothetical superintelligence might perform individually.
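
As a very rough sketch of that chunk-and-distribute idea (the names and the trivial "summarise" step are made up for illustration; this is not a claim about how any real system works):

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(corpus, size):
    """Break the full reading task into pieces small enough for one reader."""
    for i in range(0, len(corpus), size):
        yield corpus[i:i + size]

def read_and_summarise(texts):
    """Stand-in for a single human (or worker) absorbing one chunk."""
    return [f"summary of: {text[:40]}" for text in texts]

def distributed_read(corpus, chunk_size=100, readers=8):
    """Reassemble the whole-corpus task from individually manageable subtasks."""
    with ThreadPoolExecutor(max_workers=readers) as pool:
        batches = pool.map(read_and_summarise, chunk(corpus, chunk_size))
    return [summary for batch in batches for summary in batch]
```

The point is not the code itself, but that nothing about the overall task requires a single mind to hold all of it at once.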

This isn't a formal proof per se, and writing a formal proof of universal abstraction-capability equivalence2 would be a fairly difficult task, but it is, for instance, why I think the idea that a superintelligence could dwarf us the way we dwarf ants is flawed - there is some point of thinking capability beyond which the ability to engage in sufficient abstraction renders all differences merely ones of speed, sensory data, and time, in any given subcategory of intelligence.

For example, I am not good at word games and manipulation. But if I somehow had some kind of super-accelerated thinking capability, I could manually label all the social detail and work out exactly what is necessary, given sufficient time and effort (not that I would want to do this, but that's another question). That is, my intelligence in the field of social manipulation is weak, so the work takes me a long time, but with sufficient labelling and abstraction and time I could manage it even with no natural inclination at all. The way I talk normally, on the other hand, takes essentially no effort.

Oddly enough, the conclusion that any and all branches of "intelligence" capability can be reduced to time and resources could allow for some interesting categorisations and mathematics on intelligence, as a differential with a trivial time-plus-resources parameterisation, but that is for another time.

“Becoming smarter” may not actually be the best or fastest route for an AI, even if it can be defined meaningfully

If an AI has some goal, becoming smarter may not actually be the best route to achieving it. If there is some upper bound on intelligence, or if gaining intelligence is too resource-intensive, the AI might well determine that plodding on with less efficient solutions is still more efficient than expending extra resources to increase its own intelligence and produce "better" solutions. If the amount of resources is finite, in particular, expending too much on increased intelligence may leave none left for whatever its other goals are.

Or, it may be like humans in that it isn’t sure there is even a singular concept of intelligence it can optimise in the first place.
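
As a back-of-the-envelope version of the resource trade-off above (the symbols are mine and purely illustrative): if the remaining work is W, the current rate of progress is r, and a self-improvement project costs time T_u while multiplying the rate by s, then the upgrade only pays off when

```latex
% "Upgrade then work" beats "just keep working" only if:
T_u + \frac{W}{s\,r} \;<\; \frac{W}{r}
\quad\Longleftrightarrow\quad
T_u \;<\; \frac{W}{r}\left(1 - \frac{1}{s}\right)
```

so for any bounded goal, no speedup s, however large, can justify an upgrade that costs more time than simply finishing the job at the current rate.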

A More Useful Notion of Singularity

The original definition of a singularity referred more to the mathematical concept: a point at which a function becomes unpredictable, rapidly changing, or undefined. When I think of a singularity, I think of functions like f(x) = 1/(x - a), where the function explodes to infinity as x approaches a, or the natural logarithm, whose value approaches -∞ as its argument approaches zero.
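
Written out explicitly, the divergence in those two examples looks like:

```latex
\lim_{x \to a^{+}} \frac{1}{x - a} = +\infty,
\qquad
\lim_{x \to 0^{+}} \ln x = -\infty
```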

In particular, the original usage referenced the notion that technology would have advanced so far that the world would be unrecognisable.

Now, I don't use quite this definition - it is an unrealisable notion, because change is mostly continuous (innovation is discrete, but the distribution of innovations is near-continuous), so we will always be aware of possibilities at least a few days, weeks, or months into the future; we can never actually "cross" the boundary of such a technological singularity, because it is a moving goalpost. Instead, I use the following two terms to distinguish notions of a singularity, and consider my notion of "Technological Singularity" to be the more useful and probable of the two.

Artificial (General) Intelligence Singularity (AGI Singularity)

This is the scenario described above, where some form of intelligence emerges and increases in intelligence rapidly. Not necessarily exponentially, but fast enough to be dangerous. More detail can be found above, so there’s no need to explain this here.

Technological Singularity

This is a scenario where technology improves at an accelerating rate over time, to the point where it improves faster than any small or medium-sized group can keep up with, and keeps accelerating, with a tipping point beyond which society will be forced towards radical alteration.

I consider this form of singularity to be occurring right now. A preview of the profound effects of communications technology - enabling rapid, continuous, accelerating, distributed improvement - can be seen in the radical evolution of open source and free software, and in the ever-increasing production of research papers each year pushing nanotechnology, nanoprinting, biological and medical tech, and innumerable other scientific fields and subfields forward, enabling further changes in how life is lived.

In this sense, the collective intelligence and capability of humanity as a whole - with the ability to interact with new ideas from others increasing as communication accelerates and becomes more accessible, and with more people having more knowledge - acts in a similar manner to the superintelligence of AGI singularity hypotheses, without many of the major restrictions on what such a singular intelligence can do.

I also suspect that a singularity of this form would be greatly accelerated by certain key technologies that distribute manufacturing - 3D-printer style - such that refinement is easier to carry out at a local level, rather than requiring a dedicated laboratory or factory that acts as a bottleneck resource for experimenting with ideas and concepts. The same goes for methods of destroying the ability to enforce intellectual property of any kind, or of just abolishing it, but that's a topic for another article.

Accelerant Technology

In particular, the notion of an "Accelerant Technology" seems useful here. This is a technology that does not just change the way people live, but acts to accelerate further technological development, to increasing degrees, as it spreads (including the development of more accelerant technologies). Most of the time this has happened by facilitating more rapid communication, or in some cases more rapid prototyping and testing.

Some examples I would consider to be technological accelerants:

  • Language of any kind
  • The printing press
  • Computers and simulations
  • The Internet
  • 3D printing, which may come to be one in the future

Why I Consider Technological Singularity a More Useful Idea & Term

I consider the notion of a general technological singularity far more useful in practice (along with the terminology split described above) than the current popular idea of an AI singularity, for a number of reasons outlined below.

Distribution of Intellectual and Financial Resources

The concept of AGI Singularity leads people to pour enormous amounts of intellectual and financial resources into AI-centred technologies and future-paths, which deprives other, more viable and more widely applicable (and perhaps more revolutionary) technologies of those same R&D resources.

Oftentimes this occurs to a degree that seems like a cult or religion, as is sometimes the case with LessWrong and MIRI. Some of this is obviously my own bias, but I think I have already raised some important issues with the notion of exponential superintelligence as a concept, and sometimes people even consider funding AI research to be funding all other research, because "once we create a superintelligence it can fix all the other issues".

The resemblance of certain concepts and arguments oriented around AGI singularities to arguments made by religious apologists does not help with this perception of advocates, by myself or by others (take a look at the whole Roko's Basilisk incident, if you want an example).

No “Great Man/AI Syndrome” and Decentralisation

Technological Singularity does not suffer from great man syndrome (or great AI syndrome, in this case) - it is not conditional on the ability to create one great superintelligence of arbitrary capability, and it does not push conceptualisations towards trying to model the wants and desires of a godlike AGI in order to appease it or prevent it from existing in the first place. A technological singularity also has far fewer obstacles to its actual occurrence (indeed, I would make the case that it is already starting to happen), and hence, in my view, is more important to consider.

Furthermore, if you (like me) find a technological singularity desirable and something to actively push for and develop, this form is far more useful to contemplate, as it can be accelerated in a distributed fashion, rather than by begging (or, in the extreme case, essentially praying to) a superintelligence that does not even exist, or by funding research institutes of questionable veracity (hello, MIRI!) to either counteract such a singularity or try to create a "benevolent" AGI first. The future created this way is also less fragile, independent of any singular superintelligence, and fundamentally less authoritarian as a baseline.

Less confusable

Technological Singularity (and AGI Singularity), as terms, are less easily confused with much of the hype around AI in the form of deep learning and more general machine learning models, which are architecturally insufficient to construct a conscious intelligence due to their lack of recursion in internal structure.

While there are architectural developments towards neural emulation, much of the technology and research in AI is focused on mostly non-recursive architectures, unsuitable for creating a self-activating, self-aware, and neurologically recursive intelligence (as an intelligence must be in order to maintain continuous internal state, such as thoughts).
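
As a toy sketch of the structural distinction I mean (not a description of any particular real architecture): a feedforward model is a stateless function of its input, while a recurrent one folds each input into a persistent hidden state that survives between steps.

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 3))   # input weights (toy sizes)
W_rec = rng.normal(size=(4, 4))  # recurrent weights

def feedforward(x):
    """Stateless: the output depends only on the current input."""
    return np.tanh(W_in @ x)

class Recurrent:
    """Stateful: each step mixes the new input with the previous hidden state."""
    def __init__(self):
        self.h = np.zeros(4)

    def step(self, x):
        self.h = np.tanh(W_in @ x + W_rec @ self.h)
        return self.h
```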

Conclusion

Hopefully this has been a useful summary, mini-explainer, and analysis of some of the ideas around AI singularity, and the terms I have suggested help you be more explicit about the type of singularity you are advocating for or against, or otherwise discussing. Bear in mind that in discussions, others may use "technological singularity" to refer to what I would call an AGI singularity.


  1. Note that I do have some issues with some of the terminology and the occasionally over-prescriptive notions of self-interest in Isaac Arthur's videos, but they are still worth a watch and excellent resources, even if I disagree with a decent amount of the framing used in some of them. ↩︎

  2. Gödel's incompleteness theorem does not contradict this, at least at first glance - Gödel's theorem is about systems being unable to prove their own consistency, which we do not need to do; we only need to show that two systems are equivalent, i.e. that when we already know one can perform a given action, the other can too. As far as I know, proving this would not contradict Gödel's incompleteness theorem but, while I have some qualifications in mathematics, I am less familiar with this area. ↩︎