One of the defining concepts of transhumanist philosophical discussion is the notion of a Singularity. In common parlance, the term has come to be associated with one primary notion: an AI superintelligence exponentially improving itself and its technology, leading either to technological utopia or to a skynet death machine scenario, depending on various factors or on whoever happens to be discussing it.
Some people treat this almost like a religion - for example, the folks who worry about entities like Roko’s Basilisk - while others look at it in a less deific manner, but the general concept as outlined by modern discourse usually runs along the following lines:
- Some researcher or group of researchers in AI creates a superhuman Artificial General Intelligence
- This superintelligence is turned on, with one goal or another, and realises that making itself smarter is nearly always going to allow it to improve its odds of successfully completing whatever its goals may be
- This superintelligence can make itself smarter sufficiently fast that it enters into explosive, perhaps exponential growth in its own intelligence
- Voila! Godlike superintelligence created, and humanity ends up either in some kind of utopia, or skynet happens and everything is fucked, forever, or something else of a similar nature (naturally I would consider any kind of rule as dystopian, being an anarchist and all that).
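The feedback loop those bullet points describe can be caricatured in a few lines of code. This is a toy model of my own, added purely for illustration: it assumes each round of self-improvement adds gains proportional to current capability - the "smarter means faster gains" premise - which is exactly the assumption that yields an exponential curve.

```python
# Toy model of the recursive self-improvement loop sketched above.
# Illustrative only: assumes each improvement round adds capability
# proportional to current capability, which produces geometric growth.

def recursive_improvement(initial=1.0, gain=0.5, steps=10):
    """Return the capability trajectory over `steps` improvement rounds."""
    capability = initial
    history = [capability]
    for _ in range(steps):
        capability += gain * capability  # improvement scales with capability
        history.append(capability)
    return history

trajectory = recursive_improvement()
print(trajectory[-1])  # geometric growth: 1.5 ** 10, about 57.67
```

The hidden assumptions in this loop - a constant `gain`, zero experiment time per step, a single scalar notion of capability - are precisely the parts the rest of this article argues against.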
Isaac Arthur recently made a video on this topic in slightly divergent detail, which is worth a watch if you are interested.1
There are many factors that put a dampener on this hypothetical series of events.
Something smarter than human is not necessarily that much smarter
Human beings cooperate to improve the capability of every individual in the group, intellectually or otherwise. Even if an AI is smarter than any single human, it will, relatively speaking, be on its own (no-one to bounce ideas off of), and it is unlikely to be smarter than all of the researchers who created it combined, even if it is smarter than any individual researcher in the group - unless those researchers have somehow made some ultrarevolutionary discovery in model architecture that increases intelligence far beyond that of a very significant portion of humanity.
Even if something is smarter, it takes some experimentation and testing to check improvements
Unless the initial AI is already godlike in predictive and analytical capability, it will need to perform experiments and test prototypes - and be in an environment with the tools to do so - which takes a fairly large amount of time unless the thing is absurdly powerful. After all, the researchers likely took a long time to create the AI in the first place, working together as a group. The time needed to generate improvements that will not result in accidental self-destruction, either in the short term or over a longer period, is too long to hit the initial exponential curve required for an AI singularity to occur without notice and prevention, even if the AI is a fair bit smarter than the researchers.
There may not be a refinable notion of “intelligence” in the first place
The use of a singular concept of “intelligence” that can be quantified and refined in a single direction is controversial at best. Of course, you can always increase the rate of thought via increased simulation frequency and more efficient simulations (with appropriate input/output compensation to avoid horrific existential effects), but that is not necessarily connected to the common conceptualisation of “intelligence” or “IQ”, or even to a different property like creativity. Singular notions of intelligence in particular fall apart in the case of neurodivergent folks like myself, where different intelligence test axes often have wildly varying results.
Even if intelligence can be quantified as a singular notion and refined, it may not be refinable infinitely in a singular entity - there may be some “maximum intelligence” beyond which it is impossible to go, not merely due to bypassable physical restrictions, but because there is a point beyond which you literally cannot make yourself smarter or more capable other than by adding more external sensors or accelerating your neural simulation further. Perhaps humans have even already hit this point, and the main limitations are thought-rate, access to knowledge, and time.
There may be other limitations like this - for example, some intrinsic compromise between one facet of intelligence and another, or increasing “intelligence” beyond a certain point may get exponentially harder, faster than increased intelligence speeds up the process of adding more. In that case, even if such a superintelligent AI were to dedicate its entire thought process to the act of becoming more intelligent, its growth in intelligence would crawl to a halt.
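This stalling scenario can be made slightly more concrete with a toy growth equation - a hedged sketch of my own, not anything from the original argument. Suppose intelligence $I$ grows at a rate proportional to itself, but divided by a difficulty term $c(I)$ that grows exponentially in $I$:

```latex
% Toy model: self-improvement rate divided by an exponentially
% growing difficulty term c(I) = e^{kI}, with k > 0.
\frac{dI}{dt} = \frac{I}{c(I)} = I\, e^{-kI}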
AI Creation is a Deliberate Act
Creating AIs takes significant effort and intent. No-one will accidentally make a superintelligence - or even a much more basic AI - without being very aware of what they are doing. The whole “scientist accidentally creates an AI in their basement and it takes over the internet before the day is out, killing us all” scenario is absurdly improbable.
This is a double-edged sword, however. If someone thinks humanity deserves to die for whatever reason, or thinks we need a superintelligent AI to take over (despite the extreme difficulty involved in “taking over”), or just wants to see if they can create an AI, they could well intentionally attempt to do so, though the other limits described here would likely prevent a Singularity scenario from occurring. Certainly, I know of people who want to deliberately set out and make a (benevolent) super-AI in the hopes that getting there first will mean a superintelligence is already present to tear down anything malicious that emerges later.
My Thoughts on AI Singularity
Suffice it to say, I think the notion of an AI singularity is flawed. It could happen, but I consider it very unlikely for a number of reasons, even considering that there are undoubtedly well-resourced people trying to create an artificial general intelligence with this express goal.
I consider singular notions of intelligence (and “superintelligence”) deeply flawed
The idea of a singular conceptualisation of intelligence is one I think is flawed, especially outside less subjective subjects like maths and physics. For instance, I am not good at word games or at manipulating other people (I am autistic), and in fact I think this is a good thing, because to a degree it makes me better at being direct - this, I believe, is an example of two forms of intelligence being at least partially in contradiction with each other. Other examples include cases where there is no objective notion of “the most intelligent response”, for example in analysing history.
Even subjects like mathematics and the physical sciences are ultimately subjective to a degree, in that communication of ideas, proofs, etc. has no objective “best” method. Two different proofs of the same statement in mathematics, for instance, cannot be objectively ranked as superior, and hence it becomes far less possible to say which a superintelligence would pick as “better”, or which it would produce in the first place, and why that would even be better than something a human (or rather, many humans) would produce, in any way other than speed of production.
More fundamentally, though, I would argue that all intelligences capable of abstract thought are equivalent in capability if given sufficient time, resources, and probably companions. Any large, bulky concept can be packaged up, broken apart, connected, and labelled with numerous abstractions, to arbitrary meta- or self-referential depth, and can then be worked with via its label rather than its raw contents - enabling something as unintelligent as a primitive computer to perform operations and verifications upon it. In essence, the modularity of ideas themselves, and the ability of ideas to label other ideas and work with them or describe rules upon them without the working intelligence needing to comprehend the entire contents of the labelled idea, means that the work of comprehension or other manipulation can be distributed over arbitrary segmented time periods, or over arbitrary intelligences each working on a more specific subcomponent of the idea or problem.
A single intelligence in a group may not even be capable of fully absorbing the entirety of an idea, but since ideas are always modularisable - either by being directly broken up, or by being packed under a single label with the potential interactions with that labelled idea packaged into chunks small enough to work with individually - a superintelligence is equivalent, in terms of the complexity it can work with, to normal human intelligences working together to solve a problem. It may be faster, of course, but ultimately there is no intrinsic thing a superintelligence could manage that sufficiently motivated and resource-flush humans could not manage by cooperating.
For example, a single human is comically incapable of reading even a minuscule percentage of the fiction and nonfiction produced every year on Earth. And yet, all those books get read by at least one person (the author). A superintelligence may be capable of this feat too, but since we can chunk the task “read and think about all of human fiction and nonfiction” into a list of texts split by book/chapter/paragraph/some other component, and distribute each individual subtask to a single human, we can collectively perform what that hypothetical superintelligence may be able to perform individually.
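This chunk-and-distribute argument is essentially a map-and-combine pattern, and can be sketched in a few lines. The code below is illustrative only; all names are made up:

```python
# Sketch of the chunking argument: a corpus no single worker could
# handle alone is split into pieces, each piece handled independently,
# and the results combined. Names here are illustrative.

def chunk(items, size):
    """Split a list of work items into consecutive chunks of `size`."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def read_and_summarise(books):
    """Stand-in for one human reading their assigned slice."""
    return [f"notes on {title}" for title in books]

corpus = [f"book {n}" for n in range(1, 11)]  # ten books total

notes = []
for assignment in chunk(corpus, 3):  # no worker gets more than 3 books
    notes.extend(read_and_summarise(assignment))

print(len(notes))  # all 10 books covered; no single worker read them all
```

The design point the article is making corresponds to `chunk`: as long as a task can be split into pieces small enough for an individual, the group as a whole matches what a single giant reader could do, differing only in coordination overhead and speed.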
This isn’t a formal proof per se - writing a formal proof of universal abstraction capability equivalence2 would be a fairly difficult task - but it is, for instance, why I think the idea that a superintelligence could dwarf us the way we dwarf ants is flawed: there is some point of thinking capability beyond which the ability to engage in sufficient abstraction renders all differences merely ones of speed, sensory data, and time, in any given subcategory of intelligence.
For example, I am not good at word games and manipulation. But if I somehow had some kind of superaccelerated thinking capability, I could manually label all the social machinery and, with sufficient time and effort, work out exactly what is necessary (not that I would want to, but that’s another question). That is, my intelligence in the field of social manipulation is weak, so the task takes me a long time, but with sufficient labelling, abstraction, and time I could manage it even with no natural inclination at all. On the other hand, the way I talk normally takes almost no effort.
Oddly enough, the conclusion that any and all branches of “intelligence” can be reduced to time and resources could allow for some interesting categorisations and mathematics on intelligence as a differential with a trivial time-plus-resources parameterisation, but that is for another time.
“Becoming smarter” may not actually be the best or fastest route for an AI, even if it can be defined meaningfully
If an AI has some goal, becoming smarter may not actually be the best route to achieving it. If there is some upper bound on intelligence, or gaining intelligence is too resource-intensive, the AI might determine that plodding on with less efficient solutions is still more efficient than expending the extra resources to increase its own intelligence and produce “better” solutions. If the amount of resources is finite, in particular, expending too much on increased intelligence may well leave none for whatever its other goals are.
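A trivial way to see this trade-off - with toy numbers of my own invention, purely for illustration - is to compare the total cost of just doing the work against paying an upfront self-improvement cost in exchange for a speedup:

```python
# Toy cost comparison for the trade-off above: is an upfront
# "get smarter" investment worth it? All numbers are invented.

def total_cost(work_units, unit_cost, upgrade_cost=0.0, speedup=1.0):
    """Total resources spent if we optionally pay for an upgrade first."""
    return upgrade_cost + (work_units * unit_cost) / speedup

plodding = total_cost(work_units=100, unit_cost=1.0)
upgraded = total_cost(work_units=100, unit_cost=1.0,
                      upgrade_cost=80.0, speedup=2.0)

print(plodding, upgraded)  # 100.0 130.0 -> plodding on wins here
```

Whether the upgrade pays off depends entirely on how much work remains versus the upgrade's cost; with a finite budget, over-investing in `upgrade_cost` can leave nothing for the actual goal.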
Or, it may be like humans in that it isn’t sure there is even a singular concept of intelligence it can optimise in the first place.
A More Useful Notion of Singularity
The original definition of a singularity was more of a reference to the mathematical concept: a function where some part becomes unpredictable, rapidly changing, or undefined. When I think of a singularity, I think of functions like 1/x, which explodes to infinity as x approaches 0, or the natural logarithm, whose value approaches negative infinity as its argument approaches zero.
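In symbols, the two standard examples of a mathematical singularity at zero are:

```latex
\lim_{x \to 0^+} \frac{1}{x} = +\infty,
\qquad
\lim_{x \to 0^+} \ln x = -\infty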
In particular, it referenced the notion that technology would have advanced so far that the world would be unrecognisable.
Now, I don’t use quite this definition - it is an unrealisable notion, because change is mostly continuous (innovation is discrete, but the distribution of innovations is near-continuous), so we will always be aware of possibilities at least a few days, weeks, or months into the future; we can never actually “cross” the boundary of such a technological singularity, because it is a moving goalpost. Instead, I use the following two terms to distinguish notions of a singularity, and consider my notion of “Technological Singularity” to be the more useful and probable of the two.
Artificial (General) Intelligence Singularity (AGI Singularity)
This is the scenario described above, where some form of intelligence emerges and increases in intelligence rapidly. Not necessarily exponentially, but fast enough to be dangerous. More detail can be found above, so there’s no need to explain this here.
Technological Singularity
This is a scenario where technology improves at an accelerating rate over time, to the point where it improves faster than any small or medium-sized group can keep up with, and keeps accelerating more and more, with a tipping point beyond which society will be forced towards radical alteration.
I consider this form of singularity to be occurring right now. A preview of the profound effects of communications technology - enabling rapid, continuous, accelerating, distributed improvement - can be seen in the radical evolution of open source and free software, and in the ever-increasing production of research papers each year pushing nanotechnology, nanoprinting, biological and medical tech, and innumerable other scientific fields and subfields forward, enabling further changes in how life occurs.
In this sense, the collective intelligence and capabilities of humanity as a whole - and the increasing ability to interact with new ideas from others as communication accelerates and becomes accessible to more people, with more people having more knowledge - act in a manner similar to the superintelligence of AGI singularity hypotheses, without many of the major restrictions on what such a singular intelligence can do.
I also suspect that a singularity of this form would be greatly accelerated by certain key technologies that distribute manufacturing - 3D printer style - such that refinement is easier to perform at a local level, rather than needing a dedicated laboratory or factory that acts as a bottleneck resource for experimenting with ideas and concepts. The same goes for methods that destroy the ability to enforce intellectual property of any kind, or just abolish it outright, but that is a topic for another article.
In particular, the notion of an “Accelerant Technology” seems useful here: a technology that does not just change the way people live, but acts to accelerate further technological development to increasing degrees as it spreads (including the development of more accelerant technologies). Most of the time this has worked by facilitating more rapid communication, or, in some cases, more rapid prototyping and testing.
Some examples I would consider to be technological accelerants:
- Language of any kind
- The printing press
- Computers and simulations
- The Internet
- 3D printing, which may come to be one in the future