The Singularity Is Always Near (2006)

(kk.org)

38 points | by rmason 1 day ago

13 comments

  • somat 1 hour ago
    I think of it like time dilation, such as near a black hole (see what I did there, tying the two singularities together).

    From the perspective of one experiencing time dilation nothing appears unusual; everything appears normal. It is only from the outside perspective that things are strange.

    As far as I can tell, the singularity happened in the late 1700's. For thousands of years the collective economic growth of the world was effectively a straight, shallow line: it grew, but slowly and linearly. Then in the late 1700's something changed. Growth went exponential and everybody was along for the ride, and from the perspective of being caught up in this exponential growth it appears flat, normal even. But you look at history and wonder why every advance was so slow, and you look ahead and say the singularity is almost there. We will never actually reach it, though: by the time we get there it is the new normal.

    • Retric 31 minutes ago
      Humanity has been rapidly advancing throughout recorded history. We just gloss over advancements in outdated technology. Who cares exactly when the stirrup was invented when we have cars? Medieval armor was vastly better than what was available in the Roman Empire, but it didn’t suddenly jump to better; there was a host of minor innovations.

      The amazing complexity of the rigging seen in the age of sail was built on a long line of innovation, but engines rendered it largely irrelevant, etc. As such ~1700 isn’t some clear tipping point, just the horizon before which innovation seems less relevant.

      • TacticalCoder 1 minute ago
        > As such ~1700 isn’t some clear tipping point, just the horizon before which innovation seems less relevant.

        GP wrote "late 1700's". He's probably referring to the Industrial Revolution.

    • randfur 9 minutes ago
      It's like how talking about the "hockey stick" in an exponential growth curve is nonsensical. The hockey stick is everywhere; the curve angle you see depends entirely on the scale you view it at.
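
      A quick way to see this numerically (a minimal sketch in Python; only NumPy is assumed, and the 50-unit window offset is arbitrary):

          import numpy as np

          # An exponential is self-similar: exp(t + c) = exp(c) * exp(t),
          # so any time window looks like any other after a vertical rescale.
          t = np.linspace(0, 10, 1001)
          early = np.exp(t)       # the "flat-looking" start of the curve
          late = np.exp(t + 50)   # the same curve, 50 time units later

          # The late window is the early window times a constant, i.e. the
          # same shape on a rescaled axis; the "hockey stick" sits wherever
          # your axes put it.
          assert np.allclose(late / early, np.exp(50))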
    • davnicwil 58 minutes ago
      I think the classic definition of the singularity, though (per Kurzweil's book), is precisely the point at which the curve no longer looks flat on human-comprehensible timescales.

      I.e. one day there are significant overnight changes; the very next day, hourly changes; soon thereafter changes every minute, second, millisecond, etc.

  • sempron64 1 hour ago
    I think the mistake here is missing that there is a certain rate of progress at which humanity can no longer even collectively process the progress, which is equivalent to infinite progress. That point is the singularity, and it requires non-human-driven progress. We may or may not reach it, but full automation is a requirement to get there. We may hit a hard wall and devolve to an S-curve, hit a maximum linear progress rate, hit a progress rate bounded by population growth and human capability growth (a much slower exponential), or pass the 1/epsilon-slope point where we throw up our hands (the singularity). Or have a dark age where progress goes negative. Time will tell.
  • crystal_revenge 28 minutes ago
    The problem with the concept of "the singularity" is that it has a hidden assumption that computation has no relationship to energy, which, once unmasked, is a pretty outlandish claim.

    There is a popular illusion that technological progress is a pure function of human ingenuity, and that the more efficient we make technology, the faster we can make even better technology. But the history of technology has always been the history of energy usage.

    Prior to the emergence of Homo sapiens, "humans" learned to cook food by releasing the energy stored in wood. Cooking is often considered a prerequisite for the development of the massive, energy-consuming brain of Homo sapiens.

    After that it took hundreds of thousands of years for Earth's climate to become stable enough to make agriculture feasible. We see almost no technological progress until we start harvesting enormous amounts of solar energy through farming. Not long after this we see the development of mathematics and writing, since humans now had surplus energy to spend on other things.

    You can follow this pattern through the development and extraction of coal, oil, etc. You can look at the advancement of technology in the last 100 years alongside our use of fossil fuels and the expansion of energy capacity with renewables (which historically have only been used to supplement, not replace, non-renewables).

    But technological progress has always been a function of energy, and more specifically, going back to cooking food, computational/cognitive ability similarly demands increasingly high energy consumption.

    All evidence seems to suggest that we increasingly need more energy for incrementally smaller return on computation.

    So for something like the singularity to happen, we would also need incredible changes in available energy (there's also a more nuanced argument that you also need smooth energy gradients but that's more discussion than necessary). Computation is not going to rapidly expand without also requiring tremendously large increases in energy.

    Further, it's entirely reasonable that there is some practical limit to just how "smart" a thing can be based on the energy requirements to get there. That is, you can't reasonably harvest enough energy to create intelligence on the level we imagine (the same way there is a limit to how tall a mountain can be on Earth due to gravity).

    As with most mystical thinking, ignoring what we know about thermodynamics tends to be a fundamental axiom.

    • rbanffy 10 minutes ago
      There are hard limits on how much energy we can provide to computation, but we are not even close to what we can do in a non-suicidal way. In addition to expanding renewables, we could also expand nuclear and start building thorium reactors; this alone ensures at least an extra order of magnitude in capacity compared to uranium.

      As for the compute side, we are running inference on GPUs, which are designed for training. There are enormous inefficiencies in data movement on these platforms.

      If we play our cards right we might have autonomous robots mining lunar resources and building more autonomous robots so they can mine even more. If we manage to bootstrap a space industry on the Moon with primarily autonomous operations and full ISRU, we are on our way to building space datacenters that might actually be economically viable.

      There is a lot of stuff that needs to happen before we have a Dyson ring or a Matrioshka brain around the Sun, but we don’t need to break any laws of physics for that.

    • wwweston 19 minutes ago
      Don’t forget the practical ability to dissipate waste heat on top of producing energy. That’s an upper limit to all energy use unless we decide boiling ourselves is fine, or find a way to successfully ignore thermodynamics, as you say.
      • rbanffy 6 minutes ago
        We can always build a sunshade at the Earth-Sun L1 point. Make it a Sun-facing PV panel with radiators pointing away from us and we can power a lot of compute there (useful life might be limited, but compute modules can be recycled and replaced, and nothing needs to be launched from Earth in this case).
      • CorrectHorseBat 10 minutes ago
        If we ever get that far, that would be the most compelling argument for datacenters in space.
  • Legend2440 1 hour ago
    I think this is accurate. However, this does not mean that the exponential isn't real; it just isn't sudden. We have been living through continuously accelerating technological and economic growth our whole lives, and things really do happen much faster now than they did in the past.

    For example, it took centuries for indoor plumbing to be widely adopted, and less than a decade for smartphones. It took hundreds of thousands of years to reach the first billion people (~1800), but the eighth billion arrived in just eleven years (2011-2022).

    • javcasas 1 hour ago
      The initial part of an S-curve looks a lot like an exponential. The final part doesn't.

      Finding the second and third antibiotics for non-resistant bacteria may be fast and easy; finding another three antibiotics for resistant bacteria decades later is crazy hard, as bacteria have evolved to resist everything that doesn't also kill humans.
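
      A quick numeric check of that first point (a minimal sketch; the carrying capacity K and rate r are arbitrary):

          import math

          # Early on, logistic growth toward carrying capacity K is nearly
          # indistinguishable from a pure exponential with the same rate r.
          K, r = 1000.0, 1.0
          for t in range(6):
              logistic = K / (1 + (K - 1) * math.exp(-r * t))  # x(0) = 1
              exponential = math.exp(r * t)
              print(t, round(logistic, 1), round(exponential, 1))
          # The two columns track closely until the logistic curve nears K.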

      • Legend2440 1 hour ago
        Eh, sure, we'll hit limits eventually. We appear to be pretty far off from hard limits like thermodynamics though, and the world after we hit those limits could look very science-fiction.

        For antibiotics specifically, we will probably find other ways to fight bacteria even if we never discover another chemical antibiotic. As one technology S-curves, another technology replaces it.

        • rbanffy 0 minutes ago
          > As one technology S-curves, another technology replaces it

          Even if for no other reason than us abandoning a diminishing-returns approach to look for other alternatives.

          We have been kind of at the end of the rope for silicon for quite some time now, and we have found increasingly heroic ways to protect our investment in silicon-based semiconductors. But silicon is not the only option; it’s just the one we already have a lot of supply chains set up for.

    • Zigurd 54 minutes ago
      Population is a particularly good example: just decades ago it seemed like we were barreling toward an overpopulation crisis. I'm aware that some people think we're beyond the carrying capacity of the Earth long-term. But there seems to be a broad consensus that birth rates are declining everywhere without a food crisis or other immediately visible calamity.

      Population appears to be on a droopy S curve. The preposterousness of those space data centers and the fact that we don't have a theory of consciousness make it seem plausible that AI could also not continue to rocket ahead.

      • Legend2440 37 minutes ago
        > The preposterousness of those space data centers and the fact that we don't have a theory of consciousness make it seem plausible that AI could also not continue to rocket ahead.

        The rate of datacenter construction in the last few years exceeds Moore's law and is almost certainly unsustainable. 'Only' 2x improvement every 2 years would seem relatively slow compared to what's happened recently.

        However, I expect AI will continue to advance over the coming decades even once the bubble pops. They're clearly on to something with neural networks.

    • jiggawatts 55 minutes ago
      My example is lighting technology:

      Wood fires were the only option for something like a few hundred thousand years.

      Oil lamps for millennia.

      Tallow or beeswax candles are modern technology, appearing after the fall of the Roman Empire.

      Gas lighting was widespread for less than a century.

      Incandescent lightbulbs for another century, though fluorescent tubes started replacing them just decades in.

      Cold cathode fluorescents saw mainstream use for about two decades.

      LEDs completely displaced almost all previous forms of lighting in less than a decade.

      I recently read about a new form of lighting developed and commercialised in just a few years: https://www.science.org/doi/10.1126/sciadv.adf3737

      • sempron64 38 minutes ago
        This is an excellent example to illustrate an S-curve. There is a certain amount of energy in a photon; it cannot be emitted with less. There is a 100% efficiency barrier that cannot be surpassed no matter how smart you are.
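
        To put a number on that barrier (a quick sketch; the constants are standard values and 555 nm is the peak of human photopic vision):

            # Minimum energy of one photon of visible light: E = h*c/lambda.
            h = 6.62607015e-34   # Planck constant, J*s
            c = 2.99792458e8     # speed of light, m/s
            wavelength = 555e-9  # metres

            print(h * c / wavelength)   # ~3.58e-19 J; no 555 nm photon can
                                        # be emitted with less energy

            # By the definition of the lumen, an ideal monochromatic 555 nm
            # source emits 683 lm/W; that is the hard efficacy ceiling.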
  • DavidSJ 33 minutes ago
    A mistake in this critique is that it assumes an exponential: a constant proportional rate of growth. It is true that, in some sense, an exponential always seems to be accelerating while infinity always remains equally far away.

    But this is a bit of a straw man. Mathematical models of the technological singularity [1], along with the history of human economic growth [2], are super-exponential: the rate of growth is itself increasing over time, or at least has taken multiple discrete leaps [3] at the transitions to agriculture and industry, respectively. A true singularity/infinity can of course never be achieved for physical reasons (limited stuff within the cubically-expanding lightcone, plus inherent limits to technology itself), but the growth curve can look hyperbolic and traverse many orders of magnitude before those physical limits are encountered.
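
    For concreteness, a minimal super-exponential model (the quadratic exponent is just the simplest such case, and a is an arbitrary positive constant):

        \[
          \frac{dx}{dt} = a x^2
          \quad\Longrightarrow\quad
          x(t) = \frac{x_0}{1 - a x_0 t},
        \]

    which diverges at the finite time $t^* = 1/(a x_0)$, whereas a plain exponential $\frac{dx}{dt} = a x$ never reaches infinity at any finite time.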

    [1] https://www.nber.org/system/files/working_papers/w23928/w239...

    [2] https://docs.google.com/document/d/1wcEPEb2mnZ9mtGlkv8lEtScU...

    [3] https://mason.gmu.edu/~rhanson/longgrow.pdf

  • agentzed 56 minutes ago
    It is something that will or will not happen on an individual level.

    You are fools to think you personally are a part of or will be present at the zenith of human ascendancy.

    One, all, and the world will go on as though another day. Those who become or go beyond their “full self” will merely have a new level. Like a base conversion.

    Besides, there are notes of singularities flitting in and out of your very minds. You get to the bottom of those and you will find whichever part is yours will come by your acquiring it for yourself.

    The singularium will be your own place in the ascendancy of Man, through technology or personal development. The self is the ultimate technology.

    • Nevermark 51 minutes ago
      I have noticed a pattern where one thing that is not understood gets tied to another thing that is not understood, one perceived mystery to another, one unsolved problem to another, one misunderstanding with another, and then this is declared to be an answer.

      Quantum mechanics and consciousness.

      Pyramids and aliens.

      Looking forward, it is a great opportunity for random mashup "explanations". The urge will be great for some people.

      • agentzed 45 minutes ago
        Too true.

        Quantum mechanics as understood is flawed, consciousness is universal potential subjectively bound to particulate, animated by living biotechnology, and squares of this day still refuse to wink at “magic.”

        Pyramids are human engineering. “The Greys” are our Earth mates. America’s nuclear suicidal tendencies have revoked your right to deny. I speak only for the ascendancy of Man.

        Have fun flat landing stoic, I know you’re really a bleeding heart.

  • chasil 1 hour ago
    Someone pointed out to me a few days ago that mine is the last generation who were able to lose touch and reconnect after long periods of time.

    In the days of rotary & pay telephones the loss of communication was possible.

    That is no longer the case.

    • ramesh31 52 minutes ago
      We will carry the memory of the old ways into the 22nd century, just as our grandparents did with their 19th century inclinations. I, for one, look forward to writing out a complete essay in cursive with a pencil as the grandkids stare in awe.
  • zqna 42 minutes ago
    If the singularity premise is correct, then I think it must already have happened in our cluster of the universe. Since it hasn't yet, there are three options. First: Earth and earthlings are special, which is too egocentric a notion to be taken seriously. Second: we are being observed for entertainment by a higher consciousness. That could explain a lot, though it removes agency and prevents us from reaching that moment on our own (but maybe the observers are curious to find out). Third: so-called intelligence goes extinct and annihilates itself once it obliterates all the resources in its vicinity. Of course there is option number 4, but ultimately the question is: what is the point of that?
  • Terr_ 55 minutes ago
    > the Kurzweilian version of singularity

    Cynical take: Kurzweil's predictions follow a predictable pattern which suggests something about how and why they are being generated.

    Namely, it's whatever increasingly improbable new advances and discoveries are needed to ensure practical immortality arrives just in time for a particular human named Ray Kurzweil to escape the icy grip of death.

    • zqna 32 minutes ago
      And we are running out of time.
  • paulpauper 1 hour ago
    I see parallels with AGI/takeoff. "It's just 2 years away," every year. KK argues that the process is continuous, but AI optimists argue the inflection will be abrupt, like a step function.
    • CrazyStat 1 hour ago
      The premise of the singularity concept was always superhuman intelligence, so it’s not so much a parallel as a renaming of the same thing.

      > In Vinge’s analysis, at some point not too far away, innovations in computer power would enable us to design computers more intelligent than we are, and these smarter computers could design computers yet smarter than themselves, and so on, the loop of computers-making-newer-computers accelerating very quickly towards unimaginable levels of intelligence.

      • d_silin 58 minutes ago
        It would never work in reality; you can't optimize algorithms beyond their computational complexity limits.

        You can't multiply matrix x matrix (or vector x matrix) faster than O(N^2).

        You can't iterate through an array faster than O(N).

        Search & sort are sub- or near-linear, yes - but any realistic numerical simulations are O(N^3) or worse. Computational chemistry algorithms can be as hard as O(N^7).

        And that's all in P class, not even NP.

        • direwolf20 12 minutes ago
          https://en.wikipedia.org/wiki/Computational_complexity_of_ma...

          The n in this article is the size of each dimension of the matrix; N = n^2. The lowest known is O(N^1.186...). The most practical is O(N^1.403...). Naive is already O(N^1.5), which, you see, is less than O(N^2).
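
          The conversion is mechanical (a small sketch; the omega values below are the usual published per-dimension exponents):

              # O(n^omega) in the matrix dimension n is O(N^(omega/2)) in
              # the input size N = n^2 (the number of matrix entries).
              for name, omega in [("naive", 3.0),
                                  ("Strassen", 2.807),
                                  ("best known bound", 2.372)]:
                  print(f"{name}: O(n^{omega}) = O(N^{omega / 2:.4g})")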

          • d_silin 3 minutes ago
            Well, but still superlinear.
        • dekhn 20 minutes ago
          We don't need to optimize algorithms beyond their computational complexity limits to improve hardware.
          • d_silin 14 minutes ago
            Hardware is bound by even harder limits (transistor gate thickness, the speed of light, Amdahl's law, Landauer's limit, and so on).
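
            The last of those is easy to put a number on (a rough sketch; 300 K is an assumed operating temperature):

                import math

                # Landauer's limit: the minimum energy an irreversible
                # computation must dissipate per bit erased, k_B * T * ln 2.
                k_B = 1.380649e-23   # Boltzmann constant, J/K
                T = 300.0            # kelvin, roughly room temperature

                print(k_B * T * math.log(2))   # ~2.87e-21 J per bit erased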
            • dekhn 3 minutes ago
              But that doesn't disprove the hypothesis that in principle you can have an effective self-improvement loop (my guess is that it would quickly turn into extremely limited gains that do not justify the expenditure).
              • d_silin 1 minute ago
                Any self-improvement would have a natural ceiling, though, from both algorithmic complexity and the hardware limits of the underlying compute substrate.
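
                A toy picture of such a ceiling (an invented illustration, not a model of any real system): let each generation's gain shrink with the headroom left.

                    # Toy self-improvement loop: gains are proportional to
                    # the remaining headroom under a hard limit, so the
                    # curve is a sigmoid, not a runaway.
                    def improve(c: float, ceiling: float, rate: float) -> float:
                        return c + rate * c * (1 - c / ceiling)

                    c = 1.0
                    for _ in range(40):
                        c = improve(c, ceiling=100.0, rate=0.5)
                    print(round(c, 2))   # converges to the ceiling, 100.0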
        • razodactyl 18 minutes ago
          You're measuring speed, not intelligence. It's a different metric.
          • d_silin 16 minutes ago
            It is exactly the same metric. Intelligence is not magic, be it organic or LLM-based. You still need to go through the training-set data to make any useful extrapolations about unknown inputs.
  • guelo 1 hour ago
    This isn't right; the inflection point happens when computers/software can self-improve at a level where humans can't keep up. It isn't just that progress is continuously exponential; it's that tech becomes a magic box that spits out advances while even the smartest humans can only pray to it, like a (hopefully benevolent) god.
  • d_silin 1 hour ago
    A good reminder that every technological exponential is a sigmoid.