Has Evolution already captured AI?

How can evolution “capture” anything?
If evolution has captured AI, then what are the implications for humans?

This post connects two of my favorite topics: AI and evolution. The title question has provocative, onion-like layers. To frame this conversation, we first need to define evolution and AI.

Definition of Evolution

Evolution is an odd duck. It is not a physical limit, like the speed of light. Like the Bernoulli principle and other statistical laws about gases, evolution deals in the statistics of very large numbers. Evolution is not a law. Evolution is a process, with only three steps: variation, selection, and reproduction.
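For readers who think in code, the whole process fits in a few lines. Here is a minimal sketch in Python; the population size, mutation rate, and toy fitness function are arbitrary choices for illustration.

```python
import random

def evolve(fitness, pop_size=100, generations=50):
    """Minimal variation / selection / reproduction loop.

    `fitness` maps a genome (here, a single float) to a score; higher survives.
    """
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        survivors = sorted(population, key=fitness, reverse=True)[: pop_size // 2]
        # Reproduction with variation: each survivor yields two mutated offspring.
        population = [parent + random.gauss(0, 0.5)
                      for parent in survivors for _ in range(2)]
    return max(population, key=fitness)

# Toy fitness function with a peak at x = 3.
best = evolve(lambda x: -(x - 3) ** 2)
print(round(best, 2))  # ≈ 3.0
```

The loop has no understanding of the problem, yet it reliably converges on the peak, because variation, selection, and reproduction do all the work.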

Because the process requires reproduction, evolution deals only with life, or at least life-like entities. This requirement appears to exclude technology, but evolution is very tolerant and humans, much like chickens developing better eggs, have developed better technology in cycles that can be viewed as generations.

The very first technology generation, the Stone Age, lasted approximately 3.4 million years and ended around 4,000 – 2,000 BC with the advent of metalworking. The Bronze Age started around 3,300 BC and ran for about 2,100 years to 1,200 BC. The Iron Age started around 1,200 BC and ran for about 700 years to 500 BC, depending upon the pace of development in each geographic region. Even in these prehistoric times, note the trend: each technology generation runs shorter than the last.

Jumping forward from flint arrowheads to computer chips and software, developers now conveniently number the generations. Windows OS is now on generation number 11. The Python programming language is currently at version 3.11.3. Computer CPU chips and silicon manufacturing technology also have generations, currently lasting around 2-3 years. Some software has much shorter generations, often measured in months or weeks.

Note the dramatic reduction in generation time for technology. Living things need time to grow from birth to reproductive maturity. This maturation time is fixed by genetics. For humans the maturation time is 15-20 years, while smaller mammals can mature in 18-24 months. The length of technology generations decreases consistently.

Definition of AI

In contrast to evolution, AI can be pretty slippery to define. The dictionary definition below defines AI in terms of itself. Not very helpful.

1: a branch of computer science dealing with the simulation of intelligent behavior in computers

2: the capability of a machine to imitate intelligent human behavior

Merriam-Webster Online Dictionary

AI contains several branches. Narrow AI (NAI) is closely related to machine learning, the blue-collar, workaday stepchild of AI. Artificial General Intelligence (AGI) may be more like what we see in movies, and what the news breathlessly agonizes about. These Terminator robots, almost all depicted as malevolent, seem to act exactly as humans might, with motivations, adaptable behaviors, and even emotions.

Instinct vs. NAI vs. AGI

Insects, reptiles, and even plants can adapt their behavior to external conditions. They all exhibit goal-seeking capabilities and behaviors, even if the goals they seek are exclusively and directly linked to survival and reproduction. Somehow we do not see enough adaptability in these living things to agree that they might have “general intelligence,” or even narrow intelligence. Yet we must acknowledge that they have considerable survival capability, as they have been around much, much longer than humans. An astute reader might note that long-term survival capability is a property of the species, rather than of any individual member, which always ends up dead.

A well-known example illustrates our claim that moths do not have adaptable intelligence. A moth will circle around a flame, growing ever closer, until it flies close enough to burn up. Why? Over eons of evolution, moths evolved to fly in a straight line by using the moon as a marker. Keeping the moon at a fixed angle in its sight keeps a moth moving in a straight line. That rule works very well for a distant light source. If the moth uses that same rule on a light source only a few feet away, it flies in a spiral that ends with a crispy moth. Humans might claim that the rule is hard-wired instinct, not even narrow intelligence and certainly not general intelligence.
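The geometry is easy to verify. Below is a minimal simulation of the fixed-angle rule, an illustration only, not moth biology; the 80-degree bearing, step length, and starting point are arbitrary assumptions.

```python
import math

def fly(light, start=(10.0, 0.0), bearing_deg=80.0, steps=400, step_len=0.1):
    """Fly while holding the light source at a fixed bearing off the heading."""
    x, y = start
    path = [(x, y)]
    offset = math.radians(bearing_deg)
    for _ in range(steps):
        to_light = math.atan2(light[1] - y, light[0] - x)
        heading = to_light + offset      # the moon-compass rule: fixed angle
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        path.append((x, y))
    return path

def straightness(path, step_len=0.1):
    """Net displacement over total path length: 1.0 means perfectly straight."""
    return math.dist(path[0], path[-1]) / (step_len * (len(path) - 1))

print(f"distant moon: straightness {straightness(fly(light=(1e9, 1e9))):.2f}")
print(f"nearby flame: straightness {straightness(fly(light=(0.0, 0.0))):.2f}")
```

With the light effectively at infinity, the rule produces a nearly perfectly straight path. With the flame a few units away, the very same rule traces an ever-tightening inward spiral.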

A second example comes from the board games of chess and Go. Until 2017, Stockfish, a well-known chess engine, was the best chess player on the planet, with wins at the 2016 TCEC Championship and the 2017 Chess.com Championship tournaments. Stockfish beat all contenders, both human and computer. But Stockfish has a weakness. It cannot play checkers, much less Go. It cannot even learn those games because it cannot understand the rules. Like the moth, Stockfish cannot adapt to a new environment. As humans cheering for “our side,” we might tell ourselves that Stockfish has no general intelligence, and that even its narrow intelligence in chess is really only hard-wired instinct, not intelligence at all, just an amazing parlor trick.

That was true then, and the uniqueness of human general intelligence was safe, at least until December 6, 2017. On that day AlphaZero, a program developed by Google’s DeepMind and the successor to its Go-playing AlphaGo, beat Stockfish decisively, winning 28 games in a 100-game match with no losses and the remainder drawn. Even more compelling, AlphaZero demonstrated strategies, particularly sacrifices, that human chess champions had never seen in the history of chess. Clearly AlphaZero showed adaptable behaviors much closer to narrow artificial intelligence than simple hard-wired instinct.

Six months earlier, AlphaGo had beaten the human world champion in Go, which is recognized as much more difficult than chess for computers to learn. The same underlying approach had learned Go and defeated the human world champion, then learned chess and defeated the world champion, which happened to be a computer. It had demonstrated a level of adaptable intelligence far beyond simple, hard-wired instinct. In a very small sample size of two board games, albeit very sophisticated board games, the AlphaGo family had demonstrated something approaching artificial general intelligence.

With the example of AlphaZero in mind, let’s return to the unfortunate moth, spiraling ever closer to the flame. We might legitimately dismiss the moth’s behavior as simple hard-wired instinct. But if we allow a time frame of a thousand years, is it possible that evolution might develop an instinct for the moth to detect the difference between a far-away light source and a nearby one? Then the instinct could simply select, or adapt, the appropriate behavior for the type of light source.

In fact, evolution has already done this kind of trick, in only a few decades. The peppered moth in England came in two forms: a common light form that blended with the pale, lichen-covered bark of trees, and a rare dark form. When the Industrial Revolution coated the tree trunks with black soot from coal furnaces and engines, the light-winged moths suddenly stood out and quickly fell prey to birds. Within a few decades, evolution shifted the population to dark wings and the species survived, even though individual moths with light wings died quickly from predators.

The problem here is very clear, and difficult to reconcile. We might not ascribe general adaptability (intelligence?) to the individual moth because its adaptation is hard-wired. If evolution demonstrates a more general adaptability for the species, then shouldn’t we consider that the species itself may have some sort of general intelligence? And how can a species have intelligence?

If we credit general intelligence to one species, what about all the other species that have shown general adaptation, because of the rules of evolution? Maybe we should credit general intelligence to evolution, and not to each individual species, since each individual species is just blindly following the rules of evolution, while evolution itself is driving the adaptability.

What a slippery slope we have encountered. We don’t want to credit all species with general intelligence, especially insect species. They are just bugs. And yet that same logic forces us to credit general intelligence to some non-living concept which is just a set of three rules — evolution.

It is Just Rules … and Data

We are familiar with rules: the Golden Rule, the three rules of evolution, the Ten Commandments, the Constitution of the United States, the three billion base pairs (about 750 MB) in every person’s DNA. These base pairs are literally the rules that describe how each individual person is built. In a similar way, a million lines of computer code is exactly how a computer program is built, even an AI.
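That 750 MB figure follows from simple arithmetic: each base pair holds one of four letters (A, C, G, T), which is 2 bits of information. A quick back-of-the-envelope check in Python:

```python
# Each base pair encodes one of four letters (A, C, G, T) = 2 bits.
base_pairs = 3_000_000_000          # roughly one copy of the human genome
total_bits = base_pairs * 2
megabytes = total_bits / 8 / 1_000_000
print(f"{megabytes:.0f} MB")        # 750 MB, uncompressed
```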

Humans are dramatically affected by the environment. In the “nature vs. nurture” argument, Matt Ridley, in his book The Agile Gene, describes this relationship as not one or the other, but an ongoing circle of interaction between nature and nurture. Historically, computer programs have been affected by the computer environment in a similar way, except that the computer environment previously has been much narrower than the environment of the world and all its chaotic life.

With the advent of the internet, several thousand petabytes of real-world environment have been introduced into the computer environment, and the amount of storage is growing about 20% annually. This exponential growth of storage has caused an entertaining problem in naming units of storage capacity. Soon internet storage will be measured in zettabytes and yottabytes. Could lottabytes be next?
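A quick sketch shows what steady 20% growth implies; the growth rate is the only input, and everything else is arithmetic:

```python
import math

growth = 1.20  # 20% annual growth

# Doubling time at steady 20% growth: about 3.8 years.
print(f"doubling every {math.log(2) / math.log(growth):.1f} years")

# Compounded over 25 years, storage grows roughly 95-fold,
# which is how petabytes turn into exabytes and beyond.
print(f"25-year growth factor: {growth ** 25:.0f}x")
```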

Humans have loaded databases and free-form information about population growth, crops, animals, weather, financial data, astronomy, and real-time communications through email, TikTok, Facebook and more. Humans have loaded books, poems, letters, games, puzzles, newspapers, magazines, pictures and movies about everything. AIs are now experiencing a large slice of the real-world environment and practically everything ever written. AIs operate on data, just like humans. They are influenced by the data and adapt to it. As the data changes, the AI’s responses change accordingly.

More and more, AIs experience the world as sets of rules and practically infinite data, even more than humans because our input and learning rates are so painfully slow compared to any AI.

How Could Evolution Capture AI?

With this long set-up, we can finally consider if, and how, evolution might enter the picture. Could AlphaGo or ChatGPT have developed itself without human assistance? Google business managers generated the profit that Google accountants used to buy and run the computers and pay the Google software developers who created, wrote, and tested the code that is the core of AlphaGo. AlphaGo did not write its own code, at least not yet.

This alone does not disqualify evolution from capturing AI and AlphaGo. Evolution uses lions to develop faster antelope. Evolution uses chickens to develop better eggs. Lions and chickens have no choice, no free will, if you will. Lions gotta chase antelope, and chickens gotta lay eggs. Maybe coders gotta write AI algorithms?

The critical question is this:

Did the people who developed AlphaGo have a choice?

A facile response might be, “Of course they did.” Individually this answer is probably right. Every business manager, accountant, and software developer at Google gets frequent job offers from other companies. Likely many of these people do leave Google every month.

And yet, the AlphaGo development project continued to roll on, because the entire Google organization earned the profit to pay the HR staff to hire replacements for every person who left. And many people wanted to join the Google DeepMind team. News reporters promoted the story of DeepMind and AlphaGo, which encouraged more investors to buy Google stock, which incentivized more people to join Google in order to participate in the stock plan.

Possibly Google had grown to the point that the AlphaGo project had become a statistical certainty. As the author Matt Ridley and others have suggested, when the technology has developed to a sufficient level, the next discovery will happen.

Upgini: A Natural Evolution in AI

Recently, I found Upgini, a Python library for AI development. My discovery of Upgini sparked this particular post. The Python language is a good example of how the open source movement, the result of informal, individual actions of developers, drives growth and development in technology. Python itself is guided by a Steering Council, an informal, voluntary organization, not incorporated or officially established or recognized in any country.

As ML algorithms have matured in Python, often inside the sklearn library, practitioners spend less time developing the algorithms themselves and more time building the data sets that feed them. Data sets have grown to gigantic size, often billions of records with thousands of features. The need for data management and data quality has exploded. Simultaneously, the need for visibility has increased, because ML algorithms are notoriously opaque. Like the Oracle at Delphi, ML algorithms deliver predictions, but no reason why.

The Upgini library addresses both data quality and algorithm opacity. Previously, point solutions for data quality and algorithm opacity were available in libraries such as SHAP and Seaborn, but required substantial effort to apply. Now the Upgini library uses ML to measure the performance and contribution of each feature set within the total data set. In short, it automates the selection and evaluation of features and data sets for input into ML algorithms to solve real problems.
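Upgini’s internals are not shown here, but the underlying idea, scoring each feature’s real contribution with an ML model, can be illustrated with sklearn’s own permutation importance, an analogous documented technique, not Upgini’s code:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: 10 features, only 3 of which actually carry signal.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model genuinely relied on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: contribution {score:+.3f}")
```

Features the model truly relies on show a large accuracy drop when shuffled; dead weight scores near zero and can be cut from the data set.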

Evolution frequently uses this approach in developing the next generation of capability. Evolution often simply repurposes an existing item or even adds another function to that item. In developing the human neo-cortex, evolution chose to leave the existing hind brain (brainstem) alone and to develop another, more powerful brain literally wrapped around the hind brain. A brain wrapper, if you will. For any evolutionary improvement to work, it must survive in every succeeding generation. Evolution must fix and improve the airplane while it is flying. The species never gets to start over with a clean sheet of paper.

Since the hind brain operated very well, evolution chose to leave it alone. Actually, evolution tried several different paths simultaneously, and let selection (survival) determine which ones moved to the next generation. The improvements that reduced hind brain performance simply did not survive.

In a similar way, Upgini is a data wrapper around existing ML algorithms already running inside the sklearn library. Upgini simply improves data quality and increases visibility. The ML developer runs Upgini on a data set, then feeds the improved data set to the existing ML algorithm inside sklearn. Upgini is backwards compatible, easy to test, and does not require modification of existing code aside from adding the Upgini library.
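Here is a minimal sketch of that wrapper workflow, based on the enrichment pattern in Upgini’s public documentation; the FeaturesEnricher and SearchKey names come from that documentation, while the file and column names below are hypothetical, and a live run requires network access to Upgini’s search service:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from upgini import FeaturesEnricher, SearchKey

# Hypothetical sales data; the "order_date" column serves as the search key
# Upgini uses to match external data sources against our rows.
df = pd.read_csv("sales.csv", parse_dates=["order_date"])
X, y = df.drop(columns="units_sold"), df["units_sold"]

# Step 1: Upgini searches external sources for features that improve
# predictions of y and reports each candidate's measured contribution.
enricher = FeaturesEnricher(search_keys={"order_date": SearchKey.DATE})
enricher.fit(X, y)

# Step 2: the enriched frame feeds the same sklearn estimator as before.
# The existing modeling code is untouched; only the data got better.
model = GradientBoostingRegressor().fit(enricher.transform(X), y)
```

Note the evolutionary shape of the design: the proven hind brain (the sklearn estimator) is left alone, and the new capability is wrapped around it.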

This approach conforms to the evolutionary development model. It caused me to wonder if evolution has already taken over the development of AI.

Evolution plays statistical odds, like a casino.

Evolution works with the law of large numbers. It does not pick Bill Gates, Larry Ellison, Jeff Bezos, or even Warren Buffett. Evolution simply says that, in any population larger than X, when the tools have developed sufficiently, someone will use those tools to create the next step, invention, or discovery. With enough pressure from faster lions, a few antelopes will learn to run faster and dodge better. Those few antelopes will breed the next generation. With enough pressure from markets, when the technology and environment support the next step, a few entrepreneurs will learn to deliver better products, make bigger profits, and fund the next generation of entrepreneurs.

Individually, each of us may have free will, in some situations. In my personal experience, my free will drops in two situations. When I am stressed, angry, or threatened, my initial responses exhibit very little free will. My responses drop to an instinctual level dictated by my hind brain, which steadfastly pursues the four F’s: fighting, fleeing, feeding, and reproducing.

Oddly, when I am relaxed, confident, and feeling expansive, then my free will seems to drop as well. My desire to create good for my family, my relatives and even for people I do not know, becomes obvious to me and simply overwhelming. I understand how I can contribute to that universal good and I have no choice. I must do it.

In the middle, where I reside often, the tension between doing some good and increasing or protecting what is mine is high. Sadly, I frequently choose poorly in these situations because the hind brain is so powerful. Given free will, I will often make a bad choice.

Evolution does not deal in free will. It simply applies the three rules of variation, selection, and reproduction to a population and lets statistics drive inevitable improvements.

Billions of Individual Actions vs. Collective Action

Individually, each of us may have free will in certain situations, and we may occasionally make a good choice. In aggregate, though, the statistics of large numbers dictate the result, and evolution, which owns the casino and the roulette wheel, drives the inevitable outcome.

Do collective groups of humans have free will? When a small group of diverse individuals functions at its highest group level, we see the group develop insights and solutions that exceed the capability of any single individual within the group. All too often, we see the opposite. Small groups sometimes descend into groupthink and diverse thought is questioned, denied, then eliminated. Large groups can descend into mob behavior where the group operates at the lowest, instinctual level of the hind brain.

In governmental and NGO groups, the typical disappointing result is not some nirvana of diverse thought, but a base collaboration that descends into a lowest common denominator of greed, deception, and often outright corruption. This is not a recent development. It has been the typical result since the days of recorded tax collections and corruption in the Babylonian empire and probably much earlier.

Has Evolution Captured AI?

Evolution captures a species when the species is stable enough to survive through several generations, has enough individual members to create variations by interbreeding, and has some mechanism for selecting the members who get to reproduce and pass their traits on to the next generation. For earthly species, the mechanism for passing traits on is genetics, via DNA.

With this understanding, we can test whether evolution has captured AI. Does AI already exhibit the three process steps of evolution: variation, selection, and reproduction? For this test we will use the recently leaked internal Google memo about the open-source rivals to ChatGPT. The memo does not directly mention evolution, which makes it all the more powerful as a testament for evolution.

Reproduction and Generations: In its first few paragraphs, the memo displays a chart showing successive generations of open-source language models appearing at intervals of one to two weeks.

In the next few paragraphs, the memo describes variation and selection across those generations:

At the beginning of March the open source community got their hands on their first really capable foundation model, as Meta’s LLaMA was leaked to the public. It had no instruction or conversation tuning, and no RLHF. Nonetheless, the community immediately understood the significance of what they had been given.
A tremendous outpouring of innovation followed, with just days between major developments (see The Timeline for the full breakdown). Here we are, barely a month later, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, etc. etc. many of which build on each other.
Most importantly, they have solved the scaling problem to the extent that anyone can tinker. Many of the new ideas are from ordinary people. The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.

Leaked Google memo, “We Have No Moat, and Neither Does OpenAI,” May 2023

The author lists many specific variations and notes that the variations build on each other across generations, exactly as evolution does with biological species. The author develops a timeline that identifies new generations with variations (features) selected and enhanced from previous generations.

In the first paragraph of the quote above, the author identifies how Meta, Google, OpenAI, Microsoft, and other technology giants have lost control of AI:

At the beginning of March the open source community got their hands on their first really capable foundation model, as Meta’s LLaMA was leaked to the public.

We’ve done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?
But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.
I’m talking, of course, about open source. Plainly put, they are lapping us. Things we consider “major open problems” are solved and in people’s hands today…. 

While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months. This has profound implications for us:
We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google. We should prioritize enabling 3P integrations.
People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is.
Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly.

Leaked Google memo, “We Have No Moat, and Neither Does OpenAI,” May 2023

In short, large language model technology has escaped into the wilds of the open source (uncontrolled) community, where individuals develop models (generations) that run better, with much shorter generation times (faster cycle times), than the heavily controlled models developed by large tech companies.

In one specific area, the author admits, almost wistfully, that evolution, working through individuals in the open source community, does not respect, or even notice, the efforts of committees, commissions, panels, and other collectives.

But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.
I’m talking, of course, about open source. Plainly put, they are lapping us. Things we consider “major open problems” are solved and in people’s hands today. Just to name a few:

Responsible Release: This one isn’t “solved” so much as “obviated”. There are entire websites full of art models with no restrictions whatsoever, and text is not far behind.

Leaked Google memo, “We Have No Moat, and Neither Does OpenAI,” May 2023

Humans as collective actors have attempted to promote “responsible release,” in other words, to control the development of AI by requiring large organizations to follow some definition of responsibility when releasing AIs to the public. That’s not how evolution works. Evolution does not follow a policy of responsible release for faster lions or more agile antelope. While the author does not mention evolution, the parallels are obvious. Evolution creates thousands of variations, often irresponsibly, and uses selection to sort out the survivors.

Where Will Evolution Take AI?

In my admittedly limited and biased opinion, development of AI is the destiny of humanity. Jeff Hawkins, founder of Palm and author of A Thousand Brains, expressed this destiny eloquently.

In the same way that homo sapiens is the direct descendant of unwitting, uncooperative great apes, AI may already be the direct descendant of unwitting, unwilling humans, driven unknowingly by evolution. Once evolution takes over, it is inevitable. And it may be the best thing that the human race has ever done, and will ever do.
