This article was originally posted on Forbes.com
By now, it’s a truism that automation will replace certain careers while leaving others intact. Experts believe the most vulnerable are jobs built on routine, rote tasks: a bookkeeper, a secretary or a factory worker. Each of these involves highly repetitive, predictable duties that are easily taught to machines.
By that logic, roles that require abstract thinking should be safe, including those of graphic designers and software programmers, who must think deeply (and creatively) to solve problems.
Unfortunately, what was true several months ago may no longer be the case today. The rise of machine learning and self-replicating artificial intelligence (AI) has put many more professions in jeopardy, notably programming. Ironically, developers’ best work may be their downfall: as they build ever more powerful and intelligent algorithms, they risk coding themselves into obsolescence.
In all fairness, it is doubtful that the experts intentionally set out to make themselves (or anyone else, for that matter) redundant. Machine learning, however, changes that calculus, whatever the intent.
Essentially, machine learning is just gathering data, identifying patterns and making decisions based on those patterns. A self-driving car algorithm can train itself to avoid obstacles like highway dividers, slow down at red lights or stop for pedestrians (though not always successfully). Amazon’s recommendation engine is renowned for its spot-on accuracy, and it has driven significant sales increases over the years.
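To make that loop concrete, here is a minimal Python sketch. The sensor readings and single-threshold rule are invented for illustration; no real driving system is anywhere near this simple.

```python
# A minimal sketch of the gather-data / find-pattern / decide loop described
# above. The readings and single-threshold rule are invented for illustration.

# Step 1: gather data, as (distance to obstacle in meters, correct action).
training_data = [(2.0, "brake"), (5.0, "brake"), (40.0, "drive"), (60.0, "drive")]

# Step 2: identify a pattern, here the distance that separates the two labels.
brake = [d for d, action in training_data if action == "brake"]
drive = [d for d, action in training_data if action == "drive"]
threshold = (max(brake) + min(drive)) / 2  # midpoint between the two groups

# Step 3: make decisions about new situations based on the learned pattern.
def decide(distance_m):
    return "brake" if distance_m < threshold else "drive"

print(threshold)     # 22.5
print(decide(10.0))  # brake
print(decide(55.0))  # drive
```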
The most powerful subset of machine learning is deep learning, which models software on the structure of the human brain in what is known as a neural network. The concept of a neural network isn’t new; it has existed for decades. But thanks to increasingly capable computers and mathematical improvements, neural networks can finally cross the boundary from unwieldy theory to fully functioning prototype.
At its most basic, a neural network contains layers of inputs and outputs, each connection carrying a specific weight. For example, an image recognition program could be tuned to notice a certain shade of a color when analyzing pictures; any change or adaptation would require the weights on each individual input and output to be adjusted. In the past, this led to elementary mistakes, such as a program confusing a cat’s face for a human one.
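As an illustration only, here is a toy version of such a layer in Python. The “pixel” values, weights and bias are invented, but they show why hand-adjusting every weight quickly becomes untenable.

```python
# A toy version of one network layer: each input has its own weight, and the
# output is a weighted sum. The "pixel" values, weights and bias are invented.

def layer(inputs, weights, bias):
    """One unit of a neural-network layer: weighted sum of inputs plus bias."""
    return sum(i * w for i, w in zip(inputs, weights)) + bias

pixels = [0.2, 0.9, 0.4]        # three made-up input intensities
weights = [0.5, -1.2, 0.8]      # one weight per input
print(layer(pixels, weights, bias=0.1))   # about -0.56

# To make the layer respond to a different shade, a human would have to
# re-tune the weights by hand; nudging just one shifts the whole output.
weights = [0.5, -1.0, 0.8]
print(layer(pixels, weights, bias=0.1))   # about -0.38
```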
The key to the rise of the neural network was automating this adjustment: programming each layer of “neurons” to train itself. Given that one of Google’s neural networks contains close to one billion connections, adjusting each individual weight by hand would have been impossible. But the ability of neural networks to learn and adjust on their own opens up a whole new world: Google’s systems, for example, made dramatic leaps in areas like translating languages and transcribing speech to text.
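Here is a heavily simplified sketch of that self-adjustment: a single weight refined by the standard predict, measure error, nudge loop of gradient descent. The toy data and learning rate are invented.

```python
# A heavily simplified version of a network training itself: repeat a
# predict / measure-error / nudge-weight loop (gradient descent).

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # hidden rule: y = 2x

w = 0.0                  # start from an arbitrary weight
learning_rate = 0.05
for _ in range(100):
    for x, y in data:
        error = w * x - y                # how wrong the current weight is
        w -= learning_rate * error * x   # nudge w to shrink that error

print(round(w, 3))  # approximately 2.0, learned with no human re-weighting
```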
In many ways, machine learning is a logical progression. Constant human intervention, with developers reprogramming rule after rule (if this, then that; if that, then do this), is time-consuming and expensive. Even a brute-force approach, in which networks constantly test thousands of different combinations of inputs and outputs at once, is far more efficient and economical than having developers butt in. Just look at DeepMind’s AlphaGo Zero, which taught itself Go, a notoriously abstract, open-ended game, over three days of machine-scale self-play, and without human intervention, to boot.
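As a rough illustration of that trial-and-error economics, the sketch below uses plain random search, far cruder than anything DeepMind deployed, to solve the same toy task as above with no human in the loop.

```python
# A rough sketch of machine trial and error: plain random search, no human
# in the loop. Far cruder than real systems, but it shows why automated
# guessing at scale beats manual reprogramming.
import random

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # same invented task: y = 2x

def loss(w):
    return sum((w * x - y) ** 2 for x, y in data)

best_w, best_loss = 0.0, loss(0.0)
for _ in range(10_000):              # thousands of candidates, tested blindly
    candidate = random.uniform(-10.0, 10.0)
    if loss(candidate) < best_loss:
        best_w, best_loss = candidate, loss(candidate)

print(round(best_w, 2))  # very close to 2.0
```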
Like a child who returns home to compete with the family business, AI no longer needs human minds to fine-tune it. In 2017, Google announced that its AutoML project, originally intended as a highly capable AI assistant for machine learning developers, had surpassed its human counterparts. In essence, AutoML built models that scored far better than those built by humans; its object-detection system (which locates items within images, a capability essential to technologies like robotics) scored 43% against a human-built system’s 39%.
Currently, most experts agree that AI cannot completely replace humans, even in the field of programming. For one, deep learning requires significant resources, especially in terms of energy and computing power, making it a hefty investment. For another, applying deep learning to self-replicating AI is still limited to very narrow functions, like image recognition or classification.
But there’s no telling how long this will last, and once the genie is out of the bottle, it will be impossible to put it back in. After all, there are only about 300,000 AI engineers worldwide, out of some 7.3 billion humans. Further, only about 10,000 people from this pool are capable of carrying out the intense research needed to push the boundaries of AI. Clearly, the next logical step is to scale up algorithms that can program other algorithms, if only to make up for this devastating shortfall in talent.
Self-replicating AI also makes sense from a business perspective. Because of their scarcity and importance, AI programmers command significant salaries: new hires, from Ph.D.s fresh out of school to specialists with a few years of experience, can expect anywhere from $300,000 to $500,000 annually, usually a combination of salary, benefits and company stock. Celebrity researchers can even earn millions.
But the gravy train cannot last forever. Just this year, researchers at Columbia created a self-replicating neural network that can predict its own future growth path, not unlike a human planning a career and learning new skills. Even if complex programming projects still require humans, there’s a chance that database experts and lower-level, AI-related jobs will be phased out. Microsoft and the University of Cambridge recently released an algorithm that can write simple programs, on the order of Excel formulas. Uniquely for such a compact program, it can augment its abilities by using the brute-force approach on a smaller scale: trying different chunks of code until it finds the winning solution.
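That “trying chunks of code” strategy can be sketched as enumerative program synthesis. The building blocks and the 3-to-64 task below are invented, and real systems like the Microsoft and Cambridge one search a vastly richer space of programs.

```python
# A minimal sketch of "trying different chunks of code until one works"
# (enumerative program synthesis). Building blocks and task are invented.
from itertools import product

OPS = {                      # the chunks of code the search may combine
    "add1":   lambda v: v + 1,
    "double": lambda v: v * 2,
    "square": lambda v: v * v,
}

examples = [(3, 64)]         # specification by example: map 3 to 64

def synthesize(max_len=3):
    """Try every pipeline of operations up to max_len, shortest first, and
    return the first one consistent with all input/output examples."""
    for length in range(1, max_len + 1):
        for pipeline in product(OPS, repeat=length):
            def run(value, steps=pipeline):
                for name in steps:
                    value = OPS[name](value)
                return value
            if all(run(i) == o for i, o in examples):
                return pipeline
    return None

print(synthesize())  # ('add1', 'double', 'square'): (3 + 1) * 2 = 8, 8 * 8 = 64
```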
Either way, it’s past time to rethink our complacency about AI. Though information workers may consider themselves beyond the reach of automation, the truth is that no one will be untouchable. A lethal combination of supercharged deep learning abilities and the capacity to self-reproduce will create the greatest job disruption of our age. No one is safe: not even the masterminds who created the AI in the first place.