Artificial Intelligence - Today's Future Challenge


In January 2017, an A.I. model developed by Alibaba's IDST outperformed humans in a reading-comprehension test. In January of this year, Ford unveiled an A.I. car capable of communicating with other vehicles in order to stop and fine traffic offenders.

Every day, Siri assists millions with their searches and organizational tasks. Watson prepares taxes at H&R Block and takes part in sports broadcasts alongside human commentators. And as of February of this year, the NHS's mental health services have been offering virtual-reality treatments as a cheaper alternative to consultations with a human therapist.

Once science fiction, now a reality - Artificial Intelligence is here to stay.

As a field of study, Artificial Intelligence has been around since the mid-twentieth century, growing out of Alan Turing's work on the Turing machine, a theoretical predecessor of modern computers. Its advancement has had its upswings and ebbs over time, depending on the financing available.

Now A.I. is finally making serious progress, thanks to a series of recent technological developments: smaller chips, cloud computing, neural networks, 5G networks, and more.

All of these amount to an achievement of historic proportions for human invention. Yet no other technological advance has aroused as much excitement and concern as A.I., which many see as a threat to humanity as a whole.

But what exactly is an 'artificial intelligence', and how many of our fears are justified rather than mere speculation?


In computer science, 'Artificial Intelligence' refers to any device that perceives its environment and takes actions to maximize its chances of achieving its programmed goals.

It is called 'artificial intelligence' because it mimics the human ability to learn and make decisions, using algorithmic calculations that rely on statistics. Every choice the mechanism makes is drawn from a list of possibilities, from which it picks the one with the highest estimated rate of success.

If the choice is incorrect (for instance, declaring that a subject is 'sad' when in fact they are 'tired'), the A.I. mechanism records the correction in its database. The next time it faces a similar situation, that correction is factored into the possible options, and the probability of giving the right answer is much higher.

Just like children, machines ‘learn’ through a series of successes and mistakes.
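
To make that learning loop concrete, here is a minimal, purely illustrative Python sketch (the 'mood' labels, the counting scheme and the class name are invented for this example, not drawn from any particular system): the guesser always picks the answer with the highest success count so far, and each correction makes the right answer more likely the next time a similar observation appears.

```python
from collections import defaultdict

class NaiveMoodGuesser:
    """Toy model: pick the most likely label, then learn from corrections."""

    def __init__(self, labels):
        self.labels = labels
        # Every (observation, label) pair starts with a count of 1,
        # so all options remain possible before any feedback arrives.
        self.counts = defaultdict(lambda: defaultdict(lambda: 1))

    def guess(self, observation):
        # Pluck out the label with the highest success count for this observation.
        scores = self.counts[observation]
        return max(self.labels, key=lambda label: scores[label])

    def correct(self, observation, true_label):
        # Archive the correction so the right answer wins next time.
        self.counts[observation][true_label] += 1


guesser = NaiveMoodGuesser(["sad", "tired", "happy"])
print(guesser.guess("yawning"))        # first guess: "sad" (all counts are tied)
guesser.correct("yawning", "tired")    # feedback: the subject was actually tired
print(guesser.guess("yawning"))        # now the guess is "tired"
```

This is nothing like a modern neural network, of course, but it captures the loop described above: choose from a list of possibilities, get corrected, choose better.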

However, although this method is faster than coding every single possible outcome for every instance, A.I. can't be applied to all situations. For certain decisions (for example, in a life-or-death emergency), we can't wait for the A.I. mechanism to learn the right choice through experience.


A.I. engineers are now focused on imitating human behaviour to the point where it can't be distinguished from that of a real human being. But is this a good idea?

A recent article published by The New York Times highlights that A.I. technologies copy human responses even in their lewd language and social prejudices, raising moral objections among both users and developers.

Fortunately, humans have so far proven hard to trick, quickly spotting the fakery. But as the rapid advance of technology catches up with us, the day when we fail to distinguish a human response from one generated by an A.I. mechanism may not be far off, fuelling the fear of 'human replacement'.


Whenever a new technology comes onto the market, there are predictions of displacement - out with the old, in with the new.

MP3s and streaming would mark the end of older music formats, e-books and e-readers would be the end of paper books, ATMs would mean the end of bank tellers, the internet would be the demise of libraries, online classes would mean the end of physical classrooms and teachers would be replaced by machines, and so on.

However, most of these new technologies now coexist with the old, and some have even caused the reverse effect: they have made people crave the analogue and stick to it more firmly, whether out of sheer habit, nostalgia, convenience or unforeseen qualities (for instance, many listeners find that LPs sound better than compressed digital music files).

Certain artisanal trades, such as shoemaking and tailoring, have even become associated with exclusivity and higher quality, turning them into a symbol of social status.

As anyone who has been the victim of ‘autocorrect’ knows, implementing a new technology is a long process of trial and error that depends on the willingness of the public to be part of it, and to adapt to it. 

Because not everyone is comfortable or confident with technology.


Recently, my neighborhood supermarket replaced half of its checkout cashiers with self-service machines. It was a predictable change, and one that considerably sped up the shopping process. But in spite of this efficiency, I was alarmed at the thought that the cashiers I had known for years might have been let go - and relieved to find them working a different shift.

When I told one of them about this, she shook her head. "I don't think we'll be replaced," she said with assurance. "You should see us at peak hour - people would rather wait in long lines than use the machines!"

Maybe it's because my neighborhood is rather traditional in spite of its young population, but the episode still illustrates how unpredictable the adoption of new technologies can be.

And this also includes the job market.
                       
Just as Rolodexes and Filofaxes were swapped for databases, architects traded their drawing tables and T-squares for AutoCAD, and paper route maps were replaced by GPS, we have evolved along with technology and the new tools developed to make our lives easier.

And just as the Industrial Revolution and the advent of the PC revolutionized the workforce, A.I. and its related technologies are causing a similar upheaval, but at a much faster pace. This time the ground between human skills and technology is more level, thanks to the rise of neural networks and deep learning.


So, how true is it that machines will soon be replacing humans in the workforce? If we are to believe the latest global studies, the possibility is very real and present, and it will affect a large portion of the current workforce worldwide.

Early in 2018, the McKinsey Global Institute produced a report stating that around 70 million people in the world are at high risk of being replaced by machines. The same report estimated that by 2030 the demand for office workers would drop by 20%.

A report presented at the World Economic Forum in Davos in February of this year stated that investment in A.I. could increase a company's earnings by 38% over the next two years.

And while the report also states that this investment should boost employment by 10% over the same period, it is not clear how that would compensate for the job losses generated by the inclusion of A.I. in the workforce.

In the past, we've seen human jobs replaced by machines, especially in the manufacturing industry.

While this meant massive layoffs at the time, in the long run it protected the human workforce from exploitation (of the kind that still happens in textile factories and other cheap-labour operations, the so-called 'sweatshops' banned in most countries), increasing production while keeping production costs to a minimum, without sacrificing the quality of the finished product.

In hindsight, we can see the benefit of this replacement. But technology doesn’t always operate for our common benefit.

So far, technology has been convenient, but now we’re feeling threatened by it.

Part of the alarm resides in the fact that, while in the past technology replaced menial jobs (such as phone operator, elevator operator, laundry washer and the like), the threat of job replacement now looms over a wider range of occupations and on a larger scale.

Both skilled and unskilled workers stand to be replaced by these new technologies. Once-prestigious jobs such as 'lawyer' and 'pharmacist' may soon become obsolete.


The 2013 Oxford University study 'The Future of Employment' cites telemarketers, insurance underwriters, cashiers and tax preparers as some of the jobs at highest risk of being replaced by newer technologies, and recreational therapists, physicians, surgeons, pre-school teachers and athletic trainers as some of those at lowest risk.

The functioning basis of A.I. is statistical data processed at great speed. This is why A.I. is, in the first instance, most easily applied to repetitive or monotonous jobs that do not involve abrupt changes in parameters and conditions (computers work best with what they can predict) or subtle judgment calls, especially in the finance sector.
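
A toy example of that limitation (invented purely for illustration; the 'task' strings and the frequency rule below are not from any real system): a predictor that simply bets on whatever has happened most often works well while the work stays repetitive, but has nothing to offer the moment conditions change abruptly.

```python
from collections import Counter

def predict_next(history):
    """Bet on the item seen most often so far."""
    return Counter(history).most_common(1)[0][0]

# Repetitive, predictable work: the statistical bet is almost always right.
routine = ["stamp form", "stamp form", "stamp form", "stamp form"]
print(predict_next(routine))    # "stamp form"

# An abrupt change of conditions: past statistics offer no useful guidance.
shifted = ["stamp form", "stamp form", "stamp form", "fire alarm"]
print(predict_next(shifted))    # still "stamp form"
```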

However, even creative jobs aren’t entirely safe from replacement. 

With machines creating 'works of art' in the lab, new programs producing written articles, and algorithmic programs composing commercial music, there is also some chance of replacement in the arts.


As pioneering YouTuber Taryn Southern stated at the TNW Conference in 2017, "There's a tsunami of change coming, and the most successful storytellers won't necessarily be the people who are best at telling stories, but those who are best at identifying and adapting new tools, platforms and technologies for creation."

But is a program that writes financial reports definitive proof that A.I. will replace journalists? Writing a dry numbers report is one thing; writing a critique of a cultural event is quite another.


Specialists insist that A.I. will help create new jobs, but this will only happen through adequate training in the new technologies, something that is currently not available to everyone or to every occupation.

Parts of the workforce will inevitably fall behind.

Although these studies strive to provide approximate dates, it is truly impossible to predict when certain jobs will cease to exist. That will depend on the development curve of these new technologies, combined with each country's socio-economic conditions and local regulations.

For instance, a law firm in Australia has already opened a fully machine-operated office to assist the public in drafting wills.

Still, some jobs will not be so easily replaced, even when the technology is available: because it will still be more convenient to have a human employee, because the replacement technology is too cumbersome or costly to justify the change, because the job requires a specific set of skills that are hard to replicate with technology (for example, the sense of smell and taste required of chefs and cooks), or simply because we will still need the human touch.


Inevitably, some jobs will be replaced by A.I. But just as happened with the rise of other technologies in the past, we can expect things to eventually balance out, with humans and A.I. coming to coexist harmoniously side by side.

Most A.I. mechanisms will need to work in tandem with human specialists, who will supervise their functioning so they can continue learning. At the same time, workers will need to diversify and learn new skills and technologies in order to stay current and employable.

But if the inclusion of A.I. raises concern in the human workforce, it does so just as much in terms of data and global security. A recent study at Oxford University determined that A.I. is vulnerable to exploitation by hackers, rogue states and terrorists.


A.I. advocates argue that global standards and laws are needed to regulate the appropriate use of this new technology. But, just as has happened with nuclear weapons, laws alone won't be enough to keep A.I. from falling into the wrong hands and being used against humanity. It is a pressing issue that governments will need to discuss and decide on in the months and years to come.

Fortunately, we're still far from our worst fears becoming a reality. As Yann LeCun, Facebook's A.I. research director, recently stated: at this point, machines "have less common sense than rats." However, given the rapid learning curve of A.I. mechanisms, this may change in the near future.

History is rife with unfulfilled predictions, and if one thing remains truly human in this world, it is our utter complexity and our capacity for mistakes.

As Fei-Fei Li, Chief Scientist for A.I. and Machine Learning at Google Cloud, stated in her lecture at the New York Times' 2018 New Work Summit, "for this technology to play a positive role in tomorrow's world, we must put humanity back at its center."


Perhaps the greatest threat of A.I. is that people may grow used to robots and technology taking the place of humans. Once we lose our desire to connect with other humans, once we stop distinguishing what is human from what is an algorithm, and once we accept this as the norm - then the battle will be lost. Today we see that as impossible, but, as the epidemic of smartphone addiction has proven, it is not.

Preserving our connection to nature and to real humans is our best, and perhaps our only, bet for the future.
