Machines have grown dramatically in intelligence over the past several years, and some predict they may exceed human-level intelligence by 2040. What will be the social and ethical consequences if machines design and build new machines with a higher intellectual level than their own? How smart can they become? Take the Hollywood movie Her as an example: a fictional story set in a future where software can be a great conversationalist, deeply caring, and a gentle friend that “learns” to develop feelings and form relationships through human interaction. Whether or not you believe this is a progressive step in human history, this is the future. Although there are concerns about the technological possibilities and social consequences of achieving this kind of strong/general Artificial Intelligence (AI), the weak/specific and largely functional versions of AI are already transforming our lives.
According to Delft University of Technology in the Netherlands, drone ambulances flying 100 km per hour can increase survival after a heart attack from 8% to 80%. A smart “assistant doctor” may be faster and more accurate than its human counterparts: IBM’s Watson, for instance, draws on millions of pages of structured and unstructured medical literature, using a special technology of hypothesis generation about the causes of symptoms, massive evidence gathering, analysis, and diagnosis. These are positive developments that make human life easier.
As AI becomes pervasive in every aspect of industrial and ICT productivity, experts have highlighted a few important concerns. One is a hotly contested debate: the unemployment versus reemployment effects of new technologies. Historically, new technologies have automated some jobs while creating new ones. Rainer Strack makes an important point about the limits of our imagination regarding the jobs future technologies will create: roles such as the “cognitive systems engineer who optimizes the interaction between driver and electronic system” are positions that in 1980 “no one had the slightest clue” would exist. Kevin Kelly argues in Wired Magazine that robot replacement of most jobs is inevitable and necessary for a more human existence.
An important concern surrounding AI is the ethics and values of using this technology, as there is potential as well as peril in letting “Her” change certain operations by “herself”. From one standpoint, it will be rewarding to witness smart machines discover cures for illnesses, solve environmental problems, and make critical services accessible to larger segments of society. From the other standpoint, superintelligent machines may create other intelligent machines or software that could alter the fate of humanity through an intelligence explosion, posing an existential risk. Recently, CBC’s The National reported that “artificial intelligence fear is escalating” and suggested we should wake up and be concerned about the potential unwanted consequences. The Future of Life Institute in Boston published an open letter in January calling on researchers and industry to set research priorities for AI, because “it is important to research how to reap its benefits while avoiding potential pitfalls.” In other words, what we should and should not allow with this critical technological capacity is itself a pressing concern.
Big players in ICT (e.g., Facebook, Google, Apple, IBM, and Microsoft) are already making strategic and commercial investments in AI technologies. Leading economies are developing AI for digital competitiveness and productivity, and for solving critical problems and facilitating human life. ICTC has recently published a white paper, Artificial Intelligence in Canada: Where Do We Stand?, assessing Canada’s readiness for a highly competitive global future. Developing an AI roadmap to plan our future, investing in AI technologies, and strategically considering the challenges in advance might be our first steps.
Read the full white paper: Artificial Intelligence in Canada: Where Do We Stand?