The Rise of Artificial Intelligence

Katherine Heires | May 1, 2015

Artificial intelligence—AI for short—refers to technology that aims to bring a greater level of autonomy to computers, giving them the ability to assess, learn, predict, infer and then make real-time decisions without user input. Developed as an area of study in the 1950s and used as the inspiration for many a science fiction film, it is currently riding a wave of renewed activity and innovation.

Its resurgence in the business world has brought with it new services and startups that incorporate state-of-the-art AI to automate and accelerate a range of activities, including financial risk management, identification of cyberthreats and fraudulent activity, security threat tracking, and medical diagnosis and analysis. Other AI-powered innovations include self-driving cars, smarter drones, more dexterous robots, and wristbands and watches that can track, measure and diagnose health conditions.

“There is definitely a big buzz about artificial intelligence right now,” said Akli Adjaoute, founder and CEO of Brighterion, a provider of AI-powered data breach and fraud detection software used by firms like MasterCard. “It’s akin to all the enthusiasm we saw during the dotcom era.”

Dennis Mortensen, CEO and founder of x.ai, a New York-based AI startup that raised $10 million in venture capital funding in January, believes we have entered the third major wave of enthusiasm for AI technology, which has seen highs and lows since the 1950s. “We have had several AI winters where investors and technologists lost interest, but this time around, we could see another hundred companies enter the space in the next few years,” he said.

Yet with all the new excitement about AI, there have also been renewed warnings from notable technologists including Bill Gates and Elon Musk about the potential dangers that could come with the proliferation of advanced artificial intelligence technology. They have also called for the establishment of public policy or development guidelines to better manage the potential risks of unfettered AI tech.

Thus far, AI enthusiasts and active service providers have garnered the most attention. This is evident in the recent acquisition and innovation activity of major firms. For example, Google acquired four AI startups last year—most notably in its $500 million purchase of DeepMind Technologies, a developer of AI algorithms that can learn new skills without additional programming. Google also launched its Google Now virtual assistant service and has continued its foray into AI-powered self-driving cars.

Meanwhile, Microsoft recently launched Cortana, a virtual assistant and AI-powered email filter for its phone system. Apple, already known for its AI-powered, voice-based question-and-answer service, Siri, appears to be expanding into the AI-powered car business as well. Other active players in this arena include Amazon, Netflix and IBM, whose AI-powered Watson computer beat humans on the television game show Jeopardy! and is currently being used to improve and automate cancer diagnosis and treatment at Memorial Sloan Kettering Cancer Center. In addition, IBM recently partnered with SoftBank, the maker of a four-foot-tall household robot called Pepper that can read human emotions through facial-recognition software.

Forward-thinking entrepreneurs and investors, meanwhile, are not sitting on the AI sidelines. Venture capitalists poured more than $300 million into 16 AI startups in 2014, up from just $14.9 million across two companies in 2010, according to research firm CB Insights.

Notable startups funded last year include the aforementioned x.ai, which is building a human-like email scheduling and negotiation service; Clarifai, which focuses on visual recognition and tracking technology; Entrupy, which uses AI for counterfeit detection; MetaMind, which aids financial and healthcare firms in decision-making and offers software that helps databases self-correct; Vicarious, which aims to build machines that can perceive shape, texture, color and depth; and Sentient Technologies, which aims to apply AI techniques to complex problems like predicting financial market behavior and analyzing vast quantities of medical data. Sentient raised $103.5 million in venture capital funding in November, bringing its total venture backing to $143 million.

“People are very hopeful today that all kinds of problems can be solved by applying AI technology to massive data sets, and AI systems certainly have the potential to cut down risk in a variety of applications,” said Ernest Davis, an AI researcher and professor in the computer science department at New York University.

According to industry analysts, a confluence of factors has resulted in the recent revival of AI interest and activity. These include falling technology costs; the ability of state-of-the-art systems to process large quantities of data; cheaper, faster and ever more powerful computer chips; and a drop in the price of data storage that makes it far easier to store and access large quantities of data.

Coupled with renewed interest in a subset of AI research known as neural network models, or deep learning, these factors have led to vast improvements in AI’s capacity to solve a range of business problems. Deep learning techniques involve building computer models whose structure is loosely modeled on the neural networks of the human brain. Such models learn from the data they observe, allowing computers to make smarter choices and to better recognize images and understand natural language. Indeed, many of the startup firms attracting venture capital funding today employ these techniques.
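To make that concrete, here is a minimal, illustrative sketch (ours, not any company’s production system) of what learning from observed data looks like: a tiny two-layer neural network, written in Python with NumPy, that learns the XOR function from four examples via gradient descent. The layer sizes, learning rate and iteration count are arbitrary choices for the demonstration.

```python
import numpy as np

# Training data: the XOR function, a classic toy problem that
# requires a hidden layer; no single-layer model can represent it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Backward pass: push the prediction error back to each layer.
    d2 = (pred - y) * pred * (1 - pred)
    d1 = (d2 @ W2.T) * h * (1 - h)

    # Gradient descent: nudge every parameter to shrink the error.
    W2 -= h.T @ d2
    b2 -= d2.sum(axis=0)
    W1 -= X.T @ d1
    b1 -= d1.sum(axis=0)

print(pred.round(2))  # should approach [[0], [1], [1], [0]]
```

The deep networks behind image recognition and natural language systems stack many more layers and train on vastly more data, but the core loop is the same: predict, measure the error, adjust the weights.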

“Utilizing deep learning techniques, we are seeing breakthroughs in a range of AI fields,” said Devi Parikh, a professor at the Virginia Center for Autonomous Systems at Virginia Tech who recently won a grant from the Paul G. Allen Foundation’s Distinguished Investigator Program in Artificial Intelligence.

“What’s exciting about this is the promise of computers that can learn by themselves, as opposed to requiring an expert designer every time you want a computer to do a different task,” said Dave Sullivan, CEO of deep learning platform provider Ersatz Labs and an advocate for the use of AI tech by a broader range of companies, not just technology giants.

But the new advances in AI tech have also prompted speculation about the existential risks that AI could introduce. One of the most prominent voices of caution is Nick Bostrom, a professor of philosophy at Oxford University and director of the Future of Humanity Institute. In his book Superintelligence: Paths, Dangers, Strategies, Bostrom argues that advances in AI technology are continuing at a fairly rapid pace and could lead to an artificial intelligence that far surpasses human-level intelligence. Shortly thereafter, he says, we could see the emergence of an almost omnipotent superintelligence with the ability to psychologically manipulate people, one that could prove disastrous for society as we know it. Ultimately, such a superintelligence could take over and make decisions that make sense to machines but are anathema to humans. Thus, he believes a top priority for society will be figuring out how to imbue a superintelligence with a system of human-like ethics and morality, which he concedes will be quite difficult to achieve.

Although Bostrom’s scenario sounds like the far-fetched plot of a science fiction film, other respected technologists have echoed his concerns. Bill Gates has publicly stated that he is worried about superintelligence and that its evolution should be a concern for society at large. Celebrated physicist Stephen Hawking has also warned that, over time, the development of artificial intelligence could spell the end of the human race. Elon Musk, the founder of Tesla Motors and SpaceX, likened the danger of building out artificial intelligence to “summoning the demon” and said that the threat posed by AI could be greater than that of nuclear weapons.

Both Hawking and Musk have signed an open letter created by members of the Future of Life Institute, a global research organization that aims to ensure that artificial intelligence remains beneficial to humanity and to which Musk has donated $10 million to date. The letter calls for greater study on the part of the business and scientific communities to “maximize the societal benefit of AI” while avoiding potential pitfalls. “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do,” it reads. Additional signatories include representatives from GE, Google, Microsoft and academic institutions such as Harvard, MIT and Oxford.

Of course, other experts take a more moderate view of the potential risks and outcomes of ongoing AI development. Murray Shanahan, professor of cognitive robotics at Imperial College London, sees two possible paths for AI development. In one, a potentially dangerous AI is developed without moral reasoning, driven by ruthless optimization processes. In the other, AI is developed to more closely mirror the human brain, borrowing from our moral psychology and training, and is used to help manage risks and enhance society. “Right now, my vote is for option two in the hope that it will lead to a form of harmonious coexistence [between humans and machines],” Shanahan said.

Not all experts fear the current trajectory of AI development and its possible risks, however. “I don’t think that the appearance of AI technology in our world is going to be that dramatic,” Davis said. “We will continue to maintain control over the situation because we have control. We were here first and we can decide to build or not to build what we want.” While Bostrom fears an artificial intelligence takeover, Davis does not see why it should be all that difficult to inject simple ethical guidelines into AI-powered computers. “If robots have to make ethical decisions at some point, let’s make them according to the same rules that guide people,” he said. “It’s not that hard to do.”

Parikh agreed that the risks of artificial intelligence may be overstated. She pointed out that all of the AI technologies currently in the works are very specific in their goals and are not fully autonomous. “They are all like Google Maps, where there is a great deal of sophisticated AI behind the program, but it’s very niche and really quite specific,” she said.

Parikh added that it is far too early in the AI development process for the technology to reach a level of superintelligence. “Everyone is aware of the possible risks,” she said. “As we move forward, the right precautions will be in place, so I am not seriously concerned.” Her current research involves injecting common-sense guidelines into artificial intelligence technology and helping AI-powered machines to “fail gracefully,” meaning that a machine would raise a warning before it fails, alerting human operators to any problems or risks.
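As a rough illustration of that “fail gracefully” idea (our own Python sketch, not Parikh’s implementation; the function name and confidence threshold are hypothetical), a classifier wrapper can surface a warning to a human operator rather than silently returning an answer it is unsure of:

```python
import numpy as np

def graceful_predict(probabilities, labels, threshold=0.80):
    """Return the model's label when it is confident; otherwise
    abstain and flag the input for human review.

    probabilities: per-class probabilities from any classifier
    threshold: hypothetical confidence cutoff below which we abstain
    """
    best = int(np.argmax(probabilities))
    confidence = float(probabilities[best])
    if confidence < threshold:
        # "Fail gracefully": raise a warning instead of a bad answer.
        return None, f"WARNING: low confidence ({confidence:.0%}); human review needed"
    return labels[best], None

# Example: a confident prediction passes through; a shaky one is flagged.
labels = ["cat", "dog"]
print(graceful_predict(np.array([0.97, 0.03]), labels))  # ('cat', None)
print(graceful_predict(np.array([0.55, 0.45]), labels))  # (None, 'WARNING: ...')
```

The same pattern could apply in risk-sensitive settings like fraud detection or medical diagnosis, where an abstention routed to a human reviewer is usually cheaper than a confident mistake.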

“What will be exciting,” she said, “will be to see how these different, niche technologies—natural language processing, visual recognition and others—come together over time to create something useful and make our lives more efficient and far more entertaining.”

Katherine Heires is a freelance business journalist and founder of MediaKat LLC.