Artificial intelligence (AI) comes up constantly right now, so we should step back for a moment and examine what exactly we mean by the term. As a concept, it’s already reached the level of hype that digital transformation achieved several years ago.
It turns out that killer robots out for our jobs, or ready to take over the planet, make effective clickbait. The unfortunate result is that the term has become a watered-down stand-in for a number of different technologies.
What Is Human Intelligence?
So before we examine these technologies, let’s go back to basics. The word intelligence is in AI because it refers to human intelligence: AI is the simulation of human intelligence and the abilities of our brains.
You may think of human intelligence as the ability to simply “know” something. Of course, it’s more than that. We also have the ability to learn, calculate, deduce, reason, problem-solve, plan, infer, and offer an appropriate emotional response to any given situation. All of these “ways to be smart” also manifest themselves in AI.
Super Computers That Are Super Human
We know that computers can process information in the form of data. That’s why we call their “brains” the processor. But now we also want computers to hold onto the information that results from that processing and manipulate it to do more with it. AI is the activity beyond basic processing, where computers start to think as well as do.
Our goal in developing AI is not simply to replicate and automate actions that humans can perform (although it is very much that); our wider and perhaps longer-term goal is to prepare computers to do things that humans can’t do.
That’s not just doing things faster. We already know that computers can process decisions faster than human beings can. It’s about being able to handle complex decisions that involve increasingly sentient and perceptive levels of reasoning.
The Road to Greater Knowing
For advances in AI to start preparing machines to perform more sentient and perceptive tasks, the AI brain (the data analytics and processing engine at the heart of the AI system) needs to be exposed to as wide a variety of data sources and “events” as possible. This is the learning part, much like humans start to learn about their environment as children.
As Techopedia explains, “Knowledge engineering is a core part of AI research. Machines can often act and react like humans only if they have abundant information relating to the world. Artificial intelligence must have access to objects, categories, properties, and relations between all of them to implement knowledge engineering. Initiating common-sense, reasoning, and problem-solving power in machines is a difficult and tedious task.”
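To make that idea of objects, categories, properties, and relations a little more concrete, here is a minimal, hypothetical Python sketch of the kind of structure a simple knowledge base might hold. The names and rules are purely illustrative and are not drawn from any real knowledge-engineering system.

```python
# A hypothetical toy knowledge base: objects, categories, properties,
# and an is-a relation between categories. Illustrative only.

# Categories map to the properties their members are assumed to share.
categories = {
    "bird": {"has_wings", "lays_eggs"},
    "penguin": {"swims"},
}

# Category hierarchy: a penguin is a kind of bird.
is_a = {"penguin": "bird"}

# Individual objects and the category they belong to.
objects = {"pingu": "penguin"}

def properties_of(obj: str) -> set[str]:
    """Collect an object's properties by walking up the category hierarchy."""
    props: set[str] = set()
    category = objects.get(obj)
    while category is not None:
        props |= categories.get(category, set())
        category = is_a.get(category)
    return props

print(properties_of("pingu"))  # {'swims', 'has_wings', 'lays_eggs'}
```

Even this tiny example hints at why the task is "difficult and tedious": every common-sense fact a human takes for granted has to be represented somewhere before a machine can reason with it.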
The Turing Test
British mathematician Alan Turing developed his Turing Test in 1950 to measure a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. His test is widely respected but also widely contested, which means it has arguably become more of a discussion point than an industry standard.
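As a rough illustration of the setup Turing described, here is a hypothetical, highly simplified sketch in Python: an evaluator reads transcripts from two anonymous respondents, one human and one machine, and tries to name the machine. The canned replies and the naive evaluator are placeholders, not real conversational AI.

```python
import random

def human_respond(prompt: str) -> str:
    # Placeholder for a human participant's reply.
    return "Honestly, I'd rather talk about the weather."

def machine_respond(prompt: str) -> str:
    # Placeholder for a machine designed to sound human.
    return "That is an interesting question. Could you elaborate?"

def run_imitation_game(evaluator, questions: list[str]) -> bool:
    """Return True if the evaluator fails to identify the machine."""
    # Randomly assign the anonymous labels "A" and "B" to the respondents.
    labels = ["A", "B"]
    random.shuffle(labels)
    assignment = {labels[0]: human_respond, labels[1]: machine_respond}
    machine_label = labels[1]

    # Both respondents answer the same questions.
    transcripts = {
        label: [respond(q) for q in questions]
        for label, respond in assignment.items()
    }

    # The evaluator inspects the transcripts and names the machine.
    return evaluator(transcripts) != machine_label

# Example: a naive evaluator who always guesses "A".
fooled = run_imitation_game(lambda transcripts: "A", ["How was your day?"])
print("Machine passed:", fooled)
```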
The disagreements go on: some argue that machine learning (ML) is the act of computation that builds the synapses inside an AI brain, while others argue that machine learning is the result and manifestation of AI. The two terms shouldn’t be used interchangeably, but they often are.
Opening Pandora’s Box of AI Ethics Questions
One important aspect that has surfaced as machines get smarter is AI ethics. As we build machine brains capable of reasoning and making decisions, we need those decisions to be positive ones that are good for humans and that don’t hurt or offend people.
Going deeper here, Natural Language Understanding (NLU), speech recognition, synthetic speech, and computer vision are all part of AI. So we need to make sure that computers say the right things to the right people with the right level of professional and cultural sensitivity. This is just one example of why AI needs some sort of ethical framework. Another, more critical, example is making sure that self-driving cars can avoid hitting pedestrians.
On SAP’s AI pages, Luka Mucic, chief financial officer at SAP, states: “SAP considers the ethical use of data a core value. We want to create software that enables the intelligent enterprise and actually improves people’s lives. Such principles will serve as the basis to make AI a technology that augments human talent.”
Will AI Replace Humans?
We can see that AI is on the road to getting a whole lot smarter. But we can also see that there are a lot of different elements of AI. The many ways that it’s applied to our business systems have yet to be refined and finessed. In its next stage of development, AI will become a more embedded and implicit part of the technologies around us.
Despite some scaremongering here and there, it is widely argued that AI will not replace human beings and the jobs that we do. Instead, it will free our time up to do more valuable tasks that machines are still not capable of doing. For now, right?
Want to bring artificial intelligence and machine learning into your business? Join us on December 11 at the NVIDIA headquarters for an ASUG Executive Exchange event.