About a decade ago, if you mentioned anything about artificial intelligence, you would probably have been laughed out of whatever room you were in.
Back then, most people's knowledge of AI came from films: characters like Data from Star Trek or the robots from The Terminator.
Today, though, it's a buzzword in both business and industry. People have heard of it, and some consider it crucial to evolving a business on a digital level. Those that embrace this technology are believed to be the ones who'll capitalize the most on this shift.
But how did we get to this point? And what exactly is AI? Let’s explore that.
How AI Became Mainstream
While people's understanding of AI is still maturing, the technology keeps pushing forward. The current push for AI stems from the Big Data revolution, a period when vast amounts of information were gathered, processed, analyzed, and then acted upon.
This is, of course, still happening on an even larger scale, but initially the idea was for humans to build machines that were as "smart" as possible: machines that could focus on those kinds of tasks.
Over the years, the machines became more capable as interest in the field grew. Much of that progress came from the people at the center of it all: academics, open source communities, and industry leaders. This group drove the advances and breakthroughs that demonstrated the technology's vast potential.
For example, this is the group that made self-driving cars realistic. There have also been advances in health care and even legal processes thanks to AI in recent years.
So What Exactly Is AI?
While the idea of what defines AI has changed over time, the core concept has stayed the same:
AI is simply building a machine that is capable of thinking like a human.
This core concept makes sense when you think about it.
Humans have proven time and again that we can interpret the world around us through information. Our brain works much like a supercomputer: it absorbs and processes information and prompts an action.
So by that logic, it makes sense that when we build a thinking machine, we would gravitate toward using ourselves as the blueprint.
It's why, even in those old films, AI was usually depicted as human. Think back to the examples of Data and the Terminator mentioned above. Even though those characters predate realistic AI, when people imagined highly intelligent machines, they imagined them looking like people.
Another way to think of AI is as a system capable of simulating the capacity to be creative and to form deductions or predictions. All of this is possible thanks to the binary logic that computers use; that same logic is also what allows a computer to learn.
Today, research and development in AI is split between two distinct branches:
- Applied AI – The use of principles of simulating human thought in order to carry out a specific task.
- Generalized AI – Seeks to develop machine intelligence capable of fulfilling any human task, much like a person.
Research in both of these fields has produced extensive breakthroughs year after year across various industries. Beyond the examples mentioned above, there is an AI doctor that can diagnose patients using genomic data.
Another example is AI detecting fraud and improving customer service by predicting what customers need before they even call.
The list is extensive and continues to grow on the applied AI side.
As for generalized AI, we haven't reached the point of having robotic humans yet. While recent films (e.g. Blade Runner 2049, A.I. Rising, Replicas) and games (e.g. Fallout 4, NieR: Automata) have made these concepts major themes, that's about as far as it goes right now.
That being said, given the rapid growth of technology, we might not be as far off as we think. There has been talk of, and development toward, neuromorphic processors being integrated into new computers. These processors would essentially be able to run brain-simulation code.
A step in this direction is IBM's Watson, a cognitive computing platform that can take on a wide variety of tasks without being explicitly programmed for each of them.
What Is The Future Of AI?
The future of AI is honestly up in the air, and the answer will vary depending on whom you ask.
Some real fears about AI stem from the fantasy worlds of certain films becoming reality: the apocalyptic futures of The Matrix or The Terminator, where AI vastly exceeds human intelligence.
Even if we're conservative and assume robots won't turn us into living batteries, a less dramatic scenario is still a sweeping shift in everything we do. After all, robots already handle much of the physical work, and we're developing them to handle mental work as well. AI could easily replace many of the jobs we do today. Some view that prospect optimistically; for others, it's a worrying one.
Regardless, there are real-world concerns, and the big tech companies have noticed. Fortunately, Google, Microsoft, Amazon, IBM, and other companies joined forces to form the Partnership on AI, a non-profit meant to oversee the field, from the ethical implementation of AI to the development of robots and AI systems. So far, it has been doing a good job of that.
What's also nice about the organization is that it hosts seminars exploring AI in more depth.
Other Basic Terms To Know
Beyond the term artificial intelligence itself, there are other terms to keep in mind. Understanding them will be essential moving forward, as tech companies will be using this jargon for years to come.
Autonomous
Think of driverless or self-driving cars. When something is described as autonomous, it basically means the AI can do it without needing people. In the case of driverless cars, there are varying degrees of autonomy described by levels, and we may start seeing the same with more consumer goods. The levels indicate the extent of autonomy: right now, a car with no steering wheel or pedals is considered level four autonomy, because it doesn't need a human to operate at full capacity. Level five autonomy would be reached if the car could operate without a driver and without connecting to a grid, GPS, server, or any other external source.
Algorithm
Algorithms are the crucial part of any AI system: the zeros and ones, the math formulas and programming commands. They are what allow otherwise non-intelligent computers to solve problems, and to figure things out without needing a person.
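To make the idea concrete, here's a classic example of an algorithm in Python: a fixed recipe of steps (trial division, a textbook technique) that lets a computer decide whether a number is prime, with no intelligence involved at all.

```python
def is_prime(n):
    """Return True if n is prime, using simple trial division."""
    if n < 2:
        return False
    divisor = 2
    while divisor * divisor <= n:
        if n % divisor == 0:
            return False  # found a factor, so n is not prime
        divisor += 1
    return True

# The computer "solves" the problem by mechanically following the steps.
print([x for x in range(2, 20) if is_prime(x)])  # primes below 20
```

The computer isn't reasoning about numbers; it's just executing the recipe we wrote down, which is exactly what an algorithm is.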
Machine Learning
While machine learning and artificial intelligence are used interchangeably today, the two terms mean different things. They're just so closely related that people blur them together.
As a recap, AI is the act of making computers capable of human-like thought.
Machine learning, on the other hand, is the process by which an AI uses algorithms to perform whatever you're training it for. In other words, machine learning is the result of applying the rules you gave the AI in the first place.
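Here's a deliberately tiny sketch of the difference, with invented numbers for illustration. Instead of hand-coding the rule "output is double the input," the program learns the multiplier from example data:

```python
# Toy machine learning: learn the rule y = 2x from examples,
# rather than having a programmer write "multiply by 2" directly.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # inputs paired with desired outputs

w = 0.0              # the parameter the machine will learn
learning_rate = 0.01

for step in range(1000):
    for x, y in data:
        error = w * x - y               # how wrong the current guess is
        w -= learning_rate * error * x  # nudge w to reduce the error

print(round(w, 2))  # ≈ 2.0, learned from the data rather than programmed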
Black Box Learning
Not to be confused with the black box in airplanes. AI naturally computes a lot of complex math, so much so that the math is too complex, or would take far too long, for people to work through themselves.
When this happens, we call it black box learning. Essentially, the computer used such a complex formula or process that we can't trace it, and we don't really mind, because in the end the AI followed the rules we set for it in the first place. In other words, black box learning is the equivalent of not showing our work in math class.
Neural Networks and Deep Learning
When we want an AI to perform better, the next step is building a neural network. These networks are designed to behave similarly to our own brain and nervous system. Think of them as an artificial brain the AI can access.
Thanks to these neural networks, an AI develops stages of learning and has the ability to solve complicated problems by breaking them down into smaller portions of data or levels.
For example, if you gave the AI a picture to analyze, the first level of the network might focus on just a few pixels of the image file. That level would then pass its findings to the next, which might analyze more pixels and perhaps some metadata, before handing off to the next level, and so on.
This is what happens when neural networks get to work: with each new level, the AI gains more understanding. Another way to look at it is that deep learning allows the AI to begin to learn "why" rather than simply understanding what something is.
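The level-by-level idea can be sketched in a few lines of Python. The weights below are made up for illustration, not trained; real networks learn them from data.

```python
import math

def sigmoid(x):
    # squashes any number into the range (0, 1)
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights):
    """One level: each neuron forms a weighted sum of all inputs."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs))) for ws in weights]

pixels = [0.0, 0.5, 1.0, 0.25]   # stand-in for a few image pixels

# First level looks at the raw pixels; second level looks at the
# first level's findings, a higher-level summary of the image.
hidden = layer(pixels, [[0.5, -1.0, 0.8, 0.1],
                        [0.2, 0.4, -0.3, 0.9]])
output = layer(hidden, [[1.5, -2.0]])

print(output)  # a single value between 0 and 1
```

Each call to `layer` is one "level" passing its findings to the next, which is the hand-off described above.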
Natural Language Processing
For an AI to learn and process human language, it needs an advanced neural network. The ability to understand human speech and text is called natural language processing. You'll find it in chatbots and translation services, and in assistants like Siri, Alexa, Cortana, and Google Home.
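A real assistant uses sophisticated models, but the first step is always the same: turn raw text into pieces a program can reason about. Here's a toy sketch of that idea (the keyword lists are invented for illustration):

```python
import re

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

# Hypothetical "intents" an assistant might recognize.
INTENTS = {
    "weather": {"weather", "rain", "sunny", "forecast"},
    "music":   {"play", "song", "music"},
}

def guess_intent(sentence):
    words = set(tokenize(sentence))
    # pick the intent whose keywords overlap the sentence the most
    best = max(INTENTS, key=lambda name: len(words & INTENTS[name]))
    return best if words & INTENTS[best] else None

print(guess_intent("What's the weather forecast for tomorrow?"))  # weather
```

This keyword matching is nowhere near what Siri or Alexa actually do, but it shows the core move of NLP: mapping free-form human language onto something a machine can act on.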
Reinforcement Learning
While AI isn't exactly like us, it's still pretty close, and part of that comes from its ability to learn in ways similar to ours. This is where reinforcement learning comes in. The idea is to give a broad goal and then judge and evaluate the results: for example, telling the AI to "improve efficiency" or "find solutions." Instead of producing one particular answer, the AI generates multiple candidate answers, which you judge and give feedback on in order to get better results over time.
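The try-judge-adjust loop can be sketched as a tiny Python experiment. The two options and their payoff probabilities are invented for illustration; the point is that the program is never told which option is better, only rewarded or not after each try.

```python
import random
random.seed(0)

rewards = {"option_a": 0.3, "option_b": 0.8}  # hidden payoff probabilities
value = {"option_a": 0.0, "option_b": 0.0}    # the program's learned estimates
counts = {"option_a": 0, "option_b": 0}

for step in range(2000):
    if random.random() < 0.1:                 # occasionally try something new
        action = random.choice(list(rewards))
    else:                                     # otherwise exploit the best guess
        action = max(value, key=value.get)
    reward = 1 if random.random() < rewards[action] else 0  # the "feedback"
    counts[action] += 1
    # update the running-average estimate for the chosen action
    value[action] += (reward - value[action]) / counts[action]

print(max(value, key=value.get))  # the action with the highest learned value
```

After enough feedback, the learned estimates reflect the hidden payoffs, so the program comes to prefer the better option without ever being told about it directly.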
Unsupervised Learning
This is similar to reinforcement learning, with one big difference: you don't give the AI a question at all. You simply present a series of data and let the AI come to its own conclusions.
For example, instead of asking why people prefer one brand over another, you'd hand over a lot of data and see what patterns emerge.
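Here's a bare-bones sketch of that pattern-finding, using a stripped-down version of a standard clustering technique (1-D k-means). The numbers are invented; notice that no labels or questions are supplied, yet two groups emerge from the data alone.

```python
data = [1.0, 1.2, 0.8, 9.0, 9.5, 10.1, 1.1, 9.8]  # raw, unlabeled numbers

centers = [data[0], data[3]]          # start with two rough guesses
for _ in range(10):
    groups = [[], []]
    for x in data:
        # assign each point to its nearest center
        nearest = 0 if abs(x - centers[0]) < abs(x - centers[1]) else 1
        groups[nearest].append(x)
    # move each center to the average of its group
    centers = [sum(g) / len(g) for g in groups]

print(sorted(round(c, 2) for c in centers))  # two cluster centers emerge
```

The program was never told "there are two kinds of values here"; it discovered that structure on its own, which is the essence of unsupervised learning.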
Transfer Learning
Once an AI has successfully learned something, it may take the initiative to build knowledge in other areas. The spooky part is that you don't even need to ask it to do this. Regardless, this kind of learning tends to improve the AI's performance on its other tasks.
For example, you could easily set up an AI to spot the differences between any two pictures of cats you give it. After a week of learning about cats, though, it might use transfer learning to learn the differences between clothing items or the shoes people are wearing. While that goes beyond the scope of what you asked for, it may improve the AI's accuracy on the comparison tasks you give it in the future.
The Turing Test
A lot of AI experts today are cautiously optimistic about the future of AI. And while it's reasonable to have reservations, some people inadvertently put measures in place long ago to reassure us.
One such person is Alan Turing, who passed away in 1954 but whose legacy lives on in two ways. The first is as the man behind cracking Nazi codes and helping the Allies win World War II. The second is the creation of the Turing test.
The test was originally designed to determine whether a human could be fooled by a text display, and it has since come to apply to AI. After all, we are trying to build systems that resemble people. Today the test is used to determine whether an AI can fool a person into believing they are seeing or interacting with a real human.
Not Science Fiction Anymore
For a long time, AI was believed to belong to science fiction. But as technology keeps advancing, artificial intelligence is becoming a reality. AI today represents a fundamental change for us as a species, and that makes it important for all of us to be familiar with it.