This post is part of Lifehacker's "Living With AI" series: We investigate the current state of AI, walk through how it can be useful (and how it can't), and evaluate where this revolutionary tech is heading next. Read more here.
You wouldn't be blamed for thinking AI really kicked off in the past couple of years. But AI has been a long time in the making, spanning most of the 20th century. It's difficult to pick up a phone or laptop today without seeing some type of AI feature, but that's only because of work going back nearly one hundred years.
AI's conceptual beginnings
Of course, people have been wondering whether we could make machines that think for as long as we've had machines. The modern concept came from Alan Turing, the renowned mathematician best known for his work deciphering Nazi Germany's "unbreakable" code, produced by its Enigma machine, during World War II. As the New York Times highlights, Turing essentially predicted what the computer could (and would) become, imagining it as "one machine for all possible tasks."
But it was what Turing wrote in "Computing Machinery and Intelligence" that changed things forever: The computer scientist posed the question, "Can machines think?" but also argued that this framing was the wrong approach to take. Instead, he proposed a thought experiment called "The Imitation Game." Imagine you have three people: a man (A), a woman (B), and an interrogator, separated into three rooms. The interrogator's goal is to determine which player is the man and which is the woman using only text-based communication. If both players answer truthfully, it's not such a difficult task. But if one or both decides to lie, it becomes much more challenging.
But the point of the Imitation Game isn't to test a human's deduction ability. Rather, Turing asks you to imagine a machine taking the place of player A or B. Could the machine effectively trick the interrogator into thinking it was human?
Kick-starting the idea of neural networks
Turing was the most influential spark for the concept of AI, but it was Frank Rosenblatt who actually kick-started the technology's practice, even if he never saw it come to fruition. Rosenblatt created the "Perceptron," a computer modeled after how neurons work in the brain, with the ability to teach itself new skills. The computer ran a single-layer neural network, and it worked like this: You have the machine make a prediction about something (say, whether a punch card is marked on the left or the right). If the computer is wrong, it adjusts to be more accurate. Over thousands or even millions of attempts, it "learns" the right answers instead of having to guess at them.
That design is based on neurons: You have an input, such as a piece of information you want the computer to recognize. The neuron takes the data and, based on its previous knowledge, produces a corresponding output. If that output is wrong, you tell the computer and adjust the "weight" of the neuron to produce an outcome you hope is closer to the desired output. Over time, you find the right weight, and the computer will have successfully "learned."
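To make that loop concrete, here's a minimal sketch of a Rosenblatt-style learning rule in Python. The task, inputs, and learning rate are invented for illustration (a toy stand-in for the punch-card example above), not a reconstruction of the original hardware:

```python
import random

# Toy task (invented for illustration): given two "pixels," decide whether
# the mark sits on the left (label 0) or the right (label 1).
training_data = [
    ((1, 0), 0),  # mark on the left
    ((0, 1), 1),  # mark on the right
]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
learning_rate = 0.1

def predict(inputs):
    # A single "neuron": weighted sum of the inputs, thresholded at zero.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# Every wrong guess nudges the weights toward the correct answer.
for _ in range(1000):
    for inputs, target in training_data:
        error = target - predict(inputs)  # -1, 0, or +1
        for i, x in enumerate(inputs):
            weights[i] += learning_rate * error * x
        bias += learning_rate * error

print(predict((1, 0)), predict((0, 1)))  # should settle on: 0 1
```

That weight-nudging step is the whole trick: the machine's "knowledge" lives entirely in a handful of numbers that get adjusted a little after each mistake.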
Unfortunately, despite some promising attempts, the Perceptron simply couldn't follow through on Rosenblatt's theories and claims, and interest in both it and the practice of artificial intelligence dried up. As we know today, however, Rosenblatt wasn't wrong: His machine was just too simple. The Perceptron's neural network had only one layer, which isn't enough to enable machine learning on any meaningful level.
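The textbook illustration of that one-layer ceiling (a later critique, not something from Rosenblatt's own work) is XOR: output 1 when exactly one of two inputs is 1. No single set of weights can separate those cases, so the learning rule sketched above never gets all four right, however long it runs:

```python
# XOR: output 1 only when exactly one input is 1. A single-layer perceptron
# can only draw one straight line through the input space, which is not
# enough to separate these four cases.
xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

weights, bias, learning_rate = [0.0, 0.0], 0.0, 0.1

def predict(inputs):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

for _ in range(10_000):
    for inputs, target in xor_data:
        error = target - predict(inputs)
        for i, x in enumerate(inputs):
            weights[i] += learning_rate * error * x
        bias += learning_rate * error

# No matter how long it trains, at least one of the four cases stays wrong.
print([predict(x) == y for x, y in xor_data])
```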
Many layers make machine learning work
That's what Geoffrey Hinton discovered in the 1980s: Where Turing posited the idea and Rosenblatt built the first machines, Hinton pushed AI into its current iteration by theorizing that nature had already cracked neural network-based AI in the human brain. He and other researchers, like Yann LeCun and Yoshua Bengio, proved that neural networks built upon multiple layers and a huge number of connections can enable machine learning.
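As a rough sketch of what those extra layers buy you (the network size, learning rate, and training loop here are our own illustrative choices, not anything from the research itself): a tiny two-layer network trained with backpropagation can learn XOR, the very task a single-layer perceptron can't.

```python
import numpy as np

# Illustrative two-layer network: 2 inputs -> 4 hidden neurons -> 1 output,
# trained with backpropagation on the XOR task.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20_000):
    # Forward pass: run the inputs through both layers.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: push the error back through the layers (backpropagation).
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

    # Nudge every weight a little in the direction that reduces the error.
    W2 -= lr * hidden.T @ d_output
    b2 -= lr * d_output.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

print(np.round(output).ravel())  # once trained: [0. 1. 1. 0.]
```

The hidden layer is what the Perceptron was missing: it lets the network carve up the input space with more than one line, which is the step that makes learning richer patterns possible.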
Through the 1990s and 2000s, researchers would slowly prove neural networks' potential. LeCun, for example, created a neural net that could recognize handwritten characters. But it was still slow going: While the theories were right on the money, computers weren't powerful enough to handle the amount of data necessary to see AI's full potential. Moore's Law finds a way, of course, and around 2012, both hardware and data sets had advanced to the point that machine learning took off: Suddenly, researchers could train neural nets to do things they never could before, and we started to see AI in action in everything from smart assistants to self-driving cars.
And then, in late 2022, ChatGPT blew up, showing professionals, enthusiasts, and the general public alike what AI could really do, and we've been on a wild ride ever since. We don't know what the future of AI actually has in store: All we can do is look at how far the tech has come, what we can do with it now, and imagine where we go from here.
Living with AI
To that end, take a look through our collection of articles all about living with AI. We define AI terms you need to know, walk you through building AI tools without needing to know how to code, talk about how to use AI responsibly for work, and discuss the ethics of generating AI art.