How Machines Learn

Ezra Lazuardy
Jul 27, 2020 · 11 min read

A beginner's guide to understanding the essentials of Machine Learning.

Photo by Andy Kelly on Unsplash

Nowadays, nearly every human need is supported by digital systems. Communication, knowledge, health, and many other fields have advanced along with the development of technology.

Technological progress has also accelerated thanks to several key inventions, one of which is the concept of machine learning. At its core, this concept describes a “machine”, in practice a computer, that has the ability to learn to solve problems or to predict new data that may appear in the future.

Some examples of its implementation can already be seen in the health care sector.

AI-assisted radiology and pathology

These days, electronically stored medical imaging data is plentiful, and deep learning algorithms can be fed this kind of dataset to detect and discover patterns and anomalies.

Machines and algorithms can interpret the imaging data much like a highly trained radiologist could — identifying suspicious spots on the skin, lesions, tumors, and brain bleeds.

This approach solves a critical problem in the health care domain because, throughout the world, well-trained radiologists are becoming hard to come by. In most circumstances, such skilled workers are under enormous strain due to the deluge of digital medical data.

According to some reports, an average radiologist needs to produce interpretation results for one image every 3–4 seconds to meet the demand.

Further examples of machine learning implementations in the health care sector can be found in the published literature.

The usage of AI/ML tools/platforms for assisting radiologists is, therefore, primed to expand exponentially.

The Theory

Machine learning (ML) is the study of computer algorithms that improve automatically through experience. It is seen as a subset of artificial intelligence studies.

Machine learning algorithms build a mathematical model based on sample data, known as “training data”, in order to make predictions or decisions without being explicitly programmed to do so. These algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop conventional algorithms to perform the needed tasks.

Tom M. Mitchell, computer scientist and machine learning pioneer | Source: Machine Learning, McGraw-Hill, 1997, Tom M. Mitchell

The scientific field of Machine Learning (ML) is a branch of artificial intelligence, as defined by Computer Scientist and machine learning pioneer Tom M. Mitchell:

“Machine learning is the study of computer algorithms that allow computer programs to automatically improve through experience.”

An algorithm can be thought of as a set of rules/instructions that a computer programmer specifies, which a computer is able to process. Simply put, machine learning algorithms learn by experience, similar to how humans do.

For example, after having seen multiple examples of an object, a computer employing a machine learning algorithm can become able to recognize that object in new, previously unseen scenarios.
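Returning to the email-filtering application mentioned earlier, here is a minimal sketch of that idea, assuming scikit-learn is installed: a model is fit on a handful of labeled examples and then asked to classify a message it has never seen. The toy messages and labels below are invented purely for illustration.

```python
# A minimal spam-filter sketch using scikit-learn (assumed installed).
# The toy messages and labels are made up purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Sample "training data": messages labeled as spam (1) or not spam (0)
messages = [
    "Win a free prize now",
    "Limited offer, claim your reward",
    "Meeting rescheduled to Monday",
    "Please review the attached report",
]
labels = [1, 1, 0, 0]

# Turn raw text into word-count features the model can learn from
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)

# Fit a Naive Bayes classifier on the labeled examples
model = MultinomialNB()
model.fit(features, labels)

# Predict on a previously unseen message
new_message = vectorizer.transform(["Claim your free reward today"])
print(model.predict(new_message))  # expected: [1] (spam)
```

The point is not the particular classifier; it is that the program's behavior on new input comes from the examples it was given, not from hand-written rules.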

The Gimmick

Recently, a report was released regarding misuse by companies claiming to use artificial intelligence in their products and services.

According to The Verge, 40% of European startups that claimed to use AI don't actually use the technology. Last year, TechTalks also stumbled upon such misuse by companies claiming to use machine learning and advanced artificial intelligence to gather and examine thousands of users' data to enhance the user experience of their products and services.

Unfortunately, there is still much confusion among the public and the media about what artificial intelligence truly is and what machine learning truly is. Often the terms are used as synonyms; in other cases they are treated as discrete, parallel advancements, while some take advantage of the trend to create hype and excitement in order to increase sales and revenue.

ML vs AI

As mentioned previously, Machine Learning (ML) is a branch of Artificial Intelligence (AI).

Quoting Tom M. Mitchell, a scientific field is best defined by the central question it studies. The field of Machine Learning seeks to answer the question:

“How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?”

ML is one of the ways we expect to achieve AI. Machine learning relies on working with small to large datasets by examining and comparing the data to find common patterns and explore nuances.

For instance, if you provide a machine learning model with many songs that you enjoy, along with their corresponding audio statistics (danceability, instrumentalness, tempo, or genre), it ought to be able (depending on the supervised machine learning model used) to act as a recommender system that suggests music you will most likely enjoy in the future, similar to what Netflix, Spotify, and other companies do.
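As a rough illustration of that idea, here is a minimal content-based recommender sketch, assuming scikit-learn and NumPy are available. The songs and their audio statistics are invented for this example and are not taken from any real service.

```python
# A minimal sketch of a content-based recommender (scikit-learn assumed).
# The songs and their audio statistics are invented for illustration only.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Each row: [danceability, instrumentalness, tempo], all scaled to 0-1
songs = ["Song A", "Song B", "Song C", "Song D"]
features = np.array([
    [0.9, 0.1, 0.8],
    [0.8, 0.2, 0.7],
    [0.2, 0.9, 0.3],
    [0.1, 0.8, 0.2],
])

# Index the catalogue so we can look up songs by similarity in feature space
recommender = NearestNeighbors(n_neighbors=2).fit(features)

# Profile of a song the listener enjoyed: danceable, fast, not instrumental
liked = np.array([[0.85, 0.15, 0.75]])
_, indices = recommender.kneighbors(liked)
print([songs[i] for i in indices[0]])  # the two most similar songs, e.g. ['Song A', 'Song B']
```

Real recommender systems are far more elaborate, but the underlying principle is the same: represent items as features and suggest the ones closest to what the user already likes.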

Artificial intelligence, on the other hand, is vast in scope. According to Andrew Moore, former Dean of the School of Computer Science at Carnegie Mellon University,

“Artificial intelligence is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence.”

That is a great way to define AI in a single sentence; however, it still shows how broad and vague the field is. Fifty years ago, a chess-playing program was considered a form of AI, since game theory and game strategy were capabilities that only a human brain could perform.

Nowadays, a chess game seems dull and antiquated, since one ships with almost every computer's operating system (OS). Therefore, "until recently" is something that moves with time.

Zachary Lipton, Assistant Professor and Researcher at CMU, clarifies that the term AI

“is aspirational, a moving target based on those capabilities that humans possess but which machines do not.”

AI also encompasses many of the technological advances we know today; machine learning is only one of them. Earlier AI work used different techniques.

For instance, Deep Blue, the AI that defeated the world chess champion in 1997, used tree search algorithms to evaluate millions of moves at every turn.

Example of solving the Eight Queens puzzle using Depth First Search
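For readers curious what such a tree search looks like in code, here is a minimal depth-first (backtracking) sketch for the Eight Queens puzzle shown in the figure. It illustrates the search idea only; it is not Deep Blue's actual algorithm.

```python
# A minimal depth-first (backtracking) search for the Eight Queens puzzle,
# the same search idea as illustrated in the figure above.
def solve_queens(n=8, placed=()):
    """Place queens row by row; placed[r] is the column of the queen in row r."""
    row = len(placed)
    if row == n:                      # every row has a queen: a solution
        return placed
    for col in range(n):
        # A new queen is safe if it shares no column or diagonal with earlier ones
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            solution = solve_queens(n, placed + (col,))
            if solution:              # depth-first: go deeper on this branch,
                return solution       # backtrack if it dead-ends
    return None

print(solve_queens())  # one valid placement, e.g. (0, 4, 7, 5, 2, 6, 1, 3)
```

The program explores one branch of the search tree as deep as it can, undoes a move when it hits a dead end, and tries the next alternative, which is exactly the kind of exhaustive move evaluation classic game-playing AI relied on.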

AI, as we know it today, is symbolized by human-AI interaction devices such as Google Home, Siri, and Alexa, and by the machine-learning-powered recommendation systems that power Netflix, Amazon, and YouTube.

These technological advancements are progressively becoming essential in our daily lives. They are intelligent assistants that enhance our abilities as humans and professionals — making us more and more productive.

In contrast to machine learning, AI is a moving target, and its definition changes as the related technologies become further developed. Possibly, within a few decades, today's innovative AI advancements will be considered as dull as flip phones are to us right now.

How does it work?

How does machine learning work? ~ Yann LeCun, Head of Facebook AI Research | Source: YouTube

In the video above, Yann LeCun, Head of Facebook AI Research, explains how machine learning works with easy-to-follow examples. Machine learning utilizes a variety of techniques to intelligently handle large and complex amounts of information in order to make decisions and/or predictions.

In practice, the patterns that a computer (machine learning system) learns can be very complicated and difficult to explain. Consider searching for dog images on Google as seen in the image below.

Query on Google Search for “dog” | Source: Google Search

Google is incredibly good at bringing back relevant results, yet how does Google Search achieve this task? In simple terms, Google Search first gets a large number of examples (an image dataset) of photos labeled "dog". Then the computer (the machine learning system) looks for patterns of pixels and colors that help it guess (predict) whether a given image is indeed of a dog.

Machine learning system looking for patterns between dog and cat images | Source: Google

At first, Google's computer makes a random guess about which patterns could reasonably identify an image of a dog. If it makes a mistake, a set of adjustments is made so that the computer gets it right.

Trained machine learning system capable of identifying cats or dogs. | Source: Google

In the end, this collection of patterns is learned by a large computer system modeled after the human brain (a deep neural network), which, once trained, can correctly identify dog images on Google Search, along with almost anything else you could think of. This process is called the training phase of a machine learning system.

Imagine that you were in charge of building a machine learning prediction system to try and identify images between dogs and cats. The first step, as we explained above, would be to gather a large number of labeled images with “dog” for dogs and “cat” for cats. Second, we would train the computer to look for patterns on the images to identify dogs and cats, respectively.

Once the machine learning model has been trained, we can throw at it (input) different images to see if it can correctly identify dogs and cats.
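A minimal sketch of that training-and-prediction pipeline might look like the following, assuming TensorFlow/Keras is installed. The directory layout pets/train/cat and pets/train/dog and the file unseen.jpg are hypothetical placeholders, not a real dataset.

```python
# A minimal sketch of the dog-vs-cat steps described above (TensorFlow/Keras
# assumed). The "pets/train/..." folders and "unseen.jpg" are hypothetical.
import tensorflow as tf

# 1. Gather labeled images: Keras infers the labels from the folder names
#    (alphabetically, "cat" becomes 0 and "dog" becomes 1)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "pets/train", image_size=(128, 128), batch_size=32)

# 2. Train a small convolutional network to find pixel and color patterns
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # 0 = cat, 1 = dog
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# 3. Show the trained model a previously unseen image and read its prediction
image = tf.keras.utils.load_img("unseen.jpg", target_size=(128, 128))
batch = tf.expand_dims(tf.keras.utils.img_to_array(image), 0)
print("dog" if model.predict(batch)[0][0] > 0.5 else "cat")
```

Production systems use far larger networks and datasets, but the three steps are the same: collect labeled examples, fit a model to them, then feed it new inputs.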

Why is it important?

Andrew Ng | Source: Stanford Graduate School of Business

Machine learning is incredibly important nowadays. First, it can solve complicated real-world problems in a scalable way.

Second, it has disrupted a variety of industries within the past decade and will continue to do so, as more and more industry leaders and researchers specialize in machine learning and take what they have learned to advance their research and/or develop machine learning tools that positively impact their own fields.

Third, artificial intelligence has the potential to add around 16%, or roughly $13 trillion, to the global economy by 2030. The rate at which machine learning is already having a positive impact is impressive, made possible by dramatic advances in data storage and computing power. As more people become involved, we can only expect it to continue along this route and drive remarkable progress in different fields.

Why do tech companies tend to use AI and ML interchangeably?

“… what we want is a machine that can learn from experience.” ~ Alan Turing

The term “artificial intelligence” was coined in 1956 by a group of researchers including Allen Newell and Herbert A. Simon. Since then, the AI industry has gone through many fluctuations.

In the early decades, there was much hype surrounding the industry, and many scientists concurred that human-level AI was just around the corner. However, undelivered promises caused general disenchantment among the industry and the public, and led to the AI winter, a period when funding and interest in the field subsided considerably.

Afterward, organizations attempted to distance themselves from the term AI, which had become synonymous with unsubstantiated hype, and used different terms to refer to their work. For instance, IBM described Deep Blue as a supercomputer and explicitly stated that it did not use artificial intelligence, even though it did.

During this period, a variety of other terms, such as big data, predictive analytics, and machine learning, started gaining traction and popularity. In 2012, machine learning, deep learning, and neural networks made great strides and found use in a growing number of fields. Organizations suddenly started using the terms machine learning and deep learning to advertise their products.

Deep learning began to perform tasks that were impossible to do with classic rule-based programming. Fields such as speech and face recognition, image classification, and natural language processing, which were at early stages, suddenly took great leaps. In March 2019, three of the most recognized deep learning pioneers won the Turing Award thanks to contributions and breakthroughs that have made deep neural networks a critical component of modern computing.

Hence, with this momentum, we see a gearshift back to the term AI. For those used to the limits of old-fashioned software, the effects of deep learning almost seemed like “magic”, especially since many of the fields that neural networks and deep learning are entering were considered off-limits for computers. Nowadays, machine learning and deep learning engineers earn high salaries, even when working at non-profit organizations, which speaks to how hot the field is.

Conclusion

Artificial Intelligence includes Machine Learning which in turn includes Deep Learning.

Artificial Intelligence

The effort to automate intellectual tasks normally performed by humans.

Machine Learning

An algorithm to discover data representation rules to execute a data-processing task, given examples of what’s expected.

Deep Learning

A type of machine learning algorithm which uses hierarchical and layered data representations.
