Glossary for AI, Machine Learning and Autonomous Driving

I know we often use terms that not everyone on this forum may be familiar with. So I made this handy glossary for AI, Machine Learning and Autonomous Driving:

Autonomous Vehicle (AV)
A vehicle with the hardware and software to drive without human input.

Advanced Driver Assist System (ADAS)
The hardware and software that are collectively capable of driving a vehicle in limited conditions with a human driver. This term is used specifically to describe a Level 2 driving automation system.

Automated Driving System (ADS)
The hardware and software that are collectively capable of driving a vehicle without human input. This term is used specifically to describe a Level 3, 4, or 5 driving automation system.

Artificial Intelligence (AI)
Technology that enables a computer to think or act in a more 'human' way. It does this by taking in information from its surroundings and deciding its response based on what it learns or senses.

Artificial General Intelligence (AGI)
A form of AI that has the same cognitive abilities as a human being across a range of domains. It can understand, learn, and apply knowledge to solve any kind of complex problem. It also has self-awareness, consciousness, and general problem-solving skills. AGI is also known as strong AI or deep AI.

Artificial Super Intelligence (ASI)
Also known as superintelligence, a form of AI that would surpass the intelligence and ability of the human brain.

Cameras
A sensor used for perception that collects natural ambient light, much like the human eye. Cameras provide very rich information about the environment, including colors and shapes. Processing camera data can also provide the distance and velocity of objects, as well as classify the type of object.

Camera Vision (aka Computer Vision)
The hardware and software that allow a computer to “see” with cameras.

Classical Approach
The approach of designing and training the perception, prediction, and planning components separately. This approach can use a combination of machine learning and classical algorithms. Sensors feed data into perception, which creates a model of the world around the vehicle. Perception sends that model to prediction, which determines what other objects will do in the future. Prediction then sends its results to planning, which determines the action and path the autonomous vehicle should take.
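
As a rough illustration of how these stages hand data to each other, here is a minimal sketch in Python; the function names, data structures, and numbers are hypothetical, not drawn from any particular stack:

    # Minimal sketch of a classical perception -> prediction -> planning pipeline.
    # All names, data structures, and numbers are hypothetical.

    def perceive(sensor_frame):
        # Build a world model: a list of detected objects with position and velocity.
        return [{"type": "car", "position": (12.0, 0.5), "velocity": (3.0, 0.0)}]

    def predict(world_model, horizon_s=2.0):
        # Naive constant-velocity prediction of where each object will be.
        futures = []
        for obj in world_model:
            x, y = obj["position"]
            vx, vy = obj["velocity"]
            futures.append({**obj, "future_position": (x + vx * horizon_s, y + vy * horizon_s)})
        return futures

    def plan(predictions):
        # Choose a simple action based on the predicted positions of other objects.
        for obj in predictions:
            if obj["future_position"][0] < 10.0:   # something ends up close ahead
                return {"action": "brake", "target_speed": 0.0}
        return {"action": "keep_lane", "target_speed": 25.0}

    sensor_frame = {"camera": None, "radar": None}   # placeholder sensor data
    print(plan(predict(perceive(sensor_frame))))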

Deep Learning
Machine learning that involves training deep neural networks (3 or more layers).

Deep Neural Network
A neural network with 3 or more layers.
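
For a concrete sense of what "3 or more layers" means, here is a minimal sketch of a tiny deep network in plain numpy; the layer sizes are arbitrary and the weights are random, so it computes nothing meaningful:

    import numpy as np

    # A tiny deep neural network: input -> two hidden layers -> output (3 weight layers).
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 8))
    W2 = rng.normal(size=(8, 8))
    W3 = rng.normal(size=(8, 2))

    def forward(x):
        h1 = np.maximum(0, x @ W1)    # hidden layer 1 with ReLU activation
        h2 = np.maximum(0, h1 @ W2)   # hidden layer 2 with ReLU activation
        return h2 @ W3                # output layer

    print(forward(np.ones(4)))        # an example input with 4 features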

Driver Monitoring System (DMS)
A system that monitors the attentiveness of the driver to ensure they are ready to intervene in a driver-assist or autonomous vehicle. A DMS typically includes a driver-facing camera.

Driving Controls
The component that controls the steering, acceleration and braking of the vehicle.

Driving Policy
The rules that govern the driving behavior of the autonomous vehicle.

Dynamic Driving Tasks (DDT)
All of the real-time operational and tactical functions required to operate a vehicle in on-road traffic, excluding the strategic functions such as trip planning. DDT includes lateral vehicle motion via steering, longitudinal vehicle control via accelerating and braking, monitoring the environment for events and obstacles and responding, maneuver planning, and signaling.

Dynamic Driving Tasks Fallback (DDT-Fallback)
The response by the user (or the ADS itself) to take over the driving task or achieve a minimal risk condition when the autonomous vehicle encounters a failure or exits its ODD.

End-to-End Approach
The approach of designing the driving system as deep neural networks that map sensor input directly to control output, with perception, prediction, planning, and control all trained jointly.
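
For contrast with the classical pipeline sketched above, here is a minimal, purely illustrative version of the idea; the single random weight matrix stands in for a large trained network and the image is a dummy frame:

    import numpy as np

    # End-to-end sketch: raw pixels in, steering/throttle out, one trainable function.
    rng = np.random.default_rng(0)
    image = rng.random((64, 64, 3))                           # a dummy camera frame
    weights = rng.normal(scale=0.001, size=(64 * 64 * 3, 2))  # stand-in for a trained network

    steering, throttle = image.reshape(-1) @ weights          # one differentiable mapping
    print(f"steering={steering:.3f}, throttle={throttle:.3f}")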

Generative Models
Deep-learning models that take raw data (images, audio, texts, video) and “learn” to generate statistically probable outputs when prompted. At a high level, generative models encode a simplified representation of their training data and draw from it to create a new work that’s similar, but not identical, to the original data.

Imitation Learning
A type of machine learning that trains a system by imitating expert behavior.
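
A minimal sketch of the simplest form, behavioral cloning, in which a model is fit to reproduce an expert's recorded actions; all data here is synthetic:

    import numpy as np

    # Behavioral cloning: fit a model so its output imitates recorded expert actions.
    rng = np.random.default_rng(0)
    observations = rng.random((500, 6))                 # made-up sensor features
    true_policy = np.array([0.5, -0.2, 0.1, 0.0, 0.3, -0.1])
    expert_steering = observations @ true_policy        # the "expert" demonstrations

    # Least-squares fit: learn weights that best reproduce the expert's behavior.
    learned, *_ = np.linalg.lstsq(observations, expert_steering, rcond=None)
    print(learned)   # approximately recovers the expert's implicit policy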

Levels of Autonomy
Developed by the Society of Automotive Engineers (SAE), the levels of autonomy are a taxonomy that describes the different classes of driving automation:
Level 0 (no driving automation): The human is manually driving.
Level 1 (driver assistance): The vehicle features a single automated system for driver assistance, such as steering or accelerating (cruise control).
Level 2 (partial driving automation): The vehicle can control both steering and accelerating/decelerating but not all driving tasks. It requires a human in the driver’s seat to take control of the car at any time.
Level 3 (conditional driving automation): The vehicle can perform all driving tasks on-road under limited conditions with a human driver as the fallback to intervene when prompted by the automated driving system.
Level 4 (high driving automation): The vehicle can perform all driving tasks on-road without a human in the driver’s seat, but only under limited conditions.
Level 5 (full driving automation): The vehicle can perform all driving tasks on-road without a human in the driver’s seat, on all roads and in all conditions in which a typical human would be expected to drive.

Level 2+ (L2+)
An unofficial classification of an advanced driver assist system that can drive from point to point, similar to autonomous driving, usually permitting hands off the steering wheel but still requiring eyes on the road.

Lidar
A sensor used for perception that emits laser pulses and measures the return to determine the distance, velocity, and shape of objects. Lidar provides very accurate distance and velocity measurements. Lidar is also effective at creating high-definition maps.

Machine Learning (ML)
The process of using mathematical models of data to help a computer learn without direct instruction. It’s considered a subset of artificial intelligence (AI). Machine learning uses algorithms to identify patterns within data, and those patterns are then used to create a data model that can make predictions. With increased data and experience, the results of machine learning are more accurate—much like how humans improve with more practice.

Maps
A representation of the static world used for navigation or to aid in driver assist or autonomous driving. There are three types of maps based on their level of detail and precision:
Standard-definition maps: low accuracy (2D, +/- 10 meters) and low detail (about 50 attributes); used mostly for routing and navigation.
Medium-definition maps: moderate accuracy (2D, +/- 3 meters) and more detail (100-500 attributes), including rich lane and rules-of-the-road data; a middle ground between standard- and high-definition maps that may be used in advanced driver assist or autonomous driving.
High-definition maps: the most accuracy and detail (3D, +/- 10 cm, over 3,000 attributes); used in Level 4 autonomous driving for increased safety.

Minimum Risk Condition
A stable, stopped condition to which a user or an ADS may bring a vehicle after performing the DDT fallback in order to reduce the risk of a crash when a given trip cannot or should not be continued.

Multi-task Learning
A type of machine learning in which several related tasks are learned and performed jointly from a shared representation, with a separate branch (head) for each task.
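
A minimal sketch of the shared-backbone-with-heads structure, with arbitrary sizes and random weights, purely for illustration:

    import numpy as np

    # Multi-task sketch: one shared backbone feeds two separate task heads.
    rng = np.random.default_rng(0)
    W_shared = rng.normal(size=(10, 16))     # shared representation
    W_detect = rng.normal(size=(16, 4))      # head 1: e.g. an object bounding box
    W_classify = rng.normal(size=(16, 3))    # head 2: e.g. object class scores

    def forward(x):
        shared = np.maximum(0, x @ W_shared)   # features shared by both tasks
        return shared @ W_detect, shared @ W_classify

    box, class_scores = forward(np.ones(10))
    print(box, class_scores)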

Neural Networks
A type of machine learning model that uses a network of connected functions to process data. Neural networks are inspired by the human brain, mimicking the way that biological neurons and synapses connect and communicate. They learn by a process of trial and error and can recognize relationships within vast amounts of data.

Object Classification
The perception task of classifying what type of object has been detected. This is often accomplished with machine learning.

Object Detection
The perception task of detecting that an object is present and where it is. This is often accomplished with machine learning.

Offline Evaluation (Open loop)
Open-loop evaluation involves assessing a system’s performance against pre-recorded expert driving behavior.

Online Evaluation (Closed loop)
Closed-loop evaluation involves running the system in a simulated environment that closely mimics real-world driving, so that the system's own actions influence what it encounters next.
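
A toy sketch contrasting the two kinds of evaluation; the policy, observations, and expert actions are all made up:

    # Open-loop vs. closed-loop evaluation, with toy numbers.
    expert_actions = [0.11, 0.13, 0.15, 0.17]     # pre-recorded expert steering
    observations = [1.0, 1.2, 1.4, 1.6]           # the recorded observations at each step

    def policy(observation):
        return 0.1 * observation                  # the toy driving policy being evaluated

    # Open-loop: replay the recording and compare outputs to the expert, no feedback.
    open_loop_error = sum(abs(policy(o) - a) for o, a in zip(observations, expert_actions)) / 4

    # Closed-loop: the policy's own actions change the state it sees next,
    # so early errors can compound instead of being masked by the recording.
    state = 1.0
    for _ in range(4):
        state += policy(state)

    print(open_loop_error, state)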

Object and Event Detection and Response (OEDR)
The subtasks of the DDT that include monitoring the driving environment (detecting, recognizing, and classifying objects and events and preparing to respond as needed) and executing an appropriate response to such objects and events (i.e., as needed to complete the DDT and/or DDT fallback).

Operational Design Domain (ODD)
Operating conditions under which a given driving automation system or feature thereof is specifically designed to function, including, but not limited to, environmental, geographical, and time-of-day restrictions, and/or the requisite presence or absence of certain traffic or roadway characteristics.

Perception
The task of detecting and understanding the world and objects around the vehicle.

Planning
The task of deciding the action and path the autonomous vehicle will take at any given moment and in the future.

Prediction (or Behavior Prediction)
The task of determining what moving objects (vehicles, pedestrians, cyclists etc) will do in the future.

Radar
A sensor used for perception that emits radio waves and measures the return to determine the distance and velocity of objects. Radar is particularly effective in adverse weather since radio waves pass through rain, ice, and snow.

Reinforcement Learning
A type of machine learning that trains a system by rewarding correct behavior.
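
A toy sketch of the reward-driven loop (a two-action bandit, the simplest reinforcement-learning setting); everything here is made up:

    import random

    # The agent tries actions and shifts toward the one that earns reward.
    values = {0: 0.0, 1: 0.0}                # estimated value of each action
    for step in range(1000):
        explore = random.random() < 0.1
        action = random.choice([0, 1]) if explore else max(values, key=values.get)
        reward = 1.0 if action == 1 else 0.0               # the environment rewards action 1
        values[action] += 0.1 * (reward - values[action])  # learn from the reward signal

    print(values)   # action 1 ends up with the higher estimated value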

Sensor Fusion
The act of combining data from different types of sensors on the vehicle (e.g., cameras, radar, and lidar).
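
A toy example of one common fusion idea: combining a camera range estimate and a radar range estimate for the same object, weighting each by an assumed measurement variance (all numbers invented):

    # Inverse-variance weighting of two range estimates for the same object.
    camera_range, camera_var = 31.0, 4.0     # camera: less precise range estimate
    radar_range, radar_var = 29.5, 0.25      # radar: more precise range estimate

    fused = (camera_range / camera_var + radar_range / radar_var) / (
        1 / camera_var + 1 / radar_var)
    print(round(fused, 2))                   # ~29.59, pulled toward the radar estimate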

Strong AI
Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI).

Supervised Learning
A type of machine learning that trains a system based on labeled training data.
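
A toy example: each training sample comes with a label (here, whether braking is needed at a given distance), and the model is fit to those labels; the data is synthetic:

    import numpy as np

    # Supervised learning: labeled examples (distance -> brake / don't brake).
    rng = np.random.default_rng(0)
    distance_m = rng.random((200, 1)) * 50                 # feature
    should_brake = (distance_m[:, 0] < 20).astype(float)   # label supplied with each example

    X = np.c_[distance_m, np.ones(200)]                    # add a bias column
    weights, *_ = np.linalg.lstsq(X, should_brake, rcond=None)
    print(weights)   # a simple model fit to the labeled data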

Unsupervised Learning
A type of machine learning that trains a system based on unlabeled training data.
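
A toy example: the data carries no labels, and the algorithm simply looks for structure (here, two clusters of speeds); the data is synthetic:

    import numpy as np

    # Unsupervised learning: find two clusters in unlabeled measurements (simple k-means).
    rng = np.random.default_rng(0)
    speeds = np.r_[rng.normal(30, 2, 50), rng.normal(100, 5, 50)]    # unlabeled data
    centers = np.array([speeds.min(), speeds.max()])                 # initial guesses
    for _ in range(5):
        assign = np.abs(speeds[:, None] - centers).argmin(axis=1)    # nearest center
        centers = np.array([speeds[assign == k].mean() for k in (0, 1)])

    print(centers)   # roughly recovers the two underlying groups (~30 and ~100)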

Vision-only
An autonomous vehicle that relies solely on cameras to drive.

Weak AI
Also called Narrow AI or Artificial Narrow Intelligence (ANI), weak AI is trained and focused to perform specific tasks.


"Machine Learning & Artificial Intelligence" by mikemacmarketing is licensed under CC BY 2.0.
I’ve always wondered what SC stood for. Just when I think it’s Service Center, someone talks about stopping at Baker SC for a 25-minute charge.