Here are important AI terms and their definitions:

  • Algorithm - A set of rules or instructions given to an AI, a machine, or a computer to help it learn on its own. Algorithms are the basis for machine learning models.
  • Artificial General Intelligence (AGI) - A form of AI that has the ability to understand, learn, and apply knowledge across a wide range of tasks at a human-like level. Unlike narrow AI, AGI can perform any intellectual task that a human can do.
  • Artificial Intelligence (AI) - The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
  • Artificial Neural Network (ANN) - A computational model inspired by the way biological neural networks in the human brain process information. It consists of interconnected processing elements (neurons) working together to solve specific problems; a minimal sketch appears after this list.
  • Bias - Bias in AI refers to systematic errors in AI systems that result in unfair outcomes, such as privileging one arbitrary group of users over others. Bias can result from biased data, biased algorithms, or biased human decisions.
  • Big Data - Large and complex data sets that traditional data processing applications cannot handle. Big data involves various data types and is often characterized by the three V's: volume, variety, and velocity.
  • ChatGPT - An artificial intelligence chatbot developed by OpenAI, based on the Generative Pre-trained Transformer (GPT) architecture and available in several versions. ChatGPT is designed to engage in conversational dialogue with users, providing responses that can be informative, creative, or conversational in nature. It utilizes advanced natural language processing (NLP) techniques to understand and generate human-like text, and can be used for various applications, including customer support, content generation, tutoring, and entertainment.
  • Computer Vision - A field of artificial intelligence that trains computers to interpret and understand the visual world. Using digital images from cameras and videos and deep learning models, machines can accurately identify and classify objects — and then react to what they see.
  • Data Mining - The process of discovering patterns and knowledge from large amounts of data. The data sources can include databases, data warehouses, the internet, and other data repositories.
  • Deep Learning - In practical terms, deep learning is a subset of machine learning. In fact, deep learning technically is machine learning and functions in a similar way (hence why the terms are sometimes loosely interchanged), but its capabilities are different. While basic machine learning models do become progressively better at their given function, they still need some guidance: if an algorithm returns an inaccurate prediction, an engineer has to step in and make adjustments. With a deep learning model, the algorithm can determine on its own whether a prediction is accurate through its own neural network.
  • Ethics in AI - The study of the ethical implications and moral considerations related to the design, development, and deployment of AI technologies. It includes topics like fairness, accountability, transparency, and privacy.
  • Explainable AI - Techniques and methods that make the behavior and predictions of AI systems understandable to humans. Explainable AI aims to provide clear and interpretable explanations of how AI models make decisions.
  • Federated Learning - A method of training machine learning models across multiple decentralized devices or servers holding local data samples, without exchanging that data. This approach enhances privacy and reduces data transfer; a federated-averaging sketch appears after this list.
  • Generative AI - Unsupervised and semi-supervised machine learning algorithms that enable computers to use existing text, audio and video files, images, and even code to create plausible new content. Generative AI allows computers to abstract the underlying patterns in the input data so that the model can generate or output new content.
  • Generative Pre-trained Transformer (GPT) - A type of artificial intelligence model developed by OpenAI that uses deep learning techniques to generate human-like text. The model is based on the transformer architecture, which is designed to handle sequential data and allows for parallel processing, making it highly efficient to train on large datasets. "Generative" refers to its ability to produce coherent and contextually relevant text based on the input it receives. "Pre-trained" means the model is initially trained on a large corpus of text from diverse sources before being fine-tuned for specific tasks or applications. GPT models are known for their ability to understand and generate natural language, making them useful for a wide range of applications, including chatbots, language translation, and content creation. The most well-known versions include GPT-3 and GPT-4, which have billions of parameters and can generate highly sophisticated, human-like responses.
  • Hyperparameter Tuning - The process of optimizing the parameters that govern the training process of a machine learning model. Hyperparameters are set before training and affect how the model learns from the data; a grid-search sketch appears after this list.
  • Machine Learning - An application of artificial intelligence (AI) that gives systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.
  • Natural Language Processing (NLP) - A branch of artificial intelligence that helps computers understand, interpret, and respond to human language in a valuable way. NLP draws on many disciplines, including computer science and computational linguistics, to bridge the gap between human communication and computer understanding.
  • Natural Language Generation (NLG) - The process of producing meaningful phrases and sentences in the form of natural language from some internal representation. It is a subfield of NLP.
  • Prompt Engineering - The process of structuring an instruction so that it can be interpreted and understood by a generative AI model. A prompt is natural language text describing the task that an AI should perform.
  • Reinforcement Learning - A type of machine learning where an agent learns to make decisions by performing actions and receiving rewards or penalties. The goal is to maximize the cumulative reward; a Q-learning sketch appears after this list.
  • Retrieval-Augmented Generation (RAG) - The process of optimizing the output of a large language model so that it references an authoritative knowledge base outside of its training data sources before generating a response; a sketch of the retrieve-then-generate flow appears after this list.
  • Robotics - An interdisciplinary field that integrates computer science and engineering to design, construct, operate, and use robots. The goal is to create machines that can assist humans in various tasks.
  • Supervised and Unsupervised Learning - In a supervised learning model, the algorithm learns on a labeled dataset, which provides an answer key the algorithm can use to evaluate its accuracy on training data. An unsupervised model, in contrast, is given unlabeled data that the algorithm tries to make sense of by extracting features and patterns on its own; a sketch contrasting the two appears after this list.
  • Transfer Learning - A machine learning technique where a model developed for one task is reused as the starting point for a model on a second task. Transfer learning is useful when there is limited data available for the second task; a fine-tuning sketch appears after this list.
  • Turing Test - A test proposed by Alan Turing to determine whether a machine can exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
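
Several of the concepts above are easier to see in code. The Python sketches below are minimal illustrations, not production implementations; any datasets, helper functions, and parameter values they introduce are assumptions made for the example. First, an artificial neural network reduced to a single hidden layer, using NumPy; the layer sizes and random weights are arbitrary, and no training step is shown.

```python
import numpy as np

def sigmoid(x):
    # Squash each neuron's weighted input into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))  # weights: 3 inputs -> 4 hidden neurons
W2 = rng.normal(size=(4, 1))  # weights: 4 hidden neurons -> 1 output

def forward(x):
    hidden = sigmoid(x @ W1)     # each hidden neuron combines all inputs
    return sigmoid(hidden @ W2)  # the output neuron combines hidden activations

print(forward(np.array([0.5, -1.2, 3.0])))
```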
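
Federated learning, sketched as federated averaging: each client computes an update on its own private data, and only model parameters travel between clients and the server. The `local_update` function is a made-up stand-in for real local training.

```python
import numpy as np

def local_update(weights, client_data):
    # Stand-in for local training: nudge weights toward this client's data mean.
    return weights + 0.1 * (client_data.mean(axis=0) - weights)

rng = np.random.default_rng(1)
global_weights = np.zeros(3)
clients = [rng.normal(loc=i, size=(20, 3)) for i in range(3)]  # private datasets

for _ in range(10):  # each round: broadcast the model, average the updates
    updates = [local_update(global_weights, data) for data in clients]
    global_weights = np.mean(updates, axis=0)  # raw data never leaves a client

print(global_weights)
```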
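
Hyperparameter tuning in its simplest form, a grid search, assuming scikit-learn is installed; the classifier and the grid values are arbitrary choices for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Hyperparameters are fixed before training, unlike the weights the model learns.
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}

search = GridSearchCV(SVC(), param_grid, cv=5)  # try every combination, 5-fold CV
search.fit(X, y)
print(search.best_params_)
```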
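
Reinforcement learning as tabular Q-learning on a made-up five-state corridor: the agent is rewarded only for reaching the rightmost state, and the learned table comes to favor moving right.

```python
import random

n_states, n_actions = 5, 2  # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == n_states - 1 else 0.0  # reward only at the goal
    return nxt, reward

for _ in range(2000):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = random.randrange(n_actions) if random.random() < epsilon else Q[s].index(max(Q[s]))
        s2, r = step(s, a)
        # Move the estimate toward reward plus discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)  # in each non-goal state, the "right" entry ends up larger
```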
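
Retrieval-augmented generation reduced to its two steps: retrieve relevant text, then generate with that text in the prompt. The knowledge base, the word-overlap scoring, and the `call_llm` helper are hypothetical placeholders, not any real API.

```python
knowledge_base = [
    "The Turing Test was proposed by Alan Turing in 1950.",
    "GPT models are based on the transformer architecture.",
]

def call_llm(prompt):
    # Hypothetical stand-in for a real chat-completion API call.
    return f"[model response to: {prompt[:40]}...]"

def retrieve(question, k=1):
    # Toy relevance score: count words shared between question and document.
    def score(doc):
        return len(set(question.lower().split()) & set(doc.lower().split()))
    return sorted(knowledge_base, key=score, reverse=True)[:k]

def answer(question):
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)  # the model now grounds its reply in retrieved text

print(answer("Who proposed the Turing Test?"))
```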
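
Supervised and unsupervised learning side by side, again assuming scikit-learn; the iris dataset is just a convenient example.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Supervised: the labels y act as an answer key during training.
clf = KNeighborsClassifier().fit(X, y)
print(clf.predict(X[:3]))

# Unsupervised: no labels; the algorithm groups samples by structure alone.
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X)
print(clusters[:3])
```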
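
Transfer learning in its common fine-tuning form, assuming PyTorch and torchvision are installed: a network pretrained on ImageNet (the first task) is frozen, and only a new final layer is trained for a hypothetical five-class second task.

```python
import torch.nn as nn
from torchvision import models

# Start from a model already trained on ImageNet (downloads weights on first run).
model = models.resnet18(weights="IMAGENET1K_V1")

for param in model.parameters():
    param.requires_grad = False  # keep the pretrained feature extractor fixed

# Replace the final layer for a hypothetical new 5-class problem.
model.fc = nn.Linear(model.fc.in_features, 5)

# Only model.fc's parameters would now be passed to an optimizer and trained.
```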

AIPRM has additional definitions for generative AI terms.
