A type of neural network that learns to summarize, interpret, and generate content in response to various types of prompts. The output might consist of text, images, audio, or video.
Added Perspectives
A trained LLM produces textual answers to natural language questions, often returning sentences and paragraphs faster than humans can speak.
An LLM is, in effect, a huge calculator that predicts content, most often strings of words, based on patterns it learned from other words. It relies on an attention mechanism whose parameters quantify how tokens (words, word pieces, or punctuation marks) relate to one another across a large corpus of existing text.
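To make "parameters that quantify how tokens relate to one another" concrete, here is a minimal Python sketch of scaled dot-product attention, the core operation behind that mechanism. The token list, embedding size, and random weight matrices are illustrative stand-ins; a real LLM learns billions of such parameters from its training corpus rather than drawing them at random.

```python
import numpy as np

# Toy sketch of scaled dot-product attention. The embeddings and weight
# matrices below are random stand-ins; a real LLM learns them from a
# large corpus of text.
rng = np.random.default_rng(0)

tokens = ["data", "pipelines", "often", "break"]   # hypothetical input tokens
d_model = 8                                        # tiny embedding width for illustration

X = rng.normal(size=(len(tokens), d_model))        # one embedding vector per token

# Learned parameters: projections to queries, keys, and values.
W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

Q, K, V = X @ W_q, X @ W_k, X @ W_v

# Attention scores quantify how strongly each token relates to every other token.
scores = Q @ K.T / np.sqrt(d_model)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax

# Each token's output is a weighted blend of the other tokens' value vectors.
output = weights @ V

print(np.round(weights, 2))   # each row sums to 1: token-to-token relation strengths
```

In a full model, many such attention layers are stacked and trained so that the final output predicts the next token, which is how an LLM turns these learned relationships into generated text.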
LLMs’ fast, articulate answers to expert questions can help data engineers discover datasets, write and debug code, document procedures, and learn new techniques as they build data pipelines. But the risks are real, too: LLMs can derail projects and undermine governance programs by giving answers that contain errors, bias, or sensitive data.