Knowledge representation (KR) aims to model information in a structured manner so that it can be formally represented as knowledge in knowledge-based systems. Knowledge representation and reasoning (KRR) additionally aims to understand, reason over, and interpret that knowledge. KRR is widely used in artificial intelligence with the goal of representing information about the world in a form that a computer system can use to solve complex tasks, such as diagnosing a medical condition or holding a natural-language dialog. Early KRR systems included [[expert system]]s and other knowledge-based systems. [[ELIZA]] was an early natural-language dialog program that could attempt the Turing test, though it was a pattern-matching chatbot rather than an expert system.

A knowledge representation framework can be analyzed through five roles. A knowledge representation (KR) is:[^1]

1. a surrogate, a substitute for the thing itself
2. a set of ontological commitments: an answer to the question, in what terms should I think about the world?
3. a fragmentary theory of intelligent reasoning, expressed in terms of three components: (i) the representation's fundamental conception of intelligent reasoning; (ii) the set of inferences the representation sanctions; and (iii) the set of inferences it recommends
4. a medium for pragmatically efficient computation
5. a medium of human expression

One of the most active areas of knowledge representation research is the [[Semantic Web]] and other [[semantic technology]].

[^1]: Davis, Randall; Shrobe, Howard; Szolovits, Peter (Spring 1993). ["What Is a Knowledge Representation?"](http://www.aaai.org/ojs/index.php/aimagazine/article/view/1029/947). _AI Magazine_. **14** (1): 17–33. [Archived](https://web.archive.org/web/20120406094445/http://www.aaai.org/ojs/index.php/aimagazine/article/view/1029/947) from the original on 2012-04-06. Retrieved 2011-03-23.

## history

Early on, developers recognized that the human capacity to reason through tasks depends on a wealth of learned experience that may seem, at first, unrelated to the domain. In some ways, LLMs have encoded this baseline as parametric knowledge. Language stores more than just words: even if you grant only the stochastic-parrot analogy, facts about the world, relationships, and so on are encoded in which words follow which. The ability to break a problem down into subproblems (Chain of Thought, Tree of Thought) enriches the ability to reason along the lines of language. However, recent interpretability research from Anthropic suggests there may be something more going on inside these models.

## topology of knowledge

The shape of knowledge. Inductive link prediction is a way to generate new facts from an existing knowledge graph (KG). Like the periodic table of elements, the structure of existing knowledge can point to gaps and suggest new facts; a rough sketch follows below.
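As an illustration of how graph structure alone can suggest missing facts, here is a minimal, rule-based sketch in Python. It mines simple two-hop composition rules of the form (x, r1, y) and (y, r2, z) implies (x, r3, z) from a toy KG, then applies them to propose triples not yet in the graph. The toy triples, function names, and confidence threshold are all invented for illustration; real inductive link-prediction systems (rule miners, embedding models, graph neural networks) are considerably more involved.

```python
from collections import defaultdict

# Toy knowledge graph as (head, relation, tail) triples -- purely illustrative.
triples = {
    ("marie_curie", "born_in", "warsaw"),
    ("warsaw", "located_in", "poland"),
    ("marie_curie", "citizen_of", "poland"),
    ("nikola_tesla", "born_in", "smiljan"),
    ("smiljan", "located_in", "croatia"),
    ("alan_turing", "born_in", "london"),
    ("london", "located_in", "england"),
    ("alan_turing", "citizen_of", "england"),
}

def outgoing(kg):
    """Index the graph: head entity -> list of (relation, tail)."""
    idx = defaultdict(list)
    for h, r, t in kg:
        idx[h].append((r, t))
    return idx

def mine_composition_rules(kg, min_confidence=0.5):
    """Mine rules of the form (x, r1, y) & (y, r2, z) => (x, r3, z).

    Confidence = fraction of (x, z) pairs satisfying the rule body
    for which the head triple (x, r3, z) is already in the graph.
    """
    idx = outgoing(kg)
    body_pairs = defaultdict(set)  # (r1, r2) -> {(x, z), ...}
    for x, edges in idx.items():
        for r1, y in edges:
            for r2, z in idx.get(y, []):
                body_pairs[(r1, r2)].add((x, z))

    relations = {r for _, r, _ in kg}
    rules = []
    for (r1, r2), pairs in body_pairs.items():
        for r3 in relations:
            support = sum(1 for x, z in pairs if (x, r3, z) in kg)
            if support and support / len(pairs) >= min_confidence:
                rules.append((r1, r2, r3, support / len(pairs)))
    return rules

def predict_new_triples(kg, rules):
    """Apply mined rules and return proposed triples not already in the graph."""
    idx = outgoing(kg)
    proposals = set()
    for r1, r2, r3, conf in rules:
        for x, edges in idx.items():
            for ra, y in edges:
                if ra != r1:
                    continue
                for rb, z in idx.get(y, []):
                    if rb == r2 and (x, r3, z) not in kg:
                        proposals.add((x, r3, z, conf))
    return proposals

if __name__ == "__main__":
    mined = mine_composition_rules(triples)
    for h, r, t, conf in sorted(predict_new_triples(triples, mined)):
        print(f"suggested: ({h}, {r}, {t})  confidence={conf:.2f}")
```

On this toy graph the sketch mines the rule born_in ∘ located_in ⇒ citizen_of (confidence 2/3) and suggests (nikola_tesla, citizen_of, croatia): the gap-filling intuition of the periodic-table analogy. As with any link prediction, the suggestion is a candidate fact that still needs validation, not a guaranteed truth.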