These questions are on chapters 4–9 of The Emotion Machine.
Problem 1. One critical component of problem solving is having a good representation of the problem. This entails representing the relevant objects, properties, and relations at the right level of detail[1]. You will also need to represent a goal state in order to measure progress, and possibly some anti-goal states that you wish to avoid[2].
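As a minimal illustration of the idea above (all names here are invented for the example, not taken from the text), a problem state can be represented as a set of relational facts, with a goal test to measure progress and an anti-goal test for states that must be avoided:

```python
# Sketch: a state is a set of (object, relation, object) facts.
# Goal and anti-goal are predicates over states.

def is_goal(state):
    """Goal: the block is on the table."""
    return ("block", "on", "table") in state

def is_anti_goal(state):
    """Anti-goal: the block is broken -- a state we must never enter."""
    return ("block", "is", "broken") in state

start = {("block", "on", "shelf")}
after_move = {("block", "on", "table")}
dropped = {("block", "is", "broken")}

assert not is_goal(start)        # no progress yet
assert is_goal(after_move)       # goal reached
assert is_anti_goal(dropped)     # a state to avoid, not merely a non-goal
```

The level of detail matters here exactly as footnote [1] warns: facts that are too coarse match too many states, and facts that are too fine may never match at all.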
When knowledge is shared among members of a community, it is called commonsense knowledge, and it consists of generalizations that are "true" by default. Because generalizations often have exceptions, it is misleading to talk about knowledge being true or false without first describing the problem-solving context: the other assumptions (commonsense knowledge) that are left implicit. This problem is pervasive in natural language, which is the form of knowledge you will be dealing with in this problem. Instead of using the vague notion of truth, we can characterize knowledge as useful for solving a particular problem or class of problems: it is consistent within its local context, but not necessarily consistent with all of the agent's knowledge.
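One simple computational reading of "true by default" is that a generalization holds unless a more specific exception overrides it. The following sketch (the categories and properties are illustrative assumptions, not from the corpus) shows the pattern:

```python
# Default generalizations with exceptions: specific knowledge
# overrides the shared commonsense default.

DEFAULTS = {"bird": {"can_fly": True}}        # "birds can fly" -- true by default
EXCEPTIONS = {"penguin": {"can_fly": False}}  # a known exception

def infer(kind, prop, is_a="bird"):
    # An exception for this specific kind takes priority over the default.
    if kind in EXCEPTIONS and prop in EXCEPTIONS[kind]:
        return EXCEPTIONS[kind][prop]
    return DEFAULTS[is_a].get(prop)

print(infer("sparrow", "can_fly"))  # True  (falls back to the default)
print(infer("penguin", "can_fly"))  # False (the exception applies)
```

Note that neither answer is "true" in an absolute sense; each is only consistent within the local context the lookup rules define.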
The goal of this assignment is to get you thinking about knowledge: how it can be used, represented, and organized within a resourceful commonsense reasoning architecture. Good answers will show ingenuity and comprehension of a detailed problem solving architecture.
Take a look at the first 500 entries in the OpenMind project and pick three assertions from the corpus. Each of these statements depends on other hidden commonsense assumptions that you should try to unpack. For each assertion, answer the following questions:
1. What are some problems/goals that this assertion would be useful for solving?
2. What are some problems/goals that appear, on the surface[3], to be related to the knowledge but are not?
3. Describe a computational (procedural or representational) mechanism for distinguishing between using this knowledge for a relevant problem from (1) and using it for an irrelevant problem from (2).
4. Imagine that you have acquired new information (e.g., by perception) that supports or contradicts this assertion. Write this new knowledge down.
5. Combine the assertion with the new knowledge from (4) to derive a new piece of useful knowledge (by induction, deduction, abduction, or analogy[4]) or other information (e.g., a contradiction).
6. Combine them again to derive a useless/absurd conclusion.
7. Describe a way to avoid drawing the useless/absurd conclusion from (6). For example, use and describe a reflective critic, structural knowledge, or an invention of your own.
Problem 2. For each of the layers of Model Six, give a concise description of the layer's function in your own words, along with a supporting example.
Problem 3. Self-Models
- What are some reasons why you would need to switch between many small models rather than just using one complete model for all problems?
- What are the advantages of having a self-model? What are some disadvantages?
- Give an example of some software or hardware artifact that uses self-models to some extent.
Problem 4. In the section on Learned Reactions, Minsky describes how IF–DO rules alone are not enough, because every rule has exceptions that would make it false as a bare assertion. What benefits does adding THEN to form IF–DO–THEN rules provide?
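To make the distinction concrete before you answer, here is a hedged sketch (the state contents and rule are invented for illustration): an IF–DO rule fires an action blindly, while the THEN part records the outcome the rule expects, so an agent can detect when an action has failed.

```python
# An IF-DO-THEN rule as three predicates/functions over a state
# (a set of string facts). The THEN part is the expected outcome.

rule = {
    "if":   lambda s: "hungry" in s,        # condition to trigger the rule
    "do":   lambda s: s | {"ate_food"},     # action: produce the next state
    "then": lambda s: "ate_food" in s,      # outcome the rule expects
}

state = {"hungry"}
if rule["if"](state):
    new_state = rule["do"](state)
    # A mismatch between expectation and result marks an exception
    # to the rule, rather than making the whole rule "false".
    surprised = not rule["then"](new_state)
    print(surprised)  # False: the outcome matched the expectation
```

The point to notice is that the THEN clause turns each firing into a small prediction that can be checked.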
[1] If the description is too vague or ambiguous, it may match too many items; however, if it is too specific or precise, it may never match anything!
[2] See: Minsky, Marvin. "Negative Expertise." International Journal of Expert Systems 7, no. 1 (1994): 13–19.
[3] For example, they share some of the same objects, properties, or relations.
[4] If you do not understand these terms, see: Sowa, John. "The Challenge of Knowledge Soup" (PDF), jfsowa.com. Explain any additional background knowledge that was missing but relevant to your inference.