The top is a simple diagram; the bottom is more representative of the final graph. There are some mistakes I would fix (sometimes I'm missing an AND gate), but I just wanted to put this up tonight.
As you can see, there are many ways to obtain coffee, and the desired path will depend on many factors in the environment: is the store closed today? How much do grounds cost? Do I want to grind the beans myself?
Diagram Key: I - instance. These are physical instances of an object in the environment. I'm still debating whether the instance node should be a precondition for actions, or whether I should abstract it further.
A - action. On an edge, it means that action can be performed for the concept it comes from. As a node, it represents the action itself.
P - precondition.
Parameter - think of parameters like templates in C++ or generics in Java. They are a way for the activation to carry information about the current path and make sure that a given action still fulfills the preconditions with an instance. For example, we want to say that to buy an object, the store must sell that same object. All instances must have an IS-A relationship to the parameter type to fill that parameter.
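To make the parameter-filling idea concrete, here's a rough Python sketch (all names are hypothetical, not the actual implementation) of checking that an instance has an IS-A relationship to a parameter type before it can fill that parameter:

```python
# Hypothetical sketch of parameter binding, assuming the IS-A hierarchy
# is stored as simple parent links between concepts.

class Concept:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

    def is_a(self, other):
        """True if this concept is `other` or a descendant of it."""
        node = self
        while node is not None:
            if node is other:
                return True
            node = node.parent
        return False

def can_fill(instance_type, parameter_type):
    """An instance may fill a parameter only if it IS-A the parameter type."""
    return instance_type.is_a(parameter_type)

# Example hierarchy: to buy an object X, the store must sell that same X.
obj = Concept("Object")
coffee = Concept("Coffee", parent=obj)
grounds = Concept("CoffeeGrounds", parent=coffee)
```

So `can_fill(grounds, coffee)` holds, while a generic `Object` instance could not fill a `Coffee`-typed parameter.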
S - state. These represent possible states of a concept. For example, a store can be open or closed. You can enforce that only one state is active in a particular group by specifying strong inhibitory edges between states that can never be active at the same time (like open and closed).
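A minimal sketch (hypothetical code, assuming a simple winner-take-all update) of how strong inhibitory edges could keep only one state in a group active:

```python
# Hypothetical sketch: mutually exclusive states enforced by strong
# inhibition. Activating one state in a group drives its competitors to zero.

class StateGroup:
    def __init__(self, *states):
        self.activation = {s: 0.0 for s in states}

    def activate(self, state, strength=1.0):
        for s in self.activation:
            if s == state:
                self.activation[s] = strength
            else:
                # strong inhibitory edge: the competing state is suppressed
                self.activation[s] = 0.0

store = StateGroup("Open", "Closed")
store.activate("Closed")
```

Activating "Open" later would zero out "Closed" the same way, so the group can never hold both at once.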
While this may look like a lot for a simple case, since each concept is part of a hierarchy, it should extend reasonably well (fingers crossed). Also, the graph will soon be populated mostly from natural language input and experience, rather than programming it by hand.
I promise I'll use pencil next time.
One more thing: I just received "Integrating Marker-Passing and Problem-Solving" today. It's more than 20 years old, but still relevant. I paged through the "Design Challenges" and it looks like I've covered my bases design-wise, so I'm going to go full-steam ahead on programming this weekend.
So would memory information be attached to the "states"?
How do you specify the preconditions? Is that, in Norm's terms, 'priming', or is it taking care of the notion of 'standing orders'?
Also, in terms of yesterday's paper, what are your entities... agents, agents and objects, or a more general definition?
States do form a kind of memory, but they can be queried from the environment. They are not behavior states as in an FSM; those states are implicit in the procedure graph that arises from the root-goal activation.
I haven't covered memory yet, because it has multiple layers.
First, activations will leave a residual, long-term activation, so nodes that activate often will activate more easily.
Second, activations that occur simultaneously will have implicit edges between them, emulating the Hebbian theory of long-term potentiation. So if "Sunday" always activates when "Closed" is active, the agent will implicitly believe that "Closed" implies "Sunday" and vice versa. Note that these inferences are not always accurate, but people suffer from the same bug.
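A toy version of that Hebbian rule might look like this (hypothetical code, just to illustrate co-activation strengthening implicit edges):

```python
# Hypothetical sketch: nodes that fire together accumulate an implicit
# edge weight, so "Sunday" and "Closed" come to imply each other.

from collections import defaultdict
from itertools import combinations

class HebbianMemory:
    def __init__(self, learn_rate=0.1):
        self.weights = defaultdict(float)  # implicit edges between node pairs
        self.learn_rate = learn_rate

    def observe(self, active_nodes):
        """Strengthen the implicit edge between every co-active pair."""
        for a, b in combinations(sorted(active_nodes), 2):
            self.weights[(a, b)] += self.learn_rate

    def association(self, a, b):
        return self.weights[tuple(sorted((a, b)))]

mem = HebbianMemory()
for _ in range(5):  # five Sundays where the store was closed
    mem.observe({"Sunday", "Closed"})
```

After those observations the agent holds a symmetric "Closed" - "Sunday" association, accurate or not, exactly the kind of fallible inference described above.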
Third, memory can be specified explicitly through language.
Finally, I may have some memory consisting of small chunks of instances connected to the root node, representing previous events that could be referenced after the fact. This isn't essential to the project, though; I have it under future work.
Preconditions are specified in the graph using Precondition edges. A precondition edge specifies a possible precondition, so only one has to be fulfilled for the next action to be carried out. If you want multiple preconditions, you use an AND gate.
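In other words, plain precondition edges are an implicit OR, and AND gates conjoin them. A tiny sketch (hypothetical names, not the real graph code) of that evaluation:

```python
# Hypothetical sketch of precondition evaluation: plain precondition edges
# are disjunctive (any one suffices), while an AND gate requires all inputs.

def and_gate(*inputs):
    return all(inputs)

def preconditions_met(edges):
    """`edges` holds the truth value delivered by each precondition edge;
    the action can fire if any single edge is satisfied."""
    return any(edges)

# "Buy coffee" can proceed if (store open AND have money) OR already at a cafe.
store_open, have_money, at_cafe = True, False, False
ok = preconditions_met([and_gate(store_open, have_money), at_cafe])
```

Here `ok` is false: the AND gate fails for lack of money and the other edge isn't satisfied either.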
There is no priming as Norm refers to it. An agent is able to act according to any of its possible actions at any time. A firefighter will always be a firefighter, but his/her firefighting actions will only arise from a goal activation involving fire (or kittens stuck in trees).
There is priming in the traditional psychology sense, however. Causing the agent to think about a particular concept, whether through language or because the agent just completed a similar action, will activate corresponding nodes, which then will be more accepting of new activations.
Standing orders will be accomplished through the goal generator nodes. These nodes will repeatedly activate (like something in the back of your head saying "Do this, do this, do this..."), but will usually be stemmed by conditional gates that activate only when the environment changes such that the order should be carried out.
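A rough sketch (hypothetical, not the planned implementation) of a goal generator whose pulses are stemmed by a conditional gate:

```python
# Hypothetical sketch of a "standing order": the generator pulses every
# cycle, but the goal only passes through when the gate condition holds.

class GoalGenerator:
    def __init__(self, goal, gate):
        self.goal = goal
        self.gate = gate  # callable that checks the environment

    def tick(self, env):
        """Return the goal to activate this cycle, or None if gated off."""
        return self.goal if self.gate(env) else None

# Standing order: obtain coffee whenever the supply runs out.
gen = GoalGenerator("ObtainCoffee", gate=lambda env: env["coffee_level"] < 1)
fired = [gen.tick({"coffee_level": lvl}) for lvl in (3, 2, 0)]
```

The generator stays quiet while coffee is stocked and only emits the goal on the last tick, when the environment has changed so the order should be carried out.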
I just use Agent for anything that has this cognitive model, and object for anything else. Instances can be other agents or objects. It may be interesting to have the Agent think that certain objects are agents even if they are not, but that's not important for this project. The idea would be predicting the actions of certain objects by ascribing similar motivations to the agent's own (as we do with animals).
Oh man, those diagrams remind me of 320 (Floyd-Warshall, anyone?).