Thursday, April 21, 2011

Poster Finished!

Short blog post today - but I've finished my poster and Joe hung it up in the lab so you can check it out at your leisure.

I finally got parameters working. The QueryIntersector finds the activated parameters that are unsatisfied, and then assigns them the most activated properties that match the parameter type. What's nice is that the concept nodes start to have a purpose - in this case, matching parameter and property types. The algorithm right now is probably too greedy - I will have to make some adjustments. I'll have to decide if I want to polish it up or work on another feature that will make a nice presentation.
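
For the curious, the greedy matching step looks roughly like this (a sketch with illustrative class names, not the actual code):

    // Hypothetical sketch of the greedy parameter-filling step.
    // Parameter and Property stand in for the real classes.
    using System.Collections.Generic;
    using System.Linq;

    public class ParameterFiller
    {
        // Assign the most activated, type-compatible property to each
        // activated-but-unsatisfied parameter (greedy, strongest parameters first).
        public static void Fill(IEnumerable<Parameter> unsatisfied,
                                IEnumerable<Property> activatedProperties)
        {
            var remaining = new List<Property>(activatedProperties);
            foreach (var param in unsatisfied.OrderByDescending(p => p.Activation))
            {
                // Pick the strongest property whose concept type matches the parameter type.
                var best = remaining
                    .Where(prop => prop.ConceptType == param.ConceptType)
                    .OrderByDescending(prop => prop.Activation)
                    .FirstOrDefault();
                if (best != null)
                {
                    param.Value = best;        // parameter is now satisfied
                    remaining.Remove(best);    // greedy: a property fills only one slot
                }
            }
        }
    }

    public class Parameter { public string ConceptType; public float Activation; public Property Value; }
    public class Property  { public string ConceptType; public float Activation; }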

Thursday, April 14, 2011

Nothing Big Today

I've mostly been working on implementing parameters, but it's been tougher than I expected - mostly conceptual problems. The latest problem is how parameters inherit the kinds of values they should be filled with, based on what the parameter is. For example, a destination should be a position, and so it should take a vector as a value. But we should only have to say that a position is a vector, not that a destination has a vector value as well. The next issue is how to store a parameter value for only a particular instance. It's possible that different paths to the same goal would have different values along the way. Do I store each possibility, or just compute the values once I've already decided on a path of action? I'm thinking the latter, but this removes the possibility of finding the best path for a sequence of actions.

I have a Chem exam tomorrow so I haven't been able to do much on this. After tomorrow I can get back to doing serious work on it.

Friday, April 8, 2011

Thinking Out Loud

This post is mostly for me to organize my thoughts, but I thought it might be interesting to share.

One aspect that isn't covered very well in my research is the cross-over from long-term memory to working memory. That is, I've read a lot about general knowledge representations, but much less about instantiating those concepts when it comes to actual objects in the environment. The book "Explorations in Cognition" talks a bit about the difference between the "Mental World" and the "Real World", which is nearly analogous - the difference is that the mental world in that case is what is expected, whereas I treat the "mental world" as common-sense long-term knowledge.

Objects in the environment pose a couple of challenges. First, how do we determine whether they are in the environment? We can have certain senses, but individual senses may not be enough to completely identify an object. For the sake of this project, we can assume that all required information is made immediately available in the small world. But even then, there's an important property of properties - whether they are satisfied or not.

In a knowledge base, an unsatisfied property means that an object has a range of possible values for a particular property. For example, an apple can be red, green, or yellow. Therefore, in our knowledge base, apple has the unsatisfied property of color, with a range of values - red, green, yellow. These properties can be satisfied by specifying the type of apple. We say a Granny Smith IS-A apple, and it satisfies the color property by setting its value to green. We could also specify that we're talking about a "red apple", and that instance would have its property satisfied.
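
A rough sketch of how a satisfied vs. unsatisfied property might be encoded (illustrative names only, not my real classes):

    // Rough sketch of satisfied vs. unsatisfied properties.
    using System.Collections.Generic;

    public class PropertySlot
    {
        public string Name;                                       // e.g. "color"
        public List<string> PossibleValues = new List<string>();  // e.g. red, green, yellow
        public string Value;                                      // null until satisfied

        public bool IsSatisfied { get { return Value != null; } }

        public bool Satisfy(string value)
        {
            if (!PossibleValues.Contains(value)) return false;    // value must be in the allowed range
            Value = value;
            return true;
        }
    }

    // Usage: "apple" has an unsatisfied color property; a Granny Smith (IS-A apple)
    // satisfies it by setting the value to green:
    //   var color = new PropertySlot { Name = "color",
    //       PossibleValues = { "red", "green", "yellow" } };
    //   color.Satisfy("green");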

But what if the object that could satisfy a property isn't satisfied itself? Well then, technically, the property isn't satisfied either. A car is a vehicle that has wheels - that seems to satisfy the method of propulsion (or whatever you want to call it). But we don't know what kind of wheels they are, what they look like, etc. So we can't give a full representation of what a car is without this extra information.

So say we have a box in our small environment. We know a box is an object, and it satisfies the property of "shape" with the value "cube", and it will have a color that is unique to that instance. The shape property will be connected to the concept node of "box", while the color property will be connected to the instance node of that particular box. If we want to get a property of a particular instance, we first look at the instance itself for the property, then we can move up the conceptual hierarchy (following IS-A edges) to continue looking.
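
The lookup itself is simple - check the instance, then walk up the IS-A links. Something like this sketch (stand-in classes, not the real ones):

    // Sketch of property lookup: check the instance first, then follow IS-A edges upward.
    using System.Collections.Generic;

    public class ConceptNode
    {
        public string Name;
        public ConceptNode IsA;                                   // parent concept (null at the top)
        public Dictionary<string, string> Properties = new Dictionary<string, string>();

        // Look on this node, then follow IS-A links up the hierarchy.
        public string LookupProperty(string propertyName)
        {
            for (ConceptNode node = this; node != null; node = node.IsA)
            {
                string value;
                if (node.Properties.TryGetValue(propertyName, out value))
                    return value;
            }
            return null; // not found anywhere in the hierarchy
        }
    }

    // Example: the box instance stores its own color, while the "box" concept stores shape.
    //   var boxConcept = new ConceptNode { Name = "box" };
    //   boxConcept.Properties["shape"] = "cube";
    //   var myBox = new ConceptNode { Name = "box#1", IsA = boxConcept };
    //   myBox.Properties["color"] = "red";
    //   myBox.LookupProperty("shape");  // "cube", found on the concept
    //   myBox.LookupProperty("color");  // "red", found on the instance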

I think I'm going to revise my plan from my last post. Instead of looking for a property that can fulfill the argument and trying to satisfy it immediately, I'm going to find the instance node that corresponds to the object in question. From there, I can run the ArgumentIntersector with an activator at the concept and the property that is required.

I think that's it for now.

Thursday, April 7, 2011

Parameters


The Beta review went well. The main thing I want to get working next is parameters. This will allow the user to say "pick up the blue box", for example. Here, blue box is the parameter for the action "pick up". I've scanned in my current plan with a diagram. It took me a while to come up with the solution, so I haven't had time to implement it yet.

I tried looking into some of the books I have to see how others have dealt with it, but none of the implementations seemed to deal with specifying actions to be done on objects in the environment. They all seem to be focused on a disembodied theory of cognition.

Thursday, March 31, 2011

Actions

My visit to CMU was great, but I've decided on the University of Rochester. Unfortunately I didn't have any time over the weekend to work on this, but I've still managed to make some great progress since then.

First, my interface is considerably more robust (although some things will probably break as I add more features). You can save and load scenes without having to worry about losing information. This was kind of tricky because static variables, dictionaries, and hash sets can't be serialized by Unity, which means they won't be saved, and it also means certain things will break when you go from Edit mode to Play mode. I had to do a few workarounds to fix this. You can also delete nodes without messing anything up, but I haven't added a feature to delete edges yet (won't be hard).
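
For anyone hitting the same wall: one workaround along these lines (not necessarily exactly what I did) is to keep the data in fields Unity can serialize - plain lists - and rebuild the dictionary when the component wakes up:

    // Sketch: Unity won't serialize Dictionary/HashSet/static fields, so keep the data
    // in parallel Lists (which it will serialize) and rebuild the dictionary in Awake.
    using System.Collections.Generic;
    using UnityEngine;

    public class EdgeTable : MonoBehaviour
    {
        // Serialized storage that survives save/load and the Edit-to-Play transition.
        public List<string> edgeNames = new List<string>();
        public List<float> edgeWeights = new List<float>();

        // Rebuilt at runtime; never serialized.
        private Dictionary<string, float> weights;

        void Awake()
        {
            weights = new Dictionary<string, float>();
            for (int i = 0; i < edgeNames.Count; i++)
                weights[edgeNames[i]] = edgeWeights[i];
        }

        public void AddEdge(string name, float weight)
        {
            edgeNames.Add(name);
            edgeWeights.Add(weight);
            if (weights != null) weights[name] = weight;
        }
    }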

The major breakthrough for this week was getting actions working. You can assign a RigidBody object to the Root node's "Agent" parameter. This means that any actions that get performed in a sequence will be called on that RigidBody. Using this system means you theoretically could have multiple agents responding at the same time, but that would require multiple Root nodes, and I haven't thought of a good way of doing that yet.
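
Conceptually, each action node ends up invoking something like this on the Root's agent ("Jump" is just an example; the names are illustrative, not the actual code):

    // Sketch of how an action node might invoke its behavior on the Root's agent.
    using UnityEngine;

    public class JumpAction : MonoBehaviour
    {
        public float jumpForce = 5f;

        // Called by the sequence runner with the Rigidbody assigned to the Root node.
        public void Perform(Rigidbody agent)
        {
            agent.AddForce(Vector3.up * jumpForce, ForceMode.Impulse);
        }
    }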

The picture shows a block that has just been instructed to "Jump". It's a simple action, but there's a whole lot that went into it. I'll make some more complicated scenarios with different paths for the beta review.

I'm technically a little behind on my schedule, but since my environments will now be much easier to make, it works out that I'm doing OK.

Wednesday, March 23, 2011

Progress Update and Beta Review

First, my latest update, in picture form. I have intersectors working, which are integral to the operation of the algorithm. In this picture, only the spherical nodes were created by me. There's the Root node, the yellow action node GetCoffee, and the red concept node Coffee. The rest are generated from the text input "Get coffee". The cubes are Activators - the Root Activator, the Goal Activator, and two Search Activators - one for "Get" and one for "Coffee". The Search Activations are red (the one for GetCoffee is hidden by the yellow Goal Activation). Finally, there is the Query Intersector "Get Coffee", which finds the intersection of the "get" and "coffee" activations, and the Goal Intersector, which finds the path between the Root node and the Goal node.
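
In case the terminology is fuzzy: a Query Intersector just keeps the nodes that have received activation from every Search Activator. A bare-bones sketch (not the real code):

    // Sketch: a query intersector keeps the nodes that were activated by *all* search activators.
    using System.Collections.Generic;
    using System.Linq;

    public static class QueryIntersection
    {
        // Each activator contributes the set of nodes its activation reached.
        public static HashSet<string> Intersect(List<HashSet<string>> activatedSets)
        {
            var result = new HashSet<string>(activatedSets[0]);
            foreach (var set in activatedSets.Skip(1))
                result.IntersectWith(set);
            return result; // e.g. {"GetCoffee"} for the inputs "get" and "coffee"
        }
    }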

The "Step" button at the bottom should go through the procedure to get to the goal node, but there's a small bug in it right now. (I would fix it, but I'm leaving for Pittsburgh tomorrow afternoon and might not have time to make this post later).

Now the "Self-Evaluation". I'm really happy to have the Unity interface up to my original back-end progress now. It seemed a little slow, but I think this interface will help me find a lot of bugs more quickly. It has already helped me see some errors. When dealing with graph networks like this, it can be very difficult to catch small things like missing edges. And while it has taken away some time that I would have liked to work on the underlying system, I think it is both an innovative and useful tool for this application - I don't know if I've seen any visualization of cognitive models before, let alone a nice looking one like this.

I'll have to scale back some of my previous goals, but for the Beta review, I plan to have a small test environment connected to the network. A simple case would be a cube that could do something like jump, given a user's input saying "jump". If I have my Beta review on the 1st, then getting this working and making the interface more robust would be my two main goals. (Since I'll be at Carnegie Mellon this weekend I don't foresee getting much work done). If it's on the 4th, I'd like to have a more complicated environment (maybe multiple objects).

By the Final Poster session, I plan to have an environment where an agent has multiple actions available to it, and can do things like "pick up the blue box" or "walk to the green sphere". Being able to take in information from the user and store relationships would be a nice added touch. Any milestones beyond that would probably be for whatever cool features I can add. In general, however, I expect the major functionality to be done by the presentation date.

Also, Joe wanted me to submit a paper to IVA 2011, and that deadline is April 26th. I'm not sure how compelling my system can be by that point, but we'll see how it goes. I may focus more on the benefits of a visualization system for a language interface (since the theme of IVA 2011 is language).

Thursday, March 17, 2011

Activations in Unity

Have some progress to show tonight - I've got Activations working in Unity! I had to restructure my code a surprising amount to get this to work. In general, I had to adopt a more component-based approach. For example, instead of having specific Activation groups, I switched to an Activator object that takes a particular Activation (a prefab) as a parameter. The upshot to this is that it's making my structure more modular and visualization-friendly.
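
In sketch form, the Activator is now just a component holding an Activation prefab and the nodes it feeds (illustrative names, not the actual code):

    // Sketch of the component-based Activator: it takes an Activation prefab and
    // spawns activations on the nodes it's connected to.
    using System.Collections.Generic;
    using UnityEngine;

    public class Activator : MonoBehaviour
    {
        public GameObject activationPrefab;       // which kind of Activation to spread
        public List<Transform> connectedNodes;    // nodes this activator feeds

        public void Activate()
        {
            foreach (var node in connectedNodes)
            {
                // Instantiate one Activation per connected node, parented to that node.
                var activation = (GameObject)Instantiate(
                    activationPrefab, node.position, Quaternion.identity);
                activation.transform.parent = node;
            }
        }
    }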

Also, I realized that you can't use the virtual and override keywords with Awake and Update - Unity uses its own method lookup to call these methods, and if you mark a method as virtual or override, it will skip over it. Very frustrating. If you want to make your Awake or Update function semi-virtual, you can use the "new" keyword, but that version will only be called when the method is invoked through the correct type (not through an array of the parent type, for example). Thought this insight might be helpful to anyone else doing Unity coding.
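
Here's the pattern (and the gotcha) in miniature, as I understand the behavior:

    // Unity calls Awake/Update by name per component, so virtual/override won't behave
    // as expected. "new" hides the base method, but the type you call through still matters.
    using UnityEngine;

    public class BaseNodeBehaviour : MonoBehaviour
    {
        protected void Update() { Debug.Log("Base update"); }
    }

    public class ConceptNodeBehaviour : BaseNodeBehaviour
    {
        // Hides BaseNodeBehaviour.Update. As described above, calling Update() by hand
        // through a BaseNodeBehaviour reference (e.g. an array of the parent type)
        // still gets the base version.
        protected new void Update() { Debug.Log("Concept update"); }
    }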

Anyway, here are the results. The cube is the Activator object, connected to two Concept nodes, and the purple Activations are shown after the Activator object was activated.

Wednesday, March 2, 2011

Not too much this week

Kind of a slow week, with midterms and all. I'm still working on Unity, and haven't gotten activations done yet. Getting them to display in Unity is taking some adjustments that I didn't foresee. I'm also doing a slight architecture restructuring - I'm making everything a Node. This might seem kind of strange, but the idea is that I would like the network to be able to build pieces of itself. I've always had a superficial interest in metacognition (http://en.wikipedia.org/wiki/Metacognition), and it seems to me that any architecture with plans to support it should support it from the beginning. I don't know if I'll even get to anything complicated during this semester, but the changes aren't drastic and I'd prefer to get them out of the way.

I'm leaving to go to the University of Rochester tomorrow, so I won't be able to have my typical Thursday coding sprint. But at least I'll have time over break to get work done.

Friday, February 25, 2011

Alpha Review

I'm late with the blog update, what with the alpha review and all. But now I can post the video link if anyone wants to look at it again.


I am not taking a break (that would be dangerous :) ), but rather continuing on with my Unity integration. I'd like to get back to working on the underlying system ASAP, but having the interface done will make it easier to work on. Right now I'm smoothing out the edges of my interface, making sure it's both intuitive for anybody to use and robust enough for nobody to break (without effort). Soon I'll have Activations displaying - if not tonight, then tomorrow.

Thursday, February 17, 2011

Integration into Unity

So I feel like I've made some pretty good progress this week. I have ported all of my code into Unity, which wasn't a terrible ordeal. I just had to get rid of my constructors for all of my Nodes, make my Nodes into GameObjects, and replace my properties with public variables.

At first I thought I needed to make a runtime GUI to manipulate my network in-game. However, Unity has a very nifty editor GUI system for creating your own windows and widgets. I'm working on one that will allow a user to create the network nodes without even having to run the game.
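
If you've never touched editor scripting, a custom window really is just a class and an OnGUI method. A minimal skeleton (not my actual Network Editor) looks something like this:

    // Minimal skeleton of a custom Unity editor window. The real Network Editor has
    // node/edge fields; this just shows how little boilerplate is needed.
    using UnityEditor;
    using UnityEngine;

    public class NetworkEditorWindow : EditorWindow
    {
        private string nodeName = "NewConcept";

        [MenuItem("Window/Network Editor")]
        static void ShowWindow()
        {
            GetWindow(typeof(NetworkEditorWindow));
        }

        void OnGUI()
        {
            nodeName = EditorGUILayout.TextField("Node name", nodeName);
            if (GUILayout.Button("Create Node"))
            {
                var node = new GameObject(nodeName);   // stand-in for creating a real Node
                Debug.Log("Created " + node.name);
            }
        }
    }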

You can see in the bottom left corner the Network Editor tab (that's mine). You can see the properties of a selected Node and add edges to other Nodes by dragging them into the Object field and specifying the edge type. It was really simple to do, and I couldn't imagine it being any easier to set up. Plus, I will be able to make a really cool-looking interface. I was thinking something like this from Ghost in the Shell:
Someday :)

For my alpha review, I plan on having done:
1. Saving and loading scenes of networks (this is already done theoretically, just need to check a few things).
2. Visualization of Activations.
3. Setting up and triggering activation groups.

This will put me a little behind on my schedule, but I think the time spent working with Unity will shorten the time needed for test scenarios. Plus, now that I know how simple it is to use Unity, I think my estimates were a little generous anyway.

Monday, February 14, 2011

This is why I need to make a GUI in Unity

Here you can see all the debug information for my system. The node labeled '0' is the Root node. (I forgot to add the edge from grounds to "get grounds", sorry).

As you can see, this is difficult to parse, even for me. When this system becomes larger, a text-based interface will not be sufficient for debugging. A visual interface will not only make it easier to create the knowledge base, but could also allow for debugging by creating activation traces or passing in a sentence and watching the activation. Starting this week, I'm going to begin porting my code into Unity, and continue development from there. I hope to have the interface done by the Alpha review.

My plan for the interface is to use Unity spring joints to connect spheres representing the nodes. This should provide a tidy representation for navigating the graph in 3D space. Eventually I will implement a 2D projection for the 3-dimensionally challenged.
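
Roughly, connecting two node spheres is just adding a SpringJoint between their rigidbodies (the spring values here are made up, purely illustrative):

    // Sketch: connect two node spheres with a SpringJoint so the graph lays itself out.
    // Node spheres need Rigidbody components for the joints to act on.
    using UnityEngine;

    public static class GraphLayout
    {
        public static void ConnectNodes(GameObject nodeA, GameObject nodeB)
        {
            var joint = nodeA.AddComponent<SpringJoint>();
            joint.connectedBody = nodeB.GetComponent<Rigidbody>();
            joint.spring = 5f;      // pull connected nodes together
            joint.damper = 1f;      // keep the layout from oscillating forever
            joint.minDistance = 2f; // hypothetical rest spacing between nodes
        }
    }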

Some features I will have to implement:
  1. Camera control to center on nodes and activations
  2. Visualization system to analyze activations (will probably use colored lights)
  3. Graph editing system (add/remove/edit nodes and edges)

I got AND gates working. You can see the results at the bottom: makeCoffee is dependent on both having grinds and water, and so the sequence of actions incorporates both.

Also, I solved my problem of specifying constraints. Instead of trying to saturate the activation of the specified constraint (which doesn't work when one method of completing the task is much easier than the other), I use a message-passing system. If the input sentence specifies a constraint, such as "make coffee with water", I'll add a constraint activation on the water node. This is a short range activation, but it's enough that the action "Get Water" now has a constraint activation. Any root activations (which determine the path the agent will take) passing through a constraint activation will pick up a constraint message.

Activations carrying more satisfied constraints take priority over activations with fewer. This means that activations that pass through the constraint will be chosen for the final sequence of actions. This system could easily be extended to allow for priorities (I've noticed agent planning systems seem to like priorities).

The great thing about this method is it can be reversed to indicate impossible or failed procedures. If an agent gets a failure state from a function, that node will be marked with a Failure message, and any activations passing through that node will receive a lower priority than those with fewer failures. Recomputing the possible paths will provide an alternate low cost path for the agent to take.
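
Put another way, choosing among candidate paths becomes a sort: more satisfied constraints first, then fewer failures, then raw activation strength. A rough sketch (field names are illustrative):

    // Sketch of ranking candidate root activations by the messages they've picked up.
    using System.Collections.Generic;
    using System.Linq;

    public class PathActivation
    {
        public float Strength;            // spreading activation value
        public int ConstraintsSatisfied;  // constraint messages picked up along the path
        public int Failures;              // failure messages picked up along the path
    }

    public static class PathRanking
    {
        public static PathActivation Best(IEnumerable<PathActivation> candidates)
        {
            return candidates
                .OrderByDescending(a => a.ConstraintsSatisfied) // honor "with water"-style constraints
                .ThenBy(a => a.Failures)                        // avoid paths marked as failed
                .ThenByDescending(a => a.Strength)              // otherwise, strongest activation wins
                .FirstOrDefault();
        }
    }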

Once I get the Unity editor working, I hope to start making my system more state-based rather than procedure-based. This means that instead of having actions as preconditions, there will more often be states that are preconditions, and actions that trigger those states.

Thursday, February 10, 2011

Basic Algorithm finished

I did some programming this weekend and my basic algorithm is working, with a few bugs. The algorithm is presented in the scan of my notes, except I don't have any AND gates yet (they require some special programming which theoretically works, but hasn't been tested).

I can pass the input "make coffee" and it finds a path from the root to the goal node (which was activated by the command). I have the actual structure lying around here somewhere; I'll scan it tomorrow.

The problem I am currently facing is increasing the weight of context from input sentences. Theoretically, if someone says "Get some water from the faucet and make coffee", the agent should take the hint that the user wants the agent to actually make coffee and not just buy it, even though buying the coffee would be easier. It's a delicate balancing act - how do I differentiate priming (simply mentioning something) from a command that implies a particular path of action? I'm still working on it. Perhaps it will just be a matter of adjusting some parameters, or maybe I need some "must fulfill" nodes that are required to be on a final procedure path.

I need to make some adjustments, but at least I have something to work off of.

Another thing I was thinking about was spatial representations. At first I was planning on off-loading pathfinding to the game engine - a function would return a curve providing a path through the environment. But then I realized I already have a shortest-path algorithm doing the work in my cognitive model, so why not use that? It would also allow me to do things like remember common paths and integrate obstacles and location descriptors such as room numbers, cardinal directions, etc. Once I have the planning aspect finished, I suspect that will be my next goal.

I'm also wondering if I should start using Unity as a visualization for my network. I can do some debugging simply with console output, but it will become more difficult as the network grows. I'm going to see how easy it will be to implement a basic visualization GUI. It'll help me get used to using Unity too.

Thursday, February 3, 2011

Test Experiment

Here you can see some of my notes. I'm envisioning a text environment to test my algorithm and some of the natural language features without going into Unity yet. The goal will be to obtain coffee in some manner.

The top is a simple diagram, the bottom is more representative of the final graph. There are some mistakes that I would fix (sometimes I'm missing an AND gate), but just wanted to put this up tonight.

As you can see there are many ways to obtain coffee, and the desired path will depend on many factors in the environment - is the store closed today? How much do grounds cost? Do I want to grind the beans myself?

Diagram Key: I - instance. These are physical instances of an object in the environment. In this case, I'm still debating whether I want to have the instance node be a precondition for actions, or whether I should abstract it more.
A - action. On an edge, it means that the action can be performed on the concept the edge comes from. As a node, it is an action node.
P - precondition.
Parameter - think of parameters like templates in C++ or generics in Java. They are a way for the activation to carry information about the current path and make sure that a given action still fulfills the preconditions with an instance. For example, we want to say that to buy an object, the store must sell that same object. All instances must have an IS-A relationship to the parameter type to fill that parameter.
S - state. These represent possible states of a concept. For example, a store can be open or closed. You can enforce that only one state is active in a particular group by specifying strong inhibitory edges between states that can never be active at the same time (like open and closed).

While this may look like a lot for a simple case, since each concept is part of a hierarchy, it should extend reasonably well (fingers crossed). Also, the graph will soon be populated mostly from natural language input and experience rather than by hand-coding.

I promise I'll use pencil next time.

Saturday, January 29, 2011

Fleshing Out My Algorithm

Starting to write out some code helped ground my concepts a bit. The following is a more detailed description of the basic planning algorithm. It's starting to deviate from Spreading Activation because of the gates and because activation passes information as it goes. With these plans in place I'm going to go back to coding and make changes as I go.

Directed Graph – contains nodes and directed edges

Node – contains activations. Types of nodes are as follows:

  • Concept – a Node that contains a concept, such as “color” or “coffee”
  • Gate – a Node that takes in one or more activations and outputs activation
  • Action – a Node that contains an action to be performed, as well as a cost function for that action
    o Actions can be designated as goals
  • Instance – a Node that is a clone of a concept with attributes able to be filled in
  • Parameter slot – a Node to be filled in with the appropriate parameter from an activation
  • Root – a generic Node that serves as the parent for all Instances in the current environment (working memory)

Edge – contains a weight to constrain search through spreading activation decay and a distance to localize inference about concepts

  • Inhibitor/Activator – an Edge where activation directly affects a Node’s activation or cost function. Under this algorithm, inhibition/activation is only supported if at least one of the connected nodes is a conceptual node, since they would be activated or inhibited by a reasoning process or verbal input. For example, an inhibitor from an action node to other action nodes would make those possible actions less likely simply by considering the possibility of the inhibiting action, which is not desired. To encode behavior that makes other actions more difficult, we would have to store all possible paths and compare combinations of them to find the shortest path.
    o Example: Assume a café is closed on Sundays (I never said it was a successful café). On Sundays, the current state of the environment would activate “Sunday”, which would activate the “Closed” state node of the café. “Closed” would inhibit the action “Buy coffee” that would usually be activated as a possible action under the goal of “Get coffee”.
  • Goal generator – activation along this edge makes the target action node a goal, and initiates the spreading algorithm
  • Conceptual Edges
    o Is-A – an Edge that represents an IS-A relationship between two concepts
    o Attribute – an Edge that represents that one concept is an attribute of another
    o Instance-Of – an Edge that represents that one concept is an instance of a more abstract concept. Provides the link between long term memory (conceptual) and working memory (instance based)
  • Parameter – the target node is a parameter

Parameters – Parameters are passed through spreading activation to fulfill action or concept nodes that accept them. Parameters can be instances or concepts. Preconditions can be dependent on a particular parameter, so that a node is only activated by an activation carrying the requisite parameters. Parameters are passed from one node to the next until they reach the end of the activation.

Activation – consists of an activation value and a source. Activation spreads along the graph to activate connected nodes, with a falloff dictated by edge weights. Different activations have different impedances for various edge types. High impedance means the activation will not spread easily over that edge, whereas low impedance means it will easily spread. Activation impedance is also affected by the existing activation of a Node – activated nodes will pass on activations more readily.

  • Current State Activation – the activation with a source at the root state
    o Travels along forward edges
    o Low impedance edges: Preconditions, Instances
    o High impedance edges: Attributes
  • Goal Activation – the activation with a source at a goal state
    o Travels along backward edges
    o Low impedance edges: Preconditions, Instances
    o High impedance edges: Attributes
  • Knowledge Query Activation (asking about concepts)
    o Travels bi-directionally
    o Low impedance edges: Is-A, Attributes
    o High impedance edges: Preconditions, Instances

Bi-directional information spreading algorithm:

A goal state is given as a command or a desired action, and a root state represents the current state of the environment. The algorithm will find a low cost sequence of actions that will satisfy the goal condition without an exhaustive search of all possibilities.

  1. Goal activation starts at the given goal nodes. This spreads across incident (backward) precondition edges and bi-directionally across conceptual edges and defines a goal set – the set of goals and preconditions that need to be completed by the agent.
  2. Current state activation starts at the root node. This spreads across forward precondition and inhibitor/activator edges and bi-directionally across conceptual edges.
  3. The most strongly activated precondition nodes form the sequence of actions to be taken by the agent. A backward greedy search from the goal state forms a graph consisting of the nodes with the greatest activation (AND gates will traverse all incident paths). The actions are then performed according to a depth-first traversal starting from the root node; AND gates stop the traversal until all paths have reached that gate.
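
A stripped-down version of step 3, assuming activations have already been spread (the class and field names are illustrative, not my real code):

    // Sketch of step 3: from the goal, greedily follow the most activated precondition
    // back toward the root; AND gates recurse into every incident branch.
    using System.Collections.Generic;
    using System.Linq;

    public class PlanNode
    {
        public string Name;
        public float Activation;               // set by the spreading phase
        public bool IsAndGate;
        public List<PlanNode> Preconditions = new List<PlanNode>();
    }

    public static class Planner
    {
        // Collect the action sequence by walking backward from the goal.
        public static void ExtractPlan(PlanNode node, List<PlanNode> plan)
        {
            if (node.Preconditions.Count == 0) { plan.Add(node); return; }

            if (node.IsAndGate)
            {
                // AND: every incident branch must be completed before this gate.
                foreach (var pre in node.Preconditions)
                    ExtractPlan(pre, plan);
            }
            else
            {
                // OR / ordinary node: take only the most activated precondition.
                var best = node.Preconditions.OrderByDescending(p => p.Activation).First();
                ExtractPlan(best, plan);
            }
            plan.Add(node); // preconditions end up before the actions that need them
        }
    }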

Additional thoughts:

Looping – looping is important for actions that need to be repeated a number of times, as well as standing orders, such as, “Tell me if you see a red car” (thanks for reminding me Norm :) ). This can be controlled using a Loop Gate, with an input for activation, one output to direct the activation back to the beginning of the “block” (in this case an action node), and one output that activates when the loop is finished. Another possibility is to reroute activation back to a Goal Generator node. To keep repetitions reasonable, I would probably need a “refractory period” implemented using a delay between activations.

Language - I haven't mentioned much about language so far, but it's always in the back of my mind. To start, I'm going to use preplanned phrases to build the network and have the program ask about anything needed to complete a task. I hope later to use activation of concepts combined with semantic frames to improve the language aspect.

Thursday, January 27, 2011

Clarification of my Spreading Activation Algorithm

I haven't found much in the way of detailed spreading activation algorithms, aside from the "Task Planning Under Uncertainty Using a Spreading Activation Network" paper. I've also found that many details of the implementation are dependent on the application. In my model, I've realized the need for a few things that complicate the simple spreading mechanism.

First of all, the spreading activation model I'm using is bi-directional: one activation spreads from the source, which represents the current state of the environment and all possible actions from that state, and another from the goal, which is the command that should be fulfilled. The source activation travels along the outgoing edges, and the goal activation travels backward across incident edges. When they meet, I'll have a possible sequence of actions to go from the current state to the goal state. However, the first meeting point of the two activations would only be optimal if all the edges were equally weighted (they aren't in this case). That's complication #1, so in my case the activations will continue spreading until they have all decayed past a certain threshold.

Edge decays range from 1.0, representing an effortless connection or action (like breathing), to 0, representing an impossible connection or action (like pulling my cat off a carpet). An activation is some empirically determined value, and represents the initiative the agent has in making the connection or achieving the goal. It can be thought of as a kind of radius that spreads outward from the goal and current state nodes.
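
In other words, the activation that reaches a node is the source activation multiplied by the decay of each edge along the path, and spreading stops once it falls below a cutoff. A minimal sketch (the threshold value and structure names are made up):

    // Sketch: activation reaching a node = source activation times the decay of each
    // edge along the path; spreading stops below a cutoff threshold.
    using System.Collections.Generic;

    public static class Spreading
    {
        public const float Threshold = 0.05f; // hypothetical cutoff

        // edges: node -> list of (neighbor, decay in [0, 1])
        public static void Spread(string node, float activation,
            Dictionary<string, List<KeyValuePair<string, float>>> edges,
            Dictionary<string, float> reached)
        {
            if (activation < Threshold) return;             // decayed away
            float existing;
            if (reached.TryGetValue(node, out existing) && existing >= activation) return;
            reached[node] = activation;                     // record the strongest activation seen

            List<KeyValuePair<string, float>> outgoing;
            if (!edges.TryGetValue(node, out outgoing)) return;
            foreach (var edge in outgoing)
                Spread(edge.Key, activation * edge.Value, edges, reached);
        }
    }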

The backwards propagation establishes the goal set - all the preconditions that are part of possible solutions to the problem. This can either simply be a flat structure of all possible preconditions, or it could be organized as a dependency graph to indicate possible sequential orderings of actions. The forward propagation then fills in the weights of the preconditions to find the best sequence of steps. I'm still working on the design of this part of the implementation, and it will be the main focus of this week's work.

The next complication is differentiating between AND and OR when it comes to preconditions and other relationships. If we assign a cost to a particular node in the graph corresponding to an action, say, "Make coffee", the cost will be some combination of all the requirements. So we need to get water, get coffee grounds (or get a grinder and beans for purists), and turn the coffee maker on. If any one of these tasks is particularly difficult for some reason, then the entire task will be difficult - this is an AND grouping of preconditions. Furthermore, if there are many tasks to be done, the difficulty increases.

To represent this, I'm going to say that the activation of an AND gate is the product of the incident activations. This creates the effect that many effortless actions are still effortless, but even a small number of difficult actions makes the overall action difficult.

For OR, it's the opposite case. If "Get coffee" can be satisfied by either "Making coffee" or "Buying coffee", then it doesn't matter how difficult the harder task is - only how easy the easier one is. We'll take the path of least resistance, so the activation of an OR gate is the highest of the incident activations.
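
So the two gate functions come down to a product and a max. In code (a trivial sketch):

    // Sketch of the two gate functions: AND multiplies incident activations,
    // OR takes the maximum (path of least resistance).
    using System.Linq;

    public static class Gates
    {
        public static float And(params float[] incident)
        {
            // Many easy (~1.0) inputs stay easy; one hard (~0) input makes the whole thing hard.
            return incident.Aggregate(1f, (acc, a) => acc * a);
        }

        public static float Or(params float[] incident)
        {
            // Only the easiest alternative matters.
            return incident.Max();
        }
    }

    // e.g. And(0.9f, 0.9f, 0.1f) ~ 0.08 (hard), while Or(0.9f, 0.1f) = 0.9 (easy)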

These two gates are essential, but other gates can be added with their own properties. If you've taken any intro neuroscience class, you'll notice that these gates simulate the connections between neurons. This analogy should prove useful for other features as the project progresses.

I've decided to switch to C# from C++ after talking with Ben. This will make programming faster, and it also means I can easily use Unity for my virtual environment.

Tonight I finished my graph data structure and have some debugging info displayed. It's only rudimentary, and doesn't incorporate any of the features I've mentioned here yet.

Friday, January 21, 2011

Project Proposal

Natural language is the most common form of communication for us, but because of its complexity, it is rarely used for human-computer interfaces. However, there are many advantages to natural language interfaces - unlike code, everyone can already communicate with it, and it improves the user experience by making it seem as if a person is interacting with the user rather than a computer.
Furthermore, there are advantages to using language not only to communicate, but also to store knowledge. Storing knowledge in natural language also makes it straightforward to produce sentences in response to the user.
My project is an attempt to combine these two concepts in a virtually embodied agent. A virtually embodied agent is a program that can manipulate and respond to changes in a virtual environment, such as a game or simulation. A virtually embodied conversational agent is an agent that can respond to natural language input from a user.

Abstract:

Intelligent Virtual Agent cognitive models often use a series of abstractions to split different tasks into manageable and solvable problems. For example, language is translated from a sentence to a parse tree, and then to a semantic representation. The semantic representation is then used with a knowledge base to transform the semantics into a temporal logic, and the logic is then transformed into statements which can be evaluated. However, such a pipeline has limitations: each of the constituent parts could aid in evaluating the others for pronoun reference, disambiguation, prepositions, and pragmatics, yet they are kept separate in a pipeline model.
I propose a cognitive model that is a cross between a semantic spreading activation network and a finite state machine, and that is embodied in a virtual world by means of callback functions expressed as nodes in the network. Each node in this network represents a concept that is mapped to other nodes with a relationship. This system allows the conceptual relationships found in a semantic network to coexist with, and fill in the information needed for, the functional callback nodes associated with particular actions. Gates are used to control shortest-path and spreading activation calculations when nodes are queried. Learning can take place through the addition of connections, either from language input or through automatic learning (such as Long-Term Potentiation - adding connections between nodes that activate together). The FSM aspect is used to model sequences of actions while maintaining conceptual information at each step of the process.

FinalProjectProposal