Friday, February 25, 2011

Alpha Review

I'm late with this blog update, what with the alpha review and all. But now I can post the video link, if anyone wants to look at it again.


I am not taking a break (that would be dangerous :) ), but rather continuing on with my Unity integration. I'd like to get back to working on the underlying system ASAP, but having the interface done will make working on it easier. Right now I'm smoothing out the edges of my interface, making sure it's both intuitive enough for anybody to use and robust enough that nobody can break it (without effort). Soon I'll have Activations displaying, if not tonight, then tomorrow.

Thursday, February 17, 2011

Integration into Unity

So I feel like I've made some pretty good progress this week. I have ported all of my code into Unity, which wasn't a terrible ordeal. I just had to get rid of my constructors for all of my Nodes, make my Nodes into GameObjects, and replace my properties with public variables.
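
To make that concrete, the change looks roughly like this. This is a sketch with made-up names, not my actual code:

    // Before: a plain C# class with a constructor and properties.
    //
    //   public class Node {
    //       public Node(string label) { Label = label; }
    //       public string Label { get; set; }
    //       public float Activation { get; set; }
    //   }
    //
    // After: a component that lives on a GameObject. Unity creates
    // components itself, so the constructor goes away, and public
    // fields (unlike properties) are serialized and show up in the
    // Inspector.
    using System.Collections.Generic;
    using UnityEngine;

    public class Node : MonoBehaviour
    {
        public string label;
        public float activation;
        public List<Node> edges = new List<Node>();
    }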

At first I thought I needed to make a runtime GUI to manipulate my network in-game. However, Unity has a very nifty GUI editor for actually creating your own windows and widgets. I'm working on one that will allow a user to create the network nodes without having to even run the game.
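
If you haven't seen Unity's editor scripting, a window like that takes surprisingly little code. A minimal sketch, reusing the Node component from before (class and menu names are placeholders):

    using UnityEditor;
    using UnityEngine;

    public class NetworkEditorWindow : EditorWindow
    {
        Node selected;  // the Node currently being inspected

        // Adds a "Network Editor" entry to Unity's Window menu.
        [MenuItem("Window/Network Editor")]
        static void Open()
        {
            GetWindow(typeof(NetworkEditorWindow), false, "Network Editor");
        }

        void OnGUI()
        {
            // Drag a Node from the scene into this field to select it.
            selected = (Node)EditorGUILayout.ObjectField(
                "Node", selected, typeof(Node), true);
            if (selected == null) return;

            selected.label = EditorGUILayout.TextField("Label", selected.label);
            selected.activation = EditorGUILayout.FloatField(
                "Activation", selected.activation);
        }
    }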

In the bottom left corner you can see the Network Editor tab (that's mine). It shows the properties of a selected Node and lets you add edges to other Nodes by dragging them into the Object field and specifying the edge type. It was really simple to do, and I couldn't imagine it being any easier to set up. Plus, I will be able to make a really cool-looking interface. I was thinking something like this, from Ghost in the Shell:
Someday :)

For my alpha review, I plan on having done:
1. Saving and loading scenes of networks (theoretically already done; I just need to check a few things).
2. Visualization of Activations.
3. Setting up and triggering activation groups.

This will put me a little behind on my schedule, but I think the time spent working with Unity will shorten the time needed for test scenarios. Plus, now that I know how simple it is to use Unity, I think my estimates were a little generous anyway.

Monday, February 14, 2011

This is why I need to make a GUI in Unity

Here you can see all the debug information for my system. The node labeled '0' is the Root node. (I forgot to add the edge from grounds to "get grounds", sorry).

As you can see, this is difficult to parse, even for me. When this system becomes larger, a text-based interface will not be sufficient for debugging. A visual interface will not only make it easier to create the knowledge base, but could also allow for debugging by creating activation traces or by passing in a sentence and watching the activation. This week I'm going to start working my code into Unity, and continue development from there. I hope to have the interface done by the Alpha review.

My plan for the interface is to use Unity spring joints to connect spheres representing the nodes. This should provide a tidy representation for navigating the graph in 3D space. Eventually I will implement a 2D projection for the 3-dimensionally challenged.
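
The mechanics should be simple; something like this (untested, and the spring values will need tuning):

    using UnityEngine;

    public static class GraphView
    {
        // One sphere per node, floating free of gravity.
        public static GameObject MakeNodeSphere(Vector3 position)
        {
            GameObject sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
            sphere.transform.position = position;
            Rigidbody body = sphere.AddComponent<Rigidbody>();
            body.useGravity = false;
            return sphere;
        }

        // One spring per edge, pulling connected nodes together.
        public static void Connect(GameObject a, GameObject b)
        {
            SpringJoint spring = a.AddComponent<SpringJoint>();
            spring.connectedBody = b.GetComponent<Rigidbody>();
            spring.spring = 10f;  // stiffness
            spring.damper = 1f;   // keeps the graph from oscillating forever
        }
    }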

Some features I will have to implement:
  1. Camera control to center on nodes and activations
  2. Visualization system to analyze activations (will probably use colored lights; see the sketch after this list)
  3. Graph editing system (add/remove/edit nodes and edges)
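
For feature 2, the colored lights could be as simple as attaching a Light to each node sphere and tinting it by activation level. A sketch, with hypothetical fields:

    using UnityEngine;

    public class ActivationGlow : MonoBehaviour
    {
        public Light glow;        // a Light attached to the node sphere
        public float activation;  // 0 = dormant, 1 = fully active

        void Update()
        {
            // Cool blue when idle, hot red at full activation.
            glow.color = Color.Lerp(Color.blue, Color.red,
                                    Mathf.Clamp01(activation));
            glow.intensity = 0.5f + activation;
        }
    }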

I got AND gates working. You can see the results at the bottom: makeCoffee depends on having both grounds and water, so the sequence of actions incorporates both.

Also, I solved my problem of specifying constraints. Instead of trying to saturate the activation of the specified constraint (which doesn't work when one method of completing the task is much easier than the other), I use a message-passing system. If the input sentence specifies a constraint, such as "make coffee with water", I'll add a constraint activation on the water node. This is a short-range activation, but it's enough that the action "Get Water" now has a constraint activation. Any root activations (which determine the path the agent will take) passing through a constraint activation will pick up a constraint message.

Activations carrying constraint messages take priority over activations with fewer satisfied constraints. This means that activations that pass through the constraint will be chosen for the final sequence of actions. This system could easily be extended to allow for priorities (I've noticed that agent planning systems seem to like priorities).

The great thing about this method is that it can be reversed to indicate impossible or failed procedures. If an agent gets a failure state from a function, that node will be marked with a Failure message, and any activations passing through that node will receive a lower priority than those with fewer failures. Recomputing the possible paths will then provide an alternate, low-cost path for the agent to take.
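
In rough C#, with every type and name invented for illustration, the bookkeeping might look like:

    using System.Collections.Generic;

    public class PathActivation
    {
        public List<string> constraintsSatisfied = new List<string>();
        public List<string> failuresSeen = new List<string>();

        // Called as the activation spreads through each node on its path.
        public void PassThrough(KnowledgeNode node)
        {
            if (node.hasConstraintActivation)
                constraintsSatisfied.Add(node.label);  // pick up the message
            if (node.hasFailureMark)
                failuresSeen.Add(node.label);
        }

        // More satisfied constraints win; failures count against a path.
        public int Priority()
        {
            return constraintsSatisfied.Count - failuresSeen.Count;
        }
    }

    public class KnowledgeNode
    {
        public string label;
        public bool hasConstraintActivation;  // set by "make coffee with water"
        public bool hasFailureMark;           // set when an action reports failure
    }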

Once I get the Unity editor working, I hope to start making my system more state-based rather than procedure-based. This means that instead of having actions as preconditions, there will more often be states that are preconditions, and actions that trigger those states.
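
A sketch of the difference, again with invented names:

    // Procedure-based: "make coffee" has the *action* "get water" as a
    // precondition. State-based: it has the *state* "have water" as a
    // precondition, and "get water" is just one action that triggers it.

    public class StateNode
    {
        public string name;  // e.g. "HaveWater"
        public bool active;
    }

    public class ActionNode
    {
        public string name;                // e.g. "GetWater"
        public StateNode[] preconditions;  // states that must already hold
        public StateNode[] effects;        // states this action triggers

        public bool Runnable()
        {
            foreach (StateNode s in preconditions)
                if (!s.active) return false;
            return true;
        }

        public void Execute()
        {
            foreach (StateNode s in effects)
                s.active = true;
        }
    }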

Thursday, February 10, 2011

Basic Algorithm finished

I did some programming this weekend and my basic algorithm is working, with a few bugs. The algorithm is presented in the scan of my notes, except I don't have any AND gates yet (they require some special programming which theoretically works, but hasn't been tested).

I can pass in the input "make coffee" and it finds a path from the root to the goal node (which was activated by the command). I have the actual structure lying around here somewhere; I'll scan it tomorrow.

The problem I am currently facing is increasing the weight of context from input sentences. Theoretically, if someone says "Get some water from the faucet and make coffee", the agent should take the hint that the user wants it to actually make the coffee and not just buy it, even though buying would be easier. It's a delicate balancing act: how do I differentiate priming (simply mentioning something) from a command that implies a particular path of action? I'm still working on it. Perhaps it will just be a matter of adjusting some parameters, or maybe I need "must fulfill" nodes that are required to be on a final procedure path.

I need to make some adjustments, but at least I have something to work off of.

Another thing I was thinking about was spatial representations. At first I was planning on off-loading pathfinding to the game engine: a function would return a curve providing a path through the environment. But then I realized I already have a shortest-path algorithm doing the work in my cognitive model, so why not use that? It would also let me do things like remember common paths and integrate obstacles and location descriptors such as room numbers, cardinal directions, etc. Once I have the planning aspect finished, I suspect that will be my next goal.
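
I haven't written up the activation search itself, so here's a plain Dijkstra standing in for the idea: rooms become ordinary nodes, hallway connections become weighted edges, and the same kind of search finds routes:

    using System.Collections.Generic;

    public static class Pathfinder
    {
        // graph[node] = list of (neighbor, cost) pairs, e.g. rooms and
        // the hallway distances between them.
        public static List<string> ShortestPath(
            Dictionary<string, List<KeyValuePair<string, float>>> graph,
            string start, string goal)
        {
            var dist = new Dictionary<string, float> { { start, 0f } };
            var prev = new Dictionary<string, string>();
            var frontier = new List<string> { start };

            while (frontier.Count > 0)
            {
                // Take the cheapest frontier node (fine for small graphs).
                frontier.Sort((a, b) => dist[a].CompareTo(dist[b]));
                string u = frontier[0];
                frontier.RemoveAt(0);
                if (u == goal) break;

                foreach (var edge in graph[u])
                {
                    float alt = dist[u] + edge.Value;
                    if (!dist.ContainsKey(edge.Key) || alt < dist[edge.Key])
                    {
                        dist[edge.Key] = alt;
                        prev[edge.Key] = u;
                        if (!frontier.Contains(edge.Key))
                            frontier.Add(edge.Key);
                    }
                }
            }

            // Walk back from the goal to recover the route
            // (assumes the goal is reachable).
            var path = new List<string>();
            string at = goal;
            while (at != null)
            {
                path.Insert(0, at);
                prev.TryGetValue(at, out at);
            }
            return path;
        }
    }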

I'm also wondering if I should start using Unity as a visualization for my network. I can do some debugging simply with console output, but it will become more difficult as the network grows. I'm going to see how easy it will be to implement a basic visualization GUI. It'll help me get used to using Unity too.

Thursday, February 3, 2011

Test Experiment

Here you can see some of my notes. I'm envisioning a text environment to test my algorithm and some of the natural language features without going into Unity yet. The goal will be to obtain coffee in some manner.

The top is a simple diagram; the bottom is more representative of the final graph. There are some mistakes I still need to fix (in a few places I'm missing an AND gate), but I just wanted to put this up tonight.

As you can see there are many ways to obtain coffee, and the desired path will depend on many factors in the environment - is the store closed today? How much do grounds cost? Do I want to grind the beans myself?

Diagram Key:
I - instance. These are physical instances of an object in the environment. I'm still debating whether I want the instance node to be a precondition for actions, or whether I should abstract it more.
A - action. On an edge, it means that action can be performed for the concept the edge comes from. As a node, it is simply an action.
P - precondition.
Parameter - think of parameters like templates in C++ or generics in Java. They are a way for the activation to carry information about the current path and make sure a given action still fulfills its preconditions with a particular instance. For example, we want to say that to buy an object, the store must sell that same object. All instances must have an IS-A relationship to the parameter type to fill that parameter (see the sketch after this key).
S - state. These represent possible states of a concept. For example, a store can be open or closed. You can enforce that only one state in a group is active by specifying strong inhibitory edges between states that can never be active at the same time (like open and closed).
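
Just for illustration, here's the parameter idea expressed with actual C# generics (the real system carries parameters on activations, not in the type system):

    public class Item { }
    public class Coffee : Item { }  // Coffee IS-A Item

    // A store is parameterized by what it sells.
    public class Store<T> where T : Item
    {
        public T Sell() { return default(T); }  // placeholder body
    }

    public static class Actions
    {
        // To buy a T, you need a store that sells that same T;
        // the IS-A requirement is the "where T : Item" clause.
        public static T Buy<T>(Store<T> store) where T : Item
        {
            return store.Sell();
        }
    }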

While this may look like a lot for a simple case, since each concept is part of a hierarchy, it should extend reasonably well (fingers crossed). Also, the graph will soon be populated mostly from natural language input and experience, rather than programming it by hand.

I promise I'll use pencil next time.