Thursday, March 31, 2011

Actions

My visit to CMU was great, but I've decided on the University of Rochester. Unfortunately I didn't have any time over the weekend to work on this, but I still managed to make some great progress.

First, my interface is considerably more robust (although some things will probably break as I add more features). You can save and load scenes without worrying about losing information. This was tricky because Unity can't serialize static variables, dictionaries, or hash sets: they won't be saved, and certain things break when you go from Edit mode to Play mode. I had to write a few workarounds to fix this. You can also delete nodes without messing anything up, though I haven't added a feature to delete edges yet (it won't be hard).
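For anyone hitting the same wall, here's a minimal sketch of the kind of workaround I mean (NodeGraph and its fields are placeholder names, not my actual classes): Unity will happily serialize parallel lists, so you can flatten a dictionary into lists and rebuild it at runtime.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: Unity won't serialize a Dictionary, but it will serialize
// parallel Lists, so flatten the map and rebuild it on load.
public class NodeGraph : MonoBehaviour
{
    // These survive save/load and the Edit -> Play transition.
    public List<string> edgeKeys = new List<string>();
    public List<GameObject> edgeTargets = new List<GameObject>();

    // Rebuilt at runtime; never serialized.
    private Dictionary<string, GameObject> edges;

    void Awake()
    {
        edges = new Dictionary<string, GameObject>();
        for (int i = 0; i < edgeKeys.Count; i++)
            edges[edgeKeys[i]] = edgeTargets[i];
    }

    public void AddEdge(string key, GameObject target)
    {
        // Write to both representations so the serialized lists stay in sync.
        edgeKeys.Add(key);
        edgeTargets.Add(target);
        if (edges != null)
            edges[key] = target;
    }
}
```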

The major breakthrough this week was getting actions working. You can assign a Rigidbody object to the Root node's "Agent" parameter, and any actions performed in a sequence will be called on that Rigidbody. In theory this system could support multiple agents responding at the same time, but that would require multiple Root nodes, and I haven't thought of a good way to do that yet.
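To give a feel for what "calling an action on the agent" means, here's a stripped-down sketch (these classes are simplified stand-ins for the real node code, not the exact implementation):

```csharp
using UnityEngine;

// Simplified stand-in for the Root node: actions in a chosen
// sequence are executed against whatever Rigidbody is assigned.
public class RootNode : MonoBehaviour
{
    public Rigidbody agent;  // the "Agent" parameter, set in the Inspector

    public void Perform(ActionNode action)
    {
        action.Execute(agent);
    }
}

public abstract class ActionNode : MonoBehaviour
{
    public abstract void Execute(Rigidbody agent);
}

// "Jump" is just one concrete action.
public class JumpAction : ActionNode
{
    public float jumpForce = 300f;

    public override void Execute(Rigidbody agent)
    {
        // Impulse straight up; the physics engine does the rest.
        agent.AddForce(Vector3.up * jumpForce, ForceMode.Impulse);
    }
}
```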

The picture shows a block that has just been instructed to "Jump". It's a simple action, but there's a whole lot that went into it. I'll make some more complicated scenarios with different paths for the beta review.

I'm technically a little behind schedule, but since my environments will now be much easier to make, I should still be doing ok.

Wednesday, March 23, 2011

Progress Update and Beta Review

First, my latest update, in picture form. I have intersectors working, which are integral to the operation of the algorithm. In this picture, only the spherical nodes were created by me: the Root node, the yellow action node GetCoffee, and the red concept node Coffee. The rest are generated from the text input "Get coffee". The cubes are Activators - the Root Activator, the Goal Activator, and two Search Activators, one for "Get" and one for "Coffee". The Search Activations are red (the one for GetCoffee is hidden by the yellow Goal Activation). Finally, there is the Query Intersector "Get Coffee", which finds the intersection of the "get" and "coffee" activations, and the Goal Intersector, which finds the path between the Root node and the Goal node.
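As a rough sketch of what a Query Intersector computes (illustrative names, not my exact code): it takes the sets of nodes reached by each search activation and keeps the overlap.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch: given the sets of nodes reached by two
// activation spreads (e.g. "get" and "coffee"), report the nodes
// reached by both.
public class QueryIntersector : MonoBehaviour
{
    public HashSet<GameObject> FindIntersection(
        HashSet<GameObject> searchA, HashSet<GameObject> searchB)
    {
        var result = new HashSet<GameObject>(searchA);
        result.IntersectWith(searchB);  // nodes activated by both searches
        return result;
    }
}
```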

The "Step" button at the bottom should go through the procedure to get to the goal node, but there's a small bug in it right now. (I would fix it, but I'm leaving for Pittsburgh tomorrow afternoon and might not have time to make this post later).

Now the "Self-Evaluation". I'm really happy to have the Unity interface up to my original back-end progress now. It seemed a little slow, but I think this interface will help me find a lot of bugs more quickly. It has already helped me see some errors. When dealing with graph networks like this, it can be very difficult to catch small things like missing edges. And while it has taken away some time that I would have liked to work on the underlying system, I think it is both an innovative and useful tool for this application - I don't know if I've seen any visualization of cognitive models before, let alone a nice looking one like this.

I'll have to scale back some of my previous goals, but for the Beta review, I plan to have a small test environment connected to the network. A simple case would be a cube that could do something like jump, given a user's input saying "jump". If I have my Beta review on the 1st, then getting this working and making the interface more robust would be my two main goals. (Since I'll be at Carnegie Mellon this weekend I don't foresee getting much work done). If it's on the 4th, I'd like to have a more complicated environment (maybe multiple objects).

By the Final Poster session, I plan to have an environment where an agent has multiple actions available to it, and can do things like "pick up the blue box" or "walk to the green sphere". Being able to take in information from the user and store relationships would be a nice added touch. Beyond that, I'll probably just add any cool features I can fit in. In general, however, I expect the major functionality to be done by the presentation date.

Also, Joe wanted me to submit a paper to IVA 2011, and that deadline is April 26th. I'm not sure how compelling my system can be by that point, but we'll see how it goes. I may focus more on the benefits of a visualization system for a language interface (since the theme of IVA 2011 is language).

Thursday, March 17, 2011

Activations in Unity

I have some progress to show tonight - I've got Activations working in Unity! I had to restructure my code a surprising amount to get this to work. In general, I had to adopt a more component-based approach. For example, instead of having specific Activation groups, I switched to an Activator object that takes a particular Activation (a prefab) as a parameter. The upshot is that my structure is becoming more modular and visualization-friendly.
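In sketch form (simplified - the real Activation carries more state than this), the component-based version looks something like:

```csharp
using UnityEngine;

public class Node : MonoBehaviour { }

// An Activation is itself a prefab-able component.
public class Activation : MonoBehaviour
{
    public Node source;
}

// The Activator spawns whatever Activation prefab it was configured with,
// instead of there being a dedicated class per Activation group.
public class Activator : MonoBehaviour
{
    public Activation activationPrefab;  // assigned in the Inspector
    public Node[] targets;               // nodes this Activator is connected to

    public void Activate()
    {
        foreach (Node node in targets)
        {
            // Spawn a fresh Activation instance on each connected node.
            Activation a = (Activation)Instantiate(
                activationPrefab, node.transform.position, Quaternion.identity);
            a.source = node;
        }
    }
}
```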

Also, I realized that you can't use the virtual and override keywords with Awake and Update - Unity uses its own method lookup to call these methods, and if you mark one as virtual or override, Unity will skip over it. Very frustrating. If you want to make your Awake or Update function semi-virtual, you can use the "new" keyword, but that version will only be called when the method is invoked through the correct compile-time type (not through a reference of a parent type, such as an element in an array of the base class). Thought this insight might be helpful to anyone else doing Unity coding.
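A tiny example of the trap - the hiding part is plain C# semantics, nothing Unity-specific:

```csharp
using UnityEngine;

public class BaseNode : MonoBehaviour
{
    public void Update()
    {
        Debug.Log("BaseNode.Update");
    }
}

public class DerivedNode : BaseNode
{
    // "new" hides the base method rather than overriding it. Unity's
    // name-based lookup finds this version on a DerivedNode component,
    // but an ordinary C# call through a BaseNode-typed reference (say,
    // an element of a BaseNode[]) still resolves to the base version -
    // there is no virtual dispatch happening here.
    public new void Update()
    {
        Debug.Log("DerivedNode.Update");
    }
}
```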

Anyway, here are the results. The cube is the Activator object, connected to two Concept nodes, and the purple Activations are shown after the Activator object was activated.

Wednesday, March 2, 2011

Not too much this week

Kind of a slow week, with midterms and all. I'm still working on Unity and haven't gotten activations done yet - getting them to display is taking some adjustments that I didn't foresee. I'm also doing a slight architecture restructuring: I'm making everything a Node. This might seem kind of strange, but the idea is that I would like the network to be able to build pieces of itself. I've always had a superficial interest in metacognition (http://en.wikipedia.org/wiki/Metacognition), and it seems to me that any architecture with plans to support it should support it from the beginning. I don't know if I'll even get to anything complicated this semester, but the changes aren't drastic and I'd prefer to get them out of the way.
I'm leaving to go to the University of Rochester tomorrow, so I won't be able to have my typical Thursday coding sprint. But at least I'll have time over break to get work done.

Friday, February 25, 2011

Alpha Review

I'm late with this blog update, what with the alpha review and all. But now I can post the video link, if anyone wants to look at it again.


I am not taking a break (that would be dangerous :) ), but rather continuing with my Unity integration. I'd like to get back to working on the underlying system ASAP, but having the interface done will make that work easier. Right now I'm smoothing out the edges of my interface, making sure it's both intuitive enough for anybody to use and robust enough for nobody to break (without effort). Soon I'll have Activations displaying - if not tonight, then tomorrow.

Thursday, February 17, 2011

Integration into Unity

So I feel like I've made some pretty good progress this week. I have ported all of my code into Unity, which wasn't a terrible ordeal. I just had to get rid of my constructors for all of my Nodes, make my Nodes into GameObjects, and replace my properties with public variables.
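The flavor of the change, in miniature (Node here is just a toy example):

```csharp
using UnityEngine;

// Before the port, a node was a plain C# class:
//
//     public class Node
//     {
//         public Node(string name) { Name = name; }
//         public string Name { get; set; }
//     }
//
// After the port, it's a MonoBehaviour: no constructors (Unity creates
// components itself), and public fields instead of properties so they
// show up in the Inspector and get serialized.
public class Node : MonoBehaviour
{
    public string nodeName;
}
```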

At first I thought I needed to make a runtime GUI to manipulate my network in-game. However, Unity has a very nifty GUI editor for creating your own windows and widgets. I'm working on one that will allow a user to create the network nodes without even having to run the game.
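For anyone curious, a custom window boils down to surprisingly little code. A bare-bones sketch (not my actual Network Editor; note that editor scripts have to live in an Editor folder):

```csharp
using UnityEditor;
using UnityEngine;

// Minimal custom editor window, in the spirit of my Network Editor.
public class NetworkEditor : EditorWindow
{
    private GameObject selectedNode;

    [MenuItem("Window/Network Editor")]
    static void ShowWindow()
    {
        EditorWindow.GetWindow(typeof(NetworkEditor));
    }

    void OnGUI()
    {
        // Drag a node GameObject into this field to work with it.
        selectedNode = (GameObject)EditorGUILayout.ObjectField(
            "Node", selectedNode, typeof(GameObject), true);

        if (selectedNode != null && GUILayout.Button("Add Edge"))
        {
            Debug.Log("Would add an edge from " + selectedNode.name);
        }
    }
}
```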

You can see in the bottom left corner the Network Editor tab (that's mine). You can see the properties of a selected Node and add edges to other Nodes by dragging them into the Object field and specifying the edge type. It was really simple to do, and I couldn't imagine it being any easier to set up. Plus, I will be able to make a really cool-looking interface. I was thinking something like this from Ghost in the Shell:
Someday :)

For my alpha review, I plan on having done:
1. Saving and loading scenes of networks (this is already done theoretically, just need to check a few things).
2. Visualization of Activations.
3. Setting up and triggering activation groups.

This will put me a little behind on my schedule, but I think the time spent working with Unity will shorten the time needed for test scenarios. Plus, now that I know how simple it is to use Unity, I think my estimates were a little generous anyway.

Monday, February 14, 2011

This is why I need to make a GUI in Unity

Here you can see all the debug information for my system. The node labeled '0' is the Root node. (I forgot to add the edge from grounds to "get grounds", sorry).

As you can see, this is difficult to parse, even for me. When this system becomes larger, a text-based interface will not be sufficient for debugging. A visual interface will not only make it easier to create the knowledge base, but could also allow for debugging by creating activation traces, or by passing in a sentence and watching the activation spread. This week I'm going to start working my code into Unity, and continue with the code from there. I hope to have the interface done by the Alpha review.

My plan for the interface is to use Unity spring joints to connect spheres representing the nodes. This should provide a tidy representation for navigating the graph in 3D space. Eventually I will implement a 2D projection for the 3-dimensionally challenged.
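Wiring two node spheres together should only take a few lines - a sketch, assuming each node sphere already has a Rigidbody:

```csharp
using UnityEngine;

public static class GraphEdges
{
    // Connect two node spheres with a spring; the physics engine then
    // pulls the whole graph toward a readable layout on its own.
    public static void Connect(GameObject nodeA, GameObject nodeB)
    {
        SpringJoint spring = nodeA.AddComponent<SpringJoint>();
        spring.connectedBody = nodeB.GetComponent<Rigidbody>();
        spring.spring = 10f;  // pull strength (tune to taste)
        spring.damper = 1f;   // damping so the graph settles instead of oscillating
    }
}
```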

Some features I will have to implement:
  1. Camera control to center on nodes and activations
  2. Visualization system to analyze activations (will probably use colored lights)
  3. Graph editing system (add/remove/edit nodes and edges)

I got AND gates working. You can see the results at the bottom: makeCoffee depends on having both grounds and water, and so the sequence of actions incorporates both.
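Conceptually, an AND gate just refuses to fire until every one of its inputs has fired. A simplified sketch (hypothetical names, not the exact implementation):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of an AND gate node: it only propagates activation
// once every one of its inputs has activated.
public class AndGate : MonoBehaviour
{
    public List<GameObject> inputs;  // e.g. grounds and water
    private HashSet<GameObject> fired = new HashSet<GameObject>();

    public bool ReceiveActivation(GameObject input)
    {
        fired.Add(input);
        // Fire only once all inputs (grounds AND water) have arrived.
        return fired.Count == inputs.Count;
    }
}
```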

Also, I solved my problem of specifying constraints. Instead of trying to saturate the activation of the specified constraint (which doesn't work when one method of completing the task is much easier than the other), I use a message-passing system. If the input sentence specifies a constraint, such as "make coffee with water", I add a constraint activation on the water node. This is a short-range activation, but it's enough that the action "Get Water" now has a constraint activation. Any root activations (which determine the path the agent will take) passing through a constraint activation will pick up a constraint message.

Activations carrying more satisfied constraint messages take priority over activations with fewer. This means that activations that pass through the constraint will be chosen for the final sequence of actions. This system could easily be extended to allow for priorities (I've noticed agent planning systems seem to like priorities).

The great thing about this method is that it can be reversed to indicate impossible or failed procedures. If an agent gets a failure state from a function, that node will be marked with a Failure message, and any activations passing through it will receive a lower priority than those with fewer failures. Recomputing the possible paths will then provide an alternate low-cost path for the agent to take.
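Putting the constraint and failure messages together, ranking paths reduces to a simple comparison over whatever messages an activation has collected along the way. A sketch of the idea (hypothetical names, not the exact implementation):

```csharp
using System.Collections.Generic;

// Sketch: a root activation collects constraint and failure messages
// as it spreads, and paths are ranked by (more constraints satisfied,
// fewer failures seen).
public class RootActivation
{
    public HashSet<string> constraintsSatisfied = new HashSet<string>();
    public int failuresSeen;

    // Called when the activation passes through a constraint-marked node.
    public void PassThroughConstraint(string constraintName)
    {
        constraintsSatisfied.Add(constraintName);  // e.g. "water"
    }

    // Called when the activation passes through a Failure-marked node.
    public void PassThroughFailure()
    {
        failuresSeen++;
    }

    // Higher is better: constraints pull a path up, failures push it down.
    public int Priority()
    {
        return constraintsSatisfied.Count - failuresSeen;
    }
}
```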

Once I get the Unity editor working, I hope to start making my system more state-based rather than procedure-based. This means that instead of having actions as preconditions, there will more often be states that are preconditions, and actions that trigger those states.
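As a sketch of the distinction (hypothetical classes): an action's precondition points at a state node rather than at another action, and actions just establish states.

```csharp
using UnityEngine;

// Sketch of the state-based restructuring (hypothetical names).
public class StateNode : MonoBehaviour
{
    public bool satisfied;  // e.g. a "HasWater" state
}

public class ActionNode : MonoBehaviour
{
    // Procedure-based: the precondition was another action ("Get Water").
    // State-based: the precondition is a state ("HasWater"), and each
    // action declares the state it establishes when it runs.
    public StateNode precondition;
    public StateNode effect;

    public bool CanRun()
    {
        return precondition == null || precondition.satisfied;
    }

    public void Run()
    {
        if (effect != null)
            effect.satisfied = true;
    }
}
```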