Thursday, March 31, 2011

Actions

My visit to CMU was great, but I've decided on the University of Rochester. Unfortunately I didn't have any time over the weekend to work on this, but I've still managed to make some great progress since getting back.

First, my interface is considerably more robust (although some things will probably break as I add more features). You can save and load scenes without worrying about losing information. This was tricky because Unity can't serialize static variables, dictionaries, or hash sets, which means they won't be saved, and it also means certain things break when you go from Edit mode to Play mode. I had to do a few workarounds to fix this. You can also delete nodes without messing anything up, but I haven't added a feature to delete edges yet (that won't be hard).
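
Roughly, the dictionary workaround looks like this - a simplified sketch rather than my exact code, with made-up names: store the data in parallel lists, which Unity will serialize, and rebuild the dictionary on load.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Unity won't serialize Dictionary fields, so the dictionary is rebuilt
// from two parallel Lists (which Unity does serialize) when the scene loads.
public class NodeTable : MonoBehaviour
{
    // Serialized storage: parallel lists of keys and values.
    [SerializeField] private List<string> nodeNames = new List<string>();
    [SerializeField] private List<GameObject> nodeObjects = new List<GameObject>();

    // Runtime lookup: rebuilt in Awake, never serialized.
    private Dictionary<string, GameObject> nodes;

    void Awake()
    {
        nodes = new Dictionary<string, GameObject>();
        for (int i = 0; i < nodeNames.Count; i++)
            nodes[nodeNames[i]] = nodeObjects[i];
    }

    public void AddNode(string nodeName, GameObject obj)
    {
        if (nodes.ContainsKey(nodeName)) return; // keep lists and dictionary in sync
        nodes[nodeName] = obj;
        nodeNames.Add(nodeName);
        nodeObjects.Add(obj);
    }
}
```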

The major breakthrough for this week was getting actions working. You can assign a Rigidbody object to the Root node's "Agent" parameter, which means any actions performed in a sequence will be called on that Rigidbody. In theory this system could support multiple agents responding at the same time, but that would require multiple Root nodes, and I haven't thought of a good way to do that yet.
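
To make the idea concrete, here's a simplified sketch (illustrative names, not my exact code) of how the Agent parameter and a "Jump" action fit together:

```csharp
using UnityEngine;

// The Root node exposes a Rigidbody, and every action in a sequence
// is performed on that agent.
public class RootNode : MonoBehaviour
{
    public Rigidbody agent;    // assigned in the Inspector

    public void Perform(ActionNode action)
    {
        action.Execute(agent); // all actions run against the Root's agent
    }
}

public abstract class ActionNode : MonoBehaviour
{
    public abstract void Execute(Rigidbody agent);
}

// The "Jump" action reduced to its essentials: an upward impulse on
// whatever Rigidbody the Root node points at.
public class JumpAction : ActionNode
{
    public float jumpForce = 5f;

    public override void Execute(Rigidbody agent)
    {
        agent.AddForce(Vector3.up * jumpForce, ForceMode.Impulse);
    }
}
```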

The picture shows a block that has just been instructed to "Jump". It's a simple action, but there's a whole lot that went into it. I'll make some more complicated scenarios with different paths for the beta review.

I'm technically a little behind on my schedule, but since my environments will now be much easier to make, I should come out okay.

Wednesday, March 23, 2011

Progress Update and Beta Review

First, my latest update, in picture form. I have intersectors working, which are integral to the operation of the algorithm. In this picture, only the spherical nodes were created by me: the Root node, the yellow action node GetCoffee, and the red concept node Coffee. The rest are generated from the text input "Get coffee". The cubes are Activators - the Root Activator, the Goal Activator, and two Search Activators, one for "Get" and one for "Coffee". The Search Activations are red (the one on GetCoffee is hidden by the yellow Goal Activation). Finally, there is the Query Intersector "Get Coffee", which finds the intersection of the "Get" and "Coffee" activations, and the Goal Intersector, which finds the path between the Root node and the Goal node.
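
For anyone curious what a Query Intersector actually computes, here's the core idea in simplified form (a sketch, not my exact code): each Search Activator records the set of nodes its activation reached, and the intersector keeps only the nodes reached by every search.

```csharp
using System.Collections.Generic;

// Each search spreads activation and yields a set of reached nodes;
// the query's answer is whatever lies in every one of those sets.
public static class QueryIntersector
{
    public static HashSet<T> Intersect<T>(IList<HashSet<T>> activationSets)
    {
        var result = new HashSet<T>(activationSets[0]);
        for (int i = 1; i < activationSets.Count; i++)
            result.IntersectWith(activationSets[i]);
        return result;
    }
}
```

For "Get coffee", the sets reached from "Get" and from "Coffee" should overlap only at the GetCoffee action node, which is exactly the node the query should land on.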

The "Step" button at the bottom should go through the procedure to get to the goal node, but there's a small bug in it right now. (I would fix it, but I'm leaving for Pittsburgh tomorrow afternoon and might not have time to make this post later).

Now the "Self-Evaluation". I'm really happy to have the Unity interface up to my original back-end progress now. It seemed a little slow, but I think this interface will help me find a lot of bugs more quickly. It has already helped me see some errors. When dealing with graph networks like this, it can be very difficult to catch small things like missing edges. And while it has taken away some time that I would have liked to work on the underlying system, I think it is both an innovative and useful tool for this application - I don't know if I've seen any visualization of cognitive models before, let alone a nice looking one like this.

I'll have to scale back some of my previous goals, but for the Beta review, I plan to have a small test environment connected to the network. A simple case would be a cube that can do something like jump when the user's input says "jump". If I have my Beta review on the 1st, then getting this working and making the interface more robust would be my two main goals. (Since I'll be at Carnegie Mellon this weekend, I don't foresee getting much work done.) If it's on the 4th, I'd like to have a more complicated environment (maybe multiple objects).

By the Final Poster session, I plan to have an environment where an agent has multiple actions available to it and can do things like "pick up the blue box" or "walk to the green sphere". Being able to take in information from the user and store relationships would be a nice added touch. Anything beyond that would probably be whatever cool features I can add. In general, however, I expect the major functionality to be done by the presentation date.

Also, Joe wanted me to submit a paper to IVA 2011, and that deadline is April 26th. I'm not sure how compelling my system can be by that point, but we'll see how it goes. I may focus more on the benefits of a visualization system for a language interface (since the theme of IVA 2011 is language).

Thursday, March 17, 2011

Activations in Unity

Have some progress to show tonight - I've got Activations working in Unity! I had to restructure my code a surprising amount to get this working. In general, I had to adopt a more component-based approach. For example, instead of having specific Activation groups, I switched to an Activator object that takes a particular Activation (a prefab) as a parameter. The upshot is that my structure is becoming more modular and visualization-friendly.
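
In simplified form (illustrative names, not my exact code), the Activator looks something like this:

```csharp
using UnityEngine;

// Instead of one class per Activation group, the Activator instantiates
// whatever Activation prefab it's handed in the Inspector.
public class Activator : MonoBehaviour
{
    public GameObject activationPrefab;  // any Activation type, set per instance
    public Transform[] targets;          // the Concept nodes to activate

    public void Activate()
    {
        foreach (Transform target in targets)
            Instantiate(activationPrefab, target.position, Quaternion.identity);
    }
}
```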

Also, I realized that you can't use the virtual and override keywords with Awake and Update - Unity uses its own method lookup to call these methods, and if you mark one as virtual or override, it will get skipped. Very frustrating. If you want to make your Awake or Update function semi-virtual, you can use the "new" keyword, but that method will only be called if it's invoked through the correct type (not through an array of a parent type, for example). Thought this insight might be helpful to anyone else doing Unity coding.
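
Here's a tiny illustration of the hiding behavior, using a made-up Report method in place of Update:

```csharp
using UnityEngine;

public class BaseNode : MonoBehaviour
{
    public void Report() { Debug.Log("BaseNode"); }
}

public class ConceptNode : BaseNode
{
    // "new" hides the base method instead of overriding it.
    public new void Report() { Debug.Log("ConceptNode"); }
}

// Elsewhere:
//   ConceptNode c = GetComponent<ConceptNode>();
//   c.Report();      // prints "ConceptNode"
//   BaseNode b = c;
//   b.Report();      // prints "BaseNode" -- hiding resolves by the
//                    // variable's compile-time type, not the object's
```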

Anyway, here are the results. The cube is the Activator object, connected to two Concept nodes, and the purple Activations are shown after the Activator object was activated.

Wednesday, March 2, 2011

Not too much this week

Kind of a slow week, with midterms and all. I'm still working on Unity and haven't gotten activations done yet; getting them to display in Unity is taking some adjustments that I didn't foresee. I'm also doing a slight architecture restructuring: I'm making everything a Node. This might seem kind of strange, but the idea is that I would like the network to be able to build pieces of itself. I've always had a superficial interest in metacognition (http://en.wikipedia.org/wiki/Metacognition), and it seems to me that any architecture with plans to support it should support it from the beginning. I don't know if I'll even get to anything that complicated this semester, but the changes aren't drastic and I'd prefer to get them out of the way.
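
To give a rough idea of the restructuring (names and details here are just illustrative):

```csharp
using System.Collections.Generic;
using UnityEngine;

// One base class for everything in the network, with a hook that lets a
// node build and wire up new nodes itself -- the opening that
// metacognition would eventually need.
public abstract class Node : MonoBehaviour
{
    public List<Node> edges = new List<Node>();

    public T SpawnNode<T>(string nodeName) where T : Node
    {
        var spawned = new GameObject(nodeName).AddComponent<T>();
        edges.Add(spawned);   // the network grows a piece of itself
        return spawned;
    }
}

// Concepts, actions, activators, etc. would all derive from Node.
```
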
I'm leaving for the University of Rochester tomorrow, so I won't be able to have my typical Thursday coding sprint. But at least I'll have time over break to get work done.