Things I did today:
- Added a guard to MoveFlowers() that keeps Unity from looping endlessly through random positions when the garden is completely full (there are no unique positions left). If 5 positions have already been tried, the next randomly generated position that falls within the bounds of the garden is treated as acceptable and unique, no matter how close it is to other flowers. This prevents an annoying error where Unity appears to freeze but is actually just trying forever to find a unique position. (A sketch of the idea appears just after this list.)
- Added a feature to TranslationLayer that increases the gestureRecognitionThreshold after each failed attempt at a gesture, making it easier for the user to complete the gesture on the next try. The gestureRecognitionThreshold returns to its normal value when the user completes the gesture or when the program marks the gesture as unsuccessful. (See the second sketch after this list.)
- Doubled the maximum number of flowers from 100 to 200. This makes it easier to test the fix in the first bullet.
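Since I didn't paste the actual code, here's a minimal sketch of the MoveFlowers() guard, assuming a rectangular garden and a minimum spacing between flowers. The class, fields, and helper names (FlowerPlacer, minSpacing, gardenBounds, and so on) are placeholders, not the real project code:

```csharp
using UnityEngine;

public class FlowerPlacer : MonoBehaviour
{
    const int MaxAttempts = 5;     // after this many tries, stop demanding uniqueness
    public float minSpacing = 1f;  // assumed minimum distance between flowers
    public Bounds gardenBounds;    // assumed rectangular garden area
    public Transform[] flowers;    // existing flowers to stay clear of

    // Pick a spawn point; once the garden looks full, fall back to any
    // in-bounds point so Unity never loops forever hunting for uniqueness.
    public Vector3 PickFlowerPosition()
    {
        Vector3 candidate = RandomPositionInGarden();
        for (int attempt = 0; attempt < MaxAttempts; attempt++)
        {
            if (IsUnique(candidate))
                return candidate;               // found a genuinely unique spot
            candidate = RandomPositionInGarden();
        }
        // The garden is (probably) full: the candidate is in bounds, so it's
        // treated as acceptable no matter how close it is to other flowers.
        return candidate;
    }

    Vector3 RandomPositionInGarden()
    {
        return new Vector3(
            Random.Range(gardenBounds.min.x, gardenBounds.max.x),
            0f,
            Random.Range(gardenBounds.min.z, gardenBounds.max.z));
    }

    bool IsUnique(Vector3 p)
    {
        foreach (Transform f in flowers)
            if (Vector3.Distance(f.position, p) < minSpacing)
                return false;
        return true;
    }
}
```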
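And here's the threshold idea from the second bullet, stripped down to its skeleton. Only gestureRecognitionThreshold is a real name from the project; baseThreshold, thresholdStep, and the method names are my assumptions about how the bookkeeping might look:

```csharp
// Minimal sketch of the adaptive threshold in TranslationLayer.
public class AdaptiveGestureThreshold
{
    float baseThreshold = 10f;                // assumed normal value
    float thresholdStep = 2f;                 // assumed per-failure increase
    float gestureRecognitionThreshold = 10f;

    // After each failed attempt, loosen the threshold so the
    // next attempt at the gesture is easier to complete.
    public void OnFailedAttempt()
    {
        gestureRecognitionThreshold += thresholdStep;
    }

    // When the gesture completes, or the program gives up and marks it
    // unsuccessful, restore the threshold to its normal value.
    public void OnGestureResolved()
    {
        gestureRecognitionThreshold = baseThreshold;
    }
}
```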
The rest of my summer will be focused on thinking and research concerning where to go with this project in the future. I’ll only use a bulleted list if I’m changing things about my game. Otherwise, I’ll write in paragraphs, as this allows for a clearer representation of in-depth thought processes.
Today, I discussed with Sharon a concept suggested during a conversation I had with Bill Chew of West Lafayette, Indiana. Chew has done some research into using computer algorithms to determine whether humans are in correct positions. He brought to my attention that not all joints in a given posture are created equal; that is, a shoulder exercise might put great importance on arm positioning but not care so much about where the patient's knees are, provided they aren't wildly out of position. Deciding whether the patient is correctly matching the indicated position becomes much more difficult once these factors are considered.
Currently, my game (and those created before mine at IC) uses an overall difference formula to determine whether the user is matching the indicated position: the differences between corresponding joints of the indicator and avatar models are summed, and if that sum is less than an overall gesture recognition threshold, the user has matched the position adequately. What Chew suggests (and I agree that this makes a lot of sense) is that each joint be given a weight based on how important it is to the exercise. A joint that is weighted more heavily must be closer to the indicated position to make a match; a joint that is weighted less heavily has a somewhat higher tolerance and can be a bit farther out of position while still matching. Though this greatly increases the complexity of our checking algorithm, it seems important enough that we'll at least want to consider it in the future.
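To make the contrast concrete, here's a rough sketch of both checks. The array-of-Vector3 joint representation and all of the names are my assumptions for illustration; only the summed-differences-versus-threshold scheme reflects what our games actually do:

```csharp
using UnityEngine;

public static class PoseMatching
{
    // Current approach: sum the joint differences and compare the total
    // against one overall gesture recognition threshold.
    public static bool MatchesOverall(Vector3[] user, Vector3[] target, float threshold)
    {
        float sum = 0f;
        for (int i = 0; i < target.Length; i++)
            sum += Vector3.Distance(user[i], target[i]);
        return sum < threshold;
    }

    // Weighted variant: scale each joint's difference by its importance
    // before summing. A heavily weighted joint must sit closer to the
    // target to match; a lightly weighted one gets more slack.
    public static bool MatchesWeighted(Vector3[] user, Vector3[] target,
                                       float[] weights, float threshold)
    {
        float sum = 0f;
        for (int i = 0; i < target.Length; i++)
            sum += weights[i] * Vector3.Distance(user[i], target[i]);
        return sum < threshold;
    }
}
```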
One method of implementation would be to let the therapists decide what the weights for each joint should be. This ensures the most accurate assignment of joint importance, which is excellent. However, it remains to be seen whether this method would be too costly in terms of therapists' time: if a therapist has to assign weights for each and every posture, it might feel like a time-consuming burden. Then again, the therapist certainly knows best which parts of the body matter most in each exercise.
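If we went this route, the data itself could be as simple as a per-exercise weight table that a therapist-facing tool fills in. A hypothetical shape for it (none of these names exist in our project yet):

```csharp
using System;
using System.Collections.Generic;

// One therapist-assigned importance value per joint.
[Serializable]
public class JointWeight
{
    public string jointName;   // e.g. "LeftShoulder"
    public float weight = 1f;  // 1 = normal importance, >1 = stricter tolerance
}

// The full set of weights for a single exercise or posture.
[Serializable]
public class ExerciseWeights
{
    public string exerciseName;
    public List<JointWeight> weights = new List<JointWeight>();
}
```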
Another idea would be to have a “base” position and to weight the indicator joints based on how far they are from the base. Joints that end up farther from their counterparts in the base model are probably more important to the exercise, and should therefore be held to a tighter tolerance than those that stay close to where they started. This wouldn't give the same accuracy as manual input from a physical therapist, but I think it would be much simpler to implement.
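Here's a sketch of how those base-relative weights might be computed, assuming the same array-of-Vector3 representation as above; the normalization is just one guess at how displacements could become weights:

```csharp
using UnityEngine;

public static class BasePoseWeights
{
    // Joints that moved farther from the base pose get larger weights,
    // which means a tighter tolerance in the weighted check above.
    public static float[] WeightsFromBase(Vector3[] basePose, Vector3[] targetPose)
    {
        var weights = new float[targetPose.Length];
        float total = 0f;
        for (int i = 0; i < targetPose.Length; i++)
        {
            weights[i] = Vector3.Distance(basePose[i], targetPose[i]);
            total += weights[i];
        }
        if (total > 0f)
        {
            // Normalize so the weights average to 1 and can be dropped
            // straight into the weighted check unchanged.
            for (int i = 0; i < weights.Length; i++)
                weights[i] *= weights.Length / total;
        }
        else
        {
            // Degenerate case: nothing moved, so fall back to uniform weights.
            for (int i = 0; i < weights.Length; i++)
                weights[i] = 1f;
        }
        return weights;
    }
}
```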
I’m sure there are other, more complicated ways to do it too, but these are a start. As I said, I think the issue of “not all joints are created equal” is one that future researchers on this project should definitely think about, as it would add a layer of depth to our research that could be extremely valuable.
The focus of my next few days of research will be how to upgrade our games to run on the new systems that are appearing everywhere. Unity 5 will be released soon, and Zigfu isn't even supported on Unity 4. Unless they pick it back up for Unity 5 (which seems unlikely), we'll have to find a new way of bridging the gap between the computer and the Kinect. Speaking of the Kinect, Microsoft has a new version of that as well. Though you can still buy an Xbox 360 Kinect (as of right now), Microsoft isn't making any more of that model; instead, they're producing Xbox One Kinects. I'll be looking at what sorts of skeleton-tracking possibilities we have with the new version. Finally, we've tried to get our games to work on Windows 8, but we get an error having to do with differences between the 32- and 64-bit versions of the OS. Currently, everything runs on Windows 7, but I'll be researching what we'll have to do to get Windows 8 to cooperate.