20130825

Musings on Robotic Proprioception

Proprioception is the sense of the relative positions and orientations of one's own body and its constituent parts. How does our nervous system compute these estimates? We don't even know how the data is represented in the nervous system.

It seems to me that if multiple sensors of the same type are used, then their individual estimates could be filtered together to improve the overall estimate. For instance, imagine measuring the resistance across a stretchable conductive fabric. The resistance of such a material increases as it is stretched, allowing its length to be estimated. If several pieces are attached between two links, then the combination of their values could, under certain conditions, be used to estimate the relative orientations of the links. Furthermore, filtering the data from multiple sensors should provide cross-sensor noise cancellation. Of course, a model of how the values change with the state of the joint would be necessary. However, the model need not be known a priori, as it could be learned with a regressor.
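As a rough illustration, here is a minimal sketch in Python. Everything in it is a made-up assumption for the sake of the example: four simulated stretch sensors whose resistance grows linearly with the joint angle, each with its own gain, offset, and noise. A least-squares regressor learns the mapping from the resistance vector to the angle, and because it pools all four sensors, its estimate should be less noisy than any single sensor's.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS = 4    # stretch-sensor strips spanning the joint
N_SAMPLES = 500  # calibration poses

# Hypothetical ground truth: each sensor's resistance grows linearly
# with the joint angle, with its own gain, offset, and noise.
gains = rng.uniform(0.5, 2.0, N_SENSORS)
offsets = rng.uniform(100.0, 200.0, N_SENSORS)

angles = rng.uniform(0.0, np.pi / 2, N_SAMPLES)  # true joint angles (rad)
resistances = (angles[:, None] * gains + offsets
               + rng.normal(0.0, 0.05, (N_SAMPLES, N_SENSORS)))

# Learn the model from data rather than assuming it a priori:
# a least-squares fit from the resistance vector (plus bias) to the angle.
X = np.hstack([resistances, np.ones((N_SAMPLES, 1))])
w, *_ = np.linalg.lstsq(X, angles, rcond=None)

# Compare a single sensor's estimate with the fused estimate on a new pose.
true_angle = 0.7
reading = true_angle * gains + offsets + rng.normal(0.0, 0.05, N_SENSORS)
single = (reading[0] - offsets[0]) / gains[0]  # one sensor alone
fused = np.append(reading, 1.0) @ w            # all sensors filtered together
print(f"single sensor: {single:.4f}  fused: {fused:.4f}  true: {true_angle}")
```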

Recently someone, I don't recall who, stated that they had read that a baby's tendency to put its appendages in its mouth is a form of calibration. (If you know of any literature on this, please let me know.) The idea at least makes intuitive sense, because such actions would provide additional information about the relation of two body parts. For instance, when standing, one cannot touch one's toes without bending at the waist. However, our fingertips can reach to just a bit above our knees, which tells us that our shoulders and thighs are closer together than our shoulders and feet. Of course, we can literally see this fact too.

What might be some strategies for learning a model of oneself from nothing? In other words, what actions can be performed to gather information about ourselves, and what do those actions tell us?
  1. Moving randomly would give a sense of which commands move which parts. This would be the start of a dynamic model. It would provide information about how quickly a given part can be moved. It would also give some kinematic information, because the movement of one body part would be felt in neighbouring body parts. It would even give information about the extent of influence moving one part has on another, at least in relative terms. For instance, swinging the shoulder quickly in a circle would induce torques at the elbow as the lower arm swings around, controlled or otherwise. It would also induce torques in the upper torso, but because the upper torso is more massive than the lower arm, one would expect the lower arm to be affected more. (See the motor-babbling sketch after this list.)
  2. As previously discussed, touching two separate body parts provides additional information. Such information contributes to a kinematic model. One question that arises is: how does one know when contact is made? Is there only a temporal correlation, or can two nerves, not normally connected, form some sort of temporary connection? Certainly not a chemical connection, but are there nerves that sense a change in conductivity? (The coincidence-detection sketch after this list plays with the temporal-correlation idea.)

    Such contact may or may not be intentional. Intentionally creating a contact would, in effect, test a hypothesis about the various parameters involved. This could be done at random or strategically, in which case learning might be sped up.

    Either way, such values could only be relative without some form of a priori knowledge. For instance, touching one's hip, while knowing approximately how the elbow is bent, only gives one a sense of how far the hip is from the shoulder. However, if two pieces of information are known, such as the distance from the shoulder to the hip and the length of one of the arm links, then the other components can be solved for in closed form (see the law-of-cosines sketch after this list). Humans don't really have this ability, because we don't know the lengths of the various components of our bodies.
  3. Contacting the environment can provide information in a number of ways. For instance, we know that contact with a stationary object eliminates drift in our sense of balance [1]. We also know that proprioception and vision work together; we call this hand-eye coordination. Tapping on an object provides another example: tapping activates the nerves in our ears at the same moment our finger contacts the environment. This goes back to Dr. V. S. Ramachandran's TED talk, where he discusses how the mind draws correlations between sounds and visual constructs, which he calls a cross-modal synesthetic abstraction.
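Here is the motor-babbling sketch promised in strategy 1. It is purely hypothetical: a two-joint arm whose response to a command is a fixed coupling matrix plus noise, standing in for unknown dynamics. Random commands are issued, the responses recorded, and a linear least-squares fit recovers which commands move which parts and how strongly one joint's motion bleeds into its neighbour's.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_step(command):
    """Hypothetical 2-joint arm: returns the resulting joint velocities.
    The coupling matrix A stands in for the unknown dynamics, including
    the influence that moving one joint has on its neighbour."""
    A = np.array([[0.9, 0.2],
                  [0.3, 0.8]])  # unknown to the learner
    return A @ command + rng.normal(0.0, 0.01, 2)

# Motor babbling: issue random commands and record what happens.
commands = rng.uniform(-1.0, 1.0, (1000, 2))
responses = np.array([simulate_step(c) for c in commands])

# A first crude dynamic model: fit responses = commands @ X, so that
# X.T approximates the true coupling matrix A.
X, *_ = np.linalg.lstsq(commands, responses, rcond=None)
print("learned coupling:\n", X.T)
```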
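Next, the law-of-cosines sketch for strategy 2. With the hand touching the hip, the shoulder, elbow, and hand form a triangle. If the upper-arm length and the shoulder-to-hip distance are known, along with the elbow angle, the remaining side (forearm plus hand) can be solved for in closed form. The lengths and angle below are invented purely to illustrate the computation.

```python
import math

def forearm_length(upper_arm, shoulder_to_hip, elbow_angle):
    """Triangle shoulder-elbow-hand, with the hand touching the hip.
    Law of cosines at the elbow:
        d^2 = a^2 + b^2 - 2*a*b*cos(theta)
    Solve the resulting quadratic for b (forearm plus hand length)."""
    a, d, theta = upper_arm, shoulder_to_hip, elbow_angle
    disc = a * a * math.cos(theta) ** 2 - a * a + d * d
    if disc < 0:
        raise ValueError("no real solution: pose is inconsistent")
    return a * math.cos(theta) + math.sqrt(disc)  # take the positive root

# Made-up example: 0.30 m upper arm, 0.45 m shoulder-to-hip distance,
# elbow bent to 100 degrees while the hand rests on the hip.
print(forearm_length(0.30, 0.45, math.radians(100.0)))  # ~0.29 m
```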
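Finally, the coincidence-detection sketch for the question of knowing when contact is made. Two simulated tactile channels fire sparsely at random, except during a self-touch episode when they fire together; a sliding window that counts co-activations far above the chance rate flags the contact. The firing rates, window size, and threshold are all arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

T = 10_000                    # timesteps of simulated nerve activity
hand = rng.random(T) < 0.01   # sparse background firing, hand channel
hip = rng.random(T) < 0.01    # sparse background firing, hip channel

# During a self-touch episode, both channels fire together.
contact = slice(4000, 4200)
hand[contact] = rng.random(200) < 0.5
hip[contact] = rng.random(200) < 0.5

# Count co-activations in a sliding window; by chance we expect only
# about 0.01 * 0.01 * 50 = 0.005 per window, so even a handful of
# coincidences is strong evidence of contact.
window = 50
coincidences = np.convolve((hand & hip).astype(float), np.ones(window), "same")
flagged = np.flatnonzero(coincidences >= 5)
print("contact detected around timesteps", flagged[0], "to", flagged[-1])
```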
What strategies am I overlooking? Which of them do we already have evidence for with respect to our own development? Which have been used in robotics?

[1] Blakeslee, Sandra (2007). The Body Has a Mind of Its Own. Random House.
