000 is one bit off from: 001, 010, 100
101 is one bit off from: 001, 111, 100
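A minimal Python sketch of the neighbour lists above (the helper name `one_bit_neighbours` is my own, not from the discussion): it enumerates every word at Hamming distance 1 from a given codeword.

```python
def one_bit_neighbours(word: str) -> list[str]:
    """Return every word that differs from `word` in exactly one bit."""
    return [
        word[:i] + ("1" if b == "0" else "0") + word[i + 1:]
        for i, b in enumerate(word)
    ]

print(one_bit_neighbours("000"))  # ['100', '010', '001']
print(one_bit_neighbours("101"))  # ['001', '111', '100']
```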
So a single bit has changed, or rather, a single flipped bit would not cause an error. If you were to take your data and first wrap it in the usual checks (parity, checksums, a Hamming code and so on), applied to the data itself, and THEN wrap all of that up in this system, it should reduce the need for resending corrupted packets (assuming a reasonable amount of corruption).
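A toy sketch of the inner layer, assuming the scheme is triple repetition (codewords 000 and 111) with majority-vote decoding; the function names are mine:

```python
def encode(bits: str) -> str:
    """Repeat each bit three times: '10' -> '111000'."""
    return "".join(b * 3 for b in bits)

def decode(bits: str) -> str:
    """Majority-vote each group of three bits back down to one bit."""
    return "".join(
        "1" if bits[i:i + 3].count("1") >= 2 else "0"
        for i in range(0, len(bits), 3)
    )

corrupted = "110000"              # '111000' with one bit flipped
assert decode(corrupted) == "10"  # the single-bit error is corrected
```

Any single flip within a group of three still decodes correctly, because the corrupted word stays closer to its original codeword than to the other one.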
I suppose my next task is to simulate this with random corruption to see how reliable it is. Of course, the downside is you're sending three times as much data :(
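That simulation might look something like this sketch: a binary symmetric channel that flips each bit with some probability, comparing how many uncoded versus triple-coded packets survive intact. The packet size, flip probability, and seed are all arbitrary choices of mine:

```python
import random

def encode(bits: str) -> str:
    """Triple repetition: each bit sent three times."""
    return "".join(b * 3 for b in bits)

def decode(bits: str) -> str:
    """Majority vote over each group of three received bits."""
    return "".join(
        "1" if bits[i:i + 3].count("1") >= 2 else "0"
        for i in range(0, len(bits), 3)
    )

def corrupt(bits: str, p: float, rng: random.Random) -> str:
    """Flip each bit independently with probability p (toy channel model)."""
    return "".join(
        ("1" if b == "0" else "0") if rng.random() < p else b
        for b in bits
    )

rng = random.Random(1)
packets, bits_per_packet, p = 1000, 64, 0.01

raw_ok = coded_ok = 0
for _ in range(packets):
    data = "".join(rng.choice("01") for _ in range(bits_per_packet))
    raw_ok += corrupt(data, p, rng) == data
    coded_ok += decode(corrupt(encode(data), p, rng)) == data

print(f"uncoded packets intact:      {raw_ok}/{packets}")
print(f"triple-coded packets intact: {coded_ok}/{packets}")
```

With a 1% per-bit flip rate, a 64-bit uncoded packet survives only about half the time, while the coded version almost always survives, since a group is only misdecoded when two of its three bits flip.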
sab39, as he correctly pointed out. Well, I am working on it. Oewww... oh yes, I had thought of that, and I recognise that my solution would only work assuming that you didn't "lose" a bit, only that a bit could get flipped. But I think that checksumming the data should be done inside this encoding: this solution would *NOT* replace checksumming, it would just try to reduce the need for resubmission of packets. (For reference, think of applications on the scale of something like the Voyager probes, where a request to retransmit would take a week or so to arrive... and then a week or so back (or a month...). Those spacecraft used somewhere in the region of 150 error-correction bits per data bit.)
ladypine, you have it closer to what I was suggesting in the original problem. What if, when setting up a network, we could define how much distance between words we want? Perhaps on low-error networks (a physical connection) we could say a distance of 0 or 1 (for speed), but on high-distortion wireless, or a long-distance radio connection (I admit I have an amateur radio link in my past), you could say a distance of 3, 4 or 5... But I don't believe I have solved my original problem generically enough to say this yet.
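One way to sketch that tunable-distance idea (my own generalisation, not a worked-out solution to the original problem) is n-fold repetition: the two codewords 0…0 and 1…1 sit at Hamming distance n, and an odd n corrects up to (n-1)//2 flips per group, so a link could pick n to trade bandwidth for reliability.

```python
def encode(bits: str, n: int) -> str:
    """n-fold repetition; codewords 0^n and 1^n are at Hamming distance n."""
    return "".join(b * n for b in bits)

def decode(bits: str, n: int) -> str:
    """Majority vote over each group of n bits (odd n corrects (n-1)//2 flips)."""
    return "".join(
        "1" if bits[i:i + n].count("1") > n // 2 else "0"
        for i in range(0, len(bits), n)
    )

# e.g. n = 1 for a clean wired link, n = 5 for a noisy radio link:
# '11111 00000' with two flips in the first group and one in the second
assert decode("1101000100", 5) == "10"
```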