## A Stateful Hash

### April 7, 2019

So we left off in the last post with a jumbled paragraph that likely made no sense…

I’d characterise the algorithm as Turing equivalent, but I have no idea how to elucidate that accurately without losing your attention. Let’s just call it a stateful hash and leave the categorising to others?

Now, pay attention. This is important.

Any string expanded by this algorithm will have had a real-numbered (floating-point) initialisation vector as input to the hash (an initialisation iteration is also required, but we’ll leave that operating detail aside for the time being), and likewise the string expansion is terminated by an iteratively correlated, real-numbered (floating-point) message digest, or terminating vector if you will. We’ve covered this string–digest pairing before in terms of ‘trapdoor’ constructions of one-way functions.
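As a concrete (if entirely toy) illustration, here’s a minimal Python sketch of the shape of such a process: a floating-point I.V. seeds a state machine that expands a base string over a fixed iteration count and emits a floating-point terminating digest. The mixing rule and output alphabet are my own assumptions, not the actual algorithm.

```python
def expand(base: str, iv: float, iterations: int) -> tuple[str, float]:
    """Expand `base` for `iterations` steps from the floating-point
    initialisation vector `iv`; return the expanded string and the
    terminating (message-digest) vector."""
    state = iv
    out = list(base)
    for i in range(iterations):
        # Toy stateful mix (an assumption, not the author's rule):
        # the next state depends on the current state and a character
        # of the string-so-far.
        state = (state * 31.0 + ord(out[i % len(out)]) / 256.0) % 1.0
        # Expansion step: emit a character derived from the new state.
        out.append(chr(97 + int(state * 26)))
    return "".join(out), state
```

Feeding the same I.V. and base string back in reproduces the same expansion; a different I.V. yields a different expansion with a different digest.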

The terminating digest may then carry over as the I.V. of the next string in sequence as further grammar is entered. Otherwise, the state machine is re-calibrated, or zeroed, for subsequent hashing. The other identity element is the iterative depth/count. Remember: the hash digest and the expanded file, combined, are injective for any process-expanded message length.
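The carry-over versus re-calibration choice might be sketched like so, using a toy digest rule (again, an assumption standing in for the actual state machine):

```python
def digest(s: str, iv: float) -> float:
    """Toy terminating-digest computation (illustrative only)."""
    state = iv
    for ch in s:
        state = (state * 31.0 + ord(ch) / 256.0) % 1.0
    return state

def hash_sequence(strings, iv=0.0, chain=True):
    """Hash strings in sequence.  With chain=True the terminating
    digest of each string carries over as the I.V. of the next;
    with chain=False the state machine is re-calibrated (reset to
    `iv`) between strings."""
    digests, state = [], iv
    for s in strings:
        state = digest(s, state)
        digests.append(state)
        if not chain:
            state = iv
    return digests
```

The first digest is the same either way; from the second string onward the two modes diverge, since the chained mode’s I.V. depends on everything hashed so far.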

Herein lies a ‘Huffman’-type payoff!

You’ll recall that OWFs are easy to compute, hard to invert. Approaching this algorithm from a cryptographic (inverting) perspective, it’s easy to overlook the fact that once the message digest has been produced, the expanded file is no longer necessary from a reconstruction point of view. Feeding the same I.V. and base string to the same stateful process will elaborate the same expanded message, just as a different I.V. will result in a different expanded binary with a different digest. Reflecting on this: once a grammatical string has been elucidated, subsequent iterations with different initialisations and terminations can be derived from the same prototypical string. The takeaway is that subsequent grammars can be generated at the cost of only the digest number and iteration count. ‘Pointers’, if you will? The expanded strings of each subsequent hashing do not themselves require retention, only the vectors. These vectors may also be converted from floating-point to integer type if convention-economy requires.
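The ‘pointer’ payoff can be sketched in a few lines, under the same caveat that the expansion rule here is a toy stand-in and the names are mine:

```python
def expand(base: str, iv: float, iterations: int) -> str:
    """Deterministically expand `base` from the I.V. `iv`
    (toy rule; the real state machine is unspecified)."""
    state, out = iv, list(base)
    for i in range(iterations):
        state = (state * 31.0 + ord(out[i % len(out)]) / 256.0) % 1.0
        out.append(chr(97 + int(state * 26)))
    return "".join(out)

prototype = "a prototypical string"

# Each subsequent grammar is retained as a 'pointer': an
# (I.V., iteration count) pair, not the expanded string itself.
pointers = [(0.125, 12), (0.625, 20), (0.875, 7)]

# Reconstruction: feeding the same I.V. and base string to the same
# stateful process elaborates the same expanded message, so nothing
# beyond the vectors needs to be stored.
reconstructed = [expand(prototype, iv, n) for iv, n in pointers]
```

Three expanded strings, recovered from one prototype plus three small number pairs: that is the ‘Huffman’-type economy.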

We may then start to imagine a correlated artificial neuron consisting of a string with receiving and terminating nodes, each node populated by correlation vectors indexed alongside their iterative depth counts. We might contemplate associative node members ranked by frequency of use, syntax, or any other category you might care to apply?
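One possible (purely hypothetical) shape for such a neuron, with every field name being my own invention:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # Correlation vectors keyed by iterative depth count.
    vectors: dict[int, float] = field(default_factory=dict)

@dataclass
class Neuron:
    string: str
    receiving: Node = field(default_factory=Node)
    terminating: Node = field(default_factory=Node)
    use_count: int = 0  # e.g. for ranking associative members by frequency

n = Neuron("a grammatical string")
n.receiving.vectors[12] = 0.125    # I.V. at iterative depth 12
n.terminating.vectors[12] = 0.875  # terminating vector at depth 12
```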