
“Urmila Mahadev spent eight years in graduate school solving one of the most basic questions in quantum computation: How do you know whether a quantum computer has done anything quantum at all?”

https://www.quantamagazine.org/graduate-student-solves-quantum-verification-problem-20181008/

https://arxiv.org/pdf/1804.01082.pdf

This is a brief meditation on some conceptual issues in AI pertinent to my recent posts.

All biological neurology must evolve on a substrate.
From an evolutionary point of view, form follows function.

By contrast, computers are forged by mortal design, top-down & bottom-up. Function follows form: mega-aggregations of linear transistor-metropoli populated upon a silicon substrate.

I once made the mistake of stating that the human brain must have very little redundancy. A neuro-physiologist corrected me: the brain is, in fact, possessed of massive redundancy. Redundancy, then, is a term, like entropy, easily inverted by the unwitting.

My contention was couched in an understanding that, despite the near infinitude of experience processed in our daily lives, there remain preserved in our subconscious minutely observed experiences from across a lifetime. Selective as those engrams may be, there is good neurological evidence they are laid down in all their associative glory, with a level of detail & sensory association well beyond that available through conscious voluntary recall. The phenomenon is most famously elicited in Proust's 'madeleine' episode, as detailed in Remembrance of Things Past.

So how exactly does that scale of information compression and associative recall occur?

Perhaps one of the most profound things I've read in my time on this planet was a quote to the effect that 'G.O.D.* designed the universe with infinite complexity, but it was built on a budget.' I apologise for the lack of attribution; the source escapes both me and Google. Likely it lies in one of the selected readings from the bibliography listed on this blog…
The point being that the universe is mostly space. What matter exists combines in repetitive form and non-random regularity, across space, time, and scale. The universe is physics. It is mathematics, and it is fractal. Information, ipso facto, follows the same laws governing energy & entropy.

So why then do we expect emergent AI to pop out of linear substrates, or why should we expect any tractable efficiency from laboriously constructing and emulating any such physical 'model' of neural function? (The distinction here being between a function-process from which structure emerges, versus a substrate constructed to perform a specific function, i.e. neural nets.)

Sir Roger Penrose proposed 'because-Quantum' in his book on consciousness, The Emperor's New Mind. But is the Emperor's new mind cloaked in quantum spooky-ness, or is there a more mundane classical explanation? In my recent postings on this blog, I believe I've pointed towards the fundamental correlations of fractal economy from which practicable intelligence, and perhaps even consciousness, might one day emerge. Sometime in the not-too-remotely-distant future.

Thanks for your consideration.

Phill Somerville, April 8th, 2019.

*Great Organising Design (Deity, if you prefer.)

A Stateful Hash

April 7, 2019

So we left off last post with a jumbled paragraph that likely made no sense…

I'd characterise the algorithm as Turing-equivalent, but I have no idea how to elucidate that accurately without losing your attention. Let's just call it a stateful hash and leave the categorising to others?

Now, pay attention. This is important.

Any string expanded by this algorithm will have had a Real-numbered (floating point) initialisation vector as input to the hash (an initialisation iteration is also required, but we'll leave that operating detail aside for the time being), and likewise a termination of the string expansion by an iteratively correlated, Real-numbered (floating point) message digest, a terminating vector if you will. We've covered this string-digest pairing before in terms of 'trapdoor' constructions of One Way Functions.

The terminating digest may then carry over to be the I.V. of the next string-in-sequence of any further grammar being entered. Otherwise, the state-machine is re-calibrated or zeroed for subsequent hashing. The other identity element consists of the iterative depth/count. Remember, the hash digest and expanded file combined are injective for any expanded message length.
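To make that concrete, here is a minimal toy sketch of the construction. The class name, mixing step and constants are illustrative assumptions of mine, not the actual algorithm: a floating point I.V. seeds the state, each input byte perturbs it in a forward-feed, and the final state plus iteration count form the terminating digest.

```python
import math

class StatefulHash:
    def __init__(self, iv: float):
        self.state = iv   # Real-numbered initialisation vector
        self.depth = 0    # iterative depth/count

    def _step(self, byte: int) -> int:
        # Nonlinear forward-feed update; the fractional part drives output.
        self.state = math.fmod(self.state * 1.6180339887 + byte / 257.0, 1.0)
        self.depth += 1
        return int(self.state * 256) & 0xFF

    def expand(self, message: bytes):
        # Interleave each original byte with a state-derived byte.
        out = bytearray()
        for b in message:
            out.append(b)
            out.append(self._step(b))
        # Terminating vector: final float state plus the depth count.
        return bytes(out), (self.state, self.depth)
```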

Herein lies a 'Huffman'-type payoff!

You'll recall that OWFs are easy to generate, hard to invert. Approaching this algorithm from a cryptographic (inverting) perspective, it's easy to overlook the fact that once the message digest has been processed, the expanded file is no longer necessary from a reconstruction point of view. Feeding the same I.V. and base-string to the same stateful process will elaborate the same expanded message, just as a different I.V. will result in a different expanded binary with a different digest. Reflecting on this, once a grammatical string has been elucidated, subsequent iterations with different initialisations and terminations can be derived from the same prototypical string. The takeaway is that subsequent grammars can be generated at the cost of only the digest number and iteration count. 'Pointers', if you will? The expanded strings of each and every subsequent hashing do not themselves require retention, only the vectors. These vectors may also be converted from floating point to integer type if convention-economy requires.
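Concretely, reusing the toy sketch from above (same caveats apply, and the example I.V. values are arbitrary), the pointer economy looks like this: the expanded strings themselves are discarded, and only the vectors are kept.

```python
archetype = b"the quick brown fox"   # the prototypical string

# Generate variants, retaining only the (IV, digest) vector pairs.
pointers = []
for iv in (0.125, 0.3759, 0.9041):   # arbitrary example IVs
    expanded, digest = StatefulHash(iv).expand(archetype)
    pointers.append((iv, digest))    # the expansion itself is discarded

# Later: the same IV and base string fed to the same stateful process
# elaborate the identical expanded message, digest and depth included.
iv, digest = pointers[1]
regenerated, check = StatefulHash(iv).expand(archetype)
assert check == digest               # the string-digest pairing holds
```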

We may then start to imagine a correlated artificial neuron consisting of a string with receiving and terminating nodes. Each node populated by correlation vectors, indexed alongside their iterative depth count. We may contemplate associative node members ranked by frequency of use, syntax, or any other category you might care to apply?
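As a rough sketch of how such a neuron might be structured in code (the names and fields are mine, purely for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # Correlation vectors, indexed alongside their iterative depth count.
    vectors: dict = field(default_factory=dict)   # depth -> IV or digest
    frequency: int = 0                            # for ranking by use

@dataclass
class Neuron:
    archetype: bytes                                  # prototypical string
    receiving: Node = field(default_factory=Node)     # holds IVs
    terminating: Node = field(default_factory=Node)   # holds digests

    def associate(self, iv: float, digest: float, depth: int):
        # Register one correlated (IV, digest) pairing at a given depth.
        self.receiving.vectors[depth] = iv
        self.terminating.vectors[depth] = digest
        self.receiving.frequency += 1
        self.terminating.frequency += 1
```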

Modular arithmetic is the central pillar of number theory, number theorists being 'Lords of the Rings', if you will. Modular arithmetic can be characterised as sacrificing operational precision at the most significant digit.

Floating point operations, however, involve loss of precision at the least significant numerical placeholder.
Floating point numbers are quite rightly regarded as mathematical 'hacks': unpredictable numerical compromises following from the infinite Reals being shoehorned into finite representation. A distasteful reminder of reality best left to computer engineers, not appropriate to the uncompromising nature of pure math. Donald Knuth's volumes on representations of number systems might unkindly be summed up as, 'thar be dragons!'

That being said, well-conditioned floating point operations compliant with IEEE 754 will reliably return precision within ±1 unit in the last place (ULP).
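Both kinds of precision loss, and the ULP bound, are easy to see in a few lines (a quick illustration, not part of the algorithm):

```python
import math

# Modular arithmetic discards the most significant digits:
print((10**6 + 7) % 100)     # 7: the leading digits are gone

# Floating point loses the least significant digits instead:
print(0.1 + 0.2)             # 0.30000000000000004

# IEEE 754 bounds that error. math.ulp (Python 3.9+) gives the unit
# in the last place at a given magnitude:
x = 0.1 + 0.2
print(abs(x - 0.3) <= math.ulp(0.3))   # True: within 1 ULP
```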

The PRNG-Hash function presented on these pages (here) harvests pseudo-randomness from floating point imprecision over a correlated hash of a binary string-file. From its state-dependent forward-feed construction, it satisfies the strict avalanche criterion. It is a PRNG-expanding function that takes a fixed-length message and returns an expanded string comprised of the original message interleaved with error-correcting data. In addition to the expanded string, it also returns a correlated numerical message-digest which, in tandem with the expanded string, can be inverted to reconstruct the original binary file.
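The avalanche behaviour can be spot-checked against the toy sketch from earlier (again, a stand-in for the real function, not the function itself): flip one input bit and count how many output bits change.

```python
def avalanche(msg: bytes, bit: int) -> int:
    # Flip a single input bit and compare the two expansions.
    flipped = bytearray(msg)
    flipped[bit // 8] ^= 1 << (bit % 8)
    a, _ = StatefulHash(0.5).expand(msg)
    b, _ = StatefulHash(0.5).expand(bytes(flipped))
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = b"avalanche test message"
# Roughly half of the state-derived output bits should differ,
# since the forward-feed propagates the change to every later step.
print(avalanche(msg, 0), "of", len(msg) * 16, "output bits differ")
```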

Discussion has so-far centred around cryptographic utility but I wish to present this work in a different light, as a universal indexing function.

A conventional perfect-hash of a fixed-length message relies upon expanding a set S of n elements to a range of O(n) indices, and then mapping each index to a range of hash values. Obviously, the hash function itself requires storage space O(n) to store kp and all of the second-level linear modular functions.
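For readers who haven't met the construction, here is a minimal FKS-style two-level sketch (my own illustrative code, assuming distinct integer keys): a first-level function distributes n keys into n buckets, and each bucket of size k gets its own collision-free second-level table of size k². The first-level parameters plus every second-level function are what cost O(n) storage.

```python
import random

P = 2**31 - 1                        # a prime larger than any key

def make_fn(m: int):
    # Random linear modular function x -> ((a*x + b) mod P) mod m.
    a, b = random.randrange(1, P), random.randrange(P)
    return lambda x: ((a * x + b) % P) % m

def build_perfect_hash(keys):
    n = len(keys)
    top = make_fn(n)                 # first-level function
    buckets = [[] for _ in range(n)]
    for k in keys:
        buckets[top(k)].append(k)
    tables = []
    for bucket in buckets:           # second level: one table per bucket
        m = len(bucket) ** 2 or 1    # quadratic size defeats collisions
        while True:                  # retry until collision-free
            f, slots = make_fn(m), {}
            if all(slots.setdefault(f(k), k) == k for k in bucket):
                break
        tables.append((f, slots))
    return top, tables

def member(top, tables, key) -> bool:
    f, slots = tables[top(key)]
    return slots.get(f(key)) == key

top, tables = build_perfect_hash([12, 99, 1024, 7, 316])
print(member(top, tables, 1024), member(top, tables, 5))   # True False
```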

What then, is the message length cost of an injective but message-expanding function?

The ratio of error-free to imprecise inversion-recursion is asymptotically 1:1.666, which means a total combined hashed message length of, on average, 2.666 times the original, plus the floating point digest.

From consideration of the strict avalanche criterion, we may state that the function's hash & digest are injective for a message of length n.

What point therefore, is there to this construction?

The answer lies in the economy of the indexing cost and the ability to self-construct non-archetypal strings from a reference hash-string without recourse to another specific hash file, using only the reference archetype & an initialisation vector plus terminal digest. This process will be elaborated in future posts.
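As a back-of-envelope illustration of that economy (the 24-byte pointer size below is my assumption: two doubles plus a count), compare retaining every expanded string against retaining one archetype plus the vector triples:

```python
n = 1_000          # original message length, bytes
k = 10_000         # number of derived, non-archetypal strings

naive   = k * int(2.666 * n)       # retain every expanded string
pointer = n + k * 24               # one archetype + k (IV, digest, depth)

print(f"naive:   {naive:>12,} bytes")    # 26,660,000
print(f"pointer: {pointer:>12,} bytes")  #    241,000 (about 110x smaller)
```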


Alex De Castro recently posted this paper on arXiv. I suppose it represents a rigorous development of the work mentioned here & here previously, whilst also extending its scope by demonstrating a quantum information theory equivalence.
“Our result demonstrates by well-known theorems that existence of one-way functions implies existence of a quantum one-way permutation.”
The introduction & outline are relatively easy to follow. The proofs, for me, more difficult.

Link: Quantum one-way permutation over the finite field of two elements

Hound in the Hunt

October 31, 2016

That Obscura Object of Desire: a Verisimilitude of Truth?

While working up my next post, I thought I'd leave this article from the MONA blog. (It's worth watching the documentary Tim's Vermeer by way of background.) Putting aside David Walsh's ego, the post covers a fascinating if meandering discussion between David, Tim Jenison and interviewer Elizabeth Pearce. It touches upon a range of subjects, which I posit as generally concerning the verisimilitude of truth in art, science & anthropology?

Mona Blog

In the gallery at Mona, there is an exhibition-experiment taking place, called Hound in the Hunt. Read more about it here, and also – for the enthusiastic – watch the documentary Tim’s Vermeer, and get your hands on our big, beautiful book as well (online, in our bookshop, or in the library, for free).

The following is a conversation between David Walsh and Tim Jenison about Vermeer, Viagra, and the nature of genius. (Interviewed by Elizabeth Pearce, with a cameo appearance by Mona curator Jarrod Rawlins.)

Hound in the Hunt
Photo Credit: Mona/Rémi Chauvin
Image Courtesy Mona, Museum of Old and New Art, Hobart, Tasmania, Australia

Elizabeth Pearce: David, in the exhibition catalogue for Hound in the Hunt, you write that even if you don’t give a shit about art you should watch Tim’s Vermeer, because it will teach you how to learn. What did…


Whorls of Attraction

October 1, 2016

I recently purchased this most excellent 'Kickstarter' project, a vintage-style Mandelbrot map, lovingly created by Bill Tavis (http://www.mandelmap.com).
He even went to the trouble of including a couple of the intrinsic attractor mappings…

[Image: mandelbrot_poster]
Serendipitously, we were having a domestic Spring clean of accumulated detritus and I found this page among my personal effects. It dates from around 25 years ago. Ignoring the naïveté of the notation, screen co-ordinates & all, I thought I'd just leave this old print-out here for posterity, before it gets trashed. It shows a graphic representation of the fractal reflection-translation used in my algorithm.
The bulbar cardioid of the Mandelbrot set may be famously familiar, but the actual attractor-escape mappings are not so commonly illustrated, or much commented upon…

The closer an iterated point is to the central cartesian symmetry points x(-1,1), y(0), then characteristically, the stronger is the attractor's 'gravity' and the lesser the number of spiral limbs. Iterating points closer to the set's escape boundary results in more complex spirals & increasingly chaotic 'fingerprints'. From memory, the two centre shapes illustrated below resulted from points on or just within the bulb's boundary.
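For anyone wanting to reproduce the effect, a few lines suffice. The sample points below are my own choices: the deep-interior point converges fast into a tight spiral, the near-boundary point whorls in slowly, and the period-3 bulb point settles onto a three-limbed cycle.

```python
def orbit(c: complex, steps: int = 200):
    # Iterate z -> z^2 + c from z = 0 and record the orbit.
    z, path = 0j, [0j]
    for _ in range(steps):
        z = z * z + c
        path.append(z)
    return path

samples = {
    "deep interior": -0.20 + 0.20j,   # strong attractor, tight spiral
    "near boundary": -0.50 + 0.50j,   # weaker pull, elaborate whorl
    "period-3 bulb": -0.12 + 0.75j,   # three-limbed spiral
}
for label, c in samples.items():
    p = orbit(c)
    # For a fixed-point attractor the final step shrinks towards zero;
    # for a periodic attractor it settles to the spacing of the cycle.
    print(f"{label}: c = {c}, final step {abs(p[-1] - p[-2]):.2e}")
```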

[Image: madelmap, the old print-out of the attractor-escape spirals]
So I’ve discarded one Mandelbrot picture only to frame another…
Haven't fully decided what to do with the framed poster. If it looks too 'school-roomish' on my study wall, I may end up donating it to my kid's school. Perhaps there, it might stimulate some errant student's curiosity into discovering that mathematics holds deeper mystery and wonder than any dry school syllabus could ever convey?