
“Urmila Mahadev spent eight years in graduate school solving one of the most basic questions in quantum computation: How do you know whether a quantum computer has done anything quantum at all?”

https://www.quantamagazine.org/graduate-student-solves-quantum-verification-problem-20181008/

https://arxiv.org/pdf/1804.01082.pdf

This is a brief meditation on some conceptual issues in AI pertinent to my recent posts.

All biological neurology must evolve on a substrate.
From an evolutionary point of view, form follows function.

By contrast, computers are forged by mortal design, top-down & bottom-up. Function follows form: mega-aggregations of linear transistor metropolises populated upon a silicon substrate.

I once made the mistake of stating that the human brain must have very little redundancy. A neuro-physiologist corrected me. The brain is, in fact, possessed of massive redundancy. Redundancy, then, is a term, like entropy, easily inverted by the unwitting.

My contention was couched in an understanding that, despite the near infinitude of experience processed in our daily lives, there remain preserved in our subconscious minutely observed experiences from across a lifetime. Selective as those engrams may be, there is good neurological evidence they are laid down in all their associative glory, with a level of detail & sensory association well beyond that available through conscious voluntary recall. A phenomenon most famously elicited in Proust’s ‘madeleine’ episode, as detailed in Remembrance of Things Past.

So how exactly does that scale of information compression and associative recall occur?

Perhaps one of the most profound things I’ve read in my time on this planet was a quote to the effect that ‘G.O.D* designed the universe with infinite complexity, but it was built on a budget.’ I apologise for the lack of attribution; the source escapes both myself & Google. Likely, it is in one of the selected readings from the bibliography listed on this blog…
The point being that the universe is mostly space. What matter exists combines in repetitive form and non-random regularity, across space, time, and scale. The universe is physics. It is mathematics, and it is fractal. Information, ipso facto, follows the same laws governing energy & entropy.

So why then do we expect emergent AI to pop out of linear substrates, and why should we expect any tractable efficiency from laboriously constructing and emulating any such physical ‘model’ of neural function? *The distinction here being between a function-process from which structure emerges, versus a substrate constructed to perform a specific function, i.e. neural nets.

Sir Roger Penrose proposed ‘because-Quantum’ in his dissertation on consciousness, ‘The Emperor’s New Mind’. But is the Emperor’s new mind cloaked in quantum spooky-ness, or is there a more mundane classical explanation? In my recent postings on this blog, I believe I’ve pointed towards the fundamental correlations of fractal economy from which practicable intelligence, and perhaps even consciousness, might one day emerge. Sometime, in the not-too-remotely distant future.

Thanks for your consideration.

Phill Somerville. April 8th., 2019.

*Great Organising Design (Deity if you prefer.)

A Stateful Hash

April 7, 2019

So we left off last Post with a jumbled paragraph that likely made no sense…

I’d characterise the algorithm as a Turing equivalent but I have no idea how to elucidate that accurately without losing your attention. Let’s just call it a stateful hash and leave the categorising to others?

Now, pay attention. This is important.

Any string expanded by this algorithm will have had a Real Numbered (floating point) initialisation vector as input to the hash (an initialisation iteration is also required, but we’ll leave that operating detail aside for the time being), and likewise a termination of the string expansion by an iteratively correlated, Real Numbered (floating point) message digest, or terminating vector if you will. We’ve covered this string-digest pairing before in terms of ‘trapdoor’ constructions of One Way Functions.

The terminating digest may then carry over to be the I.V. of the next string-in-sequence of any further grammar being entered. Otherwise, the state machine is re-calibrated or zeroed for subsequent hashing. The other identity element is the iterative depth count. Remember that the hash digest and expanded file, taken together, are injective for any process-expanded message length.
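To make the shape of the process concrete, here is a minimal toy sketch in Python. It is not the algorithm itself (none of its operating details are reproduced here); it only illustrates the generic pattern described above: a floating point I.V. seeds a running state, each message bit updates that state and contributes to an expanded output, and the final state is returned as a floating point terminating digest. The update rule and all names are invented for illustration.

```python
# Toy sketch only: NOT the algorithm described in this post. It merely mimics
# the generic pattern above: a float I.V. seeds a running state, each message
# bit updates the state and emits expanded output, and the final state is
# returned as a floating point terminating digest.

def toy_stateful_expand(bits, iv):
    """Expand a bit string under a float-valued running state.

    bits : iterable of 0/1 message bits
    iv   : floating point initialisation vector
    Returns (expanded_bits, digest) where digest is the final float state.
    """
    state = iv
    expanded = []
    for b in bits:
        # Arbitrary illustrative state update mixing in the message bit.
        state = (state * 3.9 * (1.0 - state) + b) % 1.0
        expanded.append(b)                          # original message bit
        expanded.append(1 if state >= 0.5 else 0)   # interleaved state-derived bit
    return expanded, state

message = [1, 0, 1, 1, 0, 0, 1, 0]
expanded, digest = toy_stateful_expand(message, iv=0.123456789)

# The same I.V. and message always reproduce the same expansion and digest;
# the digest can also carry over as the I.V. of the next string in sequence.
assert toy_stateful_expand(message, iv=0.123456789) == (expanded, digest)
next_expanded, next_digest = toy_stateful_expand([0, 1, 1, 0], iv=digest)
```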

Herein lies a ‘Huffman’-type payoff!

You’ll recall that OWFs are easy to generate, hard to invert. Approaching this algorithm from a cryptographic (inverting) perspective, it’s easy to overlook the fact that once the message digest has been processed, the expanded file is no longer necessary from a reconstruction point of view. Feeding the same I.V. and base string to the same stateful process will elaborate the same expanded message, just as a different I.V. will result in a different expanded binary with a different digest. Reflecting on this, once a grammatical string has been elucidated, subsequent iterations with different initialisations and terminations can be derived from the same prototypical string. The takeaway is that subsequent grammars can be generated at the cost of only the digest number and the iteration count. ‘Pointers’, if you will? The expanded strings of each and every subsequent hashing do not themselves require retention, only the vectors. These vectors may also be converted from floating point to integer type if convention or economy requires.
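Continuing with the same toy sketch (again, purely illustrative rather than the actual method), the ‘pointer’ economy looks like this: the prototypical string is kept once, and each subsequent grammar is stored only as its initialisation vector, terminating digest and iteration count, to be regenerated on demand.

```python
# Purely illustrative, using toy_stateful_expand from the sketch above: keep the
# prototypical string once, and store each further variant only as its I.V.,
# terminating digest and iteration count ('pointers'). The expanded strings
# themselves are regenerated on demand rather than retained.

prototype = [1, 0, 1, 1, 0, 0, 1, 0]

stored = []                                   # lightweight vectors only
for iv in (0.11, 0.37, 0.93):
    expanded, digest = toy_stateful_expand(prototype, iv)
    stored.append({"iv": iv, "digest": digest, "depth": len(prototype)})
    # note: 'expanded' is deliberately discarded here

# Later: re-derive any variant from the prototype plus its stored vectors,
# and check it against the recorded digest.
record = stored[1]
regenerated, digest = toy_stateful_expand(prototype, record["iv"])
assert digest == record["digest"]
```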

We may then start to imagine a correlated artificial neuron consisting of a string with receiving and terminating nodes, each node populated by correlation vectors indexed alongside their iterative depth count. We may contemplate associative node members ranked by frequency of use, syntax, or any other category you might care to apply?

Modular arithmetic is the central pillar of number theory, number theorists being ‘Lords of the Rings’, if you will. Modular arithmetic can be characterised as sacrificing operational precision at the most significant digit.

Floating point operations, however, involve loss of precision at the least significant numerical placeholder.
Floating point numbers are quite rightly regarded as mathematical ‘hacks’: unpredictable numerical compromises that follow from the infinite Reals being shoehorned into finite representation. A distasteful reminder of reality best left to computer engineers, not appropriate to the uncompromising nature of pure math. Donald Knuth’s volumes on representations of number systems might be unkindly summed up as, ‘thar be dragons!’

That being said, well-conditioned floating point operations compliant with IEEE 754 will reliably return precision within ±1 unit in the last place (ULP).
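A two-line illustration of the contrast, using standard Python (math.ulp requires Python 3.9 or later): floating point error accumulates in the least significant places, while modular reduction discards the most significant part.

```python
import math

# Floating point: the error lives in the least significant places.
x = 0.1 + 0.2
print(x)               # 0.30000000000000004 -> about one ULP away from 0.3
print(math.ulp(0.3))   # the size of one unit in the last place at 0.3

# Modular arithmetic: reduction throws away the most significant part instead.
print((123456789 + 987654321) % 1000)   # 110 -> only the low-order digits survive
```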

The PRNG-Hash function presented on these pages harvests pseudo-randomness from floating point imprecision over a correlated hash of a binary string-file. By virtue of its state-dependent, forward-feed construction, it satisfies the strict avalanche criterion. It is a PRNG-expanding function that takes a fixed-length message and returns an expanded string comprised of the original message interleaved with error-correcting data. In addition to the expanded string, it also returns a correlated numerical message digest which, in tandem with the expanded file, can be inverted to reconstruct the original binary file.
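By way of a loose illustration of the general idea of harvesting pseudo-randomness from floating point imprecision (this is not the construction used by the PRNG-Hash itself), two algebraically equal expressions can round differently under IEEE 754, and the low-order bits of their encodings drift apart irregularly:

```python
# Loose illustration only, not the PRNG-Hash construction: two algebraically
# equal evaluations can round differently under IEEE 754, and the low-order
# bits of their binary encodings differ in an irregular, input-dependent way.
import struct

def low_bits(x):
    """Low 8 bits of the IEEE 754 double encoding of x."""
    return struct.unpack("<Q", struct.pack("<d", x))[0] & 0xFF

irregular = []
for i in range(1, 17):
    a = (0.1 * i) * 3.0      # round after 0.1*i, then again after *3.0
    b = 0.1 * (i * 3.0)      # i*3.0 is exact, so only one rounding occurs
    irregular.append(low_bits(a) ^ low_bits(b))
print(irregular)             # an irregular sequence driven purely by rounding
```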

Discussion has so far centred around cryptographic utility, but I wish to present this work in a different light: as a universal indexing function.

A conventional perfect hash of a fixed-length message relies upon mapping a set S of n elements onto a range of O(n) indices, and then mapping each index to a range of hash values. Obviously, the hash function itself requires O(n) storage space to hold its parameters k and p, and all of the second-level linear modular functions.
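For comparison, here is a minimal sketch of that conventional two-level construction (in the spirit of the FKS scheme): a first-level linear modular hash splits the n keys into n buckets, and each bucket of size b gets its own second-level linear modular hash into b² slots, with parameters re-drawn at random until the bucket is collision-free. The prime P and the key set are illustrative.

```python
# Minimal sketch of a conventional two-level perfect hash (FKS-style). The prime
# and keys are illustrative; real constructions also bound the total bucket sizes.
import random

P = 2147483647                      # a prime larger than any key used below

def build_perfect_hash(keys):
    n = len(keys)
    k = random.randrange(1, P)      # first-level parameter
    buckets = [[] for _ in range(n)]
    for x in keys:
        buckets[((k * x) % P) % n].append(x)

    second = []                     # one (k_i, m_i) pair per bucket
    for bucket in buckets:
        m = len(bucket) ** 2 or 1   # quadratic space makes collisions unlikely
        while True:
            ki = random.randrange(1, P)
            slots = {}
            if all(slots.setdefault(((ki * x) % P) % m, x) == x for x in bucket):
                break               # collision-free second-level hash found
        second.append((ki, m))
    return k, second

def lookup_index(key, k, second, n):
    i = ((k * key) % P) % n         # first level: which bucket
    ki, m = second[i]
    return i, ((ki * key) % P) % m  # second level: slot, unique per stored key

keys = [12, 57, 301, 999, 4242, 8675309]
k, second = build_perfect_hash(keys)
print([lookup_index(x, k, second, len(keys)) for x in keys])
```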

What then, is the message length cost of an injective but message-expanding function?

The ratio of error-free to imprecise inversion-recursion is asymptotic to 1:1.666, which means a total combined hashed-message length averaging 2.666× the original (one unit of original data plus 1.666 units of correction data), plus the floating point digest.

From consideration of the strict avalanche criterion, we may state that the function’s hash & digest are injective for a message of length (n).

What point therefore, is there to this construction?

The answer lies in the economy of the indexing cost, and in the ability to self-construct non-archetypal strings from a reference hash-string without recourse to another specific hash file, using only the reference archetype, an initialisation vector and a terminal digest. This process will be elaborated in future posts.

 

 

 

 


Alex De Castro recently posted this paper on arXiv. I suppose it represents a rigorous development of the work mentioned here & here previously, whilst also extending its scope by demonstrating a quantum information theory equivalence.
“Our result demonstrates by well-known theorems that existence of one-way functions implies existence of a quantum one-way permutation.”
The introduction & outline are relatively easy to follow. The proofs, for me, more difficult.

Link; Quantum one-way permutation over the finite field of two elements

Hound in the Hunt

October 31, 2016

That Obscura Object of Desire; a Verisimilitude of Truth?

While working up my next post, I thought I’d leave this article from the MONA blog here. (It’s worth watching the documentary ‘Tim’s Vermeer’ by way of background.) Putting aside David Walsh’s ego, the post covers a fascinating if meandering discussion between David, Tim Jenison and interviewer Elizabeth Pearce. It touches upon a range of subjects, which I posit as generally concerning the verisimilitude of truth in art, science & anthropology?

Mona Blog

In the gallery at Mona, there is an exhibition-experiment taking place, called Hound in the Hunt. Read more about it here, and also – for the enthusiastic – watch the documentary Tim’s Vermeer, and get your hands on our big, beautiful book as well (online, in our bookshop, or in the library, for free).

The following is a conversation between David Walsh and Tim Jenison about Vermeer, Viagra, and the nature of genius. (Interviewed by Elizabeth Pearce, with a cameo appearance by Mona curator Jarrod Rawlins.)

Hound in the Hunt
Photo Credit: Mona/Rémi Chauvin
Image Courtesy Mona, Museum of Old and New Art, Hobart, Tasmania, Australia

Elizabeth Pearce: David, in the exhibition catalogue for Hound in the Hunt, you write that even if you don’t give a shit about art you should watch Tim’s Vermeer, because it will teach you how to learn. What did…


Whorls of Attraction

October 1, 2016

I recently purchased this most excellent ‘Kickstarter’ project, a vintage-style Mandelbrot map, lovingly created by Bill Tavis: http://www.mandelmap.com.
He even went to the trouble of including a couple of the intrinsic attractor mappings…

mandelbrot_poster
Serendipitously, we were having a domestic Spring clean of accumulated detritus and I found this page among my personal effects. It dates from around 25 years ago. Ignoring the naïveté of the notation, screen co-ordinates & all, I thought I’d just leave this old print-out here for posterity before it gets trashed. It shows a graphic representation of the fractal reflection-translation used in my algorithm.
The bulbar cardioid of the Mandelbrot set may be famously familiar, but the actual attractor-escape mappings are not so commonly illustrated, or much commented upon…

The closer an iterated point is to the central cartesian symmetry points x(-1,1), y(0), then characteristically, the stronger is the attractor’s ‘gravity’ and the fewer the spiral limbs. Iterating points closer to the set’s escape boundary results in more complex spirals & increasingly chaotic ‘fingerprints’. From memory, the two centre shapes illustrated below resulted from points on or just within the bulb’s boundary.

madelmap
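For anyone wanting to reproduce the flavour of those mappings, a few lines of Python (a rough stand-in, not the original program) will iterate z → z² + c and expose the behaviour described above: points well inside the main cardioid fall quickly onto a single attracting fixed point, while points near a bulb boundary trace multi-limbed spirals.

```python
# Rough stand-in, not the original program: iterate z -> z^2 + c from z = 0 and
# record the orbit, to see attracting fixed points versus multi-limbed cycles.
def orbit(c, steps=200):
    z = 0j
    points = []
    for _ in range(steps):
        z = z * z + c
        points.append(z)
        if abs(z) > 2:                 # the point has escaped the set
            break
    return points

# Deep inside the main cardioid: fast spiral onto one attracting fixed point.
print(orbit(-0.1 + 0.1j)[-3:])
# Near the period-3 bulb: the tail of the orbit cycles through three 'limbs'.
print(orbit(-0.12 + 0.75j)[-6:])
```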
So I’ve discarded one Mandelbrot picture only to frame another…
Haven’t fully decided what to do with the framed poster. If it looks too ‘school-roomish’ on my study wall, I may end up donating it to my kid’s school. Perhaps there, it might stimulate some errant student’s curiosity into discovering that mathematics holds deeper mystery and wonder than any dry school syllabus could ever convey?

 

Albert Wenger, venture capitalist at Union Square Ventures, puts forward compelling arguments for a decentralised internet protocol layer. An idea whose time has come?
His proposal; blockchain, blockchain, blockchain…
He envisions the incentive for this tectonic tech shift arising from the value of a reserved token portion of any distributed crypto protocol(s).

I can see the enormous potential benefit in any decentralised IP protocol which enables semantic utility, but there already exists planetary-scale inertia in the capture of the status quo. And are there any better candidates than blockchain? That model has scaling issues, ledger overhead and, even more importantly, arbitrary semantic structure. A replacement with the self-same attendant problems handicapping HTTP, plus even some new unforeseen ones?
Correlated hashing, as per this site’s trapdoor scheme, is my two bobs’ worth… (this concept deserves its own explanatory post; I will see what I can do?)
Private crypto contracts from correlated hashings constitute an evolutionary step higher up the disruption hierarchy than any block-headed legerdemain…
How to start the revolution?

tumblr post; Crypto Tokens and the Coming Age of Protocol Innovation
by Albert Wenger


 

‘Roll your own crypto’ is an oft & casually tossed IT security pejorative… with good reason. Cryptography is complex. The security assumptions implicit within individual mathematical facets can easily cancel one another out when wielded indiscriminately.
Stretching the analogy further, one might also surmise that the quality of the security ‘smoke’ is very much & mightily dependent upon the type of leaf you’re rolling!

Presently, we find ourselves in the midst of an undeclared ‘Crypto-War 2.0’. The first casualty of war being truth, etc., perhaps the bona fides of the various legitimate actors are also worthy of examination? There is much misdirection & misinformation…
The major players occupy two corners of a supposedly three-cornered conflict. In one corner, the ‘Kong’-like proportions of the government-security State. In the other, the corporate Godzillas that are the trans-national entities such as Google, Facebook & Apple. Privacy & civil liberty interests are two chihuahuas called ‘EFF’ & ‘ACLU’, on leashes in the far corner.

A large amount of analysis has been written on the tensions between State-sponsored surveillance & privacy. Little attention, to date, has focused upon the corporate world’s vectors of self-interest. Shoshana Zuboff’s excellent article, ‘The Secrets of Surveillance Capitalism’, highlighted the conflicted posturing that underscores much collective corporate proselytising upon privacy matters.

Recently, fresh evidence emerged of the vertical integration between State & academia in support of surveillance (not that that should be surprising from an historical ‘spying’ perspective). See: Carnegie Mellon re: TOR de-anonymisation.
Perhaps what is surprising, though, is the State’s co-option of research into weaponised math, when it is so tightly tied up in support of an unparalleled expansion of dragnet-scale surveillance? This state of affairs prompted Phillip Rogaway of UC Davis to publish a plea last December for academic effort towards the protection of privacy: ‘The Moral Character of Cryptographic Work’.

And then there was the NSA’s ill-considered sabotage of the NIST Dual EC (Dual_EC_DRBG) standard.

At this year’s RSA Conference, Prof. Adi Shamir intimated at the dissonance between the supposed practical state of quantum computing and the NSA/NIST policy advice on the imperative for migration towards post-quantum cryptography standards. He conjectured that the NSA has likely made some advance (not related to quantum hardware) in breaking elliptic curve cryptography.
Well worth watching; his views on quantum crypto & the move away from ECC are at 30:00.

And so it goes…

On that note, I’d like to largely close out my interest in, and promotion of, cryptography on these pages. The efforts of this blog have been those of an honest (amateur) broker; the worth of the method(s) put forward remains for others to assess. I can state with certainty that no ‘Elliptic Flake’ was used in their manufacture!

As cryptography and complexity essentially represent two sides of the same coin, I would perhaps like, in the future, to make one or two very general posts about complexity issues as they relate to ‘free-lunch’ theorems and matters P & NP, as they apply to neuroscience. Beyond that, my work here is probably done.

Thank you for your interest!

NB: I’ve been pondering this post for some time, & ended up knocking it out in short order. Edits may appear subsequently.

Ancillary Diagram.

May 24, 2016

 

The following ‘circuit’ diagram might be helpful in following the logic flow of my algorithm.
An unbroken blue line shows the path of inheritance; a dotted line shows the active node’s interrogation of the preceding node, in order to determine the error locus relative to the default median inversion of blunt-precision regression. A grey line marks an excluded path.
Essentially, a 1-in-3 tree?
The designation of the interrogating node as x’ might be a little confusing in terms of order (it should perhaps be x?), but I wanted to avoid clutter with the base-level labels…

1INTHREETREE

Constituting a ternary tree, two bits are required to encode the a priori, inherited state.
One (control?) bit is assigned to indicate whether the tree is of median or lateral inheritance.
The other (target?) bit receives its value according to the L-R lateral branch to be encoded.
In the median-inheritance case, the target bit is ‘free’ to encode on some external data set, as the discriminant of the lateral inheritance case is superfluous in this instance.
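As a concrete reading of those two bits, here is a hedged sketch; the names, and the idea of where the ‘free’ payload bit comes from, are mine for illustration only.

```python
# Hedged sketch of the two-bit encoding described above. The control bit records
# median versus lateral inheritance; the target bit selects left/right in the
# lateral case, and in the median case it is free to carry one bit of some
# external data stream instead. All names here are illustrative.

MEDIAN, LATERAL = 0, 1
LEFT, RIGHT = 0, 1

def encode_step(inheritance, payload_bit=None, branch=None):
    """Return the (control_bit, target_bit) pair for one tree step."""
    if inheritance == MEDIAN:
        return (MEDIAN, payload_bit)   # target bit piggybacks external data
    return (LATERAL, branch)           # target bit encodes the L/R branch

def decode_step(control_bit, target_bit):
    if control_bit == MEDIAN:
        return ("median", {"payload": target_bit})
    return ("lateral", {"branch": "left" if target_bit == LEFT else "right"})

# Example: a median step carrying an external bit, followed by a right branch.
steps = [encode_step(MEDIAN, payload_bit=1), encode_step(LATERAL, branch=RIGHT)]
print([decode_step(c, t) for c, t in steps])
```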

One final technical point: the computationally intensive nature of my algorithm is in contrast to the intrinsic Boolean-algebraic efficiency of the method proposed by Alex De Castro. Perhaps my work is better considered on its merits as a PRG?

Quiz: perhaps someone would like to explain how the cumulative probability of the two lateral legs converges to 0.625?

Suggestions welcome, as always…