It would be nice to have a coded (or even pseudo-code) working prototype of this algorithm, but time and a lack of programming ability have conspired against me to date. In some ways it’s preferable to see the relationships between variables and their outputs laid out ‘on the page’, in columns of step iterations.

Obviously, column I. is the ‘step-wise’ operation of function f(x) & column L. is the inverse function, f-1(x).
Column J. is a rounded, truncated copy of I. upon which the inverse function operates. This step is critical to the assumption that the inverse function operates step-wise, within ±1 ULP (unit in the last place) of the previously calculated value of f(x). The inverse function must operate within the displayed range of precision and cannot use an extended range to arrive at a ‘perfect’ result.
Column M., therefore, presents the difference (∆) between the calculated results of f(x) & f-1(x).
Column A. is the absolute value of the ∆ displayed in N., forward cast two steps (n+2)*. The absolute value of ∆ is an index to the function output (& a variable fed back into the equation). ∆ = 0 indicates a ‘perfect’ regression of f-1(x), meaning the output (n+1)* of f(x)’s variable is free to be modulated according to ‘some external binary data set’ (S). ∆ = 1 indicates the regression is in error by ±1 ULP, and as such the binary-modulated output of f(x), (n+1), must instead be assigned to indicate the error of +1 or -1 ULP.
Column B. iterates some ‘mock’ binary data, representative of an ‘external binary data set’ undergoing coding/decoding. In this model the ‘data’ is merely the output of a randomized function, for display purposes only.
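For anyone coding this up, column B.'s mock stream can be reproduced with any seeded random source; this fragment (seed value invented, purely illustrative) just draws single bits:

```python
import random

rng = random.Random(42)  # fixed seed: the 'mock' data is for display only
mock_bits = [rng.getrandbits(1) for _ in range(500)]  # column B. stand-in
```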
Column C. is the binary operator assigned in correspondence with ∆ = +1 or -1.
Column D. is a static, somewhat arbitrary value, calibrated nevertheless to keep function outputs > 0.08. Smaller output values result in an increased magnitude of step-wise regression error.
Column F. is the forward-cast, binary-modulated variable, representative of column B. or C., as dictated by the ‘trapdoor’ table values of column A.
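In lieu of the working prototype mentioned above, the core of the column walkthrough can be sketched in Python’s decimal module. The key and the function f below are invented placeholders (the actual f(x) is not reproduced in this section), but the round/invert/∆ machinery is as described: the inverse only ever sees the 14-place displayed value.

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

getcontext().prec = 28          # working precision, well beyond the display
ULP = Decimal("1e-14")          # one unit in the last displayed place

# Invented placeholder key & function -- the actual f(x) is not given here.
KEY = Decimal("0.31415926535897")

def f(x):
    return (x / 3 + KEY) % 1                 # column I.: step-wise f(x)

def f_inv(y):
    return ((y - KEY + 1) % 1) * 3           # column L.: inverse of the above

def iterate(x0, steps=5):
    """Walk the columns: I. (f), J. (rounded copy), L. (inverse), M. (delta)."""
    rows, x = [], x0.quantize(ULP)
    for _ in range(steps):
        y      = f(x)
        y_disp = y.quantize(ULP, ROUND_HALF_EVEN)              # column J.
        x_back = f_inv(y_disp).quantize(ULP, ROUND_HALF_EVEN)  # column L.
        delta  = int((x - x_back) / ULP)     # column M., in ULPs
        rows.append((y_disp, delta))
        x = y_disp      # step-wise: only the displayed value feeds forward
    return rows
```

Feeding column J. (the rounded value) forward, rather than the full-precision result, is what keeps the inverse honest: it can never draw on more than the displayed 14 places.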

With a prescribed precision range of 14 decimal places, and given that the leading decimal place is zero and the second place-holder of significance is also unchanging, the hash function exhibits only around 96 bits of encryption strength, and that within a ‘key-set’ restricted to decimal numbers only. This is somewhat weak in terms of ‘worst case performance’ predicated upon a ‘lucky guess’. To account for this weakness, column E. presents an array of five static variables with place-holders in the least significant ten digits. These keys are iterated sequentially by the function. Not only do they increase encryption strength by extending the effective bit-range ~x5, but they also serve to split the ‘key function’ between consecutively iterated steps. This gives the algorithm approximately 480 bits of encryption strength over one ‘key cycle’.
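The five-key cycle is easy to sketch. The key values below are invented placeholders (the real column E. values are not reproduced here), differing only in their least significant digits as described:

```python
from decimal import Decimal
from itertools import cycle

# Invented placeholder keys -- NOT the actual column E. values.  Each
# differs from the others only in its least significant ten digits.
KEYS = [Decimal(k) for k in (
    "0.31410000000001",
    "0.31410000000002",
    "0.31410000000003",
    "0.31410000000004",
    "0.31410000000005",
)]

def run_key_cycle(x0, steps, step_fn):
    """Iterate step_fn, feeding it the next key in sequence each step,
    so the effective key is split across consecutive iterations."""
    ks = cycle(KEYS)
    x = x0
    for _ in range(steps):
        x = step_fn(x, next(ks))
    return x
```

One full ‘key cycle’ is five steps, so a brute-force attack has to recover all five keys together rather than one at a time.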
The function is therefore robust: inverting it without the keys is exponentially difficult.
I haven’t attempted to accurately calculate the algorithmic complexity but trust me… it’s huge. Anybody care to hazard the big ‘O’ notation?

Additional: Columns O., P., Q., R. isolate the four discrete bands of the function output arising from the ascribed binary variable inputs. Min-max summations at the bottom of the columns show these bands maintain discrete separation. Beneath columns B. & C. is a count of the frequencies corresponding to the absolute ∆ index for 0 & 1. Surprisingly, it appears to run at better than 1:3 where ∆ = 0.
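Both of those summaries are straightforward to reproduce in code, assuming the band labels and ∆ values have already been pulled out of the sheet:

```python
from collections import Counter, defaultdict

def delta_frequencies(deltas):
    """Tally of |delta| = 0 versus 1 (the counts beneath columns B. and C.)."""
    return Counter(abs(d) for d in deltas)

def bands_are_discrete(samples):
    """samples: (band, value) pairs for the four output bands (columns O.-R.).
    True when the [min, max] ranges of the bands do not overlap."""
    ranges = defaultdict(lambda: [float("inf"), float("-inf")])
    for band, value in samples:
        lo, hi = ranges[band]
        ranges[band] = [min(lo, value), max(hi, value)]
    ordered = sorted(ranges.values())
    return all(hi < nxt_lo for (_, hi), (nxt_lo, _) in zip(ordered, ordered[1:]))
```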

Sheet 2 presents a graph of the function’s output over 500 iterations.

*Forward cast: (n+2) relative to the product of the inverse function; (n+1) relative to function f(x).