[ prog / sol / mona ]

prog


LISP Puzzles

1 2020-04-03 16:19

I found a fun little paper while playing the ACM game in /prog/98. It's just four easy LISP puzzles; I'll copy them below for your amusement. Replace "DE" with "defun", obviously.

(Hofstadter) Write a LISP function to generate the following set of integers (1 3 7 12 18 26 35 45 56 ...

(Hofstadter) This is the function g. What is she doing?

(DE g (n)
  (IF (ZEROP n) 0
      (- n (g (g (SUB1 n))))))

"and what about the function q?"

(DE q (n)
  (COND ((= n 1) 1)
        ((= n 2) 1)
        (T (+ (q (- n (q (SUB1 n))))
              (q (- n (q (- n 2))))))))

(Bratley, Milo) This is the function FOO. What is she doing?

(DE foo NIL
  (PRINT (APPEND (QUOTE (DE foo))
                 (CDR (GET (QUOTE foo)
                           (QUOTE EXPR)))))
  (PRINT (QUOTE (foo))))
(foo)

(Goossens) This is the function BAR. What is she doing?

(DE bar (l x)
  (IF (NULL l) x
      (bar (REVERSE (CDR l)) (CAR l))))

link: https://dl.acm.org/doi/10.1145/1411829.1411830
I never knew functions were female also 100 GET
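
For anyone who wants to play along without a LISP at hand, here are direct Python transcriptions of g and q (names kept; memoization added so they don't crawl; (SUB1 n) is n - 1):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def g(n):
    # (DE g (n) (IF (ZEROP n) 0 (- n (g (g (SUB1 n))))))
    return 0 if n == 0 else n - g(g(n - 1))

@lru_cache(maxsize=None)
def q(n):
    # (DE q (n) (COND ((= n 1) 1) ((= n 2) 1) (T ...)))
    return 1 if n in (1, 2) else q(n - q(n - 1)) + q(n - q(n - 2))
```

No spoilers: what these two are actually doing is still the puzzle.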

2 2020-04-03 23:58

That Hofstadter sequence is actually a pretty cool code golf. I'll submit something.

3 2020-04-05 02:29

>>2
Ok, try to beat this: O(n) in time. I don't think you can make it consume much less than O(n) memory; if you can, I'd be curious to know how. I'm sure the program can be much shorter in another language.

int main(int argc, char *argv[])
{
    int n = atol(argv[1]); unsigned long R[n]; R[0] = 1;
    if (n > 1) {
        R[1] = 3; int S = 2; int P = 1;
        for (int i = 1; i < n; i++) {
            R[i] = R[i - 1] + S;
            ++S  == R[P] && S++;
            R[P] < S && P++;
        }
    }
    for (int i = 0; i < n; i++) {
        printf("%lu\n", R[i]);
    }
}

It outputs the Hofstadter sequence like this:

 ./a.out 20
1
3
7
12
18
26
35
45
56
69
83
98
114
131
150
170
191
213
236
260

4 2020-04-05 02:35 *

#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
    int n = atoi(argv[1]); unsigned long R[n]; R[0] = 1;
    if (n > 1) {
        R[1] = 3; int S = 2; int P = 1;
        for (int i = 1; i < n; i++) {
            R[i] = R[i - 1] + S;
            ++S  == R[P] && S++;
            R[P] < S && P++;
        }
    }
    for (int i = 0; i < n; i++) {
        printf("%lu\n", R[i]);
    }
}

edit: forgot to include stdio and stdlib :)

5 2020-04-05 11:19 *

The space complexity can actually be brought below O(n) with 2 dynamically allocated vectors.

E.g. at step H(21):

                 P
                 |
R = 1 3 7 12 18 26 35 45 56 69 83 98 114 131  150  170 191 213 236 260
S = 2 4 5  6  8  9 10 11 13 14 15 16  17  19   20   21  22  23  24  25

We know that

R(21) = R(20) + S(20)
S(21) = S(20) + 1, if S(20) + 1 ∉ R(1,...,20)
S(21) = S(20) + 2, if S(20) + 1 ∈ R(1,...,20)

The previous algorithm allocated an array of size 21 and kept a pointer P to R(5)=26, the next value to skip when S(n - 1) + 1 reaches it.

But to compute the next P = S(6) = R(5) + S(5) = 35 we only need to store

R = 1 3 7 12 18 26
S = 2 4 5  6  8  9

(we can't just store the last two values, because computing S(n + 1) would require backtracking and time complexity will explode)
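
A minimal Python sketch of that idea (names are mine): the R and S prefixes are the two vectors, and they grow only as far as the skip pointer needs them, so the auxiliary state stays around sqrt(n) entries.

```python
def hofstadter(n):
    """Yield the first n values of the R sequence.

    Only a prefix of R and S is stored: the part the skip pointer P
    has not yet consumed.  Everything else is two scalars, r and s."""
    R = [1, 3]          # stored prefix of R: the values S must skip over
    S = [2, 4]          # matching prefix of S, used to extend R on demand
    r, s = 1, 2
    P = 1               # index into R of the next value S will collide with
    for _ in range(n):
        yield r
        r += s
        s += 1
        if s == R[P]:   # s landed on an R value: jump over it
            s += 1
            P += 1
            if P == len(R):                  # grow the prefixes on demand
                R.append(R[-1] + S[-1])
                nxt = S[-1] + 1
                S.append(nxt + 1 if nxt in R else nxt)
```
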

6 2020-04-05 11:25 *

Also the exercise asked for a LISP function.

7 2020-04-05 14:01

Here's my godawful Scheme solution. I can't even figure out how to fold the first two iterations into the general case; someone please help?

(define (hofstadter+ n)
  (let* ((store (cons 7 's))
         (tail store)
         (r 7)
         (s 5))
    (let loop ((n n))
      (unless (zero? n)
        (display r)
        (newline)
        (set! r (+ r s))
        (set-cdr! tail (cons r 's))
        (set! tail (cdr tail))
        (if (>= (+ 1 s) (car store))
            (begin
              (set! s (+ s 2))
              (set! store (cdr store)))
            (set! s (+ s 1)))
        (loop (- n 1))))))

(define (hofstadter n)
  (when (> n 1) (display "1\n"))
  (when (> n 2) (display "3\n"))
  (when (> n 3) (hofstadter+ (- n 2))))

The space complexity is O(n): S(n), the distance between R(n) and R(n+1), grows monotonically, so it becomes increasingly uncommon that a previously computed value can be discarded.

8 2020-04-05 14:23

There is no need for any dynamic allocation.
http://void.wikidot.com/code:hofseq-c

9 2020-04-05 17:00

>>8
The nostalgia! I salute you!

I hope you won't mind if I copy and paste your submission here, turning off the space-saving FV indentation style for a less innovative one that, for unknown reasons, some of us still find more readable:

 /* hofseq.c by FrozenVoid */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
uint64_t hofseq_ff(uint64_t N) {
    uint64_t R = 1, S = 2, P = 1, t = 0;
    for (uint64_t i = 1; i < N; i++) {
         R += S++;
         t = P > 1 ? hofseq_ff(P) : 3;
         S += (S == t); P += (t < S);
     }
     return R;
}
int main(int argc, char**argv) {
    uint64_t Num = argc != 2 ? 0 : strtoull(argv[1], NULL, 10);
    for(uint64_t i = 0; i < Num; i++) printf("%lu,", hofseq_ff(i+1)); return 0;
}

You're essentially trading memory usage for time complexity (and time complexity did explode with the recursive calls to hofseq_ff: I non-figureofspeechously left the computer and fixed myself a coffee while ./hofseq_ff 10000 ran, a computation >>4 finishes in a couple of milliseconds).

On a side note, avoiding dynamic allocation while implementing the solution described in >>5 means finding the optimal size of the statically allocated arrays R and S, and that isn't immediately obvious to me.

>>7
Finally a Lisp solution. Unfortunately I can't try it right now on this computer. And who's running for the shortest now? Where are the Perl Monks?

I can't even figure out the general case for the first two iterations, someone please help?

R1 = 1, from there you derive S1 as the smallest integer not already used in R or S. So S1 = 2 and R2 = R1 + S1.
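
Spelled out as a deliberately naive transcription of that rule (function name is mine, no attempt at efficiency):

```python
def hofstadter_naive(n):
    """R(1) = 1; S(k) is the smallest integer not yet used in R or S;
    R(k+1) = R(k) + S(k).  O(n) memory and a linear scan for each S."""
    R = [1]
    used = {1}          # every integer already appearing in R or S
    s = 2
    while len(R) < n:
        while s in used:        # smallest integer not already used
            s += 1
        used.add(s)             # s becomes the next S value
        R.append(R[-1] + s)
        used.add(R[-1])
        s += 1
    return R
```
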

10 2020-04-05 17:46

>>9 I've updated the code; now it just prints the Nth number.
It's much faster now, no arrays required.

11 2020-04-05 17:46

>>8,9
It does not trade memory usage for time complexity; it uses the stack implicitly. The call stack at its deepest will be the same length as the ``store'' in >>7, which is already really close to the memory use in >>5. You can modify any solution to keep track of how many values are actually skipped and can be discarded. For R(100000), it is 430. It's simply not worth it.

12 2020-04-05 17:54

>>11
Yep it's linear in time and awfully fast, well done!

13 2020-04-05 18:34

>>7
Actually I can ssh into a machine with Guile. I couldn't resist. It's fast, and Scheme doesn't have C's limitation on int size.

14 2020-04-05 19:45

Intuitively it seems fairly obvious that the number of R values touched in the S computation grows as O(n^(0.5+epsilon)), where epsilon is left as an exercise for the reader.

15 2020-04-05 22:50

The O(n^(0.5+epsilon)) space consumption is actually fairly easy to achieve while keeping the process iterative, contrary to the doubts expressed by >>3. I'll post some code after I clean up the variable names a bit to make it easier to follow.

16 2020-04-06 03:22

The newest version of http://void.wikidot.com/code:hofseq-c adds a small const array for values below 256, which makes it several times faster (because these values were recalculated at each call to hofseq_ff):

~/Code>time ./hofseq-old 2000000000
2000083969706219361
real 0m20.092s
user 0m20.050s
sys 0m0.008s

~/Code>time ./hofseq 2000000000
2000083969706219361
real 0m6.980s
user 0m6.978s
sys 0m0.000s

17 2020-04-06 03:38

hofseq-old code for reference:

/* hofseq.c v1.03 by FrozenVoid  */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
//version without array: hofseq-old
uint64_t hofseq_ff(uint64_t N) {
    uint64_t R = 1, S = 2, P = 1, t = 0, Plast = 0;
    for (uint64_t i = 1; i < N; i++) {
        R += S++;
        if (Plast != P) { t = P > 1 ? hofseq_ff(P) : 3; }
        S += (S == t); Plast = P; P += (t < S);
    }
    return R;
}
uint64_t hofseq(uint64_t N) {
    uint64_t R = 1, S = 2, P = 1, t = 0, Plast = 0;
    for (uint64_t i = 1; i < N; i++) {
        printf("%lu,", R);
        R += S++;
        if (Plast != P) { t = P > 1 ? hofseq_ff(P) : 3; }
        S += (S == t); Plast = P; P += (t < S);
    }
    return R;
}

int main(int argc, char **argv) {
    // Syntax: hofseq Num1 [Num2]
    // if second argument, print sequence up to Num2
    if (argc == 3) { hofseq(strtoull(argv[2], NULL, 10)); }
    // print hofseq Nth number (Num1)
    if (argc >= 2) { printf("%lu", hofseq_ff(strtoull(argv[1], NULL, 10))); }
    return 0;
}

18 2020-04-06 10:57

Here is a moderately efficient implementation using O(n^(0.5+epsilon)) space and an iterative computational process. It is based on two simple observations.

The first is that the S sequence can be partitioned into contiguous groups, with jumps only at the group boundaries. A group carries no more information than its starting value and length, and can therefore be represented by such a pair, effectively run-length encoding the S sequence.

The second is that, since the gaps in the S sequence are given by the R values, and since R is just a running sum over S, the S-groups can be computed directly from the S sequence, which can thus generate itself without any help from R. Combining these observations yields:

def work_seq (n):
   # heuristic size
   groups = [None] * int (2 * n ** 0.5)
   groups [0] = (2, 1)
   groups [1] = (4, 3)

   group_count  = 2
   index_groups = 1
   used_group   = 1
   lastR        = 3

   start, count = groups [index_groups]
   covered = 2 + count
   S       = start
   length  = count

   while covered < n:
      a = S
      b = S + length
      # R += [a...b)
      lastR += (b * (b - 1) - a * (a - 1)) // 2

      S += length + 1

      if used_group < count:
         length = start + used_group - 1
         used_group += 1
      else:
         index_groups += 1
         start, count = groups [index_groups]
         length = start - 1
         used_group = 1

      groups [group_count] = (S, length)
      group_count += 1
      covered     += length

   # print (groups)
   print ("groups:", group_count, "index:", index_groups)

   a = S
   b = S + length - (covered - n)
   # R += [a...b)
   R = lastR + (b * (b - 1) - a * (a - 1)) // 2

   return R

This is an inner helper function that is only called by its wrapper for n>=3. The function can be easily modified to yield all the R values to its caller. One could also snapshot the state of the computation and resume it later.

Using the convention that R(1)=1 and R(2)=3, the first n of the form 10^k for which R(n) no longer fits into a u64 is ten billion (10^10), so I propose that we use n=10^10 to compare implementations. The value of R(10^10) is
50000940106333921296
While computing it the function stored 141082 group entries, about 1.41*sqrt(n), out of which 512 were used to compute group lengths, about 1.62*n^0.25.

$ python3 -m flake.hofs seq 10000000000
groups: 141082 index: 512
10000000000: 50000940106333921296 0x2B5E7061C419DA010

The computation takes just over 70 milliseconds. While this can be further improved, keep in mind that this does not use any special libraries like numpy or gmpy2, and that it is run by the default CPython interpreter. A port to C with GMP should easily bring down the time.

$ python3 -m timeit -s 'import flake.hofs as mod' 'mod.work_seq (10000000000)'
10 loops, best of 3: 71.3 msec per loop

I'll post some data about the epsilon values in a bit. I hope other Anons will post their timings for R(10^10).

>>16
Thanks for posting timings. Here is confirmation of your R(2*10^9) value:

$ python3 -m flake.hofs seq 2000000000
groups: 63021 index: 340
2000000000: 2000083969706219361 0x1BC1B9C6074AD761

and here is the runtime in my implementation, just over 30 milliseconds:

$ python3 -m timeit -s 'import flake.hofs as mod' 'mod.work_seq (2000000000)'
10 loops, best of 3: 30.5 msec per loop
19 2020-04-06 17:03 *

>>18
Very nice insight!

One could also snapshot the state of the computation and resume it later.

Isn't this only true if you save all the R values, which makes the space complexity O(n) again?

20 2020-04-06 20:18

>>19

Isn't this only true if you save all the R values, which makes the space complexity O(n) again?

No. My implementation neither needs nor stores any R values other than the last one. The S sequence generates itself. The only non-scalar in the entire computation is the 'groups' list, which is just a touch over sqrt(n) in length.

21 2020-04-06 21:12 *

>>20
You store the first 2 * n ** 0.5 R values encoded in your groups list. If you computed work_seq(10000) and dumped its internal state before returning, you couldn't just restore it and continue to R(20000), you would have to recompute the R values that were not saved.

22 2020-04-06 23:08

It is fairly obvious that the space consumption can be brought down to O(n^0.25). I might write that version next since the current version struggles to go above R(10^15). Or I might post the epsilon analysis first, since there is already enough data to empirically show that epsilon is zero.

>>21

You store the first 2 * n ** 0.5 R values encoded in your groups list.

This is false. No R value is ever stored other than the last one. The groups are not R values. The groups are (start, length) pairs of gapless S blocks.

If you computed work_seq(10000) and dumped its internal state before returning, you couldn't just restore it and continue to R(20000), you would have to recompute the R values that were not saved.

This is entirely false due to your misreading of the 'groups' list. No R value is ever stored other than the last one. No R value is ever used to compute the S sequence. The S sequence generates itself -- this observation is key to the group technique's efficiency.
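
To make the self-generation concrete, here is a standalone sketch, with my own names rather than the code from >>18: the k-th gapless block of S starts just past R(k) and has length S(k) - 1, so the block lengths are read back from the stream's own earlier output.

```python
from collections import deque

def hofstadter_S():
    """Yield the S sequence from its own run-length structure.

    No R value is stored; the single R value inside each gap is simply
    stepped over when advancing from one block to the next."""
    out = deque()                 # emitted S values not yet used as lengths
    for v in (2, 4, 5, 6):        # blocks 1 and 2, seeded by hand
        out.append(v)
        yield v
    out.popleft(); out.popleft()  # S(1) and S(2) were spent on the seeds
    start = 8                     # block 3 begins after the gap at R(3) = 7
    while True:
        length = out.popleft() - 1         # block k has length S(k) - 1
        for v in range(start, start + length):
            out.append(v)
            yield v
        start += length + 1                # skip the one R value in the gap
```

The first twenty values reproduce the S row of the table in >>5.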

23 2020-04-07 03:11

Here is the epsilon data I promised. The columns of the table, in order, are:
1. The exponent k that yields n=10^k.
2. The value of R(10^k). Please verify the value of R(10^15) with other implementations.
3. The number of S-groups in the S sequence.
4. The ratio between column 3 and sqrt(n).
5. The number of S-groups used to generate the S sequence.
6. The ratio between column 5 and n^0.25.

 1                             69        4 1.264911    1 0.562341
 2                           5764       12 1.200000    3 0.948683
 3                         526334       41 1.296534    7 1.244796
 4                       50874761      133 1.330000   13 1.300000
 5                     5028514748      431 1.362942   25 1.405853
 6                   500918449417     1384 1.384000   47 1.486271
 7                 50029364025646     4416 1.396462   87 1.547103
 8               5000934571821489    14039 1.403900  157 1.570000
 9             500029664631299264    44534 1.408289  285 1.602673
10           50000940106333921280   141082 1.410820  512 1.619086
11         5000029765607020822528   446604 1.412286  920 1.636017
12       500000941936492081577984  1413120 1.413120 1647 1.647000
13     50000029798618762867376128  4470180 1.413595 2944 1.655533
14   5000000942529842273283211264 14138641 1.413864 5255 1.661777
15 500000029809255645132191956992 44715123 1.414016 9372 1.666603

Three empirical conclusions can be drawn. Formal proof is left as an exercise for the reader.

I. The R values are increasingly good approximations of n*n/2 from above, as we would expect from a running sum. For example, for n=10^15 the distance is less than 6 parts in one hundred million.

II. The ratio in the fourth column will converge to a constant slightly above 1.4 but below 1.5. This means that the space consumption is bounded from above by a constant multiple of sqrt(n), so it is no worse than O(sqrt(n)) and the corresponding epsilon is zero.

III. The ratio in the sixth column will converge to a constant slightly above 1.6 but below 2. This means that the number of S-groups used to generate the S sequence is bounded from above by a constant multiple of n^0.25, so it is no worse than O(n^0.25) and the corresponding epsilon is zero.

I might write a faster version to get above R(10^15), if I find the time.

24 2020-04-07 23:06

Actually, the O(n^0.25) space version is easy to write and therefore boring. I have a better idea that will bring down space consumption a lot further, so I'll just write that instead.

25 2020-04-08 10:04

The new and improved version of >>18, based on the skipped version of >>24, now generates the Hofstadter R sequence in logarithmic space. I was going to post this version but while polishing it up I hit upon another trick that can be used to bring down the runtime, so I'll add that first. Since RAM size is now irrelevant, as opposed to the limiting factor it was for >>18, here is the value of R(10^16) to extend the table from >>23:

$ python3 -m flake.hofs seq2 10000000000000000
levels: 6
10000000000000000: 50000000942720152238624116795401 0x27716B7684EB9129B5951936809

Please verify this value using other implementations.

26 2020-04-09 03:49

[1/2]
There is a bug in gawk 4.1.4, which was used to generate the table in >>23; it causes the second column, that of the R values, to be wrong starting with k=9. You can see that the value of R(10^10) stated in >>18, which is the correct value from the raw python output, differs from the value in >>23 for k=10. The values from the raw python output are correct; only the R column of >>23 is wrong from k=9 on. Here is the raw python output with the correct values:

$ cat data.txt
groups: 4 index: 1
10: 69 0x0000000000000045
groups: 12 index: 3
100: 5764 0x0000000000001684
groups: 41 index: 7
1000: 526334 0x00000000000807FE
groups: 133 index: 13
10000: 50874761 0x0000000003084989
groups: 431 index: 25
100000: 5028514748 0x000000012BB90BBC
groups: 1384 index: 47
1000000: 500918449417 0x00000074A110F509
groups: 4416 index: 87
10000000: 50029364025646 0x00002D805E78992E
groups: 14039 index: 157
100000000: 5000934571821489 0x0011C452D0B2C9B1
groups: 44534 index: 285
1000000000: 500029664631299282 0x06F07654A9819CD2
groups: 141082 index: 512
10000000000: 50000940106333921296 0x2B5E7061C419DA010
groups: 446604 index: 920
100000000000: 5000029765607020319048 0x10F0D5A2486CA185148
groups: 1413120 index: 1647
1000000000000: 500000941936492050650505 0x69E11AF9D4B68A281589
groups: 4470180 index: 2944
10000000000000: 50000029798618763894670256 0x295BEB0BEDFC4C3D3B43B0
groups: 14138641 index: 5255
100000000000000: 5000000942529842698007077786 0x1027E762374D2B62E383E39A
groups: 44715123 index: 9372
1000000000000000: 500000029809255627825531266625 0x64F9654B819C0B0427AA32641

Here is a diff between the values from the raw python output and the R column of >>23:

$ diff <(grep -e ' 0x' data.txt | cut -d ' ' -f 2) <(sed -r -e 's/^ *[0-9]+ +([0-9]+) +.+$/\1/' epsilon.txt)
9,15c9,15
< 500029664631299282
< 50000940106333921296
< 5000029765607020319048
< 500000941936492050650505
< 50000029798618763894670256
< 5000000942529842698007077786
< 500000029809255627825531266625
---
> 500029664631299264
> 50000940106333921280
> 5000029765607020822528
> 500000941936492081577984
> 50000029798618762867376128
> 5000000942529842273283211264
> 500000029809255645132191956992
27 2020-04-09 03:50

[2/2]
Gawk 4.1.4 claims GNU MP support:

$ gawk --version
GNU Awk 4.1.4, API: 1.1 (GNU MPFR 4.0.1, GNU MP 6.1.2)
Copyright (C) 1989, 1991-2016 Free Software Foundation.

but it mangles large numbers if they are printed as numbers rather than strings. A minimal test case:

$ echo "50000940106333921296" | gawk '{print $1}'
50000940106333921296
$ echo "50000940106333921296" | gawk '{printf "%d\n", $1}'
50000940106333921280

What happens is that gawk 4.1.4 rounds the number off around the 16th significant hex digit and drops the lower bits:

>>> hex (500029664631299282)
'0x6f07654a9819cd2'
>>> hex (500029664631299264)
'0x6f07654a9819cc0'
>>> hex (50000940106333921296)
'0x2b5e7061c419da010'
>>> hex (50000940106333921280)
'0x2b5e7061c419da000'
>>> hex (500000029809255627825531266625)
'0x64f9654b819c0b0427aa32641'
>>> hex (500000029809255645132191956992)
'0x64f9654b819c0c00000000000'
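
For what it's worth, the pattern is exactly what you get by squeezing the value through an IEEE-754 double, which keeps 53 mantissa bits and zeroes everything below; Python reproduces both mangled values:

```python
# Round-tripping through a double reproduces the mangled gawk output:
# the low bits are rounded away, down in the first case, up in the second.
print(int(float(50000940106333921296)))             # 50000940106333921280
print(int(float(500000029809255627825531266625)))   # 500000029809255645132191956992
```
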

Somewhat surprisingly this already happens for R(10^9) even though it fits into an s64. Here is the corrected table with the R column matching the raw python output:

 1                             69        4 1.264911    1 0.562341
 2                           5764       12 1.200000    3 0.948683
 3                         526334       41 1.296534    7 1.244796
 4                       50874761      133 1.330000   13 1.300000
 5                     5028514748      431 1.362942   25 1.405853
 6                   500918449417     1384 1.384000   47 1.486271
 7                 50029364025646     4416 1.396462   87 1.547103
 8               5000934571821489    14039 1.403900  157 1.570000
 9             500029664631299282    44534 1.408289  285 1.602673
10           50000940106333921296   141082 1.410820  512 1.619086
11         5000029765607020319048   446604 1.412286  920 1.636017
12       500000941936492050650505  1413120 1.413120 1647 1.647000
13     50000029798618763894670256  4470180 1.413595 2944 1.655533
14   5000000942529842698007077786 14138641 1.413864 5255 1.661777
15 500000029809255627825531266625 44715123 1.414016 9372 1.666603

Sorry about the wrong R column in >>23, I did not expect a bug in gawk.

28 2020-04-09 09:47 *

>>26,27
Isn't it a bug in printf?

% echo "50000940106333921296" | nawk '{print $1}'
50000940106333921296
% echo "50000940106333921296" | nawk '{printf "%d\n", $1}'
-9223372036854775808
29 2020-04-09 09:55 *

I mean "%d" specifies a signed int whose size is only guaranteed to be at least 16 bits.
"%llu" is unrecognized and "%lu" yields

$ echo "50000940106333921296" | gawk '{printf "%lu\n", $1}'
5.00009e+19
30 2020-04-09 11:47

>>29
The issue was that I naively assumed large integer support to be permanently active. It needs to be explicitly requested with -M or gawk will mangle your large integers. Since the -M option effectively means "do not mangle my large integers", the gawk authors might call this a "feature" rather than its implicit absence a bug.

$ echo "50000940106333921296" | gawk '{printf "%d\n", $1}'
50000940106333921280
$ echo "50000940106333921296" | gawk -M '{printf "%d\n", $1}'
50000940106333921296

https://www.gnu.org/software/gawk/manual/html_node/Arbitrary-Precision-Integers.html

31 2020-04-09 12:16

>>30

the -M option effectively means "do not mangle my large integers"

M like "do not Mangle", thanks I will remember it now!

32 2020-04-09 19:55

The new version with both tricks >>25 added works and is considerably faster than >>18. I just need to write up some sort of description. As a preview, here is R(10^20):

k 20 index 100000000000000000000
levels: 5
100000000000000000000 -> 5000000000942800098290022420982686040347 0xEB194F8ED94AC565124861C3B7D0C411B 132
33 2020-04-10 05:19

>>32
Does it precalculate sum(1...N) and subtract the S-group to S-group transition points (the R's) to calculate R?

34 2020-04-10 11:46

>>33

precalculate sum(1...N)

The final S value is not equal to N. You would need the actual value of S(N-1).

subtract the S-group to S-group transition points (the R's)

No. That is a great idea, but it would require visiting every R value that is used as an S gap, one at a time. That would run in O(sqrt(n)). That runtime was already achieved in >>18 with the complement of your method, although your way would give a better constant factor. My new technique is much faster than O(sqrt(n)) and is a descendant of the one I used in >>18. I will post an explanation and the code when I finish the write-up; it's just that doing write-ups is boring compared to writing code, so it's slow going.

If you have an implementation of the technique in your question, please post the value of R(10^10) and the timing to compute it, so I can compare it with >>18. If you are willing, please link/post the code as well.

35 2020-04-13 00:01

[1/4]
The new and improved version of >>18 and >>25 uses stacked self-generation of the S sequence and second-order S-groups to achieve O(log(log(n))) space consumption and O(n^0.25) runtime.

The S sequence already generated itself in >>18, and to do so it used a portion of itself proportional to the square root of the number of items it was asked to produce. The natural idea is to apply self-generation to the generator sequence, to bring down the retained portion of S to n^(1/4). Adding another generator on top brings it down to n^(1/8). Dynamically adding a sufficient number of generators, until the topmost generator is never asked to produce its next element, eliminates the retained portion entirely.

All the state that is kept is a single entry at the leading edge of each level of generation. This means that one entry is kept for S, one for the generator that will end at n^(1/2), one for the generator that will end at n^(1/4), one for n^(1/8) and so on. Additionally, a generator only comes into existence when its second item is requested, and is not stored anywhere prior to that. This means that the number of generators increases by 1 when n is squared. The function that increases by 1 when n is doubled is log(n), while the one that increases by 1 when n is squared, which means that the exponent that yields n is doubled, is log(log(n)). Therefore the space consumption of this algorithm is O(log(log(n))). This is the stacked self-generation of the S sequence.

Further up the thread I mistakenly understated the space efficiency of the algorithm by using a single logarithm. The data table will back up the relationship between the number of generators and n. I am fairly certain that this O(log(log(n))) is the theoretical minimum for the space consumption for generating the R sequence, but formal proof is left as an exercise for the reader.

Care must be taken with this technique in order to ensure that the computational process of generation remains linear instead of degenerating into a tree-recursive one. This is achieved by asking any one generator to produce one particular item exactly once. No generator is ever asked twice for the same item. When generator k needs the next item from generator k+1 to satisfy a request from k-1, it asks k+1 for this item exactly once, remembers it, and uses it to satisfy an entire group of requests from k-1. Only after k-1 has exhausted an entire group of requests to k will k ask k+1 for another item. Without this trick the process would indeed degenerate into tree recursion, but with it each generator walks its own portion linearly. Because the number of items yielded by a generator is the square root of the number of items yielded by the preceding generator, the total work is
q + sqrt(q) + sqrt(sqrt(q)) + sqrt(sqrt(sqrt(q))) + ...
where q denotes the number of items yielded by the lowest generator. The series cuts off as soon as a value drops below 2, because a generator is only stored once its second item is requested. With this the sum can easily be seen to be bounded from above by 2*q, and since it is evidently bounded from below by q, it follows that it belongs to O(q). In our case q will be n^(1/4).
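
The cutoff bound is easy to check numerically for a concrete q (the value below is arbitrary):

```python
# Sum q + sqrt(q) + sqrt(sqrt(q)) + ..., cutting off once a term drops
# below 2, and confirm the total stays between q and 2*q.
q = 10 ** 8
total, term = 0.0, float(q)
while term >= 2:
    total += term
    term = term ** 0.5
print(q <= total < 2 * q)   # True; for this q the total is barely above q
```
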

In >>18 the S sequence was partitioned into groups and one group was folded into the R value in one step. These are the first-order S-groups and they give >>18 a runtime of O(n^0.5). In this new version the stream of S-groups is again partitioned into groups, which are now groups of groups of S entries, and one group of groups of S entries is folded into the R value in one step. These are the second-order S-groups and they cause the lowest generator to produce q=n^0.25 items. Since the process is linear in q, it follows that the runtime is O(n^0.25).

36 2020-04-13 00:02

[2/4]
With these explanations the code should be easy to follow:

def seq2_next (levels, level):
   if level < len (levels):
      state = levels [level]
      start, count, used, S = state

      if used < count:
         length = start + (used - 1)
         state [2] = used + 1
         state [3] = S + (length + 1)
         return (S, length)
      else:
         start, count = seq2_next (levels, level + 1)
         length = start - 1
         state [0] = start
         state [1] = count
         state [2] = 1
         state [3] = S + (length + 1)
         return (S, length)
   else:
      levels.append ([4, 3, 2, 13])
      return (8, 4)

def seq3_sum1sc (s, c):
   # start, count
   return (c - 1 + s + s) * c // 2

def seq3_covered (s, c):
   # start, count
   return (c - 3 + s + s) * c // 2

def seq3_inc (s, c, S):
   # start, count, S

   # incS   = (c - 1 + s + s) * c // 2
   # countS = (c - 3 + s + s) * c // 2
   cm1    = c - 1
   incS   = (cm1 + s + s) * c // 2
   countS = incS - c

   overR = (incS - 1) * incS // 2
   # backR = (s - 1) * c + s * (c - 1) * c // 2 + (c - 1) * c * (c + 1) // 6
   cm1c  = cm1 * c
   backR = (s - 1) * c + s * cm1c // 2 + cm1c * (c + 1) // 6
   incR  = overR - backR + S * countS
   return (incR, incS)

def work_seq3 (n):
   seq3covered = seq3_covered
   seq3inc     = seq3_inc
   seq3next    = seq2_next
   seq3sum1sc  = seq3_sum1sc

   levels       = []
   R, S         = 3, 4
   start, count = 4, 3
   advance      = seq3covered (start, count)
   covered      = 2 + advance

   while covered <= n:
      incR, incS = seq3inc (start, count, S)
      R += incR
      S += incS
      start, count = seq3next (levels, 0)
      advance      = seq3covered (start, count)
      covered     += advance

   print ("levels:", len (levels))
   covered     -= advance
   fakelevels   = [[start, count, 0, S]]
   start, count = seq3next (fakelevels, 0)
   covered     += count

   while covered < n:
      R += seq3sum1sc (start, count)
      start, count = seq3next (fakelevels, 0)
      covered     += count

   R += seq3sum1sc (start, count - (covered - n))
   return R

The first integer k for which R(10^k) overflows a u64 is k=10. The value of R(10^10) is
50000940106333921296
and, while this computation took about 70 milliseconds in >>18, it now takes about 850 microseconds in vanilla python:

$ g 10
k 10 index 10000000000
levels: 4
10000000000 -> 50000940106333921296 0x2B5E7061C419DA010 66
$ python3 -m timeit -s 'import flake.hofs as mod' 'mod.work_seq3 (10000000000)'
1000 loops, best of 3: 841 usec per loop

The first integer k for which R(10^k) overflows a u128 is k=20. The value of R(10^20) is
5000000000942800098290022420982686040347
and it is computed in 300 milliseconds in vanilla python:

$ g 20
k 20 index 100000000000000000000
levels: 5
100000000000000000000 -> 5000000000942800098290022420982686040347 0xEB194F8ED94AC565124861C3B7D0C411B 132
$ python3 -m timeit -s 'import flake.hofs as mod' 'mod.work_seq3 (100000000000000000000)'
10 loops, best of 3: 300 msec per loop

As you can see, the algorithmic improvements made possible by the application of a small amount of math effort tend to outweigh the differences between languages. I propose that we use the timing for computing R(10^20) to compare implementations with this runtime order of growth.

37 2020-04-13 00:03

[3/4]
Here is the raw python output for k up to 30, to avoid any gawk mishaps:

$ cat data2.txt
k 1 index 10
levels: 0
10 -> 69 0x45 7

k 2 index 100
levels: 1
100 -> 5764 0x1684 13

k 3 index 1000
levels: 2
1000 -> 526334 0x807FE 20

k 4 index 10000
levels: 3
10000 -> 50874761 0x3084989 26

k 5 index 100000
levels: 3
100000 -> 5028514748 0x12BB90BBC 33

k 6 index 1000000
levels: 3
1000000 -> 500918449417 0x74A110F509 39

k 7 index 10000000
levels: 3
10000000 -> 50029364025646 0x2D805E78992E 46

k 8 index 100000000
levels: 4
100000000 -> 5000934571821489 0x11C452D0B2C9B1 53

k 9 index 1000000000
levels: 4
1000000000 -> 500029664631299282 0x6F07654A9819CD2 59

k 10 index 10000000000
levels: 4
10000000000 -> 50000940106333921296 0x2B5E7061C419DA010 66

k 11 index 100000000000
levels: 4
100000000000 -> 5000029765607020319048 0x10F0D5A2486CA185148 73

k 12 index 1000000000000
levels: 4
1000000000000 -> 500000941936492050650505 0x69E11AF9D4B68A281589 79

k 13 index 10000000000000
levels: 4
10000000000000 -> 50000029798618763894670256 0x295BEB0BEDFC4C3D3B43B0 86

k 14 index 100000000000000
levels: 4
100000000000000 -> 5000000942529842698007077786 0x1027E762374D2B62E383E39A 93

k 15 index 1000000000000000
levels: 5
1000000000000000 -> 500000029809255627825531266625 0x64F9654B819C0B0427AA32641 99

k 16 index 10000000000000000
levels: 5
10000000000000000 -> 50000000942720152238624116795401 0x27716B7684EB9129B5951936809 106

k 17 index 100000000000000000
levels: 5
100000000000000000 -> 5000000029812655507343465281696595 0xF684DF6F6CF366E515963C973353 112

k 18 index 1000000000000000000
levels: 5
1000000000000000000 -> 500000000942780823112495107784753816 0x604BE740F05D25AE145E143A76C298 119

k 19 index 10000000000000000000
levels: 5
10000000000000000000 -> 50000000029813737262126730811322149673 0x259DA6548D98BAAC7E7EE6322EEC2B29 126

k 20 index 100000000000000000000
levels: 5
100000000000000000000 -> 5000000000942800098290022420982686040347 0xEB194F8ED94AC565124861C3B7D0C411B 132

k 21 index 1000000000000000000000
levels: 5
1000000000000000000000 -> 500000000029814080548392288266955229183571 0x5BD5E3139A066AAD8AF8E939162DA8B3E53 139

k 22 index 10000000000000000000000
levels: 5
10000000000000000000000 -> 50000000000942806209878293665994446398371544 0x23DF8CB3A1E2554E9C6BF863C38661F2936D8 146

k 23 index 100000000000000000000000
levels: 5
100000000000000000000000 -> 5000000000029814189323670710814676032031444555 0xE0352F62A75C24683091C00E99FD6AB95A4E4B 152

k 24 index 1000000000000000000000000
levels: 5
1000000000000000000000000 -> 500000000000942808145472258657037814775197247031 0x5794C68287D75EF3165412AE960B3F97A7C8A637 159

k 25 index 10000000000000000000000000
levels: 5
10000000000000000000000000 -> 50000000000029814223760934912839828327249688721190 0x22361D8AFCDFA146A21855C6F90DDAC4654C34F726 166

k 26 index 100000000000000000000000000
levels: 5
100000000000000000000000000 -> 5000000000000942808758091081952084125868709915711089 0xD5D238A4AC15D50E3D00380CE571F50DC9377ACD671 172

k 27 index 1000000000000000000000000000
levels: 5
1000000000000000000000000000 -> 500000000000029814234658067055252766458087914346630413 0x53861E20532CB00498751DF915A7CF79C1478C43FCD0D 179

k 28 index 10000000000000000000000000000
levels: 5
10000000000000000000000000000 -> 50000000000000942808951913512756782469229223199209677641 0x20A063C4A07BFE5299F1F53B301E47A1903967C6E1FE349 186

k 29 index 100000000000000000000000000000
levels: 5
100000000000000000000000000000 -> 5000000000000029814238105320163714566005337846445873165675 0xCBEA6F8CEB041179C7C69580FEB5120BE630BBDEAF99F56B 192

k 30 index 1000000000000000000000000000000
levels: 6
1000000000000000000000000000000 -> 500000000000000942809013222649956573426148951590151576716407 0x4FA793930BCD3B696B97D17F5ED30D4AD37D2F9880B58D8477 199
38 2020-04-13 00:04

[4/4]
This version can go above R(10^30) since space consumption is irrelevant and it can simply be left to run some more, but above R(10^30) the fans go into jet engine mode and I do not want to stress my potato. Here is the data table showing the exponent k, the value of R(10^k), the bit count of the R value and the number of generators:

$ cat levels.txt
 1                                                           69   7 0
 2                                                         5764  13 1
 3                                                       526334  20 2
 4                                                     50874761  26 3
 5                                                   5028514748  33 3
 6                                                 500918449417  39 3
 7                                               50029364025646  46 3
 8                                             5000934571821489  53 4
 9                                           500029664631299282  59 4
10                                         50000940106333921296  66 4
11                                       5000029765607020319048  73 4
12                                     500000941936492050650505  79 4
13                                   50000029798618763894670256  86 4
14                                 5000000942529842698007077786  93 4
15                               500000029809255627825531266625  99 5
16                             50000000942720152238624116795401 106 5
17                           5000000029812655507343465281696595 112 5
18                         500000000942780823112495107784753816 119 5
19                       50000000029813737262126730811322149673 126 5
20                     5000000000942800098290022420982686040347 132 5
21                   500000000029814080548392288266955229183571 139 5
22                 50000000000942806209878293665994446398371544 146 5
23               5000000000029814189323670710814676032031444555 152 5
24             500000000000942808145472258657037814775197247031 159 5
25           50000000000029814223760934912839828327249688721190 166 5
26         5000000000000942808758091081952084125868709915711089 172 5
27       500000000000029814234658067055252766458087914346630413 179 5
28     50000000000000942808951913512756782469229223199209677641 186 5
29   5000000000000029814238105320163714566005337846445873165675 192 5
30 500000000000000942809013222649956573426148951590151576716407 199 6

It can be seen that the number of generators increases by 1 roughly when k doubles, which corresponds to squaring n. This backs up the space consumption as O(log(log(n))). For the next version I might incorporate >>33's complement idea to see what constant factor improvement it yields, or I might add third-order S-groups which will bring the runtime down to O(n^0.125).
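The doubling pattern can be checked mechanically from the levels column; here is a throwaway sketch (the transition exponents are read straight off the table above, so they are data, not a derivation):

```python
# Exponents k at which the generator count first reaches 1, 2, 3, 4, 5, 6,
# read off the levels column of the table above.
transitions = [2, 3, 4, 8, 15, 30]

# Ratios of successive transition points; past the warm-up levels they
# settle near 2, i.e. one extra generator per doubling of k, so the level
# count grows like log2(k), which is O(log(log(n))) for n = 10^k.
ratios = [b / a for a, b in zip(transitions, transitions[1:])]
print(ratios)
```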

Please verify the value of R(10^30) with other implementations.
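For anyone who wants a ground-truth check of the smaller rows without reimplementing the S-group machinery, here is a minimal sketch of the naive O(n) generator (the function name is mine; it obviously won't reach 10^30, but it independently confirms the low-k rows in seconds):

```python
def hofstadter_R(n):
    # First n terms of Hofstadter's R sequence (OEIS A005228):
    # R starts at 1 and its successive differences are exactly the
    # positive integers that do not occur in R itself.
    R = [1]
    members = {1}
    diff = 2  # next candidate difference
    while len(R) < n:
        while diff in members:  # differences skip values that are in R
            diff += 1
        R.append(R[-1] + diff)
        members.add(R[-1])
        diff += 1
    return R

print(hofstadter_R(10))       # [1, 3, 7, 12, 18, 26, 35, 45, 56, 69]
print(hofstadter_R(100)[-1])  # matches the k=2 row above
```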

39 2020-04-13 10:12

I see that the OEIS page linked in the wiki article only has R values going up to R(1+10^4). Not even a selection of higher values. Is this a joke?
https://oeis.org/A005228

40 2020-04-13 18:34 *

>>39
Most of the OEIS sequences have a table of elements going from 1 to 10000, I think.

41 2020-04-13 20:16

>>40
That's fine and dandy but how about a small selection of higher values for those of us who want to verify our implementations?

42 2020-04-13 23:22

The same OEIS page has:

Cristian Francu, C program to generate the N-th element in O(sqrt(N))

The program is made publicly available by OEIS:

/* by Cristian Francu on 2019-12-19
   C source code that generates the Nth element of A005228.
   Runs in O(sqrt(N)) time and uses O(sqrt(N)) memory. Works for N up to
   2 billion, but can be made to run up to much higher values using either
   128 bit integers, or large numbers.

   Description:
   - We generate elements linearly, storing them in a queue, as to know when 
     to skip them. When skipping an element we take it out of the queue.
   - We keep track of how many elements we can generate until we reach the 
      last element in the queue (to skip).
   - We stop with linear generation when we can generate at least N elements.
     In other words, the last element in the queue minus one is greater or 
     equal to the desired Nth element.
   - This way we will generate about sqrt(N) elements.
   - At this point the queue will contain about sqrt(N) elements.
   - All the above uses integer computations (four bytes).
   - At this point we use a formula to compute the Nth element: it is the sum
     of a sequence of consecutive numbers.
   - We adjust that summation, subtracting the elements in the queue (except 
     the last one).
   - This last step is also sqrt(N) time since that is the length of the queue.
   - This last step uses long long subtractions (8 bytes).
*/
#include <stdio.h>

#define MAXQ 65536

int used[MAXQ];

int main() {
  int n, i, l, first, last, inqueue, xc, qlen, maxq;
  long long x;

  scanf( "%d", &n );

  used[first = last = 0] = -1; // sentinel
  xc = 1;
  l = 1;
  inqueue = 0;
  i = n - 1;
  maxq = 0;
  while ( i > inqueue ) {
    l++;
    if ( l == used[first] ) {
      first = (first + 1) % MAXQ;
      l++;
    }
    inqueue += l - 1;
    used[last] = xc + l;
    last = (last + 1) % MAXQ;

    if ( (last - first + MAXQ) % MAXQ > maxq )
      maxq = (last - first + MAXQ) % MAXQ;
    
    xc += l;
    i--;
    inqueue--;
  }
  
  // This is the second stage, summation and subtraction of elements in queue.
  x = xc;
  if ( i > 0 ) {
    /* We have i more elements to compute starting with xc to which we have
       to add l+1 l+2 ... and so on. However, we will need to add i such
       numbers plus extra elements. How many? The number of elements in the 
       queue minus one (these are the elements we will subtract). So, in fact,
       we will add: l+1 l+2 ... l+qlen-1
     */
    qlen = (last - first + MAXQ) % MAXQ;
    x += (2 * l + i + qlen) * (long long)(i + qlen - 1) / 2;

    // Then we subtract all elements in the queue, except the last one.
    while ( --qlen ) {
      x -= used[first];
      first = (first + 1) % MAXQ;
    }
  }
  
  printf( "%lld\n", x );

  return 0;
}

This is almost exactly >>33's idea, but with the variation of a circular buffer used as a deque. It is O(sqrt(n)) in both space and runtime, as >>18 is, so >>36 is superior in both orders of growth.
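For comparison, the same complement-summation idea fits in a few lines of python. This is a sketch of the approach rather than a port of Francu's exact program (a plain list replaces the circular buffer, and the names are mine): generate roughly sqrt(2n) R values linearly, then obtain R(n) as a triangular-number sum minus the stored R values.

```python
from bisect import bisect_right

def R_sqrt(n):
    # O(sqrt(n)) time and space. Since the difference sequence of R is the
    # complement of R, R(n) = 1 + (2 + 3 + ... + M) - (sum of R values in
    # [2..M]), where M is chosen so [2..M] holds exactly n-1 non-R integers.
    if n == 1:
        return 1
    r = [1, 3]      # R values generated so far
    d, p = 2, 1     # last difference used; r[p] is the next R value to skip
    while r[-1] < n + len(r):   # ensure r covers every R value up to M
        d += 1
        if d == r[p]:           # differences skip members of R;
            d += 1              # one skip suffices, R values are >= 2 apart
            p += 1
        r.append(r[-1] + d)
    M = n - 1                   # fixed point of M = n - 1 + #{R values <= M}
    while True:
        M2 = n - 1 + bisect_right(r, M)
        if M2 == M:
            break
        M = M2
    return M * (M + 1) // 2 - sum(v for v in r if v <= M) + 1
```

R_sqrt(10**10) reproduces the k=10 row above (50000940106333921296) in a fraction of a second, since only about 1.4*10^5 values are generated.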

43 2020-04-13 23:41 *

>>42

>>36 is superior in both orders of growth.

Congrats you won the first /prog/ challenge.

44 2020-04-14 01:00

>>43
Whatever you say, Anon, but my third-order S-groups will improve the runtime of >>36 considerably.

45 2020-04-14 09:36

The same OEIS page has:

Catalin Francu, C++ program

The program is made publicly available by OEIS:

From Catalin Francu
C++ source code that generates A005228.

Runs in (approx.) O( n + sqrt(n) + sqrt(sqrt(n)) + sqrt(sqrt(sqrt(n)))...) 
to generate the first n elements. Tested up to n = 1,000,000,000. 
Input is n, output is a(n). Needs quasi-constant memory.

%o A005228 #include <stdio.h>

typedef unsigned long long uint64;

typedef struct {
  uint64 value;
  uint64 inc;
} pair;

pair series[100];

void init_all_series() {
  pair initial_pair = { 7, 5 }; // pregenerate the first three terms
  for (int i = 0; i < 99; series[i++] = initial_pair);
}

void generate(int k);

inline bool test(int k, unsigned number) {
  switch (number) {
    case 1: case 3: case 7: return 1;
    case 2: case 4: case 5: case 6: return 0;
    default:
      while (series[k].value < number) generate(k);
      return (series[k].value == number);
  }
}

inline void generate(int k) {
  series[k].value += series[k].inc;
  if (test(k+1, ++series[k].inc)) series[k].inc++;
}

int main(void) {
  int x;

  init_all_series();
  scanf("%d", &x); // Enter the index
  switch (x) {
    case 1: puts("1"); break;
    case 2: puts("3"); break;
    default:
      for (int k = 4; k <= x; k++) generate(0);
      printf("%llu", series[0].value); // print the x'th term
  }
  return 0;
}

This has the same space consumption as my new version >>36, but linear runtime, so it won't get to R(10^20) even with GMP. The space consumption is achieved with a different method that I still need to have a think on. But this already means that >>36 is algorithmically superior to both the C and C++ versions from OEIS. A bit disappointing.
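To convince myself the C++ logic is what it looks like, here is a hedged transcription into python (function names mine; nested functions replace the mutually recursive inline calls). Each level lazily generates the R sequence and asks the level above whether its next increment collides with a sequence member:

```python
def R_stacked(n):
    # One [value, inc] pair per level, all seeded past the terms 1, 3, 7,
    # mirroring init_all_series() in the C++ version.
    series = [[7, 5] for _ in range(100)]

    def test(k, m):
        # Is m a member of the sequence, according to level k?
        if m in (1, 3, 7):
            return True
        if m < 7:
            return False  # 2, 4, 5 and 6 are not members
        while series[k][0] < m:
            generate(k)
        return series[k][0] == m

    def generate(k):
        # Advance level k by one term; consult level k+1 to decide
        # whether the increment must skip a sequence member.
        series[k][0] += series[k][1]
        series[k][1] += 1
        if test(k + 1, series[k][1]):
            series[k][1] += 1

    if n <= 3:
        return (1, 3, 7)[n - 1]
    for _ in range(n - 3):
        generate(0)
    return series[0][0]
```

R_stacked(10) gives 69 and R_stacked(10000) reproduces the k=4 row; the runtime is linear, as noted above, because level 0 is advanced once per term regardless of how cheap the upper levels are.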

46 2020-04-14 15:26

>>45

OEIS. A bit disappointing

You gave it more thought than the guy who submitted his program to OEIS. We're still looking for the shortest version in number of characters, efficiency aside.

47 2020-04-14 22:38

>>46

We're still looking for the shortest version in number of characters, efficiency aside.

It's fine if you personally are looking for code golf, the pdf and the OP make zero mention of "the shortest version in number of characters".

48 2020-04-15 06:29

>>47

the pdf and the OP make zero mention of "the shortest version in number of characters"

It's true. I like tiny clever solutions, especially if they're unreadable. I indulge in this perversion. As for the PDF, nobody answered the other questions but they were less challenging I suppose.

49 2020-04-17 02:51

The interplay of 'test' and 'generate' in the C++ version >>45 confused me for a moment. It becomes much clearer if 'generate' is renamed to 'advance' and 'test' to 'needs_jump'. It would be even better if 'test' did not have two responsibilities. The technique used is almost the same stack of generators as in >>36, but with a variation. In >>36 generator k only resorts to generator k+1 when this is needed. But in >>45 the call to 'test' is always made, and it both returns whether a jump is needed at level k, and advances level k+1 if a check determines this is needed. The generator at level k does not possess enough information to avoid the need for the 'while' check in 'test' on every single invocation, even though this is avoidable. The same extra information would also prevent the need for most 'test' calls. Currently, other than the code golfed increment inside the argument list, the overwhelming majority of 'test' calls are useless no-ops, hence the inline request. In >>36 this extra information is available so the call pattern is much more disciplined. From a design perspective, the C++ version sends out notifications all the time and the recipient decides whether they were needed, but in my version notifications are only sent out when it is known that they are needed.

Site was down, so posted on lain.

50 2020-04-17 03:25

>>48

nobody answered the other questions but they were less challenging I suppose

The fact that calling functions "she" sounds like an old goat desperately trying to be hip, to those of us who aren't twelve, might have something to do with that.

51 2020-04-18 01:48

>>50
Or maybe it has something to do with Hofstadter's Male and Female sequences which are familiar only to those of us who aren't twelve and were raised with books instead of Twitter's political correctness about gender fluidity?

52 2020-04-18 03:15 *

>>51
When you attempt such a huge misdirection, you give away the game too easily. As to the substance of your answer, none of the people you selected posted answers to the other questions in the 15 days that have elapsed, while in the same period at least 6 solutions have been posted to the first question, so your answer rings hollow.

53 2020-04-18 05:29 *

>>52
They're not exactly programming challenges anyway.

When Lisp was something you could only read and fantasize about in magazines —at least for most of us— Douglas Hofstadter wrote little introductory articles about it. They're mostly of historical interest now, but his extensive and numerous digressions on feminism are far from irrelevant. I'm not even sure they'd need a trigger warning, even though they could reasonably be considered a highly problematic primer on mansplaining.

https://archive.org/details/MetamagicalThemas/mode/2up

The rules of the Nomic game are probably the most interesting part anyway. It'll probably sound like total bigotry but I'd still love them to be remembered even after the shitlord's rightfully canceled.

54 2020-04-18 08:58

>>53
http://www.nomic.net/

The following live games are hosted here:
B Nomic: email-based, open to new players, began on 5 December 2001

Dead. Fuck this timeline.

55 2020-04-18 09:58

>>53

When Lisp was something you could only read and fantasize about in magazines —at least for most of us— Douglas Hofstadter wrote little introductory articles about it.

If you have a link to Douglas Hofstadter's own fastest version for generating the R sequence, I'd appreciate it.

56 2020-04-18 21:20 *

>>55
Oh no, I don't have one. I don't have a peer-reviewed paper on mathematical foundations by Jorge Luis Borges either.
Bruteforcing the Diophantine equation x^3 + y^3 + z^3 = 42 merely took a million computing hours and not 7.5 million years as Douglas Adams stated. And Douglas Adams had a much more powerful computer at his disposal*.

I wasn't even aware that Hofstadter, Adams and Borges were mathematicians or programmers. I always thought of them as novelists. What I could provide you with, however, are countless negative critiques of Hofstadter's novel Gödel, Escher, Bach describing it as a ludicrous and unscientific entertainment.

If that wasn't a genuine question but your way to ask to get back on topic, I don't know if the OEIS publishes what they believe is the optimal way to generate each sequence. >>36 is more efficient than the proposal of Cristian Frâncu, PhD, Software Engineer at Google. So >>36 beats him. Hofstadter wasn't even playing AFAIK.

______________________
[*] https://www.livescience.com/diophantine-42-solved-meaning-of-life.html

57 2020-04-19 00:01

>>56

Hofstadter wasn't even playing AFAIK

OK.

Cristian Frâncu, PhD, Software Engineer at Google

How does one get this kind of information? And what was the topic of his PhD?

58 2020-04-19 03:05 *

>>57
He got a PhD from Rutgers University. I haven't found the topic of his PhD; he only says his interests at Rutgers were in Machine Learning, Music Information Retrieval and Artificial Intelligence.

http://cristian.francu.com/about/
https://ro.linkedin.com/in/cristian-francu-691322?trk=people-guest_people_search-card

- national Olympic in programming competition in Romania (whatever that means)
- summarizes his expertise in algorithms and data structures
- worked at Google on the Google Mini: https://en.wikipedia.org/wiki/Google_Search_Appliance

He seems to be a decent programmer at least. But Anon came up in a couple of hours with a better algorithm than the one he submitted at the OEIS, so...

59 2020-04-19 10:04

>>58
Thanks for all the info, Anon. Especially since you put it into the thread, since the linkedin is just a blank page for me that wants to run a script, and the about page is unreadable. Do you have some automatic translation service link that works without JS? Or can you actually read that?

60 2020-04-20 03:08 *

>>59
Sorry about the malicious websites. Linkedin will only let you see the page once anyway. Here's a snapshot: http://archive.is/NZZsc

I can't read Romanian. It's a Latin language and I could maybe understand what an article in a newspaper is about and that'd be it.

Cristian Frâncu is a computer scientist fascinated by the world of computers. He currently lives in America, to which he emigrated as a young man. For a short time he returned to Romania to follow his dreams but, helped by the Romanian treasury, he understood that Romania is not ready for his return. In America, he worked for six years at Google as a Software Engineer, spending most of his time developing the Google Mini. Prior to that, he studied Computer Science at Rutgers, New Jersey, where he was able to earn his PhD by being saved in time by his project mentor, Craig Nevill-Manning, who pulled him out of the academic world and brought him to Google. His interests at Rutgers were in Machine Learning, Music Information Retrieval, Artificial Intelligence.

Before studying at Rutgers, Cristian obtained a diploma of in-depth studies from the Faculty of Automation and Computers at the Bucharest Polytechnic Institute, surviving the course of Electronic Devices and Circuits, which made many victims. At Automatică, he arrived from the Computer Science High School, where he discovered his passion for computers, becoming a national Olympic at the programming competitions, a major change from the previous years, from the gymnasium, when he was a national Olympic in physics and mathematics.

Currently, Cristian lives in his adoptive country, where he works as an independent in various projects having more or less connection with the world of computers. From time to time he visits with pleasure and nostalgia his favorite city, Bucharest, Romania.

Do you know why Romanian hackers (as in ``movie villain hackers'') were so good?

61 2020-04-20 09:58

>>60
Thanks a lot, Anon. That's a lot of info. I'm getting mixed signals here though:

about:
he was able to earn his PhD
linkedin:
Yet another PhD dropout. Sigh...

Also some peculiar and occasionally bizarre bits:

- I believe in a post work society
- 40 hours of work a week is unacceptable
- teaching university students is challenging [...] You can't tell on their mom
- Proud to have survived Electronic Circuitry and Devices class. Many people didn't (including our brain dead professors and assistants).

Do you know why Romanian hackers (as in ``movie villain hackers'') were so good?

I do not. Please tell me, especially if it's a lead up to a joke.

62 2020-04-21 02:50 *

>>61

I do not. Please tell me, especially if it's a lead up to a joke.

Sorry to disappoint you, there's no joke. They just thought very early that the war (against the West) would also be fought on this terrain and taught kids to hack computers in schools.

I don't believe they're so relevant now but they really had a head start.

63 2020-04-25 04:27

Insatiable need for consolation.

64 2020-05-01 14:15

As promised in >>44 third-order S-groups have been added to >>36. The code still needs a bit of cleanup because, while the runtime order of growth is now O(n^(1/8)), the constant factor given by the more complex S-group folding step is much higher than for second-order S-groups. As a preview here is R(10^31):

k 31 index 10000000000000000000000000000000
levels: 5
10000000000000000000000000000000 -> 50000000000000029814239195666197369782628797813806672178028303 0x1F1D75A5709C1FEA650A7046A3AAD99F2BC662D8153503D79F0F 205
65 2020-05-02 11:13 *

>>64
Anon, you should get credits for it.

66 2020-05-02 13:57

>>65
Thanks, but I take payment in meseta, payable to my account at the hunters' guild in Aiedo.

67 2020-05-03 00:01

[1/2]
The improved version of >>36 uses stacked self-generation of the S sequence and third-order S-groups to achieve O(log(log(n))) space consumption and O(n^(1/8)) runtime. As explained in >>35 a third-order S-group is a group of second-order S-groups, and this is folded into the R value in one step. The constant factor that results from the more complex S-group folding step is considerably higher than in >>36.

def seq4_covered (s, c, S1):
   # start, count
   incR, incS = seq3_inc (s, c, S1 - 1)
   return incR

def seq4_inc (s, c, S0, S1, covered):
   # start, count
   incS1   = seq3_sum1sc (s, c)
   countS1 = incS1 - c

   incS0   = countS1 * S1 + seq3_inc (s, c, 0) [0]
   countS0 = covered
   overR   = (incS0 - 1) * incS0 // 2
   backR   = (
      countS1 * (countS1 + 1) // 2 * S1
      - countS1 + seq4_backR (s, c)
   )
   incR    = overR - backR + S0 * countS0
   return (incR, incS0, incS1)

def seq4_backR (s, c):
   # start, count
   return c*(((((5*c + (30*s - 29))*c + ((60*s - 130)*s + 45))*c + (((40*s - 140)*s + 90)*s + 85))*c + ((-60*s + 250)*s - 290))*c + (20*s - 160)*s + 184) // 240

def work_seq4 (n):
   seq3covered = seq3_covered
   seq3inc     = seq3_inc
   seq4next    = seq2_next
   seq3sum1sc  = seq3_sum1sc
   seq4covered = seq4_covered
   seq4inc     = seq4_inc

   levels       = []
   R, S0, S1    = 3, 4, 4
   start, count = 4, 3
   advance      = seq4covered (start, count, S1)
   covered      = 2 + advance

   while covered <= n:
      incR, incS0, incS1 = seq4inc (start, count, S0, S1, advance)
      R  += incR
      S0 += incS0
      S1 += incS1
      start, count = seq4next (levels, 0)
      advance      = seq4covered (start, count, S1)
      covered     += advance

   print ("levels:", len (levels))

   covered     -= advance
   fakelevels   = [[start, count, 0, S1]]
   start, count = seq4next (fakelevels, 0)
   advance      = seq3covered (start, count)
   covered     += advance

   while covered <= n:
      incR, incS0 = seq3inc (start, count, S0)
      R  += incR
      S0 += incS0
      start, count = seq4next (fakelevels, 0)
      advance      = seq3covered (start, count)
      covered     += advance

   covered     -= advance
   fakelevels   = [[start, count, 0, S0]]
   start, count = seq4next (fakelevels, 0)
   covered     += count

   while covered < n:
      R += seq3sum1sc (start, count)
      start, count = seq4next (fakelevels, 0)
      covered     += count

   R += seq3sum1sc (start, count - (covered - n))
   return R

The timing for R(10^20) is down to about 80 milliseconds in vanilla python from the previous 300 milliseconds:

$ python3 -m timeit -s 'import flake.hofs as mod' 'mod.work_seq4 (100000000000000000000)'
10 loops, best of 3: 81.2 msec per loop

The timing for R(10^30) is about 14 seconds in vanilla python:

$ python3 -m timeit -s 'import flake.hofs as mod' 'mod.work_seq4 (1000000000000000000000000000000)'
10 loops, best of 3: 14.3 sec per loop

The first integer k for which R(10^k) overflows a u256 is k=39. The value of R(10^39) is
500000000000000000029814239694953305682145634502486118475654808671411192582303

$ g4 39
k 39 index 1000000000000000000000000000000000000000
levels: 5
1000000000000000000000000000000000000000 -> 500000000000000000029814239694953305682145634502486118475654808671411192582303 0x4516DF8A16FE63D603196794A678455A328B511811E5818CC6F66DDCD067DD09F 259
68 2020-05-03 00:01

[2/2]
Here is the k<=40 table in >>38 format. Increasing the group order by one means that one generator level is eliminated.

 1                                                                               69   7 0
 2                                                                             5764  13 0
 3                                                                           526334  20 1
 4                                                                         50874761  26 2
 5                                                                       5028514748  33 2
 6                                                                     500918449417  39 2
 7                                                                   50029364025646  46 2
 8                                                                 5000934571821489  53 3
 9                                                               500029664631299282  59 3
10                                                             50000940106333921296  66 3
11                                                           5000029765607020319048  73 3
12                                                         500000941936492050650505  79 3
13                                                       50000029798618763894670256  86 3
14                                                     5000000942529842698007077786  93 3
15                                                   500000029809255627825531266625  99 4
16                                                 50000000942720152238624116795401 106 4
17                                               5000000029812655507343465281696595 112 4
18                                             500000000942780823112495107784753816 119 4
19                                           50000000029813737262126730811322149673 126 4
20                                         5000000000942800098290022420982686040347 132 4
21                                       500000000029814080548392288266955229183571 139 4
22                                     50000000000942806209878293665994446398371544 146 4
23                                   5000000000029814189323670710814676032031444555 152 4
24                                 500000000000942808145472258657037814775197247031 159 4
25                               50000000000029814223760934912839828327249688721190 166 4
26                             5000000000000942808758091081952084125868709915711089 172 4
27                           500000000000029814234658067055252766458087914346630413 179 4
28                         50000000000000942808951913512756782469229223199209677641 186 4
29                       5000000000000029814238105320163714566005337846445873165675 192 4
30                     500000000000000942809013222649956573426148951590151576716407 199 5
31                   50000000000000029814239195666197369782628797813806672178028303 205 5
32                 5000000000000000942809032613363227344483520545074111347580106666 212 5
33               500000000000000029814239540504829346639521751526121067855028048469 219 5
34             50000000000000000942809038745792444395013325500575549841178605176222 225 5
35           5000000000000000029814239649559693010895101002840721426848127286038482 232 5
36         500000000000000000942809040685134541874198687080125967207639651015633313 239 5
37       50000000000000000029814239684047173947897406203055196803310606765583116781 245 5
38     5000000000000000000942809041298425788194699108125613517442497904284636150437 252 5
39   500000000000000000029814239694953305682145634502486118475654808671411192582303 259 5
40 50000000000000000000942809041492368615124254362375085779982992292343853681977675 265 5

Please verify the value of R(10^40) with other implementations.

69 2020-05-04 04:44 *

>>68

Please verify the value of R(10^40) with other implementations.

Stop taunting us!

70 2020-05-04 10:40

>>69
An anon on lain posted a version with O(log(log(n))) space and O(sqrt(n)) runtime and went up to R(10^15), although he didn't post R values other than the R(123456789) I requested for an indexing check. So it's not entirely outside the realm of possibility. I actually expect versions faster than mine to be posted, since python should be easy to beat.

71 2020-05-04 20:08 *

>>70
It's a good demonstration of how efficiency of the algorithm is the decisive factor, not the speed of the language. If you rewrite your code in C, you will probably get a (non-negligible) speed improvement of order O(1).

72 2020-05-05 02:48

I just thought of a way to cheat during the endgame using partial S-groups, so I'm adding it. It should give a nice speed boost, since the time of 14 seconds for R(10^30) is ridiculously high.

73 2020-05-05 11:39

Partial S-groups have been added during the endgame. The time for computing R(10^20) is down to 2 milliseconds in vanilla python from the 80 milliseconds of >>67:

$ python3 -m timeit -s 'import flake.hofs as mod' 'mod.work_seq4 (100000000000000000000)'
1000 loops, best of 3: 1.98 msec per loop

The time for computing R(10^30) is down to 45 milliseconds in vanilla python from the 14 seconds of >>67:

$ python3 -m timeit -s 'import flake.hofs as mod' 'mod.work_seq4 (1000000000000000000000000000000)'
10 loops, best of 3: 45.3 msec per loop

Such an improvement in runtime cannot be explained by a reduction in constant factor. The natural conclusion has to be that the previously stated runtime order of growth was wrong, but I do not yet see why this is. I will give it some thought. Regardless, while adding this trick I realized that it does not apply to the endgame alone but to all S-group order switches, so I'll just add it between second and third as well.

74 2020-05-05 23:01

Here is what happened with the runtime. The stated power is the runtime of the highest-order loop, and I naively assumed that because this gave the overall runtime in >>36, it will continue to do so in >>67. This assumption was false.

In >>36 the highest-order loop folds second-order S-groups. It continues until eventually one of the second-order S-groups doesn't fit because there aren't enough first-order S-groups left to fill it up. The number of passes through the loop is proportional to n^(1/4), and the number of first-order S-groups that the unsatisfied final second-order S-group wanted to consume is also proportional to n^(1/4). These match up nicely here but not in >>67. The number of first-order S-groups that the unsatisfied final second-order S-group had available gives the number of passes through the second 'while' loop of >>36. This 'while' loop will fold the remaining first-order S-groups one by one. Both loops having O(n^(1/4)) passes the overall runtime is O(n^(1/4)).

But in >>67 this chain of reasoning only holds for the first two 'while' loops. The first one folds G3s until a final G3 wants more G2s than are left. The number of passes is O(n^(1/8)), and so is the number of leftover G2s. The latter gives the number of passes through the second 'while' loop. This loop folds the remaining G2s one by one. Eventually a final G2 doesn't fit, not having enough G1s left to fill it up. The remaining G1s are consumed by the third loop one by one. However, groups contain each other via a chain of square roots, and expand back into each other as squares. So the number of leftover G1s is now O(n^(1/4)), and thus the third loop degrades the overall runtime back to O(n^(1/4)).

This only showed up for larger numbers because it had to overcome the much higher constant factor of the first loop. In the new, corrected version all the 'while' loops except the first one have been eliminated, and have been replaced with a single partial S-group each. The first loop now determines the overall runtime because it is the only S-group loop left. So the runtime is now properly O(n^(1/8)), and I'll also post some empirical data to back this up. In the meantime here is R(10^41):

$ g4 41
k 41 index 100000000000000000000000000000000000000000
levels: 5
100000000000000000000000000000000000000000 -> 5000000000000000000029814239698402168816728284052931912470389462588984951289354475 0xA8ACD7C0222311BCD695E19C79874B9A881767964E801B562297EFD84483853F18EB 272
75 2020-05-06 02:41

Here is the empirical data I promised. The number of calls to seq4_inc gives the number of passes through the G3 loop of >>67, and this can be obtained with cProfile. The numbers are R(10^16)->171, R(10^24)->1796, R(10^32)->18219 and R(10^40)->183011. The successive ratios are increasingly good approximations of 10:

>>> 1796 / 171
10.502923976608187
>>> 18219 / 1796
10.144209354120267
>>> 183011 / 18219
10.04506284647895

Therefore the number of steps of the corrected version increases roughly by a factor of 10 when the index of R increases by a factor of 10^8. This backs up the runtime of O(n^(1/8)).
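As a cross-check, the growth exponent can be fitted directly from these counts. A quick sketch, just plain arithmetic on the measurements above (not part of the solver):

```python
import math

# If passes ~ C * n**a, then a = log(passes2 / passes1) / log(n2 / n1).
# Using the extreme measurements R(10^16) -> 171 and R(10^40) -> 183011:
a = math.log(183011 / 171) / math.log(10.0**40 / 10.0**16)
print(round(a, 3))  # close to 1/8 = 0.125
```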

76 2020-05-07 00:02

[1/3]
Here is the corrected version of >>67. As explained above it uses partial S-groups when switching S-group order and eliminates the lower loops. The runtime is now properly O(n^(1/8)), as backed by >>75.

def work_seq3_cheat (start, count, nleft, coverfun):
   wantlow = coverfun (start, 1)
   if wantlow > nleft:
      return (0, 0)

   low, high = 1, count
   diff = high - low

   while diff > 1:
      now  = (low + high) // 2
      want = coverfun (start, now)

      if want <= nleft:
         low     = now
         wantlow = want
      else:
         high    = now

      diff = high - low

   return (low, wantlow)

def work_seq4 (n):
   seq3covered = seq3_covered
   seq3inc     = seq3_inc
   seq4next    = seq2_next
   seq3sum1sc  = seq3_sum1sc
   seq4covered = seq4_covered
   seq4inc     = seq4_inc

   levels       = []
   R, S0, S1    = 3, 4, 4
   start, count = 4, 3
   advance      = seq4covered (start, count, S1)
   covered      = 2 + advance

   while covered <= n:
      incR, incS0, incS1 = seq4inc (start, count, S0, S1, advance)
      R  += incR
      S0 += incS0
      S1 += incS1
      start, count = seq4next (levels, 0)
      advance      = seq4covered (start, count, S1)
      covered     += advance

   print ("levels:", len (levels))

   covered -= advance
   used, advance = work_seq3_cheat (
      start, count, n - covered,
      lambda s, c: seq4covered (s, c, S1))

   if used > 0:
      incR, incS0, incS1 = seq4inc (start, used, S0, S1, advance)
      R  += incR
      S0 += incS0
      S1 += incS1
      covered += advance

   fakelevels    = [[start, count, used, S1]]
   start, count  = seq4next (fakelevels, 0)
   used, advance = work_seq3_cheat (
      start, count, n - covered, seq3covered)

   if used > 0:
      incR, incS0 = seq3inc (start, used, S0)
      R  += incR
      S0 += incS0
      covered += advance

   fakelevels   = [[start, count, used, S0]]
   start, count = seq4next (fakelevels, 0)
   nleft        = n - covered

   if nleft > 0:
      R += seq3sum1sc (start, nleft)

   return R
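
The bisection in work_seq3_cheat can be exercised on its own. Here is a standalone sketch of the same search with a made-up cover function (the triangular sum below is purely for illustration, not one of the real seqN_covered formulas):

```python
def largest_fitting(start, count, nleft, coverfun):
    # Bisect for the largest c with coverfun(start, c) <= nleft, mirroring
    # work_seq3_cheat. The search range is [1, count - 1]: the caller only
    # reaches this point after the full group of 'count' failed to fit,
    # so 'count' itself never needs testing.
    wantlow = coverfun(start, 1)
    if wantlow > nleft:
        return (0, 0)
    low, high = 1, count
    while high - low > 1:
        now = (low + high) // 2
        want = coverfun(start, now)
        if want <= nleft:
            low, wantlow = now, want
        else:
            high = now
    return (low, wantlow)

# made-up cover function: group (s, c) covers s + (s+1) + ... + (s+c-1) entries
cover = lambda s, c: c * (2 * s + c - 1) // 2

print(largest_fitting(4, 10, 30, cover))  # (5, 30): 4+5+6+7+8 fits exactly
```

This only requires coverfun to be monotone in c, which holds for the real cover functions since larger groups always cover more entries.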

Here are the updated timings, all in vanilla python.
- The u64 overflow R(10^10) is computed in under 100 microseconds.
- The u128 overflow R(10^20) is computed in under 2 milliseconds.
- R(10^30) is computed in under 40 milliseconds.
- The u256 overflow R(10^39) is computed in just over 500 milliseconds.

$ python3 -m timeit -s 'import flake.hofs as mod' 'mod.work_seq4 (10000000000)'
10000 loops, best of 3: 96.6 usec per loop
$ python3 -m timeit -s 'import flake.hofs as mod' 'mod.work_seq4 (100000000000000000000)'
1000 loops, best of 3: 1.81 msec per loop
$ python3 -m timeit -s 'import flake.hofs as mod' 'mod.work_seq4 (1000000000000000000000000000000)'
10 loops, best of 3: 35.9 msec per loop
$ python3 -m timeit -s 'import flake.hofs as mod' 'mod.work_seq4 (1000000000000000000000000000000000000000)'
10 loops, best of 3: 527 msec per loop

R(10^39) is a good value for comparing implementations with this order of growth. Here is the k<=60 table in >>38 format.

77 2020-05-07 00:02

[2/3]

 1                                                                                                                       69   7 0
 2                                                                                                                     5764  13 0
 3                                                                                                                   526334  20 1
 4                                                                                                                 50874761  26 2
 5                                                                                                               5028514748  33 2
 6                                                                                                             500918449417  39 2
 7                                                                                                           50029364025646  46 2
 8                                                                                                         5000934571821489  53 3
 9                                                                                                       500029664631299282  59 3
10                                                                                                     50000940106333921296  66 3
11                                                                                                   5000029765607020319048  73 3
12                                                                                                 500000941936492050650505  79 3
13                                                                                               50000029798618763894670256  86 3
14                                                                                             5000000942529842698007077786  93 3
15                                                                                           500000029809255627825531266625  99 4
16                                                                                         50000000942720152238624116795401 106 4
17                                                                                       5000000029812655507343465281696595 112 4
18                                                                                     500000000942780823112495107784753816 119 4
19                                                                                   50000000029813737262126730811322149673 126 4
20                                                                                 5000000000942800098290022420982686040347 132 4
21                                                                               500000000029814080548392288266955229183571 139 4
22                                                                             50000000000942806209878293665994446398371544 146 4
23                                                                           5000000000029814189323670710814676032031444555 152 4
24                                                                         500000000000942808145472258657037814775197247031 159 4
25                                                                       50000000000029814223760934912839828327249688721190 166 4
26                                                                     5000000000000942808758091081952084125868709915711089 172 4
27                                                                   500000000000029814234658067055252766458087914346630413 179 4
28                                                                 50000000000000942808951913512756782469229223199209677641 186 4
29                                                               5000000000000029814238105320163714566005337846445873165675 192 4
30                                                             500000000000000942809013222649956573426148951590151576716407 199 5
78 2020-05-07 00:02

[3/3]

31                                                           50000000000000029814239195666197369782628797813806672178028303 205 5
32                                                         5000000000000000942809032613363227344483520545074111347580106666 212 5
33                                                       500000000000000029814239540504829346639521751526121067855028048469 219 5
34                                                     50000000000000000942809038745792444395013325500575549841178605176222 225 5
35                                                   5000000000000000029814239649559693010895101002840721426848127286038482 232 5
36                                                 500000000000000000942809040685134541874198687080125967207639651015633313 239 5
37                                               50000000000000000029814239684047173947897406203055196803310606765583116781 245 5
38                                             5000000000000000000942809041298425788194699108125613517442497904284636150437 252 5
39                                           500000000000000000029814239694953305682145634502486118475654808671411192582303 259 5
40                                         50000000000000000000942809041492368615124254362375085779982992292343853681977675 265 5
41                                       5000000000000000000029814239698402168816728284052931912470389462588984951289354475 272 5
42                                     500000000000000000000942809041553699275352369083555067298879759994519671549809149522 279 5
43                                   50000000000000000000029814239699492802485126708959655333290571225887358803916579127627 285 5
44                                 5000000000000000000000942809041573093831553392834637054332803551711977694994007009790372 292 5
45                               500000000000000000000029814239699837692448235838620313436065493069882669969298165276420932 298 5
46                             50000000000000000000000942809041579226946272526629650233821592037083538450290978659280817924 305 5
47                           5000000000000000000000029814239699946756464770327217558477196037097548855222595700238539054865 312 5
48                         500000000000000000000000942809041581166410560146744996087637489747926009831507674531860092771246 318 5
49                       50000000000000000000000029814239699981245576707388365411525341190531682837962715909950754430162916 325 5
50                     5000000000000000000000000942809041581779723574457735429357187680244102657245323510219796435077511331 332 5
51                   500000000000000000000000029814239699992151998934816806362938022380717694881944763055115621651771537182 338 5
52                 50000000000000000000000000942809041581973670277645147248950274174680459501036705469450283128240202936345 345 5
53               5000000000000000000000000029814239699995600913788820676216348225539327181484799269054153129487836460510789 352 5
54             500000000000000000000000000942809041582035001627896095619939474929936571762744906288012004273501460096218956 358 5
55           50000000000000000000000000029814239699996691556662690868009166942952903517096871912188005052332391102154504541 365 5
56         5000000000000000000000000000942809041582054396306898966297540047317634200313888303569207665197822737010723880453 372 5
57       500000000000000000000000000029814239699997036448263909956827998702688566561312699712649393246034276487069718675623 378 5
58     50000000000000000000000000000942809041582060529443468690126506942010712319519060683290576216509510379646108458690288 385 6
59   5000000000000000000000000000029814239699997145512571896807332586675028913049270348666820028707699723929284676993408671 391 6
60 500000000000000000000000000000942809041582062468911643702033358845623678711109326514636451791089982268036336668374054905 398 6

Please verify the value of R(10^60) with other implementations.

79 2020-05-07 13:38

There seem to be some recognizable patterns emerging from the three orders of S-groups that have been worked out manually. If a general procedure could be extracted for incrementing the S-group order, it would remove the need to work out the fourth-order S-group formulas manually, which, while possible, would be exceedingly tedious.

80 2020-05-07 23:08

While working on the pattern for incrementing the S-group order I realized that there is no need to call seq3_inc with modified arguments in seq4_inc >>67 after it has already been called in seq4_covered. The two lines:

incS0   = countS1 * S1 + seq3_inc (s, c, 0) [0]
countS0 = covered

from seq4_inc can be replaced with:

countS0 = covered
incS0   = countS0 + countS1

This does not alter the runtime complexity but does give a small speed boost by eliminating a few multiplications. The new timing for the u256 overflow R(10^39) is just over 400 milliseconds in vanilla python.

$ python3 -m timeit -s 'import flake.hofs as mod' 'mod.work_seq4 (1000000000000000000000000000000000000000)'
10 loops, best of 3: 417 msec per loop

More importantly, a similar relation will hold for all higher S-groups.

81 2020-05-08 09:40

While working on the pattern for incrementing the S-group order I realized that S-groups can be modeled as a series of layers, where a layer is a (count, inc) pair in seqN_inc. Group[N] will consist of layers[1...N] and the only difference in a layer's use by G[N] and G[N+1] is that its invocation arguments will be shifted. This is why seq4_inc >>67 starts by computing incS1 and countS1 in a way that is equivalent to how seq3_inc >>36 computes incS and countS. This means that with G[N] in hand, G[N+1] need not be derived from scratch. The first N layers will match modulo a shift in arguments, and only the new layer[N+1] has to be derived.

82 2020-05-08 10:36

It occurred to me that it makes more sense to make seqN_covered output a vector instead of a scalar, in the way that seqN_inc already returns a vector. The purpose of the separation between seqN_covered and seqN_inc is to perform the minimal amount of upfront work that determines whether a group fits, and then, if it does, perform the more expensive work of summing it. The determination computes how many S entries are needed and whether this count fits into 'n', the targeted R index. But seqN_covered currently only returns the final S count, even though it might need to compute additional information on the way to obtaining it. This additional information is thrown away in seq4_covered >>67 as the incS value, only to be recomputed in seq4_inc as incS1. In seq3_inc >>36 the countS computation duplicates the work of seq3_covered. It is better to pass all the additional information along from seqN_covered to seqN_inc and avoid recomputation, because it appears that this pattern of reuse will hold for all higher S-groups. The reused information comes from the layers >>81.

83 2020-05-08 23:19

Let Svec denote [S0, S1, ..., s]. Layer[N] consists of two functions: layer[N].count(c,Svec) computes the number of S sequence entries covered by the Nth-order S-group (s, c), and layer[N].inc(c,Svec) computes the sum to be added to the R value by the same. The first equation that defines the pattern for incrementing the S-group order seems to be:
layer[N+1].count(c,Svec) = layer[N].inc(c,Svec')
where
Svec' = [S0, S1-1, S2, ..., s]
In seq3_covered >>36 this shows up as
seq3_covered(s, c) = seq2_inc(s - 1, c)
where seq2_inc is called seq3_sum1sc. In seq4_covered >>67 it shows up directly as the first component of seq3_inc (s, c, S1 - 1). It appears this will hold for all higher S-groups. What remains is to derive layer[N+1].inc(c,Svec). I have some ideas that way as well.
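The N=1 instance of this equation can be checked numerically. A quick sketch, writing the closed forms of the first two layers as plain functions (the same formulas that appear as Layer1.increment and Layer2.count in the later code):

```python
def layer1_inc(c, s):
    # sum of the c consecutive values s, s+1, ..., s+c-1
    return (c - 1 + s + s) * c // 2

def layer2_count(c, s):
    # S entries covered by a second-order S-group starting at s
    return (c - 3 + s + s) * c // 2

# layer[2].count(c, [s]) == layer[1].inc(c, [s - 1]) for all tested values
assert all(layer2_count(c, s) == layer1_inc(c, s - 1)
           for s in range(2, 100) for c in range(1, 100))
```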

84 2020-05-09 00:38

Correction: I forgot to include the shift in arguments >>81 in the Svec' expression >>83. It is meant to be:
Svec' = [S1-1, S2, ..., s]
In other words, the lowest item, S0, is thrown away, the others are shifted down, and the newly lowest item is decremented.

85 2020-05-15 02:46

Behold the power of math.

$ la4 100
k 100 index 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
levels: 5
10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 -> 50000000000000000000000000000000000000000000000000942809041582063365867792393110847758677724581317563632235727078699914934976135220713319517131197635890992941277256245865422676335917679671598995284082 0xA738C6BEBB12D16CB428F8AC016561DB40A0F74CDEBDD0B8BB97EF2EAFCDE06A31AEE8E7906C4DF2F4566C35A76043D80B494915A4193321F391E5ECDA54495AD035BA9ED6F151E03DD64B4E42CEA49E9D2072 664

R(10^100) has fallen.

86 2020-05-15 23:14

Loop counts for the exponents k of R(10^k), to back up the new runtime order of growth:

12      8
28     99
44   1049
60  10679
76 107403

Consecutive ratios:

>>> 99 / 8
12.375
>>> 1049 / 99
10.595959595959595
>>> 10679 / 1049
10.180171591992373
>>> 107403 / 10679
10.057402378499859

So the new number of steps increases roughly by a factor of 10 when the index of R increases by a factor of 10^16.

87 2020-05-16 12:18

Here is my attempt at an ``understandability golf'' of the Hofstadter sequence. It was naively created by using a table of indices and values to look for patterns. The program is more or less a direct encoding of the table. The key to this sequence is to understand that the rate of change is incremental, with a skip every once in a while, such as on iterations 1, 4, 8, and 13. The rate of change of these skips is itself incremental, the gap between them increasing by one after every skip. This implementation is one of the slower ones here and simply linear; it starts to really chug around i = 10^8 on my poor machine.

| i |  n |  d | d2 | d3 |
|---+----+----+----+----|
| 0 |  1 |  1 |  1 |  3 |
| 1 |  3 |  2 |  1 |  3 |
| 2 |  7 |  4 |  4 |  4 |
| 3 | 12 |  5 |  4 |  4 |
| 4 | 18 |  6 |  4 |  4 |
| 5 | 26 |  8 |  8 |  5 |
| 6 | 35 |  9 |  8 |  5 |
| 7 | 45 | 10 |  8 |  5 |
| 8 | 56 | 11 |  8 |  5 |
| 9 | 69 | 13 | 13 |  6 |
(define (hofstadter nth)
  (letrec ((I (lambda (i d d2 d3 n)
                (cond ((= i nth) n)
                      ((= i d2)
                       (I (+ i 1) (+ d 2) (+ d2 d3) (+ d3 1) (+ n d 2)))
                      (else (I (+ i 1) (+ d 1) d2 d3 (+ n d 1)))))))
    (I 0 1 1 3 1)))

(map hofstadter '(0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19))
=> (1 3 7 12 18 26 35 45 56 69 83 98 114 131 150 170 191 213 236 260)

(hofstadter (expt 10 8))
=> 5000942759072091
88 2020-05-16 12:33

>>87
Interestingly, in the i = 10^8 case my sequence seems to be off, and even at very small values such as 100 it is incorrect. I suppose I must have misinterpreted the sequence.

(map (lambda (e) (hofstadter (sub1 (expt 10 e)))) '(0 1 2 3 4 5 6 7 8))
=> (1 69 5808 528364 50927960 5029664750 500941310658 50029799244875 5000942659057950)
;; (1 69 5764 526334 50874761 5028514748 500918449417 50029364025646 5000934571821489)
89 2020-05-16 13:01

>>88

My implementation diverges at i = 20 due to my interpretation of the sequence. I think it might be a valid interpretation, though I'm not sure. What are your thoughts?

|  i |  n |  d | d2 |  d3 |
|----+----+----+----+-----|
| 19 | 24 | 19 |  7 | 260 |
| 20 | 26 | 26 |  8 | 286 |
90 2020-05-16 20:15

>>89
Your 'n' column is usually called R. Your 'd' column with the initial 1 removed is usually called S. The extra 1 in 'd' causes a shift in d2. Compare the stream of unique (d2-1) values, starting with d2=4, to the R sequence. This will tell you what d3 is and why it cannot have the simple pattern you ascribe to it in your cond. Where your table cuts off, after it stays on 6 for a while, d3 does not continue with 7.

91 2020-05-16 22:41

>>90

I was blind but now I see, what an elegant sequence.

(define (hofstadter n)
  (letrec ((H (lambda (i R S)
               (cond ((zero? i) (car R))
                     ((member S R)
                      (H (- i 1) (cons (+ (car R) S 1) R) (+ S 2)))
                     (else (H (- i 1) (cons (+ (car R) S) R) (+ S 1)))))))
    (H (sub1 n) '(1) 2)))

(map (lambda (e) (hofstadter (expt 10 e))) '(0 1 2 3 4))
=> (1 69 5764 526334 50874761)
;; (1 69 5764 526334 50874761)
92 2020-05-16 23:45

>>91
This is now correct.

'(0 1 2 3 4)

You might be interested in the iota procedure.

(member S R)

This is the part that holds back the speed. If you look at your final S, you will find that it grows roughly like the square root of R. But for each S jump test you are searching the entire R storage from the highest value downwards. You can tell exactly which R value will be hit by the next S jump and avoid searching entirely.
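
For illustration, here is a minimal Python sketch of that idea (my own, not code from earlier in the thread): keep a single index into R pointing at the one value the next S jump can hit, so no searching is needed. It uses the same indexing as the Scheme version above.

```python
def hofstadter(n):
    R = [1]
    S = 2
    p = 1  # index of the next R value that S can collide with
    for _ in range(n - 1):
        if p < len(R) and S == R[p]:
            # S runs into an R value: the real complement value is S + 1,
            # so the difference used is S + 1 and S skips ahead by 2
            R.append(R[-1] + S + 1)
            S += 2
            p += 1
        else:
            R.append(R[-1] + S)
            S += 1
    return R[-1]

print(hofstadter(10))  # 69
```

This works because S never steps over an R value unseen: S only jumps by 2 when it sits exactly on an R value, and consecutive R values differ by at least 2.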

93 2020-05-17 11:43 *

Here's a Scheme solution using queues to calculate the sequence up to the last relevant "skip" in R values and calculate the "jump" from there.

(use-modules (ice-9 q))

(define (sequence n)
  (let* ((rs (make-q))
         (ss (make-q))
         (r 7)
         (s 5)
         (l 3)
         (n (- n 1)))
    (enq! rs r)
    (enq! ss s)
    (let loop ((i 4))
      (when (<= (+ s r) (+ i n))
        (set! r (+ r s))
        (set! s (+ s 1))
        (when (= s (q-front rs))
          (set! s (+ s 1))
          (set! l (+ l 1))
          (deq! ss)
          (deq! rs))
        (enq! rs r)
        (enq! ss s)
        (loop (+ i 1)))
      (list l rs ss))))

(define (sum m n)
  (* (/ (- m n -1) 2) (+ m n)))

(define (negsum a l)
  (cond
   ((null? l) a)
   (else (negsum (- a (car l)) (cdr l)))))

(define (jump n)
  (cond
   ((= n 1) 1)
   ((= n 2) 3)
   ((= n 3) 7)
   ((= n 4) 12)
   (else
    (let* ((seq (sequence n))
           (p   (caadr seq))
           (l   (length p))
(n2  (+ (car seq) l -1))
           (s2  (q-rear (caddr seq)))
           (r2  (q-rear (cadr seq)))
           (s   (+ s2 (- n n2) l)))
      (+ r2 (sum (- s 1) s2) (negsum 0 p))))))

It starts getting slow at 10^13, for which R(10^13) = 50000029798618763894670256.

94 2020-05-17 12:03 *

>>93
Well, it is not actually a solution, since it does not generate every R value, just the first few and then the last.

95 2020-05-17 15:46

>>92

You might be interested in the iota procedure.

I haven't programmed for a fairly long time, so I'm very rusty (I even forgot about named lets), but even when I did program regularly I always forgot about iota.

[member] is the part that holds back the speed. If you look at your final S, you will find that it grows roughly like the square root of R. But for each S jump test you are searching the entire R storage from the highest value downwards. You can tell exactly which R value will be hit by the next S jump and avoid searching entirely.

I sort of figured all that when I wrote it; I was just trying to write it in the most stupid, intuitive way possible to understand the sequence, and I guess for others to understand it too. I might play the optimisation game, but I'll probably mostly play from the sidelines, lurking and what not. I have nowhere near the skills of you guys here, and I don't want to distract from the discussion. Anyway, this morning and last night I did write a more optimised version (still slow) that I'd be willing to share, and cleaned up my original further; I was able to hit 10^7 on my machine in a reasonable time-frame.

The basic ideas were to reverse R keeping a pointer to the end, to remove values from the head of R once they've been used to construct S, and to stop growing R once it's clear the values won't be needed to construct S[n] (I still need to actually find when this is the case exactly).

;; clean
(define (hofstadter n)
  (let H ((i (- n 1)) (R '(1)) (S 2))
       (cond ((zero? i) (car R))
             ((member S R)
              (H (- i 1) (cons (+ (car R) S 1) R) (+ S 2)))
             (else (H (- i 1) (cons (+ (car R) S) R) (+ S 1))))))

;; faster
(define (hofstadter n)
  (let ((r0 (list 1)))
    (let H ((i (- n 1)) (r r0) (R r0) (S 1))
         (letrec ((R! (lambda (n) (set-cdr! r (list (+ (car r) S n))))))
           (cond ((zero? i) (car r))
                 ((> (car r) (* n 2)) ; this is very conservative
                  (let E ((i i) (r (car r)) (R R) (S S))
                       (cond ((zero? i) r)
                             ((= (car R) S)
                              (E (- i 1) (+ r S 1) (cdr R) (+ S 2)))
                             (else (E (- i 1) (+ r S) R (+ S 1))))))
                 ((= (car R) S)
                  (begin (R! 1) (H (- i 1) (cdr r) (cdr R) (+ S 2))))
                 (else
                  (begin (R! 0) (H (- i 1) (cdr r) R (+ S 1)))))))))

(map (lambda (e) (hofstadter (expt 10 e))) (iota 8))
=> (1 69 5764 526334 50874761 5028514748 500918449417 50029364025646)
;; (1 69 5764 526334 50874761 5028514748 500918449417 50029364025646)
96 2020-05-17 20:16

>>95
After reading >>23 and doing some experiments I've started my `E' case when r >= n + 1.5*n^0.5. I was able to compute 10^8 in 2 minutes, but I think that's it for me. To go any further would require basically just copying, because I completely lack the skills to go beyond this. I'll be reading though.

(hofstadter (expt 10 8))
=> 5000934571821489
;; 5000934571821489
97 2020-05-17 20:26

>>95
>>96
A nice upgrade, congrats on R(10^8). Also a small reminder that, just as lambda bodies, the bodies of 'cond' clauses have an implicit 'begin'.

98 2020-05-17 21:31

>>97

A nice upgrade, congrats on R(10^8). Also a small reminder that, just as lambda bodies, the bodies of 'cond' clauses have an implicit 'begin'.

Thanks! Going from R(20) to R(10^4) to R(10^8) just shows to me how insane the numbers you guys are pulling. Also I've removed the begins now from my function (and also changed the letrec to let).

99 2020-05-17 22:55

>>98

how insane the numbers you guys are pulling

A new, higher table will be posted in a day or two, in case you're into that sort of thing.

100 2020-05-18 01:57

>>99
I look forward to seeing it!

101 2020-05-19 00:10

[1/14]
Here is the upgraded version of >>76 using stacked self-generation >>35 of the S sequence and layered >>81 partial >>72 fourth-order S-groups >>35 to achieve O(log(log(n))) space consumption and O(n^(1/16)) runtime >>86. Fourth-order S-group formulas were obtained from the general procedure for incrementing the S-group order. To avoid shifting in the main loop the Svec of >>83 is reversed so the 's' value is at index 0. The new Slist is [s, S1, S2 ...] and holds the starting values of the S-groups from the one with the highest order, which is furthest from the base S sequence, to the one with the lowest order, which is closest to the base S sequence, and ending with the S sequence itself. With layers numbered from 1, the first equation that defines the general procedure is:
layer[N+1].count(c,Slist) = layer[N].inc(c,Slist')
where Slist' is Slist with the value at index N-1 decremented. The [N+1].inc equation can be obtained by expressing the [N+2].count both in terms of [N+1].inc and as a sum of the component [N+1].counts:
layer[N+1].inc(c,Slist') = sum(for j in [1...c] of layer[N+1].count(s-2+j,h(j,Slist)))
where h advances the values of Slist[k in 0...N-1] for subgroup j:
h[k](j,Slist) = Slist[k+1]+layer[k+1].inc(j-1,Slist)
One thing that saves on computations is to note that Slist[N] need not be included in the final h because it is a foregone conclusion that in the final result it will appear as a linear term, and its factor will be the full count in the same layer. Implementing these as well as the previous observations yields:

class Layer:
   @ classmethod
   def count (cls, c, Slist, counts, incs):
      pass

   @ classmethod
   def increment (cls, c, Slist, counts, incs):
      pass

class ChainedLayers:
   class Layer1 (Layer):
      @ classmethod
      def count (cls, c, Slist, counts, incs):
         counts [0] = c

      @ classmethod
      def increment (cls, c, Slist, counts, incs):
         s = Slist [0]
         incs [0] = (c - 1 + s + s) * c // 2

   def _access (setasup):
      def fun (cls):
         cls.up = setasup.increment
         return cls
      return fun

   @ _access (Layer1)
   class Layer2 (Layer):
      @ classmethod
      def count (cls, c, Slist, counts, incs):
         s = Slist [0]
         # Slist [0] = s - 1
         # cls.up (c, Slist, counts, incs)
         # Slist [0] = s
         # counts [1] = incs [0]
         counts [1] = (c - 3 + s + s) * c // 2

      @ classmethod
      def increment (cls, c, Slist, counts, incs):
         countS = counts [1]
         incS   = countS + c
         s      = Slist [0]
         overR  = (incS - 1) * incS // 2
         cm1c   = (c - 1) * c
         backR  = (s - 1) * c + (c + 1 + 3 * s) * cm1c // 6
         incR   = overR - backR + Slist [1] * countS
         incs [0] = incS
         incs [1] = incR

   @ _access (Layer2)
   class Layer3 (Layer):
      @ classmethod
      def count (cls, c, Slist, counts, incs):
         S = Slist [1]
         Slist [1] = S - 1
         cls.up (c, Slist, counts, incs)
         Slist [1] = S
         counts [2] = incs [1]

      @ classmethod
      def increment (cls, c, Slist, counts, incs):
         # names from seq4_inc
         countS1 = counts [1]
         countS0 = counts [2]
         incS0   = countS0 + countS1
         overR   = (incS0 - 1) * incS0 // 2
         s       = Slist [0]
         s20     = 20 * s
         s30     = 30 * s
         s60     = s30 + s30
         backR   = c*(((((5*c + (s30 - 29))*c + ((s60 - 130)*s + 45))*c + (((s20 + s20 - 140)*s + 90)*s + 85))*c + ((250 - s60)*s - 290))*c + (s20 - 160)*s + 184) // 240
         backR   = countS1 * (countS1 + 1) // 2 * Slist [1] - countS1 + backR
         incR    = overR - backR + Slist [2] * countS0
         incs [1] = incS0
         incs [2] = incR
102 2020-05-19 00:12

[2/14]

   @ _access (Layer3)
   class Layer4 (Layer):
      @ classmethod
      def count (cls, c, Slist, counts, incs):
         S = Slist [2]
         Slist [2] = S - 1
         cls.up (c, Slist, counts, incs)
         Slist [2] = S
         counts [3] = incs [2]

      @ classmethod
      def increment (cls, c, Slist, counts, incs):
         # s A B C
         countB = counts [2]
         countC = counts [3]
         incC   = countC + countB
         s      = Slist [0]
         A      = Slist [1]
         B      = Slist [2]
         part1  = A*(A*(A*(A*c**3*(c*(c*(c*(3628800*c + 29030400*s - 43545600) + s*(87091200*s - 261273600) + 195955200) + s*(s*(116121600*s - 522547200) + 783820800) - 391910400) + s*(s*(s*(58060800*s - 348364800) + 783820800) - 783820800) + 293932800) + c**2*(c*(c*(c*(c*(c*(c*(3628800*c + 36288000*s - 44755200) + s*(145152000*s - 362880000) + 195955200) + s*(s*(290304000*s - 1103155200) + 1175731200) - 263692800) + s*(s*(s*(290304000*s - 1490227200) + 2264371200) - 667699200) - 572140800) + s*(s*(s*(s*(116121600*s - 754790400) + 1219276800) + 754790400) - 3476390400) + 2340576000) + s*(s*(s*(-348364800*s + 2070835200) - 5080320000) + 5965747200) - 2743372800) + s*(s*(-309657600*s + 1393459200) - 2090188800) + 1045094400)) + c*(c*(c*(c*(c*(c*(c*(c*(c*(c*(1360800*c + 16329600*s - 17236800) + s*(81648000*s - 175996800) + 74239200) + s*(s*(217728000*s - 718502400) + 596332800) - 57093120) + s*(s*(s*(326592000*s - 1466035200) + 1735776000) - 52980480) - 518222880) + s*(s*(s*(s*(261273600*s - 1495065600) + 2036966400) + 1269112320) - 3868421760) + 1589353920) + s*(s*(s*(s*(s*(87091200*s - 609638400) + 462067200) + 3914265600) - 9503222400) + 6943708800) - 754034400) + s*(s*(s*(s*(-503193600*s + 3241728000) - 7576934400) + 6643123200) + 1337817600) - 3952126080) + s*(s*(s*(290304000*s - 2157926400) + 8087385600) - 13536875520) + 7980094080) + s*(s*(s*(-19353600*s + 1645056000) - 7063096320) + 10578677760) - 5430136320) + s*(232243200*s - 696729600) + 503193600) - 38707200*s + 58060800) + c*(c*(c*(c*(c*(c*(c*(c*(c*(c*(c*(c*(226800*c + 3175200*s - 2948400) + s*(19051200*s - 36288000) + 12625200) + s*(s*(63504000*s - 185976000) + 127461600) - 3707760) + s*(s*(s*(127008000*s - 508032000) + 500774400) + 41845440) - 133524720) + s*(s*(s*(s*(152409600*s - 780192000) + 925344000) + 542263680) - 1263376800) + 366997680) + s*(s*(s*(s*(s*(101606400*s - 638668800) + 700358400) + 1967454720) - 4436147520) + 2285216640) + 69970320) + s*(s*(s*(s*(s*(s*(29030400*s - 217728000) - 
50803200) + 2999404800) - 6769929600) + 4090443840) + 2021513760) - 1954478160) + s*(s*(s*(s*(s*(-241920000*s + 1678924800) - 3536870400) + 392716800) + 8533123200) - 10621235520) + 3069944640) + s*(s*(s*(s*(464486400*s - 3215923200) + 10204992000) - 15705043200) + 8335474560) + 1164072000) + s*(s*(s*(s*(-19353600*s + 899942400) - 3424619520) + 710277120) + 9379426560) - 8806936320) + s*(s*(s*(58060800*s - 2532096000) + 10668994560) - 16847631360) + 9472619520) + s*(s*(58060800*s - 1403136000) + 3741050880) - 2127928320) + s*(3870720*s + 1033482240) - 1552711680)
103 2020-05-19 00:13

[3/14]

         part2  = B*(A*(A*(A*c**2*(c*(c*(29030400*c + 174182400*s - 261273600) + s*(348364800*s - 1045094400) + 783820800) + s*(s*(232243200*s - 1045094400) + 1567641600) - 783820800) + c*(c*(c*(c*(c*(c*(21772800*c + 174182400*s - 203212800) + s*(522547200*s - 1248307200) + 551577600) + s*(s*(696729600*s - 2554675200) + 2090188800) + 246758400) + s*(s*(s*(348364800*s - 1741824000) + 1393459200) + 2815948800) - 3418329600) + s*(s*(-1161216000*s + 5167411200) - 8360755200) + 4833561600) + s*(-696729600*s + 2090188800) - 1567641600)) + c*(c*(c*(c*(c*(c*(c*(c*(5443200*c + 54432000*s - 52617600) + s*(217728000*s - 435456000) + 131846400) + s*(s*(435456000*s - 1349913600) + 742694400) + 256677120) + s*(s*(s*(435456000*s - 1857945600) + 1117670400) + 2228567040) - 1665740160) + s*(s*(s*(s*(174182400*s - 958003200) - 241920000) + 6178636800) - 7575724800) + 1616630400) + s*(s*(s*(-1122508800*s + 5554483200) - 8157542400) + 1485388800) + 3752179200) + s*(s*(1161216000*s - 5786726400) + 12275020800) - 9252472320) + s*(s*(-38707200*s + 2922393600) - 8387850240) + 5904783360) - 464486400*s + 696729600) + B*(A*(A*c*(c*(58060800*c + 232243200*s - 348364800) + s*(232243200*s - 696729600) + 522547200) + c*(c*(c*(c*(29030400*c + 174182400*s - 183859200) + s*(348364800*s - 774144000) + 145152000) + s*(s*(232243200*s - 812851200) - 58060800) + 1112832000) + s*(-928972800*s + 2748211200) - 2032128000)) + c*(c*(c*(c*(c*(c*(3628800*c + 29030400*s - 24192000) + s*(87091200*s - 154828800) + 4032000) + s*(s*(116121600*s - 329011200) - 67737600) + 290304000) + s*(s*(s*(58060800*s - 232243200) - 377395200) + 1354752000) - 473760000) + s*(s*(-464486400*s + 1606348800) - 774144000) - 846720000) + s*(928972800*s - 2709504000) + 1975680000)) + c*(c*(c*(c*(c*(c*(c*(c*(c*(c*(453600*c + 5443200*s - 4536000) + s*(27216000*s - 47174400) + 10735200) + s*(s*(72576000*s - 195955200) + 81043200) + 36281280) + s*(s*(s*(108864000*s - 406425600) + 197164800) + 371589120) - 204876000) + 
s*(s*(s*(s*(87091200*s - 420940800) + 91929600) + 1411845120) - 1371565440) + 133156800) + s*(s*(s*(s*(s*(29030400*s - 174182400) - 263692800) + 2364364800) - 2937916800) + 19595520) + 935373600) + s*(s*(s*(s*(-270950400*s + 1475712000) - 1703116800) - 2165184000) + 5123865600) - 2034002880) + s*(s*(s*(677376000*s - 3506227200) + 7190668800) - 5481745920) - 251395200) + s*(s*(s*(-19353600*s + 541900800) - 782853120) - 3935554560) + 5662406400) + s*(s*(77414400*s - 3286886400) + 8872980480) - 5715763200) + 928972800*s - 1354752000)
         part3  = c*(c*(c*(c*(c*(c*(c*(c*(c*(c*(c*(c*(c*(c*(14175*c + 226800*s - 189000) + s*(1587600*s - 2721600) + 812700) + s*(s*(6350400*s - 16783200) + 9903600) + 52080) + s*(s*(s*(15876000*s - 57456000) + 49215600) + 6347040) - 11050550) + s*(s*(s*(s*(25401600*s - 117936000) + 125193600) + 65677920) - 128034480) + 28664160) + s*(s*(s*(s*(s*(25401600*s - 145152000) + 162388800) + 281756160) - 588183120) + 233335200) + 23121812) + s*(s*(s*(s*(s*(s*(14515200*s - 99187200) + 76204800) + 609799680) - 1329558720) + 628068000) + 363318480) - 231117040) + s*(s*(s*(s*(s*(s*(s*(3628800*s - 29030400) - 37497600) + 662054400) - 1443304800) + 413965440) + 1801674000) - 1744031520) + 303711135) + s*(s*(s*(s*(s*(s*(-38707200*s + 287884800) - 517708800) - 719913600) + 3666835200) - 4386013920) + 1191840000) + 491453160) + s*(s*(s*(s*(s*(122572800*s - 891072000) + 2710512000) - 3678716160) + 179323200) + 3719604000) - 1855381864) + s*(s*(s*(s*(s*(-4838400*s + 75479040) + 11289600) - 2663646720) + 7910091840) - 7185104640) + 1227340800) + s*(s*(s*(s*(25804800*s - 972518400) + 4195215360) - 5737536000) + 766308480) + 2760323440) + s*(s*(s*(-6451200*s + 669312000) - 3513619200) + 7885409280) - 5741832320) + s*(s*(s*(1612800*s - 119347200) + 2311787520) - 5494809600) + 2494340352) + s*(s*(s*(645120*s - 5160960) + 48015360) - 2009825280) + 2832168960
         extra  = c * (part1 + part2 + part3) // 464486400
         incR   = extra + Slist [3] * countC
         incs [2] = incC
         incs [3] = incR
104 2020-05-19 00:14

[4/14]

class Group:
   def counts (self, c, Slist, counts, incs):
      pass

   def increments (self, c, Slist, counts, incs):
      pass

class LayeredGroups:
   class Group1 (Group):
      def __init__ (self, layer1):
         self.layer1 = layer1

      def counts (self, c, Slist, counts, incs):
         self.layer1.count (c, Slist, counts, incs)

      def increments (self, c, Slist, counts, incs):
         self.layer1.increment (c, Slist, counts, incs)

   class GroupN (Group):
      def __init__ (self, order, layers):
         self.order  = order
         self.layers = layers

         # pre-resolve the layer callables once so the hot loop
         # avoids repeated attribute lookups
         self.cache_coufun = [l.count for l in layers [:order]]
         self.cache_incfun = layers [order - 1].increment

      def counts (self, c, Slist, counts, incs):
         for fun in self.cache_coufun:
            fun (c, Slist, counts, incs)

      def increments (self, c, Slist, counts, incs):
         self.cache_incfun (c, Slist, counts, incs)
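GroupN caches the per-layer count/increment callables up front, and work_layers_loop does the same with gcounts = g.counts: resolving the attribute once outside the loop saves a lookup on every iteration, which matters in vanilla Python. A toy demonstration of the effect (the Counter class and all names here are mine, not from the thread):

```python
import timeit

class Counter:
   def __init__ (self):
      self.total = 0

   def bump (self, k):
      self.total += k

def plain (c, n):
   # attribute lookup on every iteration
   for i in range (n):
      c.bump (i)

def cached (c, n):
   # look the bound method up once, then call it directly
   bump = c.bump
   for i in range (n):
      bump (i)

if __name__ == "__main__":
   t1 = timeit.timeit (lambda: plain  (Counter (), 100000), number = 10)
   t2 = timeit.timeit (lambda: cached (Counter (), 100000), number = 10)
   print ("plain :", t1)
   print ("cached:", t2)
```

Both loops compute the same total; only the lookup cost differs.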

def work_layers_partial (c, Slist, counts, incs, nleft, countfun, order):
   # Binary search for the largest argument below c whose group count
   # still fits into nleft; counts/incs are left holding the state of
   # that best feasible argument.  Returns 0 if not even 1 fits.
   order -= 1
   countfun (1, Slist, counts, incs)
   want = counts [order]
   if want > nleft:
      return 0

   low, high = 1, c
   diff      = high - low
   savec     = counts [:]   # snapshot of the best feasible state
   savei     = incs   [:]

   while diff > 1:
      now  = (low + high) // 2
      countfun (now, Slist, counts, incs)
      want = counts [order]

      if want <= nleft:
         low = now
         savec [:] = counts
         savei [:] = incs
      else:
         high = now

      diff = high - low

   counts [:] = savec
   incs   [:] = savei
   return low
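work_layers_partial is the classic "largest feasible argument" binary search: counts [order] is nondecreasing in the argument, so it probes midpoints and keeps a snapshot of the last state that still fit into nleft. Stripped of the counts/incs bookkeeping, the pattern is (names mine):

```python
def largest_feasible (cost, budget, hi):
   # largest c in [1, hi) with cost (c) <= budget, or 0 if none;
   # cost must be nondecreasing in c
   if cost (1) > budget:
      return 0
   low, high = 1, hi
   while high - low > 1:
      mid = (low + high) // 2
      if cost (mid) <= budget:
         low = mid
      else:
         high = mid
   return low

print (largest_feasible (lambda c: c * c, 50, 100))   # largest c with c*c <= 50 -> 7
```

The invariant is that low is always feasible and high is never known to be, which is why the original can restore savec/savei at the end and return low unconditionally.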

def work_layers_loop (n, groups):
   glen   = len (groups)
   gm1    = glen - 1
   R      = 3
   Slist  = [   4] * glen
   counts = [None] * glen
   incs   = [None] * glen
   levels = []
   s, c   = 4, 3
   done   = 2
   next   = seq2_next

   g = groups [-1]
   gcounts    = g.counts
   increments = g.increments
   slistrange = range (gm1)

   gcounts (c, Slist, counts, incs)
   advance = counts [gm1]
   done   += advance

   # advance in full top-order group strides while they still fit
   while done <= n:
      increments (c, Slist, counts, incs)
      R += incs [gm1]
      for k in slistrange:
         Slist [k + 1] += incs [k]
      s, c = next (levels, 0)
      Slist [0] = s
      gcounts (c, Slist, counts, incs)
      advance = counts [gm1]
      done   += advance

   done -= advance
   print ("levels:", len (levels))

   # peel off the remainder one group order at a time, taking a partial
   # stride of each
   for gorder in range (glen, 1, -1):
      g = groups [gorder - 1]
      used = work_layers_partial (
         c, Slist, counts, incs,
         n - done, g.counts, gorder)

      if used > 0:
         advance = counts [gorder - 1]
         g.increments (used, Slist, counts, incs)
         R += incs [gorder - 1]
         for k in range (gorder - 1):
            Slist [k + 1] += incs [k]
         done += advance

      levels = [[s, c, used, Slist [1]]]
      s, c   = next (levels, 0)
      Slist [0] = s
      del Slist [1]

   # finish the last n - done steps with the first-order group
   if done < n:
      g = groups [0]
      c = n - done
      g.counts (c, Slist, counts, incs)
      g.increments (c, Slist, counts, incs)
      R += incs [0]

   return R

def work_layers (n):
   layers = [
      ChainedLayers.Layer1,
      ChainedLayers.Layer2,
      ChainedLayers.Layer3,
      ChainedLayers.Layer4,
   ]
   groups = [
      LayeredGroups.Group1 (layers [0]),
      LayeredGroups.GroupN (2, layers),
      LayeredGroups.GroupN (3, layers),
      LayeredGroups.GroupN (4, layers),
   ]
   return work_layers_loop (n, groups)
105 2020-05-19 00:15

[5/14]
I do not recommend looking at the fourth-order formula for too long; it might start looking back at you. Here are the updated timings, all in vanilla Python.
- The u64 overflow point R(10^10) is too small for fourth-order S-groups to pay off, so the third-order timing >>76 of under 100 microseconds remains the reference.
- The u128 overflow point R(10^20) is computed in under one millisecond.
- The u256 overflow point R(10^39) is computed in 16 milliseconds.
- The first integer k for which R(10^k) overflows a u512 is k=78. The value of R(10^78) is computed in under 5 seconds.
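These overflow thresholds can be cross-checked without computing R at all: the table values suggest R(10^k) ≈ 5·10^(2k-1), i.e. R(n) ≈ n²/2, so the bit length of R(10^k) is roughly floor(log2 5 + (2k-1)·log2 10) + 1. A small sketch of that estimate (approx_bits is my name; the exact bit lengths are in the table, this is only the leading-order approximation):

```python
from math import log2, floor

def approx_bits (k):
   # R(10**k) ~ 5 * 10**(2*k - 1), so bits ~ floor (log2 (that)) + 1
   return floor (log2 (5) + (2 * k - 1) * log2 (10)) + 1

# first k whose estimated bit length exceeds each word size
for word in (64, 128, 256, 512):
   k = next (k for k in range (1, 200) if approx_bits (k) > word)
   print ("u%d: first overflow at R(10^%d), ~%d bits" % (word, k, approx_bits (k)))
```

This reproduces the crossovers quoted above: k = 10, 20, 39, 78 for u64, u128, u256, u512.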

$ python3 -m timeit -s 'import flake.hofs as mod' 'mod.work_layers (100000000000000000000)'
1000 loops, best of 3: 945 usec per loop
$ python3 -m timeit -s 'import flake.hofs as mod' 'mod.work_layers (1000000000000000000000000000000000000000)'
100 loops, best of 3: 15.6 msec per loop
$ python3 -m timeit -s 'import flake.hofs as mod' 'mod.work_layers (1000000000000000000000000000000000000000000000000000000000000000000000000000000)'
10 loops, best of 3: 4.79 sec per loop

R(10^78) is the appropriate value for comparing implementations with this runtime order of growth. Here is the k<=120 table in >>38 format. This is most likely the last such table I'll post in full, so please excuse the length. Consider it raw research data.

106 2020-05-19 00:16

[6/14]

$ cat table4.txt
  1                                                                                                                                                                                                                                               69   7 0
  2                                                                                                                                                                                                                                             5764  13 0
  3                                                                                                                                                                                                                                           526334  20 0
  4                                                                                                                                                                                                                                         50874761  26 1
  5                                                                                                                                                                                                                                       5028514748  33 1
  6                                                                                                                                                                                                                                     500918449417  39 1
  7                                                                                                                                                                                                                                   50029364025646  46 1
  8                                                                                                                                                                                                                                 5000934571821489  53 2
  9                                                                                                                                                                                                                               500029664631299282  59 2
 10                                                                                                                                                                                                                             50000940106333921296  66 2
 11                                                                                                                                                                                                                           5000029765607020319048  73 2
 12                                                                                                                                                                                                                         500000941936492050650505  79 2
 13                                                                                                                                                                                                                       50000029798618763894670256  86 2
 14                                                                                                                                                                                                                     5000000942529842698007077786  93 2
 15                                                                                                                                                                                                                   500000029809255627825531266625  99 3
107 2020-05-19 00:18

[7/14]

 16                                                                                                                                                                                                                 50000000942720152238624116795401 106 3
 17                                                                                                                                                                                                               5000000029812655507343465281696595 112 3
 18                                                                                                                                                                                                             500000000942780823112495107784753816 119 3
 19                                                                                                                                                                                                           50000000029813737262126730811322149673 126 3
 20                                                                                                                                                                                                         5000000000942800098290022420982686040347 132 3
 21                                                                                                                                                                                                       500000000029814080548392288266955229183571 139 3
 22                                                                                                                                                                                                     50000000000942806209878293665994446398371544 146 3
 23                                                                                                                                                                                                   5000000000029814189323670710814676032031444555 152 3
 24                                                                                                                                                                                                 500000000000942808145472258657037814775197247031 159 3
 25                                                                                                                                                                                               50000000000029814223760934912839828327249688721190 166 3
 26                                                                                                                                                                                             5000000000000942808758091081952084125868709915711089 172 3
 27                                                                                                                                                                                           500000000000029814234658067055252766458087914346630413 179 3
 28                                                                                                                                                                                         50000000000000942808951913512756782469229223199209677641 186 3
 29                                                                                                                                                                                       5000000000000029814238105320163714566005337846445873165675 192 3
 30                                                                                                                                                                                     500000000000000942809013222649956573426148951590151576716407 199 4
108 2020-05-19 00:20

[8/14]

 31                                                                                                                                                                                   50000000000000029814239195666197369782628797813806672178028303 205 4
 32                                                                                                                                                                                 5000000000000000942809032613363227344483520545074111347580106666 212 4
 33                                                                                                                                                                               500000000000000029814239540504829346639521751526121067855028048469 219 4
 34                                                                                                                                                                             50000000000000000942809038745792444395013325500575549841178605176222 225 4
 35                                                                                                                                                                           5000000000000000029814239649559693010895101002840721426848127286038482 232 4
 36                                                                                                                                                                         500000000000000000942809040685134541874198687080125967207639651015633313 239 4
 37                                                                                                                                                                       50000000000000000029814239684047173947897406203055196803310606765583116781 245 4
 38                                                                                                                                                                     5000000000000000000942809041298425788194699108125613517442497904284636150437 252 4
 39                                                                                                                                                                   500000000000000000029814239694953305682145634502486118475654808671411192582303 259 4
 40                                                                                                                                                                 50000000000000000000942809041492368615124254362375085779982992292343853681977675 265 4
 41                                                                                                                                                               5000000000000000000029814239698402168816728284052931912470389462588984951289354475 272 4
 42                                                                                                                                                             500000000000000000000942809041553699275352369083555067298879759994519671549809149522 279 4
 43                                                                                                                                                           50000000000000000000029814239699492802485126708959655333290571225887358803916579127627 285 4
 44                                                                                                                                                         5000000000000000000000942809041573093831553392834637054332803551711977694994007009790372 292 4
 45                                                                                                                                                       500000000000000000000029814239699837692448235838620313436065493069882669969298165276420932 298 4
109 2020-05-19 00:21

[9/14]

 46                                                                                                                                                     50000000000000000000000942809041579226946272526629650233821592037083538450290978659280817924 305 4
 47                                                                                                                                                   5000000000000000000000029814239699946756464770327217558477196037097548855222595700238539054865 312 4
 48                                                                                                                                                 500000000000000000000000942809041581166410560146744996087637489747926009831507674531860092771246 318 4
 49                                                                                                                                               50000000000000000000000029814239699981245576707388365411525341190531682837962715909950754430162916 325 4
 50                                                                                                                                             5000000000000000000000000942809041581779723574457735429357187680244102657245323510219796435077511331 332 4
 51                                                                                                                                           500000000000000000000000029814239699992151998934816806362938022380717694881944763055115621651771537182 338 4
 52                                                                                                                                         50000000000000000000000000942809041581973670277645147248950274174680459501036705469450283128240202936345 345 4
 53                                                                                                                                       5000000000000000000000000029814239699995600913788820676216348225539327181484799269054153129487836460510789 352 4
 54                                                                                                                                     500000000000000000000000000942809041582035001627896095619939474929936571762744906288012004273501460096218956 358 4
 55                                                                                                                                   50000000000000000000000000029814239699996691556662690868009166942952903517096871912188005052332391102154504541 365 4
 56                                                                                                                                 5000000000000000000000000000942809041582054396306898966297540047317634200313888303569207665197822737010723880453 372 4
 57                                                                                                                               500000000000000000000000000029814239699997036448263909956827998702688566561312699712649393246034276487069718675623 378 4
 58                                                                                                                             50000000000000000000000000000942809041582060529443468690126506942010712319519060683290576216509510379646108458690288 385 5
 59                                                                                                                           5000000000000000000000000000029814239699997145512571896807332586675028913049270348666820028707699723929284676993408671 391 5
 60                                                                                                                         500000000000000000000000000000942809041582062468911643702033358845623678711109326514636451791089982268036336668374054905 398 5
110 2020-05-19 00:23

[10/14]

 61                                                                                                                       50000000000000000000000000000029814239699997180001735682481622436253734484378611829704069855253539840822385586766235223775 405 5
 62                                                                                                                     5000000000000000000000000000000942809041582063082225349534414976370874538365419773853994503610373186092493285764626069077930 411 5
 63                                                                                                                   500000000000000000000000000000029814239699997190908167132758797213601135850071440613709845156968123373754366585399707913755854 418 5
 64                                                                                                                 50000000000000000000000000000000942809041582063276172175725114596742736120123079153718814281207807930795334286345090895730008850 425 5
 65                                                                                                               5000000000000000000000000000000029814239699997194357083627207181322355525685599913243509599846712885018947248295200321875155512194 431 5
 66                                                                                                             500000000000000000000000000000000942809041582063337503547853702069293508176406619791980181270066919789232260704875309077658871313179 438 5
 67                                                                                                           50000000000000000000000000000000029814239699997195447726792843063653840657372075719381685614976391722679317994047727999252369922856111 445 5
 68                                                                                                         5000000000000000000000000000000000942809041582063356898230747593270774582830958194072544271344029190586861314812326101187327152707466137 451 5
 69                                                                                                       500000000000000000000000000000000029814239699997195792618445952782479879199592747532578345617801057490177491249280214020810071936484329248 458 5
 70                                                                                                     50000000000000000000000000000000000942809041582063363031368009324911993981653436695081910395214825714129009666207952735693967804641114586360 465 5
 71                                                                                                   5000000000000000000000000000000000029814239699997195901682763168111957892127609767027356231264653317643923496918836459245769003479450192260756 471 5
 72                                                                                                 500000000000000000000000000000000000942809041582063364970836307405265344147379095849591258863086231065338167605132633027300820966076112761019590 478 5
 73                                                                                               50000000000000000000000000000000000029814239699997195936171928594984731404832410431326232603415497802278529513210446399336722685688471505037581717 485 5
 74                                                                                             5000000000000000000000000000000000000942809041582063365584150035124009253385106350912037609197638702902567591834232001254415139381286688289965434265 491 5
 75                                                                                           500000000000000000000000000000000000029814239699997195947078360337128508382533807826849705743505607000402961948719068672551336010824498525861155901131 498 5
111 2020-05-19 00:25

[11/14]

 76                                                                                         50000000000000000000000000000000000000942809041582063365778096865206896625389696631480880925445600815945621310718924565484266894856253334433920035745613 504 5
 77                                                                                       5000000000000000000000000000000000000029814239699997195950527276883481022121537204551816939828037724959179119720005114619789384557978355406667770625673021 511 5
 78                                                                                     500000000000000000000000000000000000000942809041582063365839428238027648024522282176747574937498915190189902751724515553262387695706471431607983773634306949 518 5
 79                                                                                   50000000000000000000000000000000000000029814239699997195951617920058347188612863985188088019716697687439529171713502632907206540254112391538006189341119265333 524 5
 80                                                                                 5000000000000000000000000000000000000000942809041582063365858822921044628544419686861515336911850770576983501847405744828313705187013560455193479696110468330206 531 5
 81                                                                               500000000000000000000000000000000000000029814239699997195951962811713098347227573413736252403144303445747301758908535925291294897583816428354549868387341178588346 538 5
 82                                                                             50000000000000000000000000000000000000000942809041582063365864956058328249337495524348465181244040575792683558297760599704116002857011675528582962015410215017827124 544 5
 83                                                                           5000000000000000000000000000000000000000029814239699997195952071876030605575553722003515432640851210314605052934617877898005396030823635769597183022713568800739692533 551 5
 84                                                                         500000000000000000000000000000000000000000942809041582063365866895526630222251318501129855514085496196105626102656203814913215260423427828184462929343617690500791646666 558 5
 85                                                                       50000000000000000000000000000000000000000029814239699997195952106365196084356764765809535763331820943974807290580225482459630513467554281256221042352711338115017463240541 564 5
 86                                                                     5000000000000000000000000000000000000000000942809041582063365867508840358633208941979850651071615586015009119802497660371744120427648094844027760581745342810140498965106367 571 5
 87                                                                   500000000000000000000000000000000000000000029814239699997195952117271627835731401282096332807194179800666433767421322710901699216993989552590481278739190818085942770801469939 578 5
 88                                                                 50000000000000000000000000000000000000000000942809041582063365867702787188839192281512305720005862101628358486348391543656444785856690079066484000745660388932846521717583436672 584 5
 89                                                               5000000000000000000000000000000000000000000029814239699997195952120720544383725431643624010804777145056002961829350947459896502695796003933627466058904178569549311922074944075782 591 5
 90                                                             500000000000000000000000000000000000000000000942809041582063365867764118561681833720315754323670571454304241020343588011942036131390212670993069425493690411199667029506064382198016 597 5
112 2020-05-19 00:27

[12/14]

 91                                                           50000000000000000000000000000000000000000000029814239699997195952121811187558883507241345558737726545999412237587447898252547907582547510519089410040047788072242400367711147080157727 604 5
 92                                                         5000000000000000000000000000000000000000000000942809041582063365867783513244702706919210496847388097907490730561168684789641855837414230081302939593583530252590810192511687081389584649 611 5
 93                                                       500000000000000000000000000000000000000000000029814239699997195952122156079213686575662975687566014375880870268129216859868227878333405950277188139310550757934138235810778808652737796639 617 5
 94                                                     50000000000000000000000000000000000000000000000942809041582063365867789646381987019941821472362222203714728005221324275936722498476128726389007200479444393049485586814721792217509104079110 624 5
 95                                                   5000000000000000000000000000000000000000000000029814239699997195952122265143531203035031439823370678039196191802270607001412643779490569427497770692978007295457906377399559843297890568189866 631 5
 96                                                 500000000000000000000000000000000000000000000000942809041582063365867791585850289115953723515409412530453716179723949902714208291933178644568856485101792218509649629380619056180826695356377537 637 5
 97                                               50000000000000000000000000000000000000000000000029814239699997195952122299632696683457783499116765673688757669523265396670671188806266268720804525668981781967144106436746280657940665788439590390 644 5
 98                                             5000000000000000000000000000000000000000000000000942809041582063365867792199164017548801668450709192195528441356626675780278327749842187890684434038130314296479515824504606690865756223764948222340 651 5
 99                                           500000000000000000000000000000000000000000000000029814239699997195952122310539128435124332376888152625270659458987595531648669494610577903474208680016494042339796335956144164046454052238695640354016 657 5
100                                         50000000000000000000000000000000000000000000000000942809041582063365867792393110847758677724581317563632235727078699914934976135220713319517131197635890992941277256245865422676335917679671598995284082 664 5
101                                       5000000000000000000000000000000000000000000000000029814239699997195952122313988044983170272979669187564090056204087862225062294241222827751828464949986340519571497298198002598648939074996373483914447397 671 5
102                                     500000000000000000000000000000000000000000000000000942809041582063365867792454442220602011397936848113874420393537095457965215957158912649545530443213392290485913884416759482555267287764026295136483005354 677 5
103                                   50000000000000000000000000000000000000000000000000029814239699997195952122315078688158337579677655571138789770963267360003715873732916510918408098298214394658187267587530489642973331025727113006001249379524 684 5
104                                 5000000000000000000000000000000000000000000000000000942809041582063365867792473836903623007695580001287076860764743889778515149564610140577395467174937323081260061662272697625921374924102431637742255986945708 690 5
105                               500000000000000000000000000000000000000000000000000029814239699997195952122315423579813142289648032340230234978179722289107606270016375163200939959606513106143600536988097227759180425522664743968616636021441441 697 5
113 2020-05-19 00:29

[13/14]

106                             50000000000000000000000000000000000000000000000000000942809041582063365867792479970040907342608601729480939538163751892603941518401843547290072675831349350848171553750716945596883505269063591409273766136095805030 704 5
107                           5000000000000000000000000000000000000000000000000000029814239699997195952122315532644130659040929889331216248472550850083269783136807766340829778013041746760196610695120850754652637718380322298674397791085034653961 710 5
108                         500000000000000000000000000000000000000000000000000000942809041582063365867792481909509209442513232282866157701883060253714497239934368925370275988749321175503399479440603299642882935414549432956007859008157718549272 717 5
109                       50000000000000000000000000000000000000000000000000000029814239699997195952122315567133296139515592327460372291709902029314919549982973759829994860762372140116849311835050087465182281795286741721854664899827352151912566 724 5
110                     5000000000000000000000000000000000000000000000000000000942809041582063365867792482522822937876053413359268446786533037870819916002566574758232461266293922758424473117794836218022356203203733610226062472495023950284860065 730 5
111                   500000000000000000000000000000000000000000000000000000029814239699997195952122315578039727891191372323848942045187196389956266215002510615796659322779348204022642496544547177964957102389077431946880846604910511822868188331 737 5
112                 50000000000000000000000000000000000000000000000000000000942809041582063365867792482716769768086052568450245312452843417669093617146330284626106116978987640409292027276080645020792488674761112283184614545920243865425193780466 744 5
113               5000000000000000000000000000000000000000000000000000000029814239699997195952122315581488644439238954477824572294581975143652415875036756283026191242594450815983121437863375107877624835093736796129957926568426462450366758392478 750 5
114             500000000000000000000000000000000000000000000000000000000942809041582063365867792482778101140929408132244800358157207174012141617105525396954793271772415553258087888277333849857396284671289525172375713654636523221821335286376853 757 5
115           50000000000000000000000000000000000000000000000000000000029814239699997195952122315582579287614406553089530324116033795051581537701215496920388649969502926051838237707358517637569703550538467520187019116517002585257142738052253253 764 5
116         5000000000000000000000000000000000000000000000000000000000942809041582063365867792482797495823950408322620234762526209920619206513480935420807211061249643782036650334666400635855807415064897612541238001972361413653203208834167908773 770 6
117       500000000000000000000000000000000000000000000000000000000029814239699997195952122315582924179269211314970329478642833991142699924678528609685030399323964767533396715839851240530141402298048009514860572457480335347009482712253713858492 777 6
118     50000000000000000000000000000000000000000000000000000000000942809041582063365867792482803628961234743927878607006595156633637303703998869376378658739885940334308940718835321426318790230646440539042552038785423215306810427479795851308403 783 6
119   5000000000000000000000000000000000000000000000000000000000029814239699997195952122315583033243586728075483310895233493405113328031324626690805041965550114607412481285291759737215817417394308157788139557311388604511159666876568051518443324 790 6
120 500000000000000000000000000000000000000000000000000000000000942809041582063365867792482805568429536843955608187840000424522078995935973483241408413592544151956341508175361598028482106450141472450620675293371345306727392622125356063314915792 797 6
114 2020-05-19 00:30

[14/14]
Since I think R(10^100) is about as high as a reasonable person would want to go, I'll only be adding other improvements if they are interesting enough. For example, the numbers involved are well over the limit that the gmpy2 documentation states as the start of its speed advantage over python ints, so it might be time for a gmpy2 or GMP port to compare with vanilla python. So for possibly the last time, please verify the value of R(10^120) with other implementations.

115 2020-05-19 19:30

>>100-114
Well done and congratulations you absolute mad man. I have validated your values up to R(10^14) if that's of any utility.

116 2020-05-19 20:04

>>115

Well done and congratulations you absolute mad man.

Thanks! I come by my strangeness honestly and use it only for good.

I have validated your values up to R(10^14) if that's of any utility.

Of course it is, thank you.

117 2020-05-19 21:23

Big pipi ===3

118 2020-05-20 12:41

I wrote a first-order implementation based on >>18 and my previous implementation >>95. I deviated slightly in that I compute R and take the partial triangular number between every two consecutive values of R, exclusive, instead of doing the run-length encoding of S, so I thought it might be worth sharing:

;; first-order
(define (hofstadter n)
  (let ((r0 (list 1))
        (T (lambda (a b) (quotient (- (* b (- b 1)) (* a (+ a 1))) 2))))
     (let H ((i 1) (r r0) (R r0) (R2 1) (S 1))
       (let ((S1 (if (= (car R) S) (+ S 1) S)))
         (if (> (+ i S1) n)
             (+ R2 (T (car r) (+ (car r) (- n i) 1)))
             (begin
               (set-cdr! r (list (+ (car r) S1)))
               (H (+ i (- S1 1)) (cdr r) (if (= (car R) S) (cdr R) R)
                  (+ R2 (T (car r) (cadr r))) (+ S1 1))))))))

I plan on copying and integrating the higher-order concept into this implementation, if I can.
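For clarity, the T helper above sums the integers strictly between two R values; the same formula as a small Python sketch (the function name is mine):

```python
# Sum of the integers strictly between a and b, i.e. a+1 .. b-1:
# T(a, b) = (b*(b-1) - a*(a+1)) / 2, matching the T lambda above.
def partial_triangle(a, b):
    return (b * (b - 1) - a * (a + 1)) // 2

# The integers strictly between consecutive R values are exactly the
# S values skipped over, e.g. between R=3 and R=7 lie 4, 5 and 6.
assert partial_triangle(3, 7) == 4 + 5 + 6
```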

119 2020-05-20 13:59

It would be nice to have timings with names and versions of Scheme implementations for R(10^10) for >>93 and >>118.

120 2020-05-20 15:49

>>119

It would be nice to have timings with names and versions of Scheme implementations for R(10^10) for >>93 and >>118.

I can oblige. I was going to use `statprof' but for whatever reason it made jump-sequence.scm >>98 crash at R(10^12), so instead I used guile to cache the compilation and then used the time program. This probably makes the results more consistent across languages anyway. I ran each thrice and took the best time for each value, except for R(10^13), which I ran only once on each, and I've included your python3 implementation given in >>18, which should be of the same order of growth.

jump-sequence.scm
R(10^8)  5000934571821489             0.224 secs
R(10^9)  500029664631299282           0.556 secs
R(10^10) 50000940106333921296         1.397 secs
R(10^11) 5000029765607020319048       4.252 secs
R(10^12) 500000941936492050650505    13.444 secs
R(10^13) 50000029798618763894670256  67.324 secs

first-order.scm
R(10^8)  5000934571821489             0.355 secs
R(10^9)  500029664631299282           0.899 secs
R(10^10) 50000940106333921296         2.591 secs
R(10^11) 5000029765607020319048       7.796 secs
R(10^12) 500000941936492050650505    26.916 secs
R(10^13) 50000029798618763894670256 112.138 secs

flake.hofs
R(10^8)  5000934571821489             0.227 secs
R(10^9)  500029664631299282           0.361 secs
R(10^10) 50000940106333921296         0.811 secs
R(10^11) 5000029765607020319048       2.305 secs
R(10^12) 500000941936492050650505     7.006 secs
R(10^13) 50000029798618763894670256  22.340 secs

>>93 and >>118 seem to grow at roughly the same rate, although the sequence approach given in >>98 is superior by a constant factor of roughly two. The python implementation given in >>18 grows like my Scheme translation in >>118 offset by one; this is probably because I am throwing away the last value of R in (cdr r) that I compute. I'll attempt to resolve this.

121 2020-05-20 16:00

>>120

This is probably because I am throwing away the last value of R in (cdr r) that I compute.

This isn't true actually, it must be something else.

122 2020-05-20 19:00 *

I have actually just noticed some very serious errors in >>93, let me fix it up quickly.

123 2020-05-20 19:40

I removed some silly mistakes and changed the sum to be calculated during sequence. According to my amateurish measurements it reduced the runtime to around 80% of the previous version.

(use-modules (ice-9 q))

(define (sequence n)
  (let* ((rs (make-q))                  ; Queue of R values.
         (ss (make-q))                  ; Queue of S values.
         (r 7)                          ; Current R value.
         (s 5)                          ; Current S value.
         (len 1)                        ; Length of queues.
         (negsum -7)                    ; Negative sum of R queue.
         (pos 3)                        ; Smallest n in the queues.
         (n (- n 1)))                   ; Off by one.
    (enq! rs r)
    (enq! ss s)
    (let loop ((i 4))
      (when (<= (+ s r) (+ i n))
        (set! r (+ r s))
        (set! s (+ s 1))
        (set! negsum (- negsum r))
        (when (= s (q-front rs))
          (set! negsum (+ negsum s))
          (set! s (+ s 1))
          (set! len (- len 1))
          (set! pos (+ pos 1))
          (deq! ss)
          (deq! rs))
        (enq! rs r)
        (enq! ss s)
        (loop (+ i 1))))
    (list pos len negsum (cadr rs) (cadr ss))))

(define (sum m n)
  (* (/ (- m n -1) 2) (+ m n)))

(define (jump n)
  (cond
   ((= n 1) 1)
   ((= n 2) 3)
   ((= n 3) 7)
   ((= n 4) 12)
   (else
    (let* ((seq (sequence n))
           (m   (list-ref seq 0))
           (l   (list-ref seq 1))
           (ns  (list-ref seq 2))
           (r'  (list-ref seq 3))
           (s'  (list-ref seq 4))
           (n'  (+ m l -1))
           (s   (+ s' (- n n') l)))
      (+ r' (sum (- s 1) s') ns)))))

I have Guile 3.0.0 and statprof does not crash for me, it just complains that it has a hard time making sense of the stack. It still produces some output but I am not sure if it can be trusted.
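For anyone wanting to sanity-check `jump' against the definition, here is a naive O(n) Python sketch (function name is mine): R starts at 1, each step adds the next S value, and S is the complement of R in the positive integers.

```python
def hofstadter_r(n):
    """Naive reference: returns R(1..n) as a list."""
    R = [1]
    in_r = {1}     # R values seen so far, for complement testing
    s = 2          # next candidate S value
    while len(R) < n:
        while s in in_r:   # S skips every R value
            s += 1
        R.append(R[-1] + s)
        in_r.add(R[-1])
        s += 1
    return R

print(hofstadter_r(10))  # [1, 3, 7, 12, 18, 26, 35, 45, 56, 69]
```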

124 2020-05-20 19:53

>>123
I think I found a performance improvement in my implementation by integrating an idea from my previous version. I'll rerun the timings after that, I'll continue to not use `statprof' as it seems pretty sketchy in general as you mention.

125 2020-05-20 20:27 *

>>123,124
Nice. I also just realized that now I have no reason to save the R values. Their queue has been removed.

(use-modules (ice-9 q))

(define (sequence n)
  (let* ((rs 7)                         ; Smallest R value unused.
         (ss (make-q))                  ; Queue of S values.
         (r 7)                          ; Current R value.
         (s 5)                          ; Current S value.
         (negsum -7)                    ; Negative sum of unused R values.
         (pos 3)                        ; Smallest n in the queues.
         (n (- n 1)))                   ; Off by one.
    (enq! ss s)
    (let loop ((i 4))
      (when (<= (+ s r) (+ i n))
        (set! r (+ r s))
        (set! s (+ s 1))
        (set! negsum (- negsum r))
        (when (= s rs)
          (set! rs (+ rs (q-front ss)))
          (set! negsum (+ negsum s))
          (set! s (+ s 1))
          (set! pos (+ pos 1))
          (deq! ss))
        (enq! ss s)
        (loop (+ i 1))))
    (list pos negsum r (cadr ss))))

(define (sum m n)
  (* (/ (- m n -1) 2) (+ m n)))

(define (jump n)
  (cond
   ((= n 1) 1)
   ((= n 2) 3)
   ((= n 3) 7)
   ((= n 4) 12)
   (else
    (let* ((seq (sequence n))
           (m   (list-ref seq 0))
           (ns  (list-ref seq 1))
           (r'  (list-ref seq 2))
           (s'  (list-ref seq 3))
           (n'  (+ m -1))
           (s   (+ s' (- n n'))))
      (+ r' (sum (- s 1) s') ns)))))

This is my last update for today as it feels spammy. If you are going to rerun the benchmarks, please use this version.

126 2020-05-20 20:43

>>120
Thanks for the timings, Anon. I wanted to see how CPython compares with Guile and friends on the bigint side of things.

127 2020-05-20 20:45

>>123,124

sequence.scm
R(10^8)  5000934571821489            0.121 secs
R(10^9)  500029664631299282          0.421 secs
R(10^10) 50000940106333921296        1.189 secs
R(10^11) 5000029765607020319048      3.671 secs
R(10^12) 500000941936492050650505   13.163 secs
R(10^13) 50000029798618763894670256 54.254 secs

first-order.scm
R(10^8)  5000934571821489            0.268 secs
R(10^9)  500029664631299282          0.609 secs
R(10^10) 50000940106333921296        2.090 secs
R(10^11) 5000029765607020319048      6.838 secs
R(10^12) 500000941936492050650505   21.922 secs
R(10^13) 50000029798618763894670256 69.281 secs

flake.hofs performance remains unchanged, and continues to beat both these implementations except for the R(10^8) case which is now taken by the sequence implementation indicating a low constant. Your old implementation continues to beat mine at every point, and your improved version even more so, on my machine for most values your execution time decreased by about 15%. I improved the memory complexity of my program, and a small constant, which let me get much closer to you, especially when k got large.

128 2020-05-20 20:49

>>125
Here is my improved version: I took the group number directly from >>23 and keep only those values of R necessary to generate S[n] for the first-order R, just as I did with my second implementation >>95. My timings are still shifted one below those of flake.hofs, and I'm still not sure why.

(define (hofstadter n)
  (let ((r0 (list 1))
        (T (lambda (a b) (quotient (- (* b (- b 1)) (* a (+ a 1))) 2)))
        (m (inexact->exact (ceiling (* 1.5 (sqrt n))))))
     (let H ((i 1) (r r0) (R r0) (R2 1) (S 1))
       (let ((S1 (if (= (car R) S) (+ S 1) S)))
         (cond ((> (+ i S1) n) (+ R2 (T (car r) (+ (car r) (- n i) 1))))
               ((>= (car r) m)
                (let E ((i i) (r (car r)) (R R) (R2 R2) (S S))
                  (let ((S1 (if (= (car R) S) (+ S 1) S)))
                    (if (> (+ i S1) n)
                        (+ R2 (T r (+ r (- n i) 1)))
                        (E (+ i (- S1 1)) (+ r S1)
                           (if (= (car R) S) (cdr R) R)
                           (+ R2 (T r (+ r S1))) (+ S1 1))))))
               (else (set-cdr! r (list (+ (car r) S1)))
                     (H (+ i (- S1 1)) (cdr r) (if (= (car R) S) (cdr R) R)
                        (+ R2 (T (car r) (cadr r))) (+ S1 1))))))))
129 2020-05-20 21:01

>>126

Thanks for the timings, Anon. I wanted to see how CPython compares with Guile and friends on the bigint side of things.

No problem! Especially seeing as how these timings seem to have pointed to two bugs in our scheme implementations! I have Chicken Scheme installed as well, and while I can't test the sequence implementation easily because of the library, apparently in Chicken my new implementation is competitive with yours using the python3 interpreter.
R(10^13) 50000029798618763894670256 21.992 secs
If there are no genuine errors in my implementation then boy is the Python3 interpreter fast!

130 2020-05-20 21:25

>>129
Here, this version should work with Guile, Chez Scheme and Chicken.

(define (sequence n)
  (let* ((rs 7)                         ; Smallest R value unused.
         (ss (let ((n (list 5)))        ; Queue of S values.
               (cons n n)))
         (r 7)                          ; Current R value.
         (s 5)                          ; Current S value.
         (negsum -7)                    ; Negative sum of unused R values.
         (pos 3)                        ; Smallest n in the queues.
         (n (- n 1)))                   ; Off by one.
    (let loop ((i 4))
      (when (<= (+ s r) (+ i n))
        (set! r (+ r s))
        (set! s (+ s 1))
        (set! negsum (- negsum r))
        (when (= s rs)
          (set! rs (+ rs (caar ss)))
          (set! negsum (+ negsum s))
          (set! s (+ s 1))
          (set! pos (+ pos 1))
          (set-car! ss (cdar ss)))
        (let ((n (list s)))
          (set-cdr! (cdr ss) n)
          (set-cdr! ss n))
        (loop (+ i 1))))
    (list pos negsum r (cadr ss))))

(define (sum m n)
  (* (/ (- m n -1) 2) (+ m n)))

(define (jump n)
  (cond
   ((= n 1) 1)
   ((= n 2) 3)
   ((= n 3) 7)
   ((= n 4) 12)
   (else
    (let* ((seq (sequence n))
           (m   (list-ref seq 0))
           (ns  (list-ref seq 1))
           (rx  (list-ref seq 2))
           (sx  (list-ref seq 3))
           (nx  (+ m -1))
           (s   (+ sx (- n nx))))
      (+ rx (sum (- s 1) sx) ns)))))

Both perform better on R(10^13) but slow down soon after.

>>126
You might want to run a benchmark that actually stresses bigints. At least for my case, R(10^13) spends at least 80% of its time in GC. If I turn off the GC (GC_DONT_GC=0 guile hofstadter.scm) it is much faster, but at R(10^15), memory consumption blows up.

131 2020-05-20 22:10

>>130
Awesome! I'll run this soon, I set my PC to chug on some larger values while doing my “The Seasoned Schemer” chapter of the day.

132 2020-05-20 22:48

>>127

except for the R(10^8) case which is now taken by the sequence implementation indicating a low constant

Another aspect is that my timings were taken from inside the python process, using the timeit module, in order to time only the computation itself and avoid including the startup time of the CPython interpreter, which is considerable. If measurements are taken from outside the python process, this interpreter startup time will be included.
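The in-process measurement looks roughly like this (a minimal sketch, mirroring the `python3 -m timeit` invocations earlier in the thread, not the actual benchmark code):

```python
import timeit

# Timing inside the process, as `python3 -m timeit` does: the setup
# statement runs once and is excluded, so interpreter startup and the
# construction of the operand are not counted in the result.
elapsed = timeit.timeit("y = x * (x + 1)",
                        setup="x = int('123456' * 13)",
                        number=100_000)
print(f"{elapsed:.4f} secs for 100000 inner-loop runs")
```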

boy is the Python3 interpreter fast!

You're a funny one.

>>130

You might want to run a benchmark that actually stresses bigints.

I think the higher-order versions will serve that purpose.

133 2020-05-20 22:51

>>130

sequence.scm
R(10^13) 50000029798618763894670256 17.100 secs

Anon, your solution is a good deal faster than the Python3 in Chicken. Nice!

>>116
I've now validated your values up to R(10^17) inclusive.

134 2020-05-20 22:54

>>132

Another aspect is that my timings were taken from inside the python process, using the timeit module, in order to time only the computation itself and avoid including the startup time of the CPython interpreter, which is considerable. If measurements are taken from outside the python process, this interpreter startup time will be included.

That makes sense.

You're a funny one.

Nah, I just lack context to make sound appraisals I guess.

135 2020-05-20 22:57

>>133

I've now validated your values up to R(10^17) inclusive.

Thank you.

136 2020-05-21 07:05 *

>>130
I "compressed" the unused S values. Not much improvement in terms of order of growth though.

(define (sequence n)
  (let* ((rs 7)                         ; Smallest R value unused.
         (ss (let ((n (list 7)))        ; S skip queue.
               (cons n n)))
         (sl 5)                         ; Smallest S value unused.
         (r 7)                          ; Current R value.
         (s 5)                          ; Current S value.
         (negsum -7)                    ; Negative sum of unused R values.
         (pos 3)                        ; Smallest n in the queues.
         (n (- n 1)))                   ; Off by one.
    (let loop ((i 4))
      (when (<= (+ s r) (+ i n))
        (set! r (+ r s))
        (set! s (+ s 1))
        (set! negsum (- negsum r))
        (when (= s rs)
          (set! rs (+ rs sl))
          (set! sl (+ sl 1))
          (when (= sl (caar ss))
            (set! sl (+ sl 1))
            (set-car! ss (cdar ss)))
          (set! negsum (+ negsum s))
          (set! s (+ s 1))
          (set! pos (+ pos 1))
          (let ((n (list rs)))
            (set-cdr! (cdr ss) n)
            (set-cdr! ss n)))
        (loop (+ i 1))))
    (list pos negsum r s)))

(define (sum m n)
  (* (/ (- m n -1) 2) (+ m n)))

(define (jump n)
  (cond
   ((= n 1) 1)
   ((= n 2) 3)
   ((= n 3) 7)
   ((= n 4) 12)
   (else
    (let* ((seq (sequence n))
           (m   (list-ref seq 0))
           (ns  (list-ref seq 1))
           (rx  (list-ref seq 2))
           (sx  (list-ref seq 3))
           (nx  (+ m -1))
           (s   (+ sx (- n nx))))
      (+ rx (sum (- s 1) sx) ns)))))

137 2020-05-21 11:38

Intuition would suggest, and profiling confirms, that most of the time for R(10^78) >>105 is spent in Layer4.increment >>102.

$ python3 -m cProfile -s time wrapper.py flake.hofs layers "1$(printf "%078d" 0)" | tee temp/profile.txt | less
levels: 5
1000000000000000000000000000000000000000000000000000000000000000000000000000000 -> 500000000000000000000000000000000000000942809041582063365839428238027648024522282176747574937498915190189902751724515553262387695706471431607983773634306949 0x254AAD173EA5E82CB765169D1EFC9860FE05AE7B1E8FB19314D597561CED1F19F448CA74BCDEE10F0F168DAE0F529DA97AADE66B397A970EDCFEA83CA35C68A385 518
         1579011 function calls (1578437 primitive calls) in 5.325 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
   143277    4.213    0.000    4.213    0.000 hofs.py:821(increment)
   143296    0.297    0.000    0.297    0.000 hofs.py:794(increment)
        1    0.181    0.181    5.322    5.322 hofs.py:909(work_layers_loop)
   143397    0.151    0.000    0.776    0.000 hofs.py:869(counts)
   143331    0.125    0.000    0.125    0.000 hofs.py:771(increment)

But swapping out Layer4 for a gmpy2 version that converts inputs to mpz, performs all the work in mpz and converts outputs to python ints does not yield the expected speedup.

$ python3 -m cProfile -s time wrapper.py numsci.hofs l4 "1$(printf "%078d" 0)" | tee temp/profile.txt | less
levels: 5
1000000000000000000000000000000000000000000000000000000000000000000000000000000 -> 500000000000000000000000000000000000000942809041582063365839428238027648024522282176747574937498915190189902751724515553262387695706471431607983773634306949 0x254AAD173EA5E82CB765169D1EFC9860FE05AE7B1E8FB19314D597561CED1F19F448CA74BCDEE10F0F168DAE0F529DA97AADE66B397A970EDCFEA83CA35C68A385 518
         2726218 function calls (2725626 primitive calls) in 5.041 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
   143277    3.690    0.000    3.832    0.000 hofs.py:24(increment)
   143296    0.301    0.000    0.301    0.000 hofs.py:794(increment)
        1    0.201    0.201    5.026    5.026 hofs.py:909(work_layers_loop)
   143397    0.163    0.000    0.834    0.000 hofs.py:869(counts)
   143331    0.130    0.000    0.130    0.000 hofs.py:771(increment)
  1002940    0.115    0.000    0.115    0.000 {built-in method gmpy2.mpz}

This is despite the fact that the conversion cost at the edges of the method barely registers at 0.115s, and the gmpy2.mpz type is faster than python ints for large numbers.

$ python3 -m timeit -s "x = $(echo 'print ("123456" * 13)' | python3)" "y = x * (x + 1)"
10000000 loops, best of 3: 0.126 usec per loop
$ python3 -m timeit -s 'import gmpy2' -s "x = gmpy2.mpz ($(echo 'print ("123456" * 13)' | python3))" "y = x * (x + 1)"
10000000 loops, best of 3: 0.0702 usec per loop

It seems the gmpy2.mpz advantage isn't nearly as dramatic as I expected. Or there is something else going on that I cannot see yet.

138 2020-05-24 15:10

>>128
I've been doing some other things the past few days so I haven't progressed on the higher-order implementation, other than making a brief note about it. I did take the time to remove the egregious code duplication in my first-order implementation though. It's probably still not as clear as making the run-length encoding explicit as was done in >>18 but it's certainly more readable than the last version. I also realized, just before posting this, how smart a move it was in the sequence implementation >>123,130,136 to, instead of computing the triangular number bounded by the values of R[n-1] and R[n] on every recursion, compute the triangular number of R[n] + (- n i) and then subtract the values of R that have been seen. Something like half of my computation was spent on computing these triangular numbers, and it explained almost the complete difference between our implementations, so I've blatantly stolen this brilliant move and integrated it into my own implementation.

(define (hofstadter n)
  (let ((r0 (list 1))
        (m (inexact->exact (ceiling (* 1.5 (sqrt n)))))
        (T (lambda (a) (quotient (* a (+ a 1)) 2))))
     (let H ((i 1) (r r0) (R r0) (R2 0) (S 1))
       (let* ((skip (= (car R) S))
              (S (if skip (+ S 1) S))
              (rh (car r))
              (rt (if (<= rh m)
                      (begin (set-cdr! r (list (+ rh S))) (cdr r))
                      (list (+ rh S)))))
         (if (> (+ i S) n)
             (- (T (+ rh (- n i))) (+ (- R2 1) rh))
             (H (+ i (- S 1)) rt (if skip (cdr R) R)
                (+ R2 rh) (+ S 1)))))))
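Spelling the stolen trick out in Python for my own benefit (made-up names): since R and S partition the positive integers, the S values up to any bound sum to the full triangular number of that bound minus the R values up to it.

```python
def triangle(x):
    # Triangular number: sum of 1..x
    return x * (x + 1) // 2

# R and S partition the positive integers, so for any bound x:
#   sum(S values <= x) == triangle(x) - sum(R values <= x)
x = 18
r_vals = [1, 3, 7, 12, 18]                                # R prefix from the thread
s_vals = [k for k in range(1, x + 1) if k not in r_vals]  # the complement, i.e. S
assert sum(s_vals) == triangle(x) - sum(r_vals)           # 130 == 171 - 41
```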

>>136

I "compressed" the unused S values. Not much improvement in terms of order of growth though.

sequence.scm
R(10^12) 500000941936492050650505   2.782sec
R(10^13) 50000029798618763894670256 8.979sec

This is a significant improvement; you pretty much halved the runtime on my machine (in Chicken). This makes sense considering your implementation is memory/GC bound, and you just dramatically reduced the number of values you are creating/deleting in `ss'. In contrast, almost the entirety of the time my implementation takes is in computing `T' on each iteration, which makes me wonder if I could copy your... [see the start of the reply :)]

>>137

It seems the gmpy2.mpz advantage isn't nearly as dramatic as I expected.

That's really too bad, I wish I could help understand why.

139 2020-05-25 10:41

Precomputing and storing all the integer constants from Layer4.increment >>102 as mpz values actually slows things down >>137. There are 326 of them. Attribute lookup is so slow in CPython that it is faster to let the same mpz constants be converted anew from python ints on every invocation.
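A generic illustration of the effect (invented names, not the actual hofs.py code): hoisting an attribute into a local variable pays the instance-dict lookup once instead of on every access.

```python
import timeit

class Layer:
    def __init__(self):
        self.constant = 12345  # stands in for a precomputed constant

layer = Layer()

# Every `layer.constant` pays an attribute lookup per access...
attr_time = timeit.timeit("layer.constant + layer.constant",
                          globals={"layer": layer}, number=200_000)
# ...whereas binding it to a local once avoids the repeated lookups.
local_time = timeit.timeit("c + c", setup="c = layer.constant",
                           globals={"layer": layer}, number=200_000)
print(attr_time, local_time)
```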

$ python3 -m cProfile -s time wrapper.py numsci.hofs l4 "1$(printf "%078d" 0)" | tee temp/profile.txt | less
levels: 5
1000000000000000000000000000000000000000000000000000000000000000000000000000000 -> 500000000000000000000000000000000000000942809041582063365839428238027648024522282176747574937498915190189902751724515553262387695706471431607983773634306949 0x254AAD173EA5E82CB765169D1EFC9860FE05AE7B1E8FB19314D597561CED1F19F448CA74BCDEE10F0F168DAE0F529DA97AADE66B397A970EDCFEA83CA35C68A385 518
         2583269 function calls (2582676 primitive calls) in 5.997 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
   143277    4.762    0.000    4.886    0.000 hofs.py:394(increment)
   143296    0.282    0.000    0.282    0.000 hofs.py:794(increment)
        1    0.183    0.183    5.979    5.979 hofs.py:909(work_layers_loop)
   143397    0.153    0.000    0.757    0.000 hofs.py:869(counts)
   143331    0.115    0.000    0.115    0.000 hofs.py:771(increment)
   859989    0.088    0.000    0.088    0.000 {built-in method gmpy2.mpz}

This is also part of why the method caching in LayeredGroups.GroupN >>104 pays off.

>>138

instead of computing the triangular number [...] subtract the values of R that have been seen

This idea was also expressed by >>33 and is used in the OEIS C version >>42.

140 2020-05-25 14:58

>>139

Attribute lookup is so slow in CPython that it is faster to let the same mpz constants be converted anew from python ints on every invocation.

This seems pretty crazy.

This idea was also expressed by >>33 and is used in the OEIS C version >>42.

In that case, thank you also to >>33 and >>42. I probably should have understood this earlier since I am completely certain I read >>33 already, and I have no reason not to have read >>42 other than a superior implementation having been made by that point in the thread. On a related note, I think the higher-order implementation is simply beyond my skills (even with the reference implementation); while I wanted to validate your values for k greater than 17 and provide you with a benchmark for your CPython comparison (the latter of which seems less relevant now with your recent observations), I'll have to leave this to someone else. Best of luck to all those who are going to continue to play!

141 2020-05-25 20:26

>>140

while I wanted to validate your values for k greater than 17 [...] I'll have to leave this to someone else

I checked each successive version against the previous version on those exponents that they could both compute, and because of this validating up to k=17 was already very helpful and increased confidence in the higher values. Thanks!

142 2020-05-26 12:03

>>141

I checked each successive version against the previous version on those exponents that they could both compute, and because of this validating up to k=17 was already very helpful and increased confidence in the higher values. Thanks!

Thank you for the affirmation, and also for your research here. Had you not shown the fun of this sequence, I probably never even would have thought to play.

143 2020-05-26 20:09

For the full gmpy2 port, python ints that set initial values, like that of R >>104, are converted to mpz, while those in running computations are subject to implicit conversion >>139. Divisions by powers of 2 are replaced with t_div_2exp and the rest with divexact.
https://gmpy2.readthedocs.io/en/latest/mpz.html#mpz-functions

$ python3 -m timeit -s 'import numsci.hofs as mod' 'mod.work_gmp ('"1$(printf "%078d" 0)"')'
10 loops, best of 3: 4.18 sec per loop

The time for R(10^78) drops from 4.8s >>105 to 4.2s, so one eighth of the runtime is shaved off. Not as much as I had hoped but an improvement nevertheless.

$ python3 -m cProfile -s time wrapper.py numsci.hofs gmp "1$(printf "%078d" 0)" | tee temp/profile.txt | less
levels: 5
1000000000000000000000000000000000000000000000000000000000000000000000000000000 -> 500000000000000000000000000000000000000942809041582063365839428238027648024522282176747574937498915190189902751724515553262387695706471431607983773634306949 0x254AAD173EA5E82CB765169D1EFC9860FE05AE7B1E8FB19314D597561CED1F19F448CA74BCDEE10F0F168DAE0F529DA97AADE66B397A970EDCFEA83CA35C68A385 518
         2584205 function calls (2583609 primitive calls) in 4.811 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
   143277    3.488    0.000    3.527    0.000 hofs.py:590(increment)
   143296    0.304    0.000    0.351    0.000 hofs.py:560(increment)
        1    0.192    0.192    4.798    4.798 hofs.py:909(work_layers_loop)
   143397    0.153    0.000    0.913    0.000 hofs.py:869(counts)
   143331    0.142    0.000    0.175    0.000 hofs.py:535(increment)
143828/143279    0.100    0.000    0.113    0.000 hofs.py:446(gmp_next)
   429904    0.080    0.000    0.080    0.000 {built-in method gmpy2.divexact}

Perhaps the GMP port will yield a nicer speed boost.

144 2020-06-01 17:35 *

>>130

You might want to run a benchmark that actually stresses bigints.

I retract this statement. Bigints are allocated on the heap and, according to Valgrind's Massif, the overwhelming majority of allocations were caused by them. I compiled the master branch of Guile and it performs significantly better thanks to this commit: https://git.savannah.gnu.org/cgit/guile.git/commit/?id=00fbdfa7345765168e14438eed0b0b8c64c27ab9

145 2020-06-04 11:30 *

The GMP port >>143 of >>101 has the infrastructure completed and the first two layers ported. The same order of growth in python is in >>36, and it computed the u128 overflow R(10^20) in 300 milliseconds. The time in C+GMP is 50 milliseconds, so a speedup by a factor of 6.

$ time bin/gmphofs 100000000000000000000
levels: 5
5000000000942800098290022420982686040347 132

real    0m0,049s
user    0m0,049s
sys     0m0,000s

I expect the speedup for higher layers to suffer from less work being done in non-layer bookkeeping and more in multiplications, but to improve from eliminating allocations for immutable ints. I do not know which effect will prevail. Both python ints and gmpy2.mpzs are immutable so there is some allocation going on in the layer computations, although gmpy2 helps with caching.
https://gmpy2.readthedocs.io/en/latest/overview.html#miscellaneous-gmpy2-functions -> get_cache
But in GMP the mpz_t values are mutable and can be presized, and this is used to avoid any allocations in the main loop.
https://gmplib.org/manual/Efficiency

Time to port the third layer.

146 2020-06-04 11:32

I have no idea how that VIP happened.

147 2020-06-04 16:18

>>145

The time in C+GMP is 50 milliseconds, so a speedup by a factor of 6.

That's a pretty solid constant.

I do not know which effect will prevail

A sure sign of a great experiment! Won't lie, I'm hoping for the performance of the C+GMP and CPython+gmpy2 to converge.

148 2020-06-09 12:04

The third layer of >>101 has been ported to GMP >>145. The same order of growth in python is in >>76 with the trick in >>80 applied, and it computed the u256 overflow R(10^39) in just over 400 milliseconds. The time in C+GMP is 85 milliseconds, so a speedup by a factor of almost 5.

$ time bin/gmphofs "1$(printf "%039d" 0)"
levels: 5
500000000000000000029814239694953305682145634502486118475654808671411192582303 259

real    0m0,085s
user    0m0,084s
sys     0m0,001s
$ python3 
[...]
>>> 417 / 85
4.905882352941177
>>> 

Since the speedup factor decreased, it seems that the weight of the increasing number of multiplications takes precedence over the elimination of an increasing number of allocations. Time to port the fourth layer.

>>147
Reality, it seems, chooses to mold itself to your hopes on this occasion.

149 2020-06-09 16:38

>>148

Reality, it seems, chooses to mold itself to your hopes on this occasion.

sweet.

150 2021-07-19 10:18

A backup of the "Challenge Weekly 2.0 (took a year)" #2036 thread from lainchan, containing the fourth-order GMP port and the reference time of under one second for the u512 overflow R(10^78):
http://0x0.st/-VMB.gz

151 2021-07-19 13:38

Now that's dedication.

152 2022-05-31 18:34

In case anyone needs an O(1) approximation formula:
https://github.com/FrozenVoid/Algorithms-DB/blob/main/Math/Sequences/Hofstadter%20Figure-Figure%20sequence/Interval_R_Formula.txt

153 2022-05-31 20:52

We need a good new challenge.

154 2022-05-31 23:14

>>153

We need a good new challenge.

Here's an easy one to warm up https://textboard.org/prog/34/165-169 involving division by invariant integers using multiplication, a technique commonly used in compilers.
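
For a flavor of the technique, here is a minimal Python sketch of the round-up variant from Granlund and Montgomery's "Division by Invariant Integers using Multiplication". All names here are made up for illustration; the linked thread may develop it differently:

```python
def magic(d, bits=32):
    """Return (m, s) such that (x * m) >> s == x // d for all 0 <= x < 2**bits."""
    assert d > 0
    s = bits + (d - 1).bit_length()  # enough precision: 2**bits * d <= 2**s
    m = (1 << s) // d + 1            # m = floor(2**s / d) + 1 (round up)
    return m, s

def divide(x, m, s):
    # One multiply and one shift replace the much slower hardware divide.
    return (x * m) >> s
```

The correctness condition is that the rounding error e = m*d - 2**s satisfies x*e < 2**s for every representable x, which the choice of s above guarantees; compilers precompute (m, s) once per invariant divisor.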

Or you could try the Stern-Brocot https://textboard.org/prog/222 tree computation of e.
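
One standard route goes through the continued fraction of e, whose terms encode the runs of left/right turns on the Stern-Brocot path; a hedged Python sketch of the convergents (the thread may take a different approach):

```python
from fractions import Fraction
import math

def e_terms(n):
    """First n continued-fraction terms of e: [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]."""
    t, k = [2], 2
    while len(t) < n:
        t += [1, k, 1]  # the pattern repeats as 1, 2j, 1
        k += 2
    return t[:n]

def convergent(terms):
    """Collapse CF terms into a single fraction, folding from the right."""
    x = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        x = a + 1 / x
    return x
```

Each additional term roughly squares the accuracy of the rational approximation, so a couple of dozen terms already pin e down far beyond double precision.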

155 2022-06-01 09:00

>>154
Both are trivial algorithms with little space for tinkering.

156 2022-06-01 12:08 *

>>155

Both are trivial algorithms with little space for tinkering.

The timing ratios https://textboard.org/prog/222#t222p38 between the "trivial algorithms" version and the "space for tinkering" version demonstrate how much space there is for tinkering.
