@teorth @km-git-acc
I am opening a new issue to help us coordinate and drive the efforts required to finalise the write-up. The OP of the 11th thread nicely summarises the topics still to be addressed:

I) Is my understanding correct that the write-up for the Barrier numerics is pretty much complete now or is there more to be done?
II) The write-up for the numerics needed to show that the 'right side of the Barrier' is zero-free is still incomplete. KM started work on this over Christmas, but I am not sure how far he has progressed (slightly concerned, since he no longer seems active on the blog and doesn't respond to e-mail; hopefully he reads this message and chips in again :) ). We do have an idea of how to complete this piece (also based on the polymath15 10th thread OP):
- We need to ensure that the range $N_a \leq N \leq N_b$ (where $N_a$ is the value of $N$ corresponding to the Barrier location $X$) is zero-free, by checking that for each $N$ the $ABB_{eff}$ lower bound always exceeds the upper bound of the error terms.
- From theory, two lower bounds are available: the Lemma bound (eq. 80 in the write-up) and an approximate Triangle bound (eq. 79 in the write-up). Both bounds can be 'mollified' by choosing an increasing number of primes (up to a point) until the bound is sufficiently positive.
- The Lemma bound is used to find the number of 'mollifiers' required to make the bound positive at $N_a$. For the Barrier location under study, mollifying with the primes $2,3,5,7$ yielded a positive Lemma bound of $0.067$ at the Barrier location $N_a=69098$.
- The approximate Triangle bound (without mollifier) evaluates faster and has been used to establish $N_b = 1700000$, beyond which the analytical lower bound takes over.
- The Lemma bound is then also used to verify that for each $N$ in $[N_a, N_b]$ the lower bound stays sufficiently above the error terms. The Lemma bound only needs to be verified on the line segment $y=y_0$, $t=t_0$, $N_a \leq N(x) \leq N_b$, since it increases monotonically as $y$ goes to $1$.
- Explain that the upper bound on the error terms was chosen conservatively (something like eC_0+3 eB <- need to check).
- To speed up these computations, a fast 'sawtooth' mechanism has been developed. It calculates only the minimally required incremental Lemma bound terms and triggers a full recalculation only when the incremental bound drops below a defined threshold:
a) we wanted a threshold sufficiently above the error bounds, and $0.01$ provided a reasonable buffer.
b) for every successive $N$, if one actually recomputed the bound with the non-sawtooth version, it would keep growing more positive, since the term $(t/2)\log(N)$ in the denominator exponent keeps increasing (the number of summands also grows with increasing $N$, but the denominator effect is much larger). The sawtooth version instead assumes that the summands for $N+1$ are the same as those for $N$ (thus forgoing the denominator advantage) and then subtracts the incremental term (using $N+1$ in the denominator). Since we are essentially subtracting a little for each successive $N$, the sawtooth bound keeps decreasing towards the threshold value; we then recalculate the bound with the non-sawtooth version (which provides a big jump upward) and start the process again.
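The sawtooth scan can be sketched as follows. This is only a schematic illustration: `term` is a simplified, *unmollified* stand-in for the summands of the Lemma/Triangle bounds (eqs. 79–80), of the shape $\exp((t/4)\log^2 n)\,/\,n^{(1+y)/2+(t/2)\log N}$. The actual computation mollifies with the primes $2,3,5,7$ and runs at $y=t=0.2$, where this toy version would not be positive without mollification; the check below therefore uses $t=1$, where the unmollified toy bound is already positive.

```python
import math

def term(n, N, y, t):
    """Toy stand-in for one summand of the lower bound (eqs. 79-80):
    b_n / n^{sigma(N)} with b_n = exp((t/4) log^2 n) and
    sigma(N) = (1+y)/2 + (t/2) log N.  The real bound is mollified by
    Euler factors for small primes, which this sketch omits."""
    b_n = math.exp((t / 4) * math.log(n) ** 2)
    sigma = (1 + y) / 2 + (t / 2) * math.log(N)
    return b_n / n ** sigma

def full_bound(N, y, t):
    """Non-sawtooth ('full') lower bound: 1 minus all summands."""
    return 1.0 - sum(term(n, N, y, t) for n in range(2, N + 1))

def sawtooth_scan(N_a, N_b, threshold, y, t):
    """Check that the lower bound stays above `threshold` on [N_a, N_b].
    Between full recomputations we only subtract the new N-th summand;
    since sigma(N) is increasing, the true bound can only exceed this
    cheap running value, so staying above the threshold is safe."""
    bound = full_bound(N_a, y, t)
    assert bound > threshold, "bound already below threshold at N_a"
    recomputations = 1
    for N in range(N_a + 1, N_b + 1):
        bound -= term(N, N, y, t)        # cheap incremental step
        if bound < threshold:
            bound = full_bound(N, y, t)  # full recomputation: big jump up
            recomputations += 1
            assert bound > threshold, f"bound fell below threshold at N={N}"
    return recomputations
```

With these toy parameters the incremental term is tiny, so the full recalculation is rarely triggered; in the real run at $y=t=0.2$ the recalculation pattern is what produces the sawtooth shape in the plot.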
(insert math descriptions from OP 10th thread of the sawtooth process here)
Plot of Dirichlet and error bounds:

Proposed text as 'caption' for the figure:
The lower bound per Lemma (x.y) in the left graph has been 'mollified' by the first 4 primes and was recalculated each time it dropped below $0.01$. It clearly exceeds the upper bound of the error terms shown in the right graph, demonstrating that $f_{0.2}(x+0.2i)$ does not vanish in this range of $N$. The error bound follows the recalculation pattern of the Lemma bound.
III) For the conditional runs up to DBN $\lt 0.1$, we propose something along the lines of:
- In our quest to reduce the DBN constant further below $0.22$, i.e. into the domain where results are conditional on RH verification up to a certain height, we developed a new approach. For a carefully chosen combination of $y_0, t_0$, the Triangle bound (mollified with the prime $2$) can be made positive already at the location of the Barrier. This implies $N_a=N_b$, hence no further computations are required after the Barrier has been 'cleared' of any zeros passing through it.
So we have to find those $x, y_0, t_0$ combinations for which the Triangle bound just becomes (and stays) positive, and then place the Barrier at an optimal location just beyond the derived $x$-value. (insert curve for all suitable combinations; KM has such a graph)
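A schematic sketch of that search, under loud simplifying assumptions: `toy_bound` below is an *unmollified* placeholder for the Triangle bound of eq. 79, and the parameter values in the example are illustrative only. For a candidate $(y_0, t_0)$ pair, we bisect for the smallest $N$ at which the bound turns positive; the Barrier would then be placed just beyond the corresponding $x$-value.

```python
import math

def toy_bound(N, y, t):
    # Unmollified stand-in for the Triangle bound (eq. 79):
    # 1 - sum_{n=2}^{N} exp((t/4) log^2 n) / n^{(1+y)/2 + (t/2) log N}.
    # The actual search would use the mollified bound from the write-up.
    sigma = (1 + y) / 2 + (t / 2) * math.log(N)
    return 1.0 - sum(math.exp((t / 4) * math.log(n) ** 2) / n ** sigma
                     for n in range(2, N + 1))

def first_positive_N(y, t, N_max=10**4):
    """Bisect for the smallest N with toy_bound(N, y, t) > 0, assuming
    the bound increases with N (the denominator exponent grows like
    (t/2) log N).  Returns None if the bound is not positive by N_max."""
    if toy_bound(N_max, y, t) <= 0:
        return None
    lo, hi = 2, N_max
    while lo < hi:
        mid = (lo + hi) // 2
        if toy_bound(mid, y, t) > 0:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

Scanning such a routine over a grid of $(y_0, t_0)$ pairs would trace out the curve of suitable combinations mentioned above.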
- Explain that the Barrier verification runs use exactly the same math and algorithms as were used for $y=t=0.2$ (i.e. with the stored sums and a 'Tloop').
- Elaborate on the results achieved (all winding numbers zero, minmodABB never $\lt 1$, etc.). (Maybe insert a plot of how the total number of rectangles to be processed increased for each run, and/or how many mesh points needed to be evaluated at $t=0$ for each incremental run.)
- Explain how the accuracy of the results has been assured (e.g. we used 10 digits of precision for $10^{20}, 10^{21}$ and 20 digits for the lower Barriers). We did quite a few checks on the mesh point output (mostly upfront, when we tested the BOINC scripts, especially for $10^{20}$ and $10^{21}$).
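The winding-number checks can be illustrated by a minimal argument-principle sketch: walk the rectangle boundary on a mesh, accumulate the phase change of the function, and divide by $2\pi$. The test functions below are placeholder polynomials, not the effective approximation the actual runs evaluate; the mesh must be fine enough that consecutive phase jumps stay well below $\pi$.

```python
import cmath

def winding_number(f, corners, mesh=200):
    """Number of zeros of f inside a rectangle (argument principle):
    accumulate the phase change of f along the boundary, walked
    counterclockwise with `mesh` points per side, divided by 2*pi."""
    a, b = corners  # lower-left and upper-right corners (complex)
    pts = []
    for k in range(mesh):
        pts.append(complex(a.real + (b.real - a.real) * k / mesh, a.imag))  # bottom
    for k in range(mesh):
        pts.append(complex(b.real, a.imag + (b.imag - a.imag) * k / mesh))  # right
    for k in range(mesh):
        pts.append(complex(b.real - (b.real - a.real) * k / mesh, b.imag))  # top
    for k in range(mesh):
        pts.append(complex(a.real, b.imag - (b.imag - a.imag) * k / mesh))  # left
    pts.append(pts[0])  # close the contour
    total = 0.0
    for z0, z1 in zip(pts, pts[1:]):
        # Principal phase jump in (-pi, pi]; valid as long as f has no
        # zero on (or too near) the boundary and the mesh is fine enough.
        total += cmath.phase(f(z1) / f(z0))
    return round(total / (2 * cmath.pi))
```

A production check would also track the minimum modulus of the function along the boundary (cf. minmodABB) to certify that the mesh spacing is fine enough.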
Does this make sense? Any other data/information to provide?