Commit 31391fd

Update heuristic scaling
1 parent 82eb47c commit 31391fd

File tree: 4 files changed (+4, -4 lines)

FindAFactor/_find_a_factor.cpp

Lines changed: 1 addition & 1 deletion
@@ -1126,7 +1126,7 @@ std::string find_a_factor(std::string toFactorStr, size_t method, size_t nodeCou
   // This level default (scaling) was suggested by Elara (OpenAI GPT).
   const double N = sqrtN.convert_to<double>();
   const double logN = log(N);
-  const BigInteger primeCeilingBigInt = (BigInteger)(smoothnessBoundMultiplier * exp(0.5 * std::sqrt(logN * log(logN))) + 0.5);
+  const BigInteger primeCeilingBigInt = (BigInteger)(smoothnessBoundMultiplier * pow(exp(0.5 * std::sqrt(logN * log(logN))) + 0.5, sqrt(2) / 4));
   const size_t primeCeiling = (size_t)primeCeilingBigInt;
   if (((BigInteger)primeCeiling) != primeCeilingBigInt) {
     throw std::runtime_error("Your primes are out of size_t range! (Your formula smoothness bound calculates to be " + boost::lexical_cast<std::string>(primeCeilingBigInt) + ".) Consider lowering your smoothness bound, since it's unlikely you want to sieve for primes above 2 to the 64th power, but, if so, you can modify the SieveOfEratosthenes() code slightly to allow for this.");
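To see what this heuristic change does numerically, here is an illustrative Python sketch (not part of the repository; the helper names `old_bound` and `new_bound` are hypothetical) comparing the old and new smoothness-bound formulas. Note that in the C++ source the formula is applied to `sqrtN`, the square root of the number to factor.

```python
import math

def old_bound(n: float, multiplier: float = 1.0) -> float:
    # Old heuristic: multiplier * exp(0.5 * sqrt(log N * log log N)),
    # with +0.5 added for rounding before the integer cast.
    log_n = math.log(n)
    return multiplier * math.exp(0.5 * math.sqrt(log_n * math.log(log_n))) + 0.5

def new_bound(n: float, multiplier: float = 1.0) -> float:
    # New heuristic: the same exponential (plus the 0.5 rounding term)
    # is raised to the power sqrt(2)/4 before applying the multiplier,
    # which substantially shrinks the smoothness bound for large N.
    log_n = math.log(n)
    base = math.exp(0.5 * math.sqrt(log_n * math.log(log_n))) + 0.5
    return multiplier * base ** (math.sqrt(2) / 4)

# Example: a 60-digit value (the new bound is orders of magnitude smaller)
n = 1e60
print(old_bound(n), new_bound(n))
```

Since the `sqrt(2)/4` exponent sits inside `pow()` while `smoothnessBoundMultiplier` stays outside, the multiplier still scales the bound linearly, exactly as the README describes.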

README.md

Lines changed: 1 addition & 1 deletion
@@ -48,7 +48,7 @@ The `find_a_factor()` function should return any nontrivial factor of `to_factor
 - `gear_factorization_level` (default value: `1`): This is the value up to which "wheel (and gear) factorization" are applied to "brute force." A value of `11` includes all prime factors of `11` and below and works well for `PRIME_PROVER`, though significantly higher might be preferred in certain cases.
 - `wheel_factorization_level` (default value: `1`): "Wheel" vs. "gear" factorization balances two types of factorization wheel ("wheel" vs. "gear" design) that often work best when the "wheel" is only a few prime-number levels lower than gear factorization. An optimized implementation for wheels is only available up to `13`. The primes above "wheel" level, up to "gear" level, are the primes used specifically for "gear" factorization. Wheel factorization is also applied to map the sieving interval of `FACTOR_FINDER` mode onto non-multiples on the wheel, if the level is set above `1` (which might not actually pay dividends in practical complexity, but we leave it for your experimentation). In `FACTOR_FINDER` mode, wheel factorization multiples are systematically avoided by construction, while gear factorization multiples are just rejected after the fact, without special construction to avoid them. (It is possible in theory to implement handling for gear factorization by construction, though, and that might be added in a future release.)
 - `sieving_bound_multiplier` (default value: `1.0`): This controls the sieving bound and is calibrated such that it linearly multiplies the square root of the number to factor (for each `1.0` increment). While this might be a huge bound, remember that sieving termination is primarily controlled by when `gaussian_elimination_row_multiplier` is exactly satisfied.
-- `smoothness_bound_multiplier` (default value: `1.0`): This controls the smoothness bound and is calibrated such that it linearly multiplies `exp(0.5 * std::sqrt(log(N) * log(log(N))))` for `N` being the number to factor (for each `1.0` increment). This was a heuristic suggested by Elara (an OpenAI custom GPT).
+- `smoothness_bound_multiplier` (default value: `1.0`): This controls the smoothness bound and is calibrated such that it linearly multiplies `pow(exp(0.5 * std::sqrt(log(N) * log(log(N)))), sqrt(2)/4)` for `N` being the number to factor (for each `1.0` increment). This was a heuristic suggested by Elara (an OpenAI custom GPT).
 - `gaussian_elimination_row_offset` (default value: `1`): This controls the number of rows greater than the count of smooth primes that are sieved before Gaussian elimination. Basically, for each increment starting with `1`, the chance of finding at least one solution in Gaussian elimination goes like `(1 - 2^(-m))` for a setting value of `m`: a value of `1` gives a 50% chance of success, and the chance of failure is halved for each unit of `1` added. So long as this setting is low enough, `sieving_bound_multiplier` can be set basically arbitrarily high.
 - `check_small_factors` (default value: `False`): `True` performs initial-phase trial division up to the smoothness bound, and `False` skips it.
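The `gaussian_elimination_row_offset` success-probability claim above can be sanity-checked with a short illustrative sketch (not repository code; `success_chance` is a hypothetical helper):

```python
def success_chance(m: int) -> float:
    # Heuristic from the README: with m rows beyond the smooth-prime
    # count, the chance of at least one solution in Gaussian elimination
    # behaves like 1 - 2^(-m), so each extra row halves the failure rate.
    return 1.0 - 2.0 ** (-m)

for m in (1, 2, 3, 10):
    print(m, success_chance(m))
# m=1 -> 0.5, m=2 -> 0.75, m=3 -> 0.875, m=10 -> 0.9990234375
```

This is why the README says a low offset suffices: failure probability shrinks geometrically, so large offsets buy very little.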

pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ build-backend = "setuptools.build_meta"

 [project]
 name = "FindAFactor"
-version = "6.0.1"
+version = "6.1.0"
 requires-python = ">=3.8"
 description = "Find any nontrivial factor of a number"
 readme = {file = "README.txt", content-type = "text/markdown"}

setup.py

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@ def build_extension(self, ext):

 setup(
     name='FindAFactor',
-    version='6.0.1',
+    version='6.1.0',
     author='Dan Strano',
     author_email='stranoj@gmail.com',
     description='Find any nontrivial factor of a number',

0 commit comments
