Compare commits

...

28 Commits

Author SHA1 Message Date
dca1d66346 Uploaded Technical Paper
Some checks failed
Publish Python Package to PyPI / deploy (push) Has been cancelled
2025-11-24 19:02:33 +00:00
1aa2e8875a Made the default values of min/max_range 0.0
All checks were successful
Run Python Tests / test (3.10) (pull_request) Successful in 14s
Run Python Tests / test (3.12) (pull_request) Successful in 26s
Run Python Tests / test (3.8) (pull_request) Successful in 14s
Publish Python Package to PyPI / deploy (push) Successful in 12s
2025-11-05 18:58:20 -04:00
94723dcb88 feat(Function): Add __eq__ method and improve quadratic_solve stability
All checks were successful
Run Python Tests / test (3.10) (pull_request) Successful in 13s
Run Python Tests / test (3.12) (pull_request) Successful in 12s
Run Python Tests / test (3.8) (pull_request) Successful in 14s
Publish Python Package to PyPI / deploy (push) Successful in 13s
Implements two features for the Function class:

1.  Adds the `__eq__` operator (`==`) to allow for logical comparison of two Function objects based on their coefficients.
2.  Replaces the standard quadratic formula with a numerically stable version in `quadratic_solve` to prevent "catastrophic cancellation" errors and improve accuracy.
2025-11-02 12:50:48 -04:00
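To illustrate the "catastrophic cancellation" issue the commit above addresses, here is a small standalone sketch (illustrative only, not the library's API): the textbook formula subtracts two nearly equal numbers when `b*b` dwarfs `4*a*c`, while the stable variant computes the large-magnitude root with `math.copysign` and recovers the other from Vieta's relation `root1 * root2 = c / a`.

```python
import math

def naive_roots(a, b, c):
    """Textbook quadratic formula; loses precision when b*b >> 4*a*c."""
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def stable_roots(a, b, c):
    """Compute the large root without cancellation, then use Vieta's formulas."""
    d = math.sqrt(b * b - 4 * a * c)
    r1 = (-b - math.copysign(d, b)) / (2 * a)  # -b and copysign(d, b) never cancel
    r2 = (c / a) / r1                          # root1 * root2 = c / a
    return r1, r2

# For x^2 + 1e8*x + 1 the true small root is about -1e-8.
print(naive_roots(1.0, 1e8, 1.0))   # small root visibly off due to cancellation
print(stable_roots(1.0, 1e8, 1.0))  # small root accurate to machine precision
```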
f4c5d245e4 fix(ga): Derivative of a constant now returns 0 instead of a throwing an error
All checks were successful
Run Python Tests / test (3.12) (pull_request) Successful in 24s
Run Python Tests / test (3.10) (pull_request) Successful in 30s
Run Python Tests / test (3.8) (pull_request) Successful in 22s
Publish Python Package to PyPI / deploy (push) Successful in 20s
2025-10-31 11:17:29 -04:00
b7ea6c2e23 Added root_precision warning 2025-10-31 11:08:02 -04:00
9d967210fa feat(ga): Overhaul GA for multi-root robustness and CPU performance
All checks were successful
Run Python Tests / test (3.12) (pull_request) Successful in 34s
Run Python Tests / test (3.8) (pull_request) Successful in 35s
Run Python Tests / test (3.10) (pull_request) Successful in 3m8s
Publish Python Package to PyPI / deploy (push) Successful in 12s
### 🚀 Performance (CPU)
* Replaces `np.polyval` with a parallel Numba JIT function (`_calculate_ranks_numba`).
* Replaces $O(N \log N)$ `np.argsort` with $O(N)$ `np.argpartition` in the GA loop.
* Adds `numba` as a core dependency.

### 🧠 Robustness (Algorithm)
* Implements Blend Crossover (BLX-$\alpha$) for better, extrapolative exploration.
* Uses a hybrid selection model (top X% for crossover, 100% for mutation) to preserve root niches.
* Adds `selection_percentile` and `blend_alpha` to `GA_Options` for tuning.
2025-10-30 11:31:00 -04:00
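A self-contained sketch (not library code) of the `argsort` → `argpartition` swap mentioned above: both select the same top-k indices, but partitioning skips the full ordering, which is the O(N log N) → O(N) saving inside the GA loop.

```python
import numpy as np

rng = np.random.default_rng(0)
ranks = rng.random(1_000_000)   # fitness scores, higher is better
k = 5_000                       # e.g. the elite count or parent-pool size

# O(N log N): fully sorts the whole array just to take the top k
top_sorted = np.argsort(-ranks)[:k]

# O(N): only guarantees the top k land in the first k slots, unordered
top_partitioned = np.argpartition(-ranks, k)[:k]

# Same set of indices either way; the GA only needs membership, not order
assert set(top_sorted) == set(top_partitioned)
```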
1318006959 v0.5.1-dev (#20)
All checks were successful
Publish Python Package to PyPI / deploy (push) Successful in 56s
Reviewed-on: #20
Co-authored-by: Jonathan Rampersad <rampersad.jonathan@gmail.com>
Co-committed-by: Jonathan Rampersad <rampersad.jonathan@gmail.com>
2025-10-28 15:42:34 +00:00
2d8c8a09e3 Update README.md
Some checks failed
Publish Python Package to PyPI / deploy (push) Has been cancelled
2025-10-27 21:00:30 +00:00
4e46c11f83 feat(ga): Implement quality filtering and precision-based clustering (#19)
All checks were successful
Publish Python Package to PyPI / deploy (push) Successful in 12s
The previous GA logic returned the "top N" solutions, which led to test failures when the algorithm correctly converged on only one of several possible roots (e.g., returning 1000 variations of -1.0).

This commit fixes the root-finding logic to correctly identify and return *all* unique, high-quality roots:

1.  **feat(api):** Adds `root_precision` to `GA_Options`. This new parameter (default: 5) allows the user to control the number of decimal places for clustering unique roots.

2.  **fix(ga):** Replaces the flawed "top N" logic in both `_solve_x_numpy` and `_solve_x_cuda`. The new process is:
    * Dynamically sets a `quality_threshold` based on the user's `root_precision` (e.g., `precision=5` requires a rank > `1e6`).
    * Filters the *entire* final population for all solutions that meet this quality threshold.
    * Rounds these high-quality solutions to `root_precision`.
    * Returns only the `np.unique()` results.

This ensures the solver returns all distinct roots that meet the accuracy requirements, rather than just the top N variations of a single root.

Reviewed-on: #19
Co-authored-by: Jonathan Rampersad <rampersad.jonathan@gmail.com>
Co-committed-by: Jonathan Rampersad <rampersad.jonathan@gmail.com>
2025-10-27 19:26:50 +00:00
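A toy illustration of the filter → round → unique pipeline described above (the population and error values are made up for the example; `rank = 1 / |error|` as in the solver):

```python
import numpy as np

# Hypothetical final population and its fitness (rank = 1 / |error|)
solutions = np.array([-1.0000003, -0.9999998, 2.4999997, 2.5000004, 7.3])
errors    = np.array([ 3e-7,       2e-7,       3e-7,      4e-7,      5.1])
ranks     = 1.0 / np.abs(errors)

root_precision = 5
quality_threshold = 10 ** (root_precision + 1)  # precision=5 -> rank must exceed 1e6

good = solutions[ranks > quality_threshold]      # drops the low-quality 7.3
clustered = np.round(good, root_precision)       # collapses near-duplicates of each root
print(np.unique(clustered))                      # -> [-1.   2.5]
```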
962eab5af7 feat(ga): Implement Cauchy's bound for automatic root range detection
All checks were successful
Run Python Tests / test (3.12) (pull_request) Successful in 10s
Run Python Tests / test (3.10) (pull_request) Successful in 17s
Run Python Tests / test (3.8) (pull_request) Successful in 10s
Publish Python Package to PyPI / deploy (push) Successful in 12s
The previous benchmark results showed that the GA was failing to find accurate roots (high MAE) for many polynomials. This was because the fixed default search range ([-100, 100]) was often incorrect, and the GA was searching in the wrong place.

This commit introduces a significantly more robust solution:

1.  Adds a `_get_cauchy_bound` helper function to mathematically calculate a search radius that is guaranteed to contain all real roots.

2.  Updates `_solve_x_numpy` and `_solve_x_cuda` with new logic:
    * If the user provides a *custom* `min_range` or `max_range`, we treat them as an expert and use their specified range.
    * If the user is using the *default* range, we silently discard it and use the smarter, automatically-calculated Cauchy bound instead.

This provides the best of both worlds: a powerful, smart default for most users and an "expert override" for those who need to fine-tune the search area.
2025-10-27 14:33:12 -04:00
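The helper below mirrors the Cauchy-bound logic described above (the full `_get_cauchy_bound` implementation appears in the diff further down; this is just a compact restatement):

```python
import numpy as np

def cauchy_bound(coeffs):
    """R = 1 + max(|c_i / c_n|) over the non-leading coefficients.
    Every root (real or complex) lies within the disk |x| <= R."""
    coeffs = np.asarray(coeffs, dtype=float)
    return 1.0 + np.max(np.abs(coeffs[1:] / coeffs[0]))

# f(x) = 2x^2 - 3x - 5 has roots -1 and 2.5
print(cauchy_bound([2, -3, -5]))  # 3.5 -> search [-3.5, 3.5] instead of [-100, 100]
```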
7c75000637 fix(ga): Suppress divide-by-zero warning in NumPy solver
All checks were successful
Publish Python Package to PyPI / deploy (push) Successful in 18s
Run Python Tests / test (3.12) (pull_request) Successful in 12s
Run Python Tests / test (3.10) (pull_request) Successful in 17s
Run Python Tests / test (3.8) (pull_request) Successful in 10s
The `_solve_x_numpy` method was correctly using `np.where(error == 0, ...)` to handle perfect roots. However, NumPy eagerly calculates `1.0 / error` for the entire array before applying the `where` condition, which was causing a `RuntimeWarning: divide by zero` when a perfect root was found.

This warning was harmless but created unnecessary console noise during testing and use.

This commit wraps the `ranks = ...` assignments in a `with np.errstate(divide='ignore'):` block to silence this specific, expected warning. The CUDA kernel is unaffected as its ternary operator already prevents this calculation.
2025-10-27 12:25:54 -04:00
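A minimal reproduction of the pattern described above: NumPy evaluates `1.0 / error` for the whole array before `np.where` selects the fallback, so the division by zero still happens and only the warning needs suppressing.

```python
import numpy as np

error = np.array([0.5, 0.0, -0.25])  # a perfect root produces error == 0

# Without the errstate guard, this line emits
# "RuntimeWarning: divide by zero encountered" even though
# np.where never uses the infinite value.
with np.errstate(divide='ignore'):
    ranks = np.where(error == 0, np.finfo(float).max, np.abs(1.0 / error))

print(ranks)  # [2.0, ~1.8e308, 4.0]
```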
c3b3513e79 feat(ga, api): Implement advanced GA strategy and refactor API for v0.4.0 (#16)
All checks were successful
Publish Python Package to PyPI / deploy (push) Successful in 18s
This commit introduces a major enhancement to the genetic algorithm's convergence logic and refactors key parts of the API for better clarity and usability.

- **feat(ga):** Re-implements the GA solver (CPU & CUDA) to use a more robust strategy based on Elitism, Crossover, and Mutation. This replaces the previous, less efficient model and is designed to significantly improve accuracy and convergence speed.

- **feat(api):** Updates `GA_Options` to expose the new GA strategy parameters:
    - Renames `mutation_percentage` to `mutation_strength` for clarity.
    - Adds `elite_ratio`, `crossover_ratio`, and `mutation_ratio`.
    - Includes a `__post_init__` validator to ensure ratios are valid.

- **refactor(api):** Moves `quadratic_solve` from a standalone function to a method of the `Function` class (`f1.quadratic_solve()`). This provides a cleaner, more object-oriented API.

- **docs:** Updates the README, `GA_Options` doc page, and `quadratic_solve` doc page to reflect all API changes, new parameters, and updated usage examples.

- **chore:** Bumps version to 0.4.0.

Reviewed-on: #16
Co-authored-by: Jonathan Rampersad <rampersad.jonathan@gmail.com>
Co-committed-by: Jonathan Rampersad <rampersad.jonathan@gmail.com>
2025-10-27 14:20:56 +00:00
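A usage sketch of the strategy parameters this commit introduces (values taken from the defaults documented in the diff below; illustrative, not output-verified):

```python
from polysolve import Function, GA_Options

# Ratios describe how each new generation is assembled; whatever is left
# over (10% here) is filled with fresh random solutions.
opts = GA_Options(
    elite_ratio=0.05,       # best 5% carried over unchanged
    crossover_ratio=0.45,   # 45% bred from the parent pool
    mutation_ratio=0.40,    # 40% mutated by +/- mutation_strength
    mutation_strength=0.01,
)

f1 = Function(largest_exponent=2)
f1.set_coeffs([2, -3, -5])
roots = f1.get_real_roots(opts, use_cuda=False)

# The __post_init__ validator rejects ratio sets that sum past 1.0, e.g.:
# GA_Options(elite_ratio=0.5, crossover_ratio=0.4, mutation_ratio=0.2)  # ValueError
```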
0536003dce Update README.md
Some checks failed
Publish Python Package to PyPI / deploy (push) Has been cancelled
2025-06-30 15:39:47 +00:00
bb89149930 Update .all-contributorsrc
Some checks failed
Publish Python Package to PyPI / deploy (push) Has been cancelled
2025-06-30 15:38:24 +00:00
6596c2df99 typo fix
All checks were successful
Publish Python Package to PyPI / deploy (push) Successful in 14s
2025-06-19 18:00:29 +00:00
24337cea48 Updated Project urls
Some checks failed
Run Python Tests / test (3.12) (pull_request) Successful in 10s
Run Python Tests / test (3.10) (pull_request) Successful in 14s
Run Python Tests / test (3.8) (pull_request) Successful in 10s
Publish Python Package to PyPI / deploy (push) Failing after 14s
Signed-off-by: Jonathan Rampersad <jonathan@jono-rams.work>
2025-06-19 17:58:50 +00:00
ee18cc9e59 Added Branding (#14)
All checks were successful
Publish Python Package to PyPI / deploy (push) Successful in 19s
Reviewed-on: #14
2025-06-19 17:54:07 +00:00
ce464cffd4 FEAT: Added support for float coefficients (#13)
All checks were successful
Publish Python Package to PyPI / deploy (push) Successful in 13s
Reviewed-on: #13
Co-authored-by: Jonathan Rampersad <rampersad.jonathan@gmail.com>
Co-committed-by: Jonathan Rampersad <rampersad.jonathan@gmail.com>
2025-06-18 13:20:18 +00:00
c94d08498d Edited README to advise of function * function multiplication being available (#12)
All checks were successful
Publish Python Package to PyPI / deploy (push) Successful in 17s
Reviewed-on: #12
2025-06-18 12:55:37 +00:00
3aad9efb61 Merge pull request 'readme-patch' (#11) from readme-patch into main
All checks were successful
Publish Python Package to PyPI / deploy (push) Successful in 17s
Reviewed-on: #11
2025-06-17 18:37:50 +00:00
32d6cfeeea Update pyproject.toml
All checks were successful
Run Python Tests / test (3.12) (pull_request) Successful in 9s
Run Python Tests / test (3.10) (pull_request) Successful in 13s
Run Python Tests / test (3.8) (pull_request) Successful in 9s
2025-06-17 18:37:33 +00:00
8d6fe7aca0 Update README.md
Signed-off-by: Jonathan Rampersad <jonathan@jono-rams.work>
2025-06-17 18:37:17 +00:00
7927845f17 Merge pull request 'v0.2.0' (#10) from v0.2.0-dev into main
All checks were successful
Publish Python Package to PyPI / deploy (push) Successful in 13s
Reviewed-on: #10
2025-06-17 18:36:25 +00:00
ac591f49ec docs: Added documentation for nth_derivative function
All checks were successful
Run Python Tests / test (3.12) (pull_request) Successful in 10s
Run Python Tests / test (3.10) (pull_request) Successful in 13s
Run Python Tests / test (3.8) (pull_request) Successful in 10s
2025-06-17 14:35:03 -04:00
ec97aefee1 feat: Added nth derivative showcase in __main__ 2025-06-17 14:34:16 -04:00
d27497488f fix: differential in README.md renamed to derivative 2025-06-17 14:30:40 -04:00
41daf4f7e0 Remove CONTRIBUTORS.md 2025-06-17 14:30:11 -04:00
36f51ca67e fix: Typo in test
All checks were successful
Run Python Tests / test (3.12) (pull_request) Successful in 11s
Run Python Tests / test (3.10) (pull_request) Successful in 13s
Run Python Tests / test (3.8) (pull_request) Successful in 10s
2025-06-17 14:27:53 -04:00
7 changed files with 555 additions and 141 deletions

View File

@@ -19,6 +19,7 @@
"avatar_url": "https://avatars.githubusercontent.com/u/29872001?v=4", "avatar_url": "https://avatars.githubusercontent.com/u/29872001?v=4",
"profile": "https://jono-rams.work", "profile": "https://jono-rams.work",
"contributions": [ "contributions": [
"maintenance",
"code", "code",
"doc", "doc",
"infra" "infra"

View File

@@ -1,10 +0,0 @@
-## Contributors
-<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
-<!-- prettier-ignore-start -->
-<!-- markdownlint-disable -->
-[![All Contributors](https://img.shields.io/github/all-contributors/jono-rams/PolySolve?color=ee8449&style=flat-square)](#contributors)
-<!-- markdownlint-restore -->
-<!-- prettier-ignore-end -->
-<!-- ALL-CONTRIBUTORS-LIST:END -->

Binary file not shown.

View File

@@ -1,17 +1,20 @@
-# polysolve
+<p align="center">
+  <img src="https://i.ibb.co/N22Gx6xq/Poly-Solve-Logo.png" alt="polysolve Logo" width="256">
+</p>

[![PyPI version](https://img.shields.io/pypi/v/polysolve.svg)](https://pypi.org/project/polysolve/)
[![PyPI pyversions](https://img.shields.io/pypi/pyversions/polysolve.svg)](https://pypi.org/project/polysolve/)

-A Python library for representing, manipulating, and solving polynomial equations using a high-performance genetic algorithm, with optional CUDA/GPU acceleration.
+A Python library for representing, manipulating, and solving polynomial equations. Features a high-performance, Numba-accelerated genetic algorithm for CPU, with an optional CUDA/GPU backend for massive-scale parallel solving.

---

## Key Features

-* **Create and Manipulate Polynomials**: Easily define polynomials of any degree and perform arithmetic operations like addition, subtraction, and scaling.
-* **Genetic Algorithm Solver**: Find approximate real roots for complex polynomials where analytical solutions are difficult or impossible.
+* **Numerically Stable Solver**: Makes complex calculations **practical**. Leverage your GPU to power the robust genetic algorithm, solving high-degree polynomials accurately in a reasonable timeframe.
+* **Numba Accelerated CPU Solver**: The default genetic algorithm is JIT-compiled with Numba for high-speed CPU performance, right out of the box.
* **CUDA Accelerated**: Leverage NVIDIA GPUs for a massive performance boost when finding roots in large solution spaces.
+* **Create and Manipulate Polynomials**: Easily define polynomials of any degree using integer or float coefficients, and perform arithmetic operations like addition, subtraction, multiplication, and scaling.
* **Analytical Solvers**: Includes standard, exact solvers for simple cases (e.g., `quadratic_solve`).
* **Simple API**: Designed to be intuitive and easy to integrate into any project.
@@ -41,11 +44,12 @@ pip install polysolve[cuda12]
Here is a simple example of how to define a quadratic function, find its properties, and solve for its roots.

```python
-from polysolve import Function, GA_Options, quadratic_solve
+from polysolve import Function, GA_Options

# 1. Define the function f(x) = 2x^2 - 3x - 5
+# Coefficients can be integers or floats.
f1 = Function(largest_exponent=2)
-f1.set_constants([2, -3, -5])
+f1.set_coeffs([2, -3, -5])

print(f"Function f1: {f1}")
# > Function f1: 2x^2 - 3x - 5
@@ -56,18 +60,23 @@ print(f"Value of f1 at x=5 is: {y_val}")
# > Value of f1 at x=5 is: 30.0

# 3. Get the derivative: 4x - 3
-df1 = f1.differential()
+df1 = f1.derivative()
print(f"Derivative of f1: {df1}")
# > Derivative of f1: 4x - 3

-# 4. Find roots analytically using the quadratic formula
+# 4. Get the 2nd derivative: 4
+ddf1 = f1.nth_derivative(2)
+print(f"2nd Derivative of f1: {ddf1}")
+# > 2nd Derivative of f1: 4
+
+# 5. Find roots analytically using the quadratic formula
# This is exact and fast for degree-2 polynomials.
-roots_analytic = quadratic_solve(f1)
+roots_analytic = f1.quadratic_solve()
print(f"Analytic roots: {sorted(roots_analytic)}")
# > Analytic roots: [-1.0, 2.5]

-# 5. Find roots with the genetic algorithm (CPU)
-# This can solve polynomials of any degree.
+# 6. Find roots with the genetic algorithm (Numba CPU)
+#    This is the default, JIT-compiled CPU solver.
ga_opts = GA_Options(num_of_generations=20)
roots_ga = f1.get_real_roots(ga_opts, use_cuda=False)
print(f"Approximate roots from GA: {roots_ga[:2]}")
@@ -81,6 +90,41 @@ print(f"Approximate roots from GA: {roots_ga[:2]}")
---

+## Tuning the Genetic Algorithm
+
+The `GA_Options` class gives you fine-grained control over the genetic algorithm's performance, letting you trade speed for accuracy.
+
+The default options are balanced, but for very complex polynomials, you may want a more exhaustive search.
+
+```python
+from polysolve import GA_Options
+
+# Create a config for a deep search, optimized for finding
+# *all* real roots (even if they are far apart).
+ga_robust_search = GA_Options(
+    num_of_generations=50,  # Run for more generations
+    data_size=500000,       # Use a larger population
+
+    # --- Key Tuning Parameters for Multi-Root Finding ---
+
+    # Widen the parent pool to 75% to keep more "niches"
+    # (solution-clouds around different roots) alive.
+    selection_percentile=0.75,
+
+    # Increase the crossover blend factor to 0.75.
+    # This allows new solutions to be created further
+    # away from their parents, increasing exploration.
+    blend_alpha=0.75
+)
+
+# Pass the custom options to the solver
+roots = f1.get_real_roots(ga_robust_search)
+```
+
+For a full breakdown of all parameters, including `crossover_ratio`, `mutation_strength`, and more, please see [the full GA_Options API Documentation](https://polysolve.jono-rams.work/docs/ga-options-api).
+
+---

## Development & Testing Environment

This project is automatically tested against a specific set of dependencies to ensure stability. Our Continuous Integration (CI) pipeline runs on an environment using **CUDA 12.5** on **Ubuntu 24.04**.
@@ -105,7 +149,7 @@ Please read our `CONTRIBUTING.md` file for details on our code of conduct and th
<table>
  <tbody>
    <tr>
-      <td align="center" valign="top" width="14.28%"><a href="https://jono-rams.work"><img src="https://avatars.githubusercontent.com/u/29872001?v=4?s=100" width="100px;" alt="Jonathan Rampersad"/><br /><sub><b>Jonathan Rampersad</b></sub></a><br /><a href="https://github.com/jono-rams/PolySolve/commits?author=jono-rams" title="Code">💻</a> <a href="https://github.com/jono-rams/PolySolve/commits?author=jono-rams" title="Documentation">📖</a> <a href="#infra-jono-rams" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a></td>
+      <td align="center" valign="top" width="14.28%"><a href="https://jono-rams.work"><img src="https://avatars.githubusercontent.com/u/29872001?v=4?s=100" width="100px;" alt="Jonathan Rampersad"/><br /><sub><b>Jonathan Rampersad</b></sub></a><br /><a href="https://github.com/jono-rams/PolySolve/commits?author=jono-rams" title="Maintenance">🚧</a> <a href="https://github.com/jono-rams/PolySolve/commits?author=jono-rams" title="Code">💻</a> <a href="https://github.com/jono-rams/PolySolve/commits?author=jono-rams" title="Documentation">📖</a> <a href="#infra-jono-rams" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a></td>
    </tr>
  </tbody>
  <tfoot>

View File

@@ -5,7 +5,7 @@ build-backend = "setuptools.build_meta"
[project]
# --- Core Metadata ---
name = "polysolve"
-version = "0.2.0"
+version = "0.6.3"
authors = [
  { name="Jonathan Rampersad", email="jonathan@jono-rams.work" },
]
@@ -33,7 +33,8 @@ classifiers = [
# --- Dependencies ---
dependencies = [
-    "numpy>=1.21"
+    "numpy>=1.21",
+    "numba"
]

# --- Optional Dependencies (Extras) ---
@@ -42,6 +43,7 @@ cuda12 = ["cupy-cuda12x"]
dev = ["pytest"] dev = ["pytest"]
[project.urls] [project.urls]
Homepage = "https://github.com/jono-rams/PolySolve" Homepage = "https://polysolve.jono-rams.work"
"Source Code" = "https://github.com/jono-rams/PolySolve" Documentation = "https://polysolve.jono-rams.work/docs"
Repository = "https://github.com/jono-rams/PolySolve"
"Bug Tracker" = "https://github.com/jono-rams/PolySolve/issues" "Bug Tracker" = "https://github.com/jono-rams/PolySolve/issues"

View File

@@ -1,5 +1,6 @@
import math
import numpy as np
+import numba
from dataclasses import dataclass
from typing import List, Optional, Union
import warnings
@@ -12,10 +13,10 @@ try:
except ImportError:
    _CUPY_AVAILABLE = False

-# The CUDA kernel for the fitness function
-_FITNESS_KERNEL = """
+# The CUDA kernels for the fitness function
+_FITNESS_KERNEL_FLOAT = """
extern "C" __global__ void fitness_kernel(
-    const long long* coefficients,
+    const double* coefficients,
    int num_coefficients,
    const double* x_vals,
    double* ranks,
@@ -25,11 +26,10 @@ extern "C" __global__ void fitness_kernel(
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    if (idx < size)
    {
-        double ans = 0;
-        int lrgst_expo = num_coefficients - 1;
-        for (int i = 0; i < num_coefficients; ++i)
+        double ans = coefficients[0];
+        for (int i = 1; i < num_coefficients; ++i)
        {
-            ans += coefficients[i] * pow(x_vals[idx], (double)(lrgst_expo - i));
+            ans = ans * x_vals[idx] + coefficients[i];
        }
        ans -= y_val;
@@ -38,6 +38,31 @@ extern "C" __global__ void fitness_kernel(
    }
}
"""

+@numba.jit(nopython=True, fastmath=True, parallel=True)
+def _calculate_ranks_numba(solutions, coefficients, y_val, ranks):
+    """
+    A Numba-jitted, parallel function to calculate fitness.
+    This replaces np.polyval and the rank calculation.
+    """
+    num_coefficients = coefficients.shape[0]
+    data_size = solutions.shape[0]
+
+    # This prange will be run in parallel on all your CPU cores
+    for idx in numba.prange(data_size):
+        x_val = solutions[idx]
+
+        # Horner's method (same as np.polyval)
+        ans = coefficients[0]
+        for i in range(1, num_coefficients):
+            ans = ans * x_val + coefficients[i]
+
+        ans -= y_val
+
+        if ans == 0.0:
+            ranks[idx] = 1.7976931348623157e+308  # np.finfo(float).max
+        else:
+            ranks[idx] = abs(1.0 / ans)
+
@dataclass
class GA_Options:
    """
@@ -45,18 +70,95 @@ class GA_Options:
    Attributes:
        min_range (float): The minimum value for the initial random solutions.
+            Default: 0.0
        max_range (float): The maximum value for the initial random solutions.
+            Default: 0.0
        num_of_generations (int): The number of iterations the algorithm will run.
+            Default: 10
-        sample_size (int): The number of top solutions to keep and return.
-        data_size (int): The total number of solutions generated in each generation.
-        mutation_percentage (float): The amount by which top solutions are mutated each generation.
+        data_size (int): The total number of solutions (population size)
+            generated in each generation. Default: 100000
+        mutation_strength (float): The percentage (e.g., 0.01 for 1%) by which
+            a solution is mutated. Default: 0.01
+        elite_ratio (float): The percentage (e.g., 0.05 for 5%) of the *best*
+            solutions to carry over to the next generation
+            unchanged (elitism). Default: 0.05
+        crossover_ratio (float): The percentage (e.g., 0.45 for 45%) of the next
+            generation to be created by "breeding" two
+            solutions from the parent pool. Default: 0.45
+        mutation_ratio (float): The percentage (e.g., 0.40 for 40%) of the next
+            generation to be created by mutating solutions
+            from the parent pool. Default: 0.40
+        selection_percentile (float): The top percentage (e.g., 0.66 for 66%)
+            of solutions to use as the parent pool
+            for crossover. A smaller value speeds
+            up single-root convergence; a larger
+            value helps find multiple roots.
+            Default: 0.66
+        blend_alpha (float): The expansion factor for Blend Crossover (BLX-alpha).
+            0.0 = average crossover (no expansion).
+            0.5 = 50% expansion beyond the parent range.
+            Default: 0.5
+        root_precision (int): The number of decimal places to round roots to
+            when clustering. A smaller number (e.g., 3)
+            groups roots more aggressively. A larger number
+            (e.g., 7) is more precise but may return
+            multiple near-identical roots. Default: 5
    """
-    min_range: float = -100.0
-    max_range: float = 100.0
+    min_range: float = 0.0
+    max_range: float = 0.0
    num_of_generations: int = 10
-    sample_size: int = 1000
    data_size: int = 100000
-    mutation_percentage: float = 0.01
+    mutation_strength: float = 0.01
+    elite_ratio: float = 0.05
+    crossover_ratio: float = 0.45
+    mutation_ratio: float = 0.40
+    selection_percentile: float = 0.66
+    blend_alpha: float = 0.5
+    root_precision: int = 5
+    def __post_init__(self):
+        """Validates the GA options after initialization."""
+        total_ratio = self.elite_ratio + self.crossover_ratio + self.mutation_ratio
+        if total_ratio > 1.0:
+            raise ValueError(
+                f"The sum of elite_ratio, crossover_ratio, and mutation_ratio must be <= 1.0, but got {total_ratio}"
+            )
+        if any(r < 0 for r in [self.elite_ratio, self.crossover_ratio, self.mutation_ratio]):
+            raise ValueError("GA ratios cannot be negative.")
+        if not (0 < self.selection_percentile <= 1.0):
+            raise ValueError(
+                f"selection_percentile must be between 0 (exclusive) and 1.0 (inclusive), but got {self.selection_percentile}"
+            )
+        if self.blend_alpha < 0:
+            raise ValueError(
+                f"blend_alpha cannot be negative, but got {self.blend_alpha}"
+            )
+        if self.root_precision > 15:
+            warnings.warn(
+                f"root_precision={self.root_precision} is greater than 15. "
+                "This demands an accuracy that is likely impossible for standard "
+                "64-bit floats (float64), which are limited to 15-16 significant digits. "
+                "The solver may fail to find any roots.",
+                UserWarning,
+                stacklevel=2
+            )
+
+def _get_cauchy_bound(coeffs: np.ndarray) -> float:
+    """
+    Calculates Cauchy's bound for the roots of a polynomial.
+
+    This provides a radius R such that all roots (real and complex)
+    have an absolute value less than or equal to R.
+
+    R = 1 + max(|c_n-1/c_n|, |c_n-2/c_n|, ..., |c_0/c_n|)
+    Where c_n is the leading coefficient (coeffs[0]).
+    """
+    # Normalize all coefficients by the leading coefficient
+    normalized_coeffs = np.abs(coeffs[1:] / coeffs[0])
+
+    # The bound is 1 + the maximum of these normalized values
+    R = 1 + np.max(normalized_coeffs)
+    return R
+
class Function:
    """
@@ -76,12 +178,13 @@ class Function:
        self.coefficients: Optional[np.ndarray] = None
        self._initialized = False

-    def set_coeffs(self, coefficients: List[int]):
+    def set_coeffs(self, coefficients: List[Union[int, float]]):
        """
        Sets the coefficients of the polynomial.

        Args:
-            coefficients (List[int]): A list of integer coefficients. The list size
+            coefficients (List[Union[int, float]]): A list of integer or float
+                coefficients. The list size
                must be largest_exponent + 1.

        Raises:
@@ -96,7 +199,13 @@ class Function:
        if coefficients[0] == 0 and self._largest_exponent > 0:
            raise ValueError("The first constant (for the largest exponent) cannot be 0.")

-        self.coefficients = np.array(coefficients, dtype=np.int64)
+        # Check if any coefficient is a float
+        is_float = any(isinstance(c, float) for c in coefficients)
+
+        # Choose the dtype based on the input
+        target_dtype = np.float64 if is_float else np.int64
+
+        self.coefficients = np.array(coefficients, dtype=target_dtype)
        self._initialized = True

    def _check_initialized(self):
@@ -144,7 +253,7 @@ class Function:
        if self._largest_exponent == 0:
            raise ValueError("Cannot differentiate a constant (Function of degree 0).")

-        return self.derivitive()
+        return self.derivative()

    def derivative(self) -> 'Function':
@@ -156,7 +265,9 @@ class Function:
""" """
self._check_initialized() self._check_initialized()
if self._largest_exponent == 0: if self._largest_exponent == 0:
raise ValueError("Cannot differentiate a constant (Function of degree 0).") diff_func = Function(0)
diff_func.set_coeffs([0])
return diff_func
derivative_coefficients = np.polyder(self.coefficients) derivative_coefficients = np.polyder(self.coefficients)
@@ -237,53 +348,164 @@ class Function:
    def _solve_x_numpy(self, y_val: float, options: GA_Options) -> np.ndarray:
        """Genetic algorithm implementation using NumPy (CPU)."""
+        elite_ratio = options.elite_ratio
+        crossover_ratio = options.crossover_ratio
+        mutation_ratio = options.mutation_ratio
+        data_size = options.data_size
+
+        elite_size = int(data_size * elite_ratio)
+        crossover_size = int(data_size * crossover_ratio)
+        mutation_size = int(data_size * mutation_ratio)
+        random_size = data_size - elite_size - crossover_size - mutation_size
+
+        # Check if the user is using the default, non-expert range
+        user_range_is_default = (options.min_range == 0.0 and options.max_range == 0.0)
+
+        if user_range_is_default:
+            # User hasn't specified a custom range.
+            # We are the expert; use the smart, guaranteed bound.
+            bound = _get_cauchy_bound(self.coefficients)
+            min_r = -bound
+            max_r = bound
+        else:
+            # User has provided a custom range.
+            # Trust the expert; use their range.
+            min_r = options.min_range
+            max_r = options.max_range

        # Create initial random solutions
-        solutions = np.random.uniform(options.min_range, options.max_range, options.data_size)
+        solutions = np.random.uniform(min_r, max_r, data_size)
+
+        # Pre-allocate ranks array
+        ranks = np.empty(data_size, dtype=np.float64)

        for _ in range(options.num_of_generations):
            # Calculate fitness for all solutions (vectorized)
-            y_calculated = np.polyval(self.coefficients, solutions)
-            error = y_calculated - y_val
-            ranks = np.where(error == 0, np.finfo(float).max, np.abs(1.0 / error))
-
-            # Sort solutions by fitness (descending)
-            sorted_indices = np.argsort(-ranks)
-            solutions = solutions[sorted_indices]
-
-            # Keep only the top solutions
-            top_solutions = solutions[:options.sample_size]
-
-            # For the next generation, start with the mutated top solutions
-            # and fill the rest with new random values.
+            _calculate_ranks_numba(solutions, self.coefficients, y_val, ranks)
+
+            parent_pool_size = int(data_size * options.selection_percentile)
+
+            # 1. Get indices for the elite solutions (O(N) operation)
+            #    We find the 'elite_size'-th largest element.
+            elite_indices = np.argpartition(-ranks, elite_size)[:elite_size]
+
+            # 2. Get indices for the parent pool (O(N) operation)
+            #    We find the 'parent_pool_size'-th largest element.
+            parent_pool_indices = np.argpartition(-ranks, parent_pool_size)[:parent_pool_size]
+
+            # --- Create the next generation ---
+
+            # 1. Elitism: Keep the best solutions as-is
+            elite_solutions = solutions[elite_indices]
+
+            # 2. Crossover: Breed two parents to create a child
+            #    Select from the fitter PARENT POOL
+            parents1_idx = np.random.choice(parent_pool_indices, crossover_size)
+            parents2_idx = np.random.choice(parent_pool_indices, crossover_size)
+            parents1 = solutions[parents1_idx]
+            parents2 = solutions[parents2_idx]
+
+            # Blend Crossover (BLX-alpha)
+            alpha = options.blend_alpha
+
+            # Find min/max for all parent pairs
+            p_min = np.minimum(parents1, parents2)
+            p_max = np.maximum(parents1, parents2)
+
+            # Calculate range (I)
+            parent_range = p_max - p_min
+
+            # Calculate new min/max for the expanded range
+            new_min = p_min - (alpha * parent_range)
+            new_max = p_max + (alpha * parent_range)
+
+            # Create a new random child within the expanded range
+            crossover_solutions = np.random.uniform(new_min, new_max, crossover_size)
+
+            # 3. Mutation:
+            #    Select from the full list (indices 0 to data_size-1)
+            mutation_candidates = solutions[np.random.randint(0, data_size, mutation_size)]
+
+            # Use mutation_strength
            mutation_factors = np.random.uniform(
-                1 - options.mutation_percentage,
-                1 + options.mutation_percentage,
-                options.sample_size
+                1 - options.mutation_strength,
+                1 + options.mutation_strength,
+                mutation_size
            )
-            mutated_solutions = top_solutions * mutation_factors
-
-            new_random_solutions = np.random.uniform(
-                options.min_range, options.max_range, options.data_size - options.sample_size
-            )
-
-            solutions = np.concatenate([mutated_solutions, new_random_solutions])
-
-        # Final sort of the best solutions from the last generation
-        final_solutions = np.sort(solutions[:options.sample_size])
-        return final_solutions
+            mutated_solutions = mutation_candidates * mutation_factors
+
+            # 4. New Randoms: Add new blood to prevent getting stuck
+            random_solutions = np.random.uniform(min_r, max_r, random_size)
+
+            # Assemble the new generation
+            solutions = np.concatenate([
+                elite_solutions,
+                crossover_solutions,
+                mutated_solutions,
+                random_solutions
+            ])
+
+        # --- Final Step: Return the best results ---
+        # After all generations, do one last ranking to find the best solutions
+        _calculate_ranks_numba(solutions, self.coefficients, y_val, ranks)
+
+        # 1. Define quality based on the user's desired precision
+        #    (e.g., precision=5 -> rank > 1e6, precision=8 -> rank > 1e9)
+        #    We add +1 for a buffer, ensuring we only get high-quality roots.
+        quality_threshold = 10**(options.root_precision + 1)
+
+        # 2. Get all solutions that meet this quality threshold
+        high_quality_solutions = solutions[ranks > quality_threshold]
+
+        if high_quality_solutions.size == 0:
+            # No roots found that meet the quality, return empty
+            return np.array([])
+
+        # 3. Cluster these high-quality solutions by rounding
+        rounded_solutions = np.round(high_quality_solutions, options.root_precision)
+
+        # 4. Return only the unique roots
+        unique_roots = np.unique(rounded_solutions)
+        return np.sort(unique_roots)
    def _solve_x_cuda(self, y_val: float, options: GA_Options) -> np.ndarray:
        """Genetic algorithm implementation using CuPy (GPU/CUDA)."""
-        # Load the raw CUDA kernel
-        fitness_gpu = cupy.RawKernel(_FITNESS_KERNEL, 'fitness_kernel')
-
-        # Move coefficients to GPU
-        d_coefficients = cupy.array(self.coefficients, dtype=cupy.int64)
+        elite_ratio = options.elite_ratio
+        crossover_ratio = options.crossover_ratio
+        mutation_ratio = options.mutation_ratio
+        data_size = options.data_size
+
+        elite_size = int(data_size * elite_ratio)
+        crossover_size = int(data_size * crossover_ratio)
+        mutation_size = int(data_size * mutation_ratio)
+        random_size = data_size - elite_size - crossover_size - mutation_size
+
+        # ALWAYS cast coefficients to float64 for the kernel.
+        fitness_gpu = cupy.RawKernel(_FITNESS_KERNEL_FLOAT, 'fitness_kernel')
+        d_coefficients = cupy.array(self.coefficients, dtype=cupy.float64)
+
+        # Check if the user is using the default, non-expert range
+        user_range_is_default = (options.min_range == 0.0 and options.max_range == 0.0)
+
+        if user_range_is_default:
+            # User hasn't specified a custom range.
+            # We are the expert; use the smart, guaranteed bound.
+            bound = _get_cauchy_bound(self.coefficients)
+            min_r = -bound
+            max_r = bound
+        else:
+            # User has provided a custom range.
+            # Trust the expert; use their range.
+            min_r = options.min_range
+            max_r = options.max_range

        # Create initial random solutions on the GPU
        d_solutions = cupy.random.uniform(
-            options.min_range, options.max_range, options.data_size, dtype=cupy.float64
+            min_r, max_r, options.data_size, dtype=cupy.float64
        )
        d_ranks = cupy.empty(options.data_size, dtype=cupy.float64)
@@ -302,27 +524,89 @@ class Function:
            sorted_indices = cupy.argsort(-d_ranks)
            d_solutions = d_solutions[sorted_indices]

-            if i + 1 == options.num_of_generations:
-                break
-
-            # Get top solutions
-            d_top_solutions = d_solutions[:options.sample_size]
-
-            # Mutate top solutions on the GPU
-            mutation_factors = cupy.random.uniform(
-                1 - options.mutation_percentage, 1 + options.mutation_percentage, options.sample_size
-            )
-            d_mutated = d_top_solutions * mutation_factors
-
-            # Create new random solutions for the rest
-            d_new_random = cupy.random.uniform(
-                options.min_range, options.max_range, options.data_size - options.sample_size
-            )
-
-            d_solutions = cupy.concatenate([d_mutated, d_new_random])
+            # --- Create the next generation ---
+
+            # 1. Elitism
+            d_elite_solutions = d_solutions[:elite_size]
+
+            # 2. Crossover
+            parent_pool_size = int(data_size * options.selection_percentile)
+
+            # Select from the fitter PARENT POOL
+            parent1_indices = cupy.random.randint(0, parent_pool_size, crossover_size)
+            parent2_indices = cupy.random.randint(0, parent_pool_size, crossover_size)
+
+            # Get parents directly from the sorted solutions array using the pool-sized indices
+            d_parents1 = d_solutions[parent1_indices]
+            d_parents2 = d_solutions[parent2_indices]
+
+            # Blend Crossover (BLX-alpha)
+            alpha = options.blend_alpha
+
+            # Find min/max for all parent pairs
+            d_p_min = cupy.minimum(d_parents1, d_parents2)
+            d_p_max = cupy.maximum(d_parents1, d_parents2)
+
+            # Calculate range (I)
+            d_parent_range = d_p_max - d_p_min
+
+            # Calculate new min/max for the expanded range
+            d_new_min = d_p_min - (alpha * d_parent_range)
+            d_new_max = d_p_max + (alpha * d_parent_range)
+
+            # Create a new random child within the expanded range
+            d_crossover_solutions = cupy.random.uniform(d_new_min, d_new_max, crossover_size)
+
+            # 3. Mutation
+            #    Select from the full list (indices 0 to data_size-1)
+            mutation_indices = cupy.random.randint(0, data_size, mutation_size)
+            d_mutation_candidates = d_solutions[mutation_indices]
+
+            # Use mutation_strength (the new name)
+            d_mutation_factors = cupy.random.uniform(
+                1 - options.mutation_strength,
+                1 + options.mutation_strength,
+                mutation_size
+            )
+            d_mutated_solutions = d_mutation_candidates * d_mutation_factors
+
+            # 4. New Randoms
+            d_random_solutions = cupy.random.uniform(
+                min_r, max_r, random_size, dtype=cupy.float64
+            )
+
+            # Assemble the new generation
+            d_solutions = cupy.concatenate([
+                d_elite_solutions,
+                d_crossover_solutions,
+                d_mutated_solutions,
+                d_random_solutions
+            ])

-        # Get the final sample, sort it, and copy back to CPU
-        final_solutions_gpu = cupy.sort(d_solutions[:options.sample_size])
+        # --- Final Step: Return the best results ---
+        # After all generations, do one last ranking to find the best solutions
+        fitness_gpu(
+            (blocks_per_grid,), (threads_per_block,),
+            (d_coefficients, d_coefficients.size, d_solutions, d_ranks, d_solutions.size, y_val)
+        )
+
+        # 1. Define quality based on the user's desired precision
+        #    (e.g., precision=5 -> rank > 1e6, precision=8 -> rank > 1e9)
+        #    We add +1 for a buffer, ensuring we only get high-quality roots.
+        quality_threshold = 10**(options.root_precision + 1)
+
+        # 2. Get all solutions that meet this quality threshold
+        d_high_quality_solutions = d_solutions[d_ranks > quality_threshold]
+
+        if d_high_quality_solutions.size == 0:
+            return np.array([])
+
+        # 3. Cluster these high-quality solutions on the GPU by rounding
+        d_rounded_solutions = cupy.round(d_high_quality_solutions, options.root_precision)
+
+        # 4. Get only the unique roots
+        d_unique_roots = cupy.unique(d_rounded_solutions)
+
+        # Sort the unique roots and copy back to CPU
+        final_solutions_gpu = cupy.sort(d_unique_roots)
        return final_solutions_gpu.get()
@@ -337,12 +621,16 @@ class Function:
            power = self._largest_exponent - i

            # Coefficient part
-            if c == 1 and power != 0:
+            coeff_val = c
+            if c == int(c):
+                coeff_val = int(c)
+
+            if coeff_val == 1 and power != 0:
                coeff = ""
-            elif c == -1 and power != 0:
+            elif coeff_val == -1 and power != 0:
                coeff = "-"
            else:
-                coeff = str(c)
+                coeff = str(coeff_val)

            # Variable part
            if power == 0:
@@ -356,7 +644,7 @@ class Function:
sign = "" sign = ""
if i > 0: if i > 0:
sign = " + " if c > 0 else " - " sign = " + " if c > 0 else " - "
coeff = str(abs(c)) coeff = str(abs(coeff_val))
if abs(c) == 1 and power != 0: if abs(c) == 1 and power != 0:
coeff = "" # Don't show 1 for non-constant terms coeff = "" # Don't show 1 for non-constant terms
@@ -395,7 +683,7 @@ class Function:
    def _multiply_by_scalar(self, scalar: Union[int, float]) -> 'Function':
        """Helper method to multiply the function by a scalar constant."""
-        self._check_initialized() # It's good practice to check here too
+        self._check_initialized()

        if scalar == 0:
            result_func = Function(0)
@@ -440,11 +728,42 @@ class Function:
    def __imul__(self, other: Union['Function', int, float]) -> 'Function':
        """Performs in-place multiplication by a scalar (func *= 3)."""
+        self._check_initialized()
+        if isinstance(other, (int, float)):
+            if other == 0:
+                self.coefficients = np.array([0], dtype=self.coefficients.dtype)
+                self._largest_exponent = 0
+            else:
                self.coefficients *= other
+        elif isinstance(other, self.__class__):
+            other._check_initialized()
+            self.coefficients = np.polymul(self.coefficients, other.coefficients)
+            self._largest_exponent = len(self.coefficients) - 1
+        else:
+            return NotImplemented
        return self

+    def __eq__(self, other: object) -> bool:
+        """
+        Checks if two Function objects are equal by comparing
+        their coefficients.
+        """
+        # Check if the 'other' object is even a Function
+        if not isinstance(other, Function):
+            return NotImplemented
+
+        # Ensure both are initialized before trying to access .coefficients
+        if not self._initialized or not other._initialized:
+            return False
+
+        return np.array_equal(self.coefficients, other.coefficients)

-def quadratic_solve(f: Function) -> Optional[List[float]]:
+    def quadratic_solve(self) -> Optional[List[float]]:
        """
        Calculates the real roots of a quadratic function using the quadratic formula.
@@ -454,11 +773,11 @@ def quadratic_solve(f: Function) -> Optional[List[float]]:
        Returns:
            Optional[List[float]]: A list containing the two real roots, or None if there are no real roots.
        """
-    f._check_initialized()
+        self._check_initialized()

-    if f.largest_exponent != 2:
-        raise ValueError("Input function must be quadratic (degree 2).")
+        if self.largest_exponent != 2:
+            raise ValueError("Input function must be quadratic (degree 2) to use quadratic_solve.")

-    a, b, c = f.coefficients
+        a, b, c = self.coefficients

        discriminant = (b**2) - (4*a*c)
@@ -466,9 +785,35 @@ def quadratic_solve(f: Function) -> Optional[List[float]]:
            return None  # No real roots

        sqrt_discriminant = math.sqrt(discriminant)
-    root1 = (-b + sqrt_discriminant) / (2 * a)
-    root2 = (-b - sqrt_discriminant) / (2 * a)
+
+        # 1. Calculate the first root.
+        #    We use math.copysign(val, sign) to get the sign of b.
+        #    This ensures (-b - sign*sqrt) is always an *addition*
+        #    (or subtraction of a smaller from a larger number),
+        #    avoiding catastrophic cancellation.
+        root1 = (-b - math.copysign(sqrt_discriminant, b)) / (2 * a)
+
+        # 2. Calculate the second root using Vieta's formulas.
+        #    We know that root1 * root2 = c / a.
+        #    This is just a division, which is numerically stable.
+
+        # Handle the edge case where c=0.
+        # If c=0, then root1 is 0.0, and root2 is -b/a.
+        # We can't divide by root1=0, so we check.
+        if root1 == 0.0:
+            # If c is also 0, the other root is -b/a
+            if c == 0.0:
+                root2 = -b / a
+            else:
+                # This case (root1=0 but c!=0) shouldn't happen
+                # with real numbers, but it's safe to just
+                # return the one root we found.
+                return [0.0]
+        else:
+            # Standard case: Use Vieta's formula
+            root2 = (c / a) / root1
+
+        # Return roots in a consistent order
        return [root1, root2]
# Example Usage # Example Usage
@@ -488,13 +833,17 @@ if __name__ == '__main__':
    df1 = f1.derivative()
    print(f"Derivative of f1: {df1}")

+    # Find the second derivative: 4
+    ddf1 = f1.nth_derivative(2)
+    print(f"Second derivative of f1: {ddf1}")

    # --- Root Finding ---

    # 1. Analytical solution for quadratic
-    roots_analytic = quadratic_solve(f1)
+    roots_analytic = f1.quadratic_solve()
    print(f"Analytic roots of f1: {roots_analytic}")  # Expected: -1, 2.5

    # 2. Genetic algorithm solution
-    ga_opts = GA_Options(num_of_generations=20, data_size=50000, sample_size=10)
+    ga_opts = GA_Options(num_of_generations=20, data_size=50000)

    print("\nFinding roots with Genetic Algorithm (CPU)...")
    roots_ga_cpu = f1.get_real_roots(ga_opts)
    print(f"Approximate roots from GA (CPU): {roots_ga_cpu}")

View File

@@ -1,5 +1,6 @@
import pytest
import numpy as np
+import numpy.testing as npt

# Try to import cupy to check for CUDA availability
try:
@@ -8,7 +9,7 @@ try:
except ImportError:
    _CUPY_AVAILABLE = False

-from polysolve import Function, GA_Options, quadratic_solve
+from polysolve import Function, GA_Options

@pytest.fixture
def quadratic_func() -> Function:
@@ -36,6 +37,12 @@ def m_func_2() -> Function:
    f.set_coeffs([5, -4])
    return f

+@pytest.fixture
+def base_func():
+    f = Function(2)
+    f.set_coeffs([1, 2, 3])
+    return f

# --- Core Functionality Tests ---

def test_solve_y(quadratic_func):
@@ -60,7 +67,7 @@ def test_nth_derivative(quadratic_func):
def test_quadratic_solve(quadratic_func):
    """Tests the analytical quadratic solver for exact roots."""
-    roots = quadratic_solve(quadratic_func)
+    roots = quadratic_func.quadratic_solve()
    # Sorting ensures consistent order for comparison
    assert sorted(roots) == [-1.0, 2.5]
@@ -92,7 +99,33 @@ def test_function_multiplication(m_func_1, m_func_2):
    # (2x^2 + 3x + 1) * (5x -4) = 10x^3 + 7x^2 - 7x -4
    result = m_func_1 * m_func_2
    assert result.largest_exponent == 3
-    assert np.array_equal(result.coefficients, [19, 7, -7, -4])
+    assert np.array_equal(result.coefficients, [10, 7, -7, -4])

+def test_equality(base_func):
+    """Tests the __eq__ method for the Function class."""
+    # 1. Test for equality with a new, identical object
+    f_identical = Function(2)
+    f_identical.set_coeffs([1, 2, 3])
+    assert base_func == f_identical
+
+    # 2. Test for inequality (different coefficients)
+    f_different = Function(2)
+    f_different.set_coeffs([1, 9, 3])
+    assert base_func != f_different
+
+    # 3. Test for inequality (different degree)
+    f_diff_degree = Function(1)
+    f_diff_degree.set_coeffs([1, 2])
+    assert base_func != f_diff_degree
+
+    # 4. Test against a different type
+    assert base_func != "some_string"
+    assert base_func != 123
+
+    # 5. Test against an uninitialized Function
+    f_uninitialized = Function(2)
+    assert base_func != f_uninitialized

# --- Genetic Algorithm Root-Finding Tests ---
@@ -101,7 +134,7 @@ def test_get_real_roots_numpy(quadratic_func):
    Tests that the NumPy-based genetic algorithm approximates the roots correctly.
    """
    # Using more generations for higher accuracy in testing
-    ga_opts = GA_Options(num_of_generations=25, data_size=50000)
+    ga_opts = GA_Options(num_of_generations=50, data_size=200000, selection_percentile=0.66, root_precision=3)

    roots = quadratic_func.get_real_roots(ga_opts, use_cuda=False)
@@ -109,11 +142,7 @@ def test_get_real_roots_numpy(quadratic_func):
    # We don't know which order they'll be in, so we check for presence.
    expected_roots = np.array([-1.0, 2.5])

-    # Check that at least one found root is close to -1.0
-    assert np.any(np.isclose(roots, expected_roots[0], atol=1e-2))
-    # Check that at least one found root is close to 2.5
-    assert np.any(np.isclose(roots, expected_roots[1], atol=1e-2))
+    npt.assert_allclose(np.sort(roots), np.sort(expected_roots), atol=1e-2)

@pytest.mark.skipif(not _CUPY_AVAILABLE, reason="CuPy is not installed, skipping CUDA test.")
@@ -124,13 +153,12 @@ def test_get_real_roots_cuda(quadratic_func):
    It will be skipped automatically if CuPy is not available.
    """
-    ga_opts = GA_Options(num_of_generations=25, data_size=50000)
+    ga_opts = GA_Options(num_of_generations=50, data_size=200000, selection_percentile=0.66, root_precision=3)

    roots = quadratic_func.get_real_roots(ga_opts, use_cuda=True)

    expected_roots = np.array([-1.0, 2.5])

    # Verify that the CUDA implementation also finds the correct roots within tolerance.
-    assert np.any(np.isclose(roots, expected_roots[0], atol=1e-2))
+    npt.assert_allclose(np.sort(roots), np.sort(expected_roots), atol=1e-2)
assert np.any(np.isclose(roots, expected_roots[1], atol=1e-2))