71 Commits

Author SHA1 Message Date
Dev Singh
c4e071b87b fix: issue #55
Signed-off-by: Dev Singh <dev@devksingh.com>
2020-10-09 18:37:50 +00:00
Dev Singh
6987adf5b4 Update publish-analysis.yml 2020-10-09 18:35:48 +00:00
Arthur Lu
764dab01f6 reflected doc changes to README.md (#48)
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-10-05 09:49:39 -05:00
Dev Singh
56f5e5262c deps: remove dnspython (#47)
Signed-off-by: Dev Singh <dev@devksingh.com>

Co-authored-by: Arthur Lu <learthurgo@gmail.com>
2020-09-28 18:53:32 -05:00
Arthur Lu
56a5578f35 Merge pull request #46 from titanscouting/multithread-testing
Implement Multithreading in Superscript
2020-09-28 17:46:29 -05:00
Dev Singh
c48c512cf6 Implement fitting to circle using LSC and HyperFit (#45)
* chore: add pylint to devcontainer

Signed-off-by: Dev Singh <dev@devksingh.com>

* feat: init LSC fitting

cuda and cpu-based LSC fitting using cupy and numpy

Signed-off-by: Dev Singh <dev@devksingh.com>

* docs: add changelog entry and module to class list

Signed-off-by: Dev Singh <dev@devksingh.com>

* docs: fix typo in comment

Signed-off-by: Dev Singh <dev@devksingh.com>

* fix: only import cupy if cuda available

Signed-off-by: Dev Singh <dev@devksingh.com>

* fix: move to own file, abandon cupy

Signed-off-by: Dev Singh <dev@devksingh.com>

* fix: remove numba dep

Signed-off-by: Dev Singh <dev@devksingh.com>

* deps: remove cupy dep

Signed-off-by: Dev Singh <dev@devksingh.com>

* feat: add tests

Signed-off-by: Dev Singh <dev@devksingh.com>

* fix: correct indentation

Signed-off-by: Dev Singh <dev@devksingh.com>

* fix: variable names

Signed-off-by: Dev Singh <dev@devksingh.com>

* fix: add self when refering to coords

Signed-off-by: Dev Singh <dev@devksingh.com>

* fix: numpy ordering

Signed-off-by: Dev Singh <dev@devksingh.com>

* docs: remove version bump, nomaintain

add notice that module is not actively maintained, may be removed in future release

Signed-off-by: Dev Singh <dev@devksingh.com>

* fix: remove hyperfit as not being impled

Signed-off-by: Dev Singh <dev@devksingh.com>
2020-09-24 21:06:30 -05:00
Dev Singh
d15aa045de docs: create security reporting guidelines (#44)
Signed-off-by: Dev Singh <dev@devksingh.com>
2020-09-24 13:09:34 -05:00
Arthur Lu
b32083c6da added tra-analysis to data-analysis requirements
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-24 13:14:13 +00:00
Arthur Lu
a999c755a1 Merge branch 'multithread-testing' of https://github.com/titanscouting/red-alliance-analysis into multithread-testing 2020-09-26 20:57:55 +00:00
Arthur Lu
e3241fa34d superscript.py v 0.8.2
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-26 20:57:39 +00:00
Dev Singh
97f3271de3 Merge branch 'master' into multithread-testing 2020-09-26 15:28:14 -05:00
Arthur Lu
2804d03593 superscript.py v 0.8.1
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-21 07:38:18 +00:00
Arthur Lu
adbc749c47 added max-threads key in config
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-21 07:21:59 +00:00
Arthur Lu
ec9bac7830 superscript.py v 0.8.0
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-21 05:59:15 +00:00
Arthur Lu
b9a2e680bc Merge pull request #43 from titanscouting/master-staged
Pull changes from master staged to master for release
2020-09-19 21:06:42 -05:00
Arthur Lu
467444ed9b Merge branch 'master' into master-staged 2020-09-19 20:05:33 -05:00
Arthur Lu
fa7216d4e0 modified setup.py for analysis package v 2.1.0
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-20 00:50:14 +00:00
Arthur Lu
27a86e568b depreciated nonfunctional scripts in data-analysis
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-20 00:47:33 +00:00
Arthur Lu
16502c5259 superscript.py v 0.7.0
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-20 00:45:38 +00:00
Arthur Lu
ff9ad078e5 analysis.py v 2.3.1
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-19 23:14:46 +00:00
Arthur Lu
97334d1f66 edited README.md
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-19 22:40:20 +00:00
Arthur Lu
f566f4ec71 Merge pull request #42 from titanscouting/devksingh4-patch-1
docs: add documentation links
2020-09-19 17:07:57 -05:00
Arthur Lu
cd869c0a8e analysis.py v 2.3.0
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-19 22:04:24 +00:00
Arthur Lu
f1982eb93d analysis.py v 2.2.3
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-18 21:55:59 +00:00
Arthur Lu
3763cb041f analysis.py v 2.2.2
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-17 02:11:44 +00:00
Dev Singh
2a201a61c7 docs: add documentation links 2020-09-16 16:54:49 -05:00
Arthur Lu
73a16b8397 added depreciated config files to gitignore
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-16 21:24:50 +00:00
Arthur Lu
0e7255ab99 changed && to ; in devcontainer.json
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-15 23:24:50 +00:00
Arthur Lu
5efaee5176 Merge pull request #41 from titanscouting/master-staged
merge eol fix in master-staged to master
2020-08-13 12:04:54 -05:00
Arthur Lu
1a1be8ee6a fixed eol issue with docker in gitattributes
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-08-13 17:01:08 +00:00
Arthur Lu
cab05fbc63 Merge commit '4b664acffb5777614043a83ef8e08368e21303ce' into master-staged 2020-08-13 17:00:31 +00:00
Dev Singh
4b664acffb Modernize VSCode extensions in dev env, set correct copyright assignment (#40)
* modernize extensions

Signed-off-by: Dev Singh <dev@devksingh.com>

* copyright assigment should be to titan scouting

Signed-off-by: Dev Singh <dev@devksingh.com>
2020-08-12 21:59:04 -05:00
Arthur Lu
292f9faeef Merge pull request #39 from titanscouting/master-staged
merge README changes from master-staged to master
2020-08-10 20:49:01 -05:00
Arthur Lu
468bd48b07 fixed readme with proper pip installation
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-08-11 01:36:30 +00:00
Arthur Lu
4c3f16f13b Merge pull request #38 from titanscouting/master
pull master into master-staged
2020-08-10 20:33:28 -05:00
Arthur Lu
8545a0d984 Merge pull request #36 from titanscouting/tra-service
merge changes from tra-service to master
2020-08-10 19:40:28 -05:00
Arthur Lu
6debc07786 modified README
simplified devcontainer.json

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-08-11 00:29:23 +00:00
Arthur Lu
bc5b07bb8d readded old config files
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-08-10 23:32:50 +00:00
Arthur Lu
9b73147c4d fixed analysis reference in superscript_old
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-08-10 23:20:43 +00:00
Arthur Lu
2f84debda7 removed old bins under analysis-master/dist/
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-08-10 21:37:41 +00:00
Arthur Lu
c803208eb8 analysis.py v 2.2.1
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-08-10 21:25:25 +00:00
Arthur Lu
135350293c Merge branch 'master' into tra-service 2020-08-10 16:11:38 -05:00
Arthur Lu
9a3181a92b renamed analysis folder to tra_analysis
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-08-10 21:01:50 +00:00
Dev Singh
73da5fa68b docs
Signed-off-by: Dev Singh <dev@devksingh.com>
2020-08-10 14:53:22 -05:00
Dev Singh
7be57f7e7e build v2.0.3
Signed-off-by: Dev Singh <dev@devksingh.com>
2020-08-10 14:52:49 -05:00
Arthur Lu
3db3dda315 Merge pull request #33 from titanscout2022/Demo-for-Issue#32
Merge Changes Proposed in Issue#32
2020-08-02 17:27:26 -05:00
Arthur Lu
a59e509bc8 made changes described in Issue#32
changed setup.py to also reflect versioning changes

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-07-30 19:05:07 +00:00
Arthur Lu
ad521368bd filled out Contributing section in README.md
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-07-20 19:07:32 -05:00
Arthur Lu
5e52155fd0 Merge pull request #31 from titanscout2022/master
merge changes from master into tra-service
2020-07-18 23:25:55 -05:00
Arthur Lu
daa5b48426 readded old superscript.py (v 0.0.5.002)
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-07-11 21:21:56 +00:00
Arthur Lu
b2cf594869 readded tra.py as a fallback option
made changes to tra-cli.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-06-10 23:15:34 +00:00
Arthur Lu
bcd6c66a08 fixed more bugs with tra-cli.py
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-06-10 21:47:54 +00:00
Arthur Lu
b646e22378 fixed bugs with tra-cli.py
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-06-10 21:32:43 +00:00
Arthur Lu
51f14de0d2 fixed latest.whl to follow format for wheel files
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-06-10 20:56:13 +00:00
Arthur Lu
266caf78c3 started on tra-cli.py
modified tasks.py to work properly

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-06-10 20:23:53 +00:00
Arthur Lu
fa478314da added data-analysis requirements to devcontainer build
added auto pip intsall latest analysis.py whl

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-06-10 18:25:41 +00:00
Arthur Lu
8a212a21df moved core functions in tasks.py to class Tasker
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-06-10 18:19:58 +00:00
Arthur Lu
236c28c3be renamed tra;py to tasks.py
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-06-10 17:46:40 +00:00
Arthur Lu
7c2f058feb added help message to status command
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-26 01:34:47 +00:00
Arthur Lu
e84783ee44 populated tra.py to be a CLI application
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-25 22:17:08 +00:00
Arthur Lu
09b703d2a7 removed extra words
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-24 17:56:00 +00:00
Arthur Lu
098326584a removed more extra lines
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-24 17:54:48 +00:00
Arthur Lu
e5c7718f10 fixed extra hline
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-24 17:52:25 +00:00
Arthur Lu
a3ffdd89d0 fixed line breaks
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-24 17:51:57 +00:00
Arthur Lu
2fc11285ba fixed Prerequisites in README.md
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-24 17:35:02 +00:00
Arthur Lu
9dd38fcec8 added OS and python versions supproted
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-24 17:30:01 +00:00
Arthur Lu
90f747f3fc revamped README.md
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-24 16:42:58 +00:00
Arthur Lu
869d7c288b fixed naming in tra.py
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-23 22:51:58 -05:00
Arthur Lu
dc4f5ab40e another bug fix
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-23 22:49:38 -05:00
Arthur Lu
a739007222 quick bug fix to tra.py
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-23 22:48:50 -05:00
Arthur Lu
ba06b9293e added test.py to .gitignore
prepared tra.py for threading implement

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-23 19:43:59 -05:00
21 changed files with 686 additions and 405 deletions


@@ -21,7 +21,8 @@
},
"extensions": [
"mhutchie.git-graph",
"donjayamanne.jupyter",
"ms-python.python",
"waderyan.gitblame"
],
"postCreateCommand": "pip install -r analysis-master/requirements.txt"
"postCreateCommand": "apt install vim -y ; pip install -r data-analysis/requirements.txt ; pip install -r analysis-master/requirements.txt ; pip install pylint ; pip install tra-analysis"
}
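The new `postCreateCommand` joins its install steps with `;` instead of `&&` (the repository's "changed && to ; in devcontainer.json" commit). The separator matters because the value is a shell command string: `;` runs every command regardless of earlier failures, while `&&` short-circuits. A small sketch demonstrating the difference through the shell:

```python
import subprocess

# ';' runs the next command unconditionally; '&&' stops at the first failure.
semi = subprocess.run(["sh", "-c", "false ; echo ran"],
                      capture_output=True, text=True)
amp = subprocess.run(["sh", "-c", "false && echo ran"],
                     capture_output=True, text=True)

print(repr(semi.stdout))  # 'ran\n' -> later installs still run after a failure
print(repr(amp.stdout))   # ''     -> a failure would have skipped the rest
```

With `;`, one failing `pip install` in the chain no longer aborts the remaining installs.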

.gitattributes

@@ -1,2 +1,4 @@
# Auto detect text files and perform LF normalization
* text=auto
* text=auto eol=lf
*.{cmd,[cC][mM][dD]} text eol=crlf
*.{bat,[bB][aA][tT]} text eol=crlf


@@ -4,8 +4,9 @@
name: Upload Analysis Package
on:
release:
types: [published, edited]
push:
tags:
- 'v*'
jobs:
deploy:
@@ -34,3 +35,23 @@ jobs:
user: __token__
password: ${{ secrets.PYPI_TOKEN }}
packages_dir: analysis-master/dist/
- name: Create Release
id: create_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref }}
release_name: Release ${{ github.ref }}
body: See PyPI
draft: false
prerelease: false
- name: Upload Release Asset
id: upload-release-asset
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_name: ${{ github.ref }}
asset_content_type: application/octet-stream

.gitignore

@@ -31,9 +31,11 @@ data-analysis/__pycache__/
analysis-master/__pycache__/
analysis-master/.pytest_cache/
data-analysis/.pytest_cache/
data-analysis/test.py
analysis-master/tra_analysis.egg-info
analysis-master/tra_analysis/__pycache__
analysis-master/tra_analysis/.ipynb_checkpoints
.pytest_cache
analysis-master/tra_analysis/metrics/__pycache__
analysis-master/dist
analysis-master/dist
data-analysis/config/


@@ -1,6 +1,6 @@
BSD 3-Clause License
Copyright (c) 2020, Titan Robotics FRC 2022
Copyright (c) 2020, Titan Scouting
All rights reserved.
Redistribution and use in source and binary forms, with or without


@@ -1,3 +1,3 @@
Arthur Lu <learthurgo@gmail.com>
Jacob Levine <jacoblevine18@gmail.com>
Dev Singh <dev@devksingh.com>
Dev Singh <dev@devksingh.com>

README.md

@@ -1,5 +1,102 @@
# red-alliance-analysis
# Red Alliance Analysis &middot; ![GitHub release (latest by date)](https://img.shields.io/github/v/release/titanscout2022/red-alliance-analysis)
Titan Robotics 2022 Strategy Team Repository for Data Analysis Tools. Included with these tools are the backend data analysis engine formatted as a python package, associated binaries for the analysis package, and premade scripts that can be pulled directly from this repository and will integrate with other Red Alliance applications to quickly deploy FRC scouting tools.
# Installing
`pip install tra_analysis`
---
# `tra-analysis`
`tra-analysis` is a higher level package for data processing and analysis. It is a python library that combines popular data science tools like numpy, scipy, and sklearn along with other tools to create an easy-to-use data analysis engine. tra-analysis includes analysis in all ranges of complexity from basic statistics like mean, median, mode to complex kernel based classifiers and allows users to more quickly deploy these algorithms. The package also includes performance metrics for score based applications including elo, glicko2, and trueskill ranking systems.
At the core of the tra-analysis package is the modularity of each analytical tool. The package encapsulates the setup code for the included data science tools. For example, there are many packages that allow users to generate many different types of regressions. With the tra-analysis package, one function can be called to generate many regressions and sort them by accuracy.
## Prerequisites
---
* Python >= 3.6
* Pip which can be installed by running\
`curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py`\
`python get-pip.py`\
after installing python, or with a package manager on linux. Refer to the [pip installation instructions](https://pip.pypa.io/en/stable/installing/) for more information.
## Installing
---
#### Standard Platforms
For the latest version of tra-analysis, run `pip install tra-analysis` or `pip install tra_analysis`. The requirements for tra-analysis should be automatically installed.
#### Exotic Platforms (Android)
[Termux](https://termux.com/) is recommended for a linux environment on Android. Consult the [documentation](https://titanscouting.github.io/analysis/general/installation#exotic-platforms-android) for advice on installing the prerequisites. After installing the prerequisites, the package should be installed normally with `pip install tra-analysis` or `pip install tra_analysis`.
## Use
---
tra-analysis operates like any other python package. Consult the [documentation](https://titanscouting.github.io/analysis/tra_analysis/) for more information.
## Supported Platforms
---
Although any modern 64 bit platform should be supported, the following platforms have been tested to be working:
* AMD64 (Tested on Zen, Zen+, and Zen 2)
* Intel 64/x86_64/x64 (Tested on Kaby Lake)
* ARM64 (Tested on Broadcom BCM2836 SoC, Broadcom BCM2711 SoC)
The following OSes have been tested to be working:
* Linux Kernel 3.16, 4.4, 4.15, 4.19, 5.4
* Ubuntu 16.04, 18.04, 20.04
* Debian (and Debian derivatives) Jessie, Buster
* Windows 7, 10
The following python versions are supported:
* python 3.6 (not tested)
* python 3.7
* python 3.8
---
# `data-analysis`
To facilitate data analysis of collected scouting data in a user friendly tool, we created the data-analysis application. At its core it uses the tra-analysis package to conduct any number of user selected tests on data collected from the TRA scouting app. It uploads these tests back to MongoDB where it can be viewed from the app at any time.
The data-analysis application also uses the TRA API to interface with MongoDB and uses the TBA API to collect additional data (match win/loss).
The application can be configured with a configuration tool or by editing the config.json directly.
## Prerequisites
---
Before installing and using data-analysis, make sure that you have installed the following prerequisites:
- A common operating system like **Windows** or (*most*) distributions of **Linux**. BSD may work but has not been tested nor is it recommended.
- [Python](https://www.python.org/) version **3.6** or higher
- [Pip](https://pip.pypa.io/en/stable/) (installation instructions [here](https://pip.pypa.io/en/stable/installing/))
## Installing Requirements
---
Once in the data-analysis folder, run `pip install -r requirements.txt` to install all of the required python libraries.
## Scripts
---
The data-analysis application is a collection of various scripts and one config file. For users, only the main application `superscript.py` and the config file `config.json` are important.
To run the data-analysis application, navigate to the data-analysis folder once all requirements have been installed and run `python superscript.py`. If you encounter the error:
`pymongo.errors.ConfigurationError: Empty host (or extra comma in host list).`
don't worry; the application likely just hasn't been configured yet and would otherwise work. Refer to [the documentation](https://titanscouting.github.io/analysis/data_analysis/Config) to learn how to configure data-analysis.
# Contributing
Read our included contributing guidelines (`CONTRIBUTING.md`) for more information and feel free to reach out to any current maintainer for more information.
# Build Statuses
![Analysis Unit Tests](https://github.com/titanscout2022/red-alliance-analysis/workflows/Analysis%20Unit%20Tests/badge.svg)
![Superscript Unit Tests](https://github.com/titanscout2022/red-alliance-analysis/workflows/Superscript%20Unit%20Tests/badge.svg?branch=master)
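The README's claim that "one function can be called to generate many regressions and sort them by accuracy" can be sketched in plain Python. This is a hedged illustration of the idea only, not the actual `tra_analysis` API; the helper name `fit_candidates` and the model set are invented here:

```python
def fit_candidates(x, y):
    """Fit a few closed-form models and rank them by r^2 (illustrative only)."""
    n = len(x)
    mean_y = sum(y) / n
    ss_tot = sum((v - mean_y) ** 2 for v in y)

    def r_squared(pred):
        ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
        return 1 - ss_res / ss_tot if ss_tot else 1.0

    results = {}

    # linear model y = b1*x + b0 via ordinary least squares
    mean_x = sum(x) / n
    b1 = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    b1 /= sum((xi - mean_x) ** 2 for xi in x)
    b0 = mean_y - b1 * mean_x
    results["lin"] = (f"{b1}*x+{b0}", r_squared([b1 * xi + b0 for xi in x]))

    # constant model y = mean(y), the trivial baseline
    results["const"] = (str(mean_y), r_squared([mean_y] * n))

    # best model first, i.e. "sort them by accuracy"
    return sorted(results.items(), key=lambda kv: kv[1][1], reverse=True)

ranked = fit_candidates([0, 1, 2, 3], [1, 3, 5, 7])  # data is exactly y = 2x + 1
```

On exact linear data the linear model ranks first with r^2 of 1, mirroring how the package's regression helper reports many fitted models at once.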

SECURITY.md

@@ -0,0 +1,6 @@
# Security Policy
## Reporting a Vulnerability
Please email `titanscout2022@gmail.com` to report a vulnerability.


@@ -8,7 +8,7 @@ with open("requirements.txt", 'r') as file:
setuptools.setup(
name="tra_analysis",
version="2.0.2",
version="2.1.0",
author="The Titan Scouting Team",
author_email="titanscout2022@gmail.com",
description="Analysis package developed by Titan Scouting for The Red Alliance",
@@ -17,7 +17,7 @@ setuptools.setup(
url="https://github.com/titanscout2022/tr2022-strategy",
packages=setuptools.find_packages(),
install_requires=requirements,
license = "GNU General Public License v3.0",
license = "BSD 3-Clause License",
classifiers=[
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",


@@ -1,8 +1,11 @@
from tra_analysis import analysis as an
from tra_analysis import metrics
from tra_analysis import fits
def test_():
test_data_linear = [1, 3, 6, 7, 9]
x_data_circular = []
y_data_circular = []
y_data_ccu = [1, 3, 7, 14, 21]
y_data_ccd = [1, 5, 7, 8.5, 8.66]
test_data_scrambled = [-32, 34, 19, 72, -65, -11, -43, 6, 85, -17, -98, -26, 12, 20, 9, -92, -40, 98, -78, 17, -20, 49, 93, -27, -24, -66, 40, 84, 1, -64, -68, -25, -42, -46, -76, 43, -3, 30, -14, -34, -55, -13, 41, -30, 0, -61, 48, 23, 60, 87, 80, 77, 53, 73, 79, 24, -52, 82, 8, -44, 65, 47, -77, 94, 7, 37, -79, 36, -94, 91, 59, 10, 97, -38, -67, 83, 54, 31, -95, -63, 16, -45, 21, -12, 66, -48, -18, -96, -90, -21, -83, -74, 39, 64, 69, -97, 13, 55, 27, -39]
@@ -28,4 +31,5 @@ def test_():
assert all(a == b for a, b in zip(an.Sort().shellsort(test_data_scrambled), test_data_sorted))
assert all(a == b for a, b in zip(an.Sort().bubblesort(test_data_scrambled), test_data_sorted))
assert all(a == b for a, b in zip(an.Sort().cyclesort(test_data_scrambled), test_data_sorted))
assert all(a == b for a, b in zip(an.Sort().cocktailsort(test_data_scrambled), test_data_sorted))
assert all(a == b for a, b in zip(an.Sort().cocktailsort(test_data_scrambled), test_data_sorted))
assert fits.CircleFit(x=[0,0,-1,1], y=[1, -1, 0, 0]).LSC() == (0.0, 0.0, 1.0, 0.0)
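The added `CircleFit(...).LSC()` assertion checks a least-squares circle through four points on the unit circle. One standard formulation of such an LSC fit is the algebraic (Kåsa) method: write the circle as `x^2 + y^2 + D*x + E*y + F = 0` and solve a 3x3 linear system for D, E, F. A self-contained sketch of that method — hedged, and not necessarily how `tra_analysis.fits` implements it (the diff's 4-tuple suggests it also returns a residual):

```python
def kasa_circle_fit(xs, ys):
    """Least-squares circle via x^2 + y^2 + D*x + E*y + F = 0 (Kasa method)."""
    n = len(xs)
    # Normal equations A^T A [D,E,F]^T = -A^T z with rows [x, y, 1], z = x^2+y^2
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    z = [x * x + y * y for x, y in zip(xs, ys)]
    szx = sum(zi * x for zi, x in zip(z, xs))
    szy = sum(zi * y for zi, y in zip(z, ys))
    sz = sum(z)
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    b = [-szx, -szy, -sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    # Solve the 3x3 system by Cramer's rule
    d = det3(A)
    sol = []
    for col in range(3):
        m = [row[:] for row in A]
        for r in range(3):
            m[r][col] = b[r]
        sol.append(det3(m) / d)
    D, E, F = sol
    cx, cy = -D / 2, -E / 2
    radius = (cx * cx + cy * cy - F) ** 0.5
    return cx, cy, radius

cx, cy, r = kasa_circle_fit([0, 0, -1, 1], [1, -1, 0, 0])  # test's four points
```

For the test's points the fit recovers center (0, 0) and radius 1, matching the asserted `(0.0, 0.0, 1.0, 0.0)` up to the extra residual term.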


@@ -0,0 +1,35 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"string = \"3+4+5\"\n",
"re.sub(\"\\d+[+]{1}\\d+\", string, sum([int(i) for i in re.split(\"[+]{1}\", re.search(\"\\d+[+]{1}\\d+\", string).group())]))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
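The checked-in notebook cell would not run as written: it never imports `re`, and it passes its arguments to `re.sub` in the wrong order — the signature is `re.sub(pattern, repl, string)`, but the cell supplies the expression as `repl` and an `int` (the computed sum) as `string`, which raises a TypeError. The apparent intent — collapse the first `a+b` pair in an expression into its sum — can be sketched as:

```python
import re

def fold_first_sum(expr):
    """Replace the first 'a+b' pair in expr with its evaluated sum."""
    match = re.search(r"\d+\+\d+", expr)
    if match is None:
        return expr
    total = sum(int(part) for part in match.group().split("+"))
    # count=1 so only the matched pair is rewritten
    return re.sub(r"\d+\+\d+", str(total), expr, count=1)

folded = fold_first_sum("3+4+5")  # -> "7+5"
```

Repeated application reduces the whole expression: `"3+4+5"` folds to `"7+5"` and then to `"12"`.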


@@ -7,13 +7,24 @@
# current benchmark of optimization: 1.33 times faster
# setup:
__version__ = "2.0.2"
__version__ = "2.3.1"
# changelog should be viewed using print(analysis.__changelog__)
__changelog__ = """changelog:
2.0.2:
- rename analysis imports to tra_analysis for PyPI publishing
1.2.2.000:
2.3.1:
- fixed bugs in Array class
2.3.0:
- overhauled Array class
2.2.3:
- fixed spelling of RandomForest
- made n_neighbors required for KNN
- made n_classifiers required for SVM
2.2.2:
- fixed 2.2.1 changelog entry
- changed regression to return dictionary
2.2.1:
- changed all references to parent package analysis to tra_analysis
2.2.0:
- added Sort class
- added several array sorting functions to Sort class including:
- quick sort
@@ -29,25 +40,25 @@ __changelog__ = """changelog:
- tested all sorting algorithms with both lists and numpy arrays
- depreciated sort function from Array class
- added warnings as an import
1.2.1.004:
2.1.4:
- added sort and search functions to Array class
1.2.1.003:
2.1.3:
- changed output of basic_stats and histo_analysis to libraries
- fixed __all__
1.2.1.002:
2.1.2:
- renamed ArrayTest class to Array
1.2.1.001:
2.1.1:
- added add, mul, neg, and inv functions to ArrayTest class
- added normalize function to ArrayTest class
- added dot and cross functions to ArrayTest class
1.2.1.000:
2.1.0:
- added ArrayTest class
- added elementwise mean, median, standard deviation, variance, min, max functions to ArrayTest class
- added elementwise_stats to ArrayTest which encapsulates elementwise statistics
- appended to __all__ to reflect changes
1.2.0.006:
2.0.6:
- renamed func functions in regression to lin, log, exp, and sig
1.2.0.005:
2.0.5:
- moved random_forrest_regressor and random_forrest_classifier to RandomForrest class
- renamed Metrics to Metric
- renamed RegressionMetrics to RegressionMetric
@@ -55,166 +66,166 @@ __changelog__ = """changelog:
- renamed CorrelationTests to CorrelationTest
- renamed StatisticalTests to StatisticalTest
- reflected rafactoring to all mentions of above classes/functions
1.2.0.004:
2.0.4:
- fixed __all__ to reflected the correct functions and classes
- fixed CorrelationTests and StatisticalTests class functions to require self invocation
- added missing math import
- fixed KNN class functions to require self invocation
- fixed Metrics class functions to require self invocation
- various spelling fixes in CorrelationTests and StatisticalTests
1.2.0.003:
2.0.3:
- bug fixes with CorrelationTests and StatisticalTests
- moved glicko2 and trueskill to the metrics subpackage
- moved elo to a new metrics subpackage
1.2.0.002:
2.0.2:
- fixed docs
1.2.0.001:
2.0.1:
- fixed docs
1.2.0.000:
2.0.0:
- cleaned up wild card imports with scipy and sklearn
- added CorrelationTests class
- added StatisticalTests class
- added several correlation tests to CorrelationTests
- added several statistical tests to StatisticalTests
1.1.13.009:
1.13.9:
- moved elo, glicko2, trueskill functions under class Metrics
1.1.13.008:
1.13.8:
- moved Glicko2 to a seperate package
1.1.13.007:
1.13.7:
- fixed bug with trueskill
1.1.13.006:
1.13.6:
- cleaned up imports
1.1.13.005:
1.13.5:
- cleaned up package
1.1.13.004:
1.13.4:
- small fixes to regression to improve performance
1.1.13.003:
1.13.3:
- filtered nans from regression
1.1.13.002:
1.13.2:
- removed torch requirement, and moved Regression back to regression.py
1.1.13.001:
1.13.1:
- bug fix with linear regression not returning a proper value
- cleaned up regression
- fixed bug with polynomial regressions
1.1.13.000:
1.13.0:
- fixed all regressions to now properly work
1.1.12.006:
1.12.6:
- fixed bg with a division by zero in histo_analysis
1.1.12.005:
1.12.5:
- fixed numba issues by removing numba from elo, glicko2 and trueskill
1.1.12.004:
1.12.4:
- renamed gliko to glicko
1.1.12.003:
1.12.3:
- removed depreciated code
1.1.12.002:
1.12.2:
- removed team first time trueskill instantiation in favor of integration in superscript.py
1.1.12.001:
1.12.1:
- improved readibility of regression outputs by stripping tensor data
- used map with lambda to acheive the improved readibility
- lost numba jit support with regression, and generated_jit hangs at execution
- TODO: reimplement correct numba integration in regression
1.1.12.000:
1.12.0:
- temporarily fixed polynomial regressions by using sklearn's PolynomialFeatures
1.1.11.010:
1.11.010:
- alphabeticaly ordered import lists
1.1.11.009:
1.11.9:
- bug fixes
1.1.11.008:
1.11.8:
- bug fixes
1.1.11.007:
1.11.7:
- bug fixes
1.1.11.006:
1.11.6:
- tested min and max
- bug fixes
1.1.11.005:
1.11.5:
- added min and max in basic_stats
1.1.11.004:
1.11.4:
- bug fixes
1.1.11.003:
1.11.3:
- bug fixes
1.1.11.002:
1.11.2:
- consolidated metrics
- fixed __all__
1.1.11.001:
1.11.1:
- added test/train split to RandomForestClassifier and RandomForestRegressor
1.1.11.000:
1.11.0:
- added RandomForestClassifier and RandomForestRegressor
- note: untested
1.1.10.000:
1.10.0:
- added numba.jit to remaining functions
1.1.9.002:
1.9.2:
- kernelized PCA and KNN
1.1.9.001:
1.9.1:
- fixed bugs with SVM and NaiveBayes
1.1.9.000:
1.9.0:
- added SVM class, subclasses, and functions
- note: untested
1.1.8.000:
1.8.0:
- added NaiveBayes classification engine
- note: untested
1.1.7.000:
1.7.0:
- added knn()
- added confusion matrix to decisiontree()
1.1.6.002:
1.6.2:
- changed layout of __changelog to be vscode friendly
1.1.6.001:
1.6.1:
- added additional hyperparameters to decisiontree()
1.1.6.000:
1.6.0:
- fixed __version__
- fixed __all__ order
- added decisiontree()
1.1.5.003:
1.5.3:
- added pca
1.1.5.002:
1.5.2:
- reduced import list
- added kmeans clustering engine
1.1.5.001:
1.5.1:
- simplified regression by using .to(device)
1.1.5.000:
1.5.0:
- added polynomial regression to regression(); untested
1.1.4.000:
1.4.0:
- added trueskill()
1.1.3.002:
1.3.2:
- renamed regression class to Regression, regression_engine() to regression gliko2_engine class to Gliko2
1.1.3.001:
1.3.1:
- changed glicko2() to return tuple instead of array
1.1.3.000:
1.3.0:
- added glicko2_engine class and glicko()
- verified glicko2() accuracy
1.1.2.003:
1.2.3:
- fixed elo()
1.1.2.002:
1.2.2:
- added elo()
- elo() has bugs to be fixed
1.1.2.001:
1.2.1:
- readded regrression import
1.1.2.000:
1.2.0:
- integrated regression.py as regression class
- removed regression import
- fixed metadata for regression class
- fixed metadata for analysis class
1.1.1.001:
1.1.1:
- regression_engine() bug fixes, now actaully regresses
1.1.1.000:
1.1.0:
- added regression_engine()
- added all regressions except polynomial
1.1.0.007:
1.0.7:
- updated _init_device()
1.1.0.006:
1.0.6:
- removed useless try statements
1.1.0.005:
1.0.5:
- removed impossible outcomes
1.1.0.004:
1.0.4:
- added performance metrics (r^2, mse, rms)
1.1.0.003:
1.0.3:
- resolved nopython mode for mean, median, stdev, variance
1.1.0.002:
1.0.2:
- snapped (removed) majority of uneeded imports
- forced object mode (bad) on all jit
- TODO: stop numba complaining about not being able to compile in nopython mode
1.1.0.001:
1.0.1:
- removed from sklearn import * to resolve uneeded wildcard imports
1.1.0.000:
1.0.0:
- removed c_entities,nc_entities,obstacles,objectives from __all__
- applied numba.jit to all functions
- depreciated and removed stdev_z_split
@@ -223,93 +234,93 @@ __changelog__ = """changelog:
- depreciated and removed all nonessential functions (basic_analysis, benchmark, strip_data)
- optimized z_normalize using sklearn.preprocessing.normalize
- TODO: implement kernel/function based pytorch regression optimizer
1.0.9.000:
0.9.0:
- refactored
- numpyed everything
- removed stats in favor of numpy functions
1.0.8.005:
0.8.5:
- minor fixes
1.0.8.004:
0.8.4:
- removed a few unused dependencies
1.0.8.003:
0.8.3:
- added p_value function
1.0.8.002:
- updated __all__ correctly to contain changes made in v 1.0.8.000 and v 1.0.8.001
1.0.8.001:
0.8.2:
- updated __all__ correctly to contain changes made in v 0.8.0 and v 0.8.1
0.8.1:
- refactors
- bugfixes
1.0.8.000:
0.8.0:
- depreciated histo_analysis_old
- depreciated debug
- altered basic_analysis to take array data instead of filepath
- refactor
- optimization
1.0.7.002:
0.7.2:
- bug fixes
1.0.7.001:
0.7.1:
- bug fixes
1.0.7.000:
0.7.0:
- added tanh_regression (logistical regression)
- bug fixes
1.0.6.005:
0.6.5:
- added z_normalize function to normalize dataset
- bug fixes
1.0.6.004:
0.6.4:
- bug fixes
1.0.6.003:
0.6.3:
- bug fixes
1.0.6.002:
0.6.2:
- bug fixes
1.0.6.001:
0.6.1:
- corrected __all__ to contain all of the functions
1.0.6.000:
0.6.0:
- added calc_overfit, which calculates two measures of overfit, error and performance
- added calculating overfit to optimize_regression
1.0.5.000:
0.5.0:
- added optimize_regression function, which is a sample function to find the optimal regressions
- optimize_regression function filters out some overfit funtions (functions with r^2 = 1)
- planned addition: overfit detection in the optimize_regression function
1.0.4.002:
0.4.2:
- added __changelog__
- updated debug function with log and exponential regressions
1.0.4.001:
0.4.1:
- added log regressions
- added exponential regressions
- added log_regression and exp_regression to __all__
1.0.3.008:
0.3.8:
- added debug function to further consolidate functions
1.0.3.007:
0.3.7:
- added builtin benchmark function
- added builtin random (linear) data generation function
- added device initialization (_init_device)
1.0.3.006:
0.3.6:
- reorganized the imports list to be in alphabetical order
- added search and regurgitate functions to c_entities, nc_entities, obstacles, objectives
1.0.3.005:
0.3.5:
- major bug fixes
- updated historical analysis
- depreciated old historical analysis
1.0.3.004:
0.3.4:
- added __version__, __author__, __all__
- added polynomial regression
- added root mean squared function
- added r squared function
1.0.3.003:
0.3.3:
- bug fixes
- added c_entities
1.0.3.002:
0.3.2:
- bug fixes
- added nc_entities, obstacles, objectives
- consolidated statistics.py to analysis.py
1.0.3.001:
0.3.1:
- compiled 1d, column, and row basic stats into basic stats function
1.0.3.000:
0.3.0:
- added historical analysis function
1.0.2.xxx:
0.2.x:
- added z score test
1.0.1.xxx:
0.1.x:
- major bug fixes
1.0.0.xxx:
0.0.x:
- added loading csv
- added 1d, column, row basic stats
"""
@@ -344,7 +355,7 @@ __all__ = [
# now back to your regularly scheduled programming:
# imports (now in alphabetical order! v 1.0.3.006):
# imports (now in alphabetical order! v 0.3.006):
import csv
from tra_analysis.metrics import elo as Elo
@@ -424,7 +435,7 @@ def regression(inputs, outputs, args): # inputs, outputs expects N-D array
X = np.array(inputs)
y = np.array(outputs)
regressions = []
regressions = {}
if 'lin' in args: # formula: ax + b
@@ -437,7 +448,7 @@ def regression(inputs, outputs, args): # inputs, outputs expects N-D array
popt, pcov = scipy.optimize.curve_fit(lin, X, y)
coeffs = popt.flatten().tolist()
regressions.append(str(coeffs[0]) + "*x+" + str(coeffs[1]))
regressions["lin"] = (str(coeffs[0]) + "*x+" + str(coeffs[1]))
except Exception as e:
@@ -454,7 +465,7 @@ def regression(inputs, outputs, args): # inputs, outputs expects N-D array
popt, pcov = scipy.optimize.curve_fit(log, X, y)
coeffs = popt.flatten().tolist()
regressions.append(str(coeffs[0]) + "*log(" + str(coeffs[1]) + "*(x+" + str(coeffs[2]) + "))+" + str(coeffs[3]))
regressions["log"] = (str(coeffs[0]) + "*log(" + str(coeffs[1]) + "*(x+" + str(coeffs[2]) + "))+" + str(coeffs[3]))
except Exception as e:
@@ -471,7 +482,7 @@ def regression(inputs, outputs, args): # inputs, outputs expects N-D array
popt, pcov = scipy.optimize.curve_fit(exp, X, y)
coeffs = popt.flatten().tolist()
regressions.append(str(coeffs[0]) + "*e^(" + str(coeffs[1]) + "*(x+" + str(coeffs[2]) + "))+" + str(coeffs[3]))
regressions["exp"] = (str(coeffs[0]) + "*e^(" + str(coeffs[1]) + "*(x+" + str(coeffs[2]) + "))+" + str(coeffs[3]))
except Exception as e:
@@ -482,7 +493,7 @@ def regression(inputs, outputs, args): # inputs, outputs expects N-D array
inputs = np.array([inputs])
outputs = np.array([outputs])
plys = []
plys = {}
limit = len(outputs[0])
for i in range(2, limit):
@@ -500,9 +511,9 @@ def regression(inputs, outputs, args): # inputs, outputs expects N-D array
for param in params:
temp += "(" + str(param) + "*x^" + str(counter) + ")"
counter += 1
plys.append(temp)
plys["x^" + str(i)] = (temp)
regressions.append(plys)
regressions["ply"] = (plys)
if 'sig' in args: # formula: a tanh (b(x + c)) + d
@@ -515,7 +526,7 @@ def regression(inputs, outputs, args): # inputs, outputs expects N-D array
popt, pcov = scipy.optimize.curve_fit(sig, X, y)
coeffs = popt.flatten().tolist()
regressions.append(str(coeffs[0]) + "*tanh(" + str(coeffs[1]) + "*(x+" + str(coeffs[2]) + "))+" + str(coeffs[3]))
regressions["sig"] = (str(coeffs[0]) + "*tanh(" + str(coeffs[1]) + "*(x+" + str(coeffs[2]) + "))+" + str(coeffs[3]))
except Exception as e:
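With `regressions` now a dict rather than a list, callers look fits up by key instead of by position. A minimal standalone sketch of the 'lin' branch (the sample data and `lin` helper here are illustrative assumptions, not from the source):

```python
import numpy as np
import scipy.optimize

def lin(x, a, b): # linear model used by the 'lin' branch
	return a * x + b

X = np.array([0.0, 1.0, 2.0, 3.0, 4.0]) # hypothetical inputs
y = 2.0 * X + 1.0                       # outputs on a known line

regressions = {}
popt, pcov = scipy.optimize.curve_fit(lin, X, y)
coeffs = popt.flatten().tolist()
regressions["lin"] = str(coeffs[0]) + "*x+" + str(coeffs[1])
```

On this exact line the fit recovers a ≈ 2 and b ≈ 1, and the result is addressed as `regressions["lin"]`.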
@@ -642,7 +653,7 @@ def decisiontree(data, labels, test_size = 0.3, criterion = "gini", splitter = "
class KNN:
def knn_classifier(self, data, labels, test_size = 0.3, algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, n_neighbors=5, p=2, weights='uniform'): #expects *2d data and 1d labels post-scaling
def knn_classifier(self, data, labels, n_neighbors, test_size = 0.3, algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, p=2, weights='uniform'): #expects *2d data and 1d labels post-scaling
data_train, data_test, labels_train, labels_test = sklearn.model_selection.train_test_split(data, labels, test_size=test_size, random_state=1)
model = sklearn.neighbors.KNeighborsClassifier(n_neighbors = n_neighbors, weights = weights, algorithm = algorithm, leaf_size = leaf_size, p = p, metric = metric, metric_params = metric_params, n_jobs = n_jobs)
@@ -651,7 +662,7 @@ class KNN:
return model, ClassificationMetric(predictions, labels_test)
def knn_regressor(self, data, outputs, test_size, n_neighbors = 5, weights = "uniform", algorithm = "auto", leaf_size = 30, p = 2, metric = "minkowski", metric_params = None, n_jobs = None):
def knn_regressor(self, data, outputs, n_neighbors, test_size = 0.3, weights = "uniform", algorithm = "auto", leaf_size = 30, p = 2, metric = "minkowski", metric_params = None, n_jobs = None):
data_train, data_test, outputs_train, outputs_test = sklearn.model_selection.train_test_split(data, outputs, test_size=test_size, random_state=1)
model = sklearn.neighbors.KNeighborsRegressor(n_neighbors = n_neighbors, weights = weights, algorithm = algorithm, leaf_size = leaf_size, p = p, metric = metric, metric_params = metric_params, n_jobs = n_jobs)
@@ -754,9 +765,9 @@ class SVM:
return RegressionMetric(predictions, test_outputs)
class RandomForrest:
class RandomForest:
def random_forest_classifier(self, data, labels, test_size, n_estimators="warn", criterion="gini", max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features="auto", max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None):
def random_forest_classifier(self, data, labels, test_size, n_estimators, criterion="gini", max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features="auto", max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None):
data_train, data_test, labels_train, labels_test = sklearn.model_selection.train_test_split(data, labels, test_size=test_size, random_state=1)
kernel = sklearn.ensemble.RandomForestClassifier(n_estimators = n_estimators, criterion = criterion, max_depth = max_depth, min_samples_split = min_samples_split, min_samples_leaf = min_samples_leaf, min_weight_fraction_leaf = min_weight_fraction_leaf, max_features = max_features, max_leaf_nodes = max_leaf_nodes, min_impurity_decrease = min_impurity_decrease, bootstrap = bootstrap, oob_score = oob_score, n_jobs = n_jobs, random_state = random_state, verbose = verbose, warm_start = warm_start, class_weight = class_weight)
@@ -765,7 +776,7 @@ class RandomForrest:
return kernel, ClassificationMetric(predictions, labels_test)
def random_forest_regressor(self, data, outputs, test_size, n_estimators="warn", criterion="mse", max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features="auto", max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False):
def random_forest_regressor(self, data, outputs, test_size, n_estimators, criterion="mse", max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features="auto", max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False):
data_train, data_test, outputs_train, outputs_test = sklearn.model_selection.train_test_split(data, outputs, test_size=test_size, random_state=1)
kernel = sklearn.ensemble.RandomForestRegressor(n_estimators = n_estimators, criterion = criterion, max_depth = max_depth, min_samples_split = min_samples_split, min_samples_leaf = min_samples_leaf, min_weight_fraction_leaf = min_weight_fraction_leaf, max_features = max_features, max_leaf_nodes = max_leaf_nodes, min_impurity_decrease = min_impurity_decrease, min_impurity_split = min_impurity_split, bootstrap = bootstrap, oob_score = oob_score, n_jobs = n_jobs, random_state = random_state, verbose = verbose, warm_start = warm_start)
@@ -779,7 +790,7 @@ class CorrelationTest:
def anova_oneway(self, *args): #expects arrays of samples
results = scipy.stats.f_oneway(*args)
return {"F-value": results[0], "p-value": results[1]}
return {"f-value": results[0], "p-value": results[1]}
def pearson(self, x, y):
@@ -978,81 +989,112 @@ class StatisticalTest:
return {"z-score": results[0], "p-value": results[1]}
class Array(): # tests on nd arrays independent of basic_stats
def __init__(self, narray):
self.array = np.array(narray)
def __str__(self):
return str(self.array)
def elementwise_mean(self, *args): # expects arrays that are size normalized
return np.mean([*args], axis = 0)
def elementwise_mean(self, *args, axis = 0): # expects arrays that are size normalized
if len(args) == 0:
return np.mean(self.array, axis = axis)
else:
return np.mean([*args], axis = axis)
def elementwise_median(self, *args):
return np.median([*args], axis = 0)
def elementwise_median(self, *args, axis = 0):
if len(args) == 0:
return np.median(self.array, axis = axis)
else:
return np.median([*args], axis = axis)
def elementwise_stdev(self, *args):
return np.std([*args], axis = 0)
def elementwise_stdev(self, *args, axis = 0):
if len(args) == 0:
return np.std(self.array, axis = axis)
else:
return np.std([*args], axis = axis)
def elementwise_variance(self, *args):
return np.var([*args], axis = 0)
def elementwise_variance(self, *args, axis = 0):
if len(args) == 0:
return np.var(self.array, axis = axis)
else:
return np.var([*args], axis = axis)
def elementwise_npmin(self, *args):
return np.amin([*args], axis = 0)
def elementwise_npmin(self, *args, axis = 0):
if len(args) == 0:
return np.amin(self.array, axis = axis)
else:
return np.amin([*args], axis = axis)
def elementwise_npmax(self, *args):
return np.amax([*args], axis = 0)
def elementwise_npmax(self, *args, axis = 0):
if len(args) == 0:
return np.amax(self.array, axis = axis)
else:
return np.amax([*args], axis = axis)
def elementwise_stats(self, *args):
_mean = self.elementwise_mean(*args)
_median = self.elementwise_median(*args)
_stdev = self.elementwise_stdev(*args)
_variance = self.elementwise_variance(*args)
_min = self.elementwise_npmin(*args)
_max = self.elementwise_npmax(*args)
def elementwise_stats(self, *args, axis = 0):
_mean = self.elementwise_mean(*args, axis = axis)
_median = self.elementwise_median(*args, axis = axis)
_stdev = self.elementwise_stdev(*args, axis = axis)
_variance = self.elementwise_variance(*args, axis = axis)
_min = self.elementwise_npmin(*args, axis = axis)
_max = self.elementwise_npmax(*args, axis = axis)
return _mean, _median, _stdev, _variance, _min, _max
def __getitem__(self, key):
return self.array[key]
def __setitem__(self, key, value):
self.array[key] = value
def normalize(self, array):
a = np.atleast_1d(np.linalg.norm(array))
a[a==0] = 1
return array / np.expand_dims(a, -1)
def add(self, *args):
temp = np.array([])
for a in args:
temp += a
return temp
def __add__(self, other):
return self.array + other.array
def __sub__(self, other):
return self.array - other.array
def mul(self, *args):
temp = np.array([])
for a in args:
temp *= a
return temp
def __mul__(self, other):
return self.array.dot(other.array)
def __rmul__(self, other):
return self.array.dot(other.array)
def neg(self, array):
return -array
def __neg__(self):
return -self.array
def __abs__(self):
return abs(self.array)
def inv(self, array):
return 1/array
def __invert__(self):
return 1/self.array
def dot(self, a, b):
return np.dot(a, b)
def cross(self, a, b):
return np.cross(a, b)
def cross(self, other):
return np.cross(self.array, other.array)
def sort(self, array): # deprecated
warnings.warn("Array.sort has been deprecated in favor of Sort")


@@ -0,0 +1,85 @@
# Titan Robotics Team 2022: CPU fitting models
# Written by Dev Singh
# Notes:
# this module is vectorized (except for one small part); cuda support via cupy is not yet implemented
# setup:
__version__ = "0.0.1"
# changelog should be viewed using print(analysis.fits.__changelog__)
__changelog__ = """changelog:
0.0.1:
- initial release, add circle fitting with LSC
"""
__author__ = (
"Dev Singh <dev@devksingh.com>"
)
__all__ = [
'CircleFit'
]
import numpy as np
class CircleFit:
"""Class to fit data to a circle using the Least Square Circle (LSC) method"""
# For more information on the LSC method, see:
# http://www.dtcenter.org/sites/default/files/community-code/met/docs/write-ups/circle_fit.pdf
def __init__(self, x, y, xy=None):
self.ournp = np #todo: implement cupy correctly
if type(x) == list:
x = np.array(x)
if type(y) == list:
y = np.array(y)
if type(xy) == list:
xy = np.array(xy)
if xy is not None:
self.coords = xy
else:
# following block combines x and y into one array if not already done
self.coords = self.ournp.vstack(([x.T], [y.T])).T
def calc_R(self, x, y, xc, yc):
"""Returns distance between center and point"""
return self.ournp.sqrt((x-xc)**2 + (y-yc)**2)
def f(self, c, x, y):
"""Returns distance between point and circle at c"""
Ri = self.calc_R(x, y, *c)
return Ri - Ri.mean()
def LSC(self):
"""Fits given data to a circle and returns the center, radius, and variance"""
x = self.coords[:, 0]
y = self.coords[:, 1]
# guessing at a center
x_m = self.ournp.mean(x)
y_m = self.ournp.mean(y)
# calculation of the reduced coordinates
u = x - x_m
v = y - y_m
# linear system defining the center (uc, vc) in reduced coordinates:
# Suu * uc + Suv * vc = (Suuu + Suvv)/2
# Suv * uc + Svv * vc = (Suuv + Svvv)/2
Suv = self.ournp.sum(u*v)
Suu = self.ournp.sum(u**2)
Svv = self.ournp.sum(v**2)
Suuv = self.ournp.sum(u**2 * v)
Suvv = self.ournp.sum(u * v**2)
Suuu = self.ournp.sum(u**3)
Svvv = self.ournp.sum(v**3)
# Solving the linear system
A = self.ournp.array([ [ Suu, Suv ], [Suv, Svv]])
B = self.ournp.array([ Suuu + Suvv, Svvv + Suuv ])/2.0
uc, vc = self.ournp.linalg.solve(A, B)
xc_1 = x_m + uc
yc_1 = y_m + vc
# Calculate the distances from center (xc_1, yc_1)
Ri_1 = self.ournp.sqrt((x-xc_1)**2 + (y-yc_1)**2)
R_1 = self.ournp.mean(Ri_1)
# calculate residual error
residu_1 = self.ournp.sum((Ri_1-R_1)**2)
return (xc_1, yc_1, R_1, residu_1)
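The linear system in the comments above can be checked numerically without the class: for points sampled evenly on a known circle, solving it recovers the center and radius. A standalone sketch (the test circle at (2, 3) with radius 5 is an illustrative assumption):

```python
import numpy as np

# Points evenly sampled on a circle of radius 5 centered at (2, 3)
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
x = 2 + 5 * np.cos(t)
y = 3 + 5 * np.sin(t)

# Reduced coordinates about the centroid
u, v = x - x.mean(), y - y.mean()

# Solve: Suu*uc + Suv*vc = (Suuu + Suvv)/2
#        Suv*uc + Svv*vc = (Suuv + Svvv)/2
A = np.array([[np.sum(u * u), np.sum(u * v)],
              [np.sum(u * v), np.sum(v * v)]])
B = np.array([np.sum(u ** 3) + np.sum(u * v ** 2),
              np.sum(v ** 3) + np.sum(u ** 2 * v)]) / 2.0
uc, vc = np.linalg.solve(A, B)
xc, yc = x.mean() + uc, y.mean() + vc
R = np.mean(np.sqrt((x - xc) ** 2 + (y - yc) ** 2))
```

For noiseless symmetric samples the residual is essentially zero and (xc, yc, R) match the true circle.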


@@ -1,31 +1,32 @@
# Titan Robotics Team 2022: CUDA-based Regressions Module
# Not actively maintained, may be removed in future release
# Written by Arthur Lu & Jacob Levine
# Notes:
# this module has been automatically integrated into analysis.py, and should be callable as a class from the package
# this module is cuda-optimized and vectorized (except for one small part)
# this module is cuda-optimized (as appropriate) and vectorized (except for one small part)
# setup:
__version__ = "1.0.0.004"
__version__ = "0.0.4"
# changelog should be viewed using print(analysis.regression.__changelog__)
__changelog__ = """
1.0.0.004:
0.0.4:
- bug fixes
- fixed changelog
1.0.0.003:
0.0.3:
- bug fixes
1.0.0.002:
0.0.2:
-Added more parameters to log, exponential, polynomial
-Added SigmoidalRegKernelArthur, because Arthur apparently needs
to train the scaling and shifting of sigmoids
1.0.0.001:
0.0.1:
-initial release, with linear, log, exponential, polynomial, and sigmoid kernels
-already vectorized (except for polynomial generation) and CUDA-optimized
"""
__author__ = (
"Jacob Levine <jlevine@imsa.edu>",
"Arthur Lu <learthurgo@gmail.com>"
"Arthur Lu <learthurgo@gmail.com>",
)
__all__ = [
@@ -40,14 +41,15 @@ __all__ = [
'ExpRegKernel',
'SigmoidalRegKernelArthur',
'SGDTrain',
'CustomTrain'
'CustomTrain',
'CircleFit'
]
import torch
global device
device = "cuda:0" if torch.torch.cuda.is_available() else "cpu"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
#todo: document completely
@@ -217,4 +219,4 @@ def CustomTrain(self, kernel, optim, data, ground, loss=torch.nn.MSELoss(), iter
ls=loss(pred,ground_cuda)
ls.backward()
optim.step()
return kernel
return kernel


@@ -7,23 +7,23 @@
# this module learns from its mistakes far faster than 2022's captains
# setup:
__version__ = "2.0.1.001"
__version__ = "1.1.1"
#changelog should be viewed using print(analysis.__changelog__)
__changelog__ = """changelog:
2.0.1.001:
1.1.1:
- removed matplotlib import
- removed graphloss()
2.0.1.000:
1.1.0:
- added net, dataset, dataloader, and stdtrain template definitions
- added graphloss function
2.0.0.001:
1.0.1:
- added clear functions
2.0.0.000:
1.0.0:
- complete rewrite planned
- deprecated 1.0.0.xxx versions
- added simple training loop
1.0.0.xxx:
0.0.x:
-added generation of ANNS, basic SGD training
"""


@@ -6,13 +6,13 @@
# fancy
# setup:
__version__ = "1.0.0.001"
__version__ = "0.0.1"
#changelog should be viewed using print(analysis.__changelog__)
__changelog__ = """changelog:
1.0.0.001:
0.0.1:
- added graphhistogram function as a fragment of visualize_pit.py
1.0.0.000:
0.0.0:
- created visualization.py
- added graphloss()
- added imports


@@ -1,4 +1,5 @@
{
"max-threads": 0.5,
"team": "",
"competition": "",
"key":{


@@ -1,4 +1,4 @@
requests
pymongo
pandas
dnspython
tra-analysis


@@ -3,19 +3,27 @@
# Notes:
# setup:
__version__ = "0.0.6.002"
__version__ = "0.8.2"
# changelog should be viewed using print(analysis.__changelog__)
__changelog__ = """changelog:
0.0.6.003:
- rename analysis imports to tra_analysis for PyPI publishing
0.0.6.002:
0.8.2:
- readded while true to main function
- added more thread config options
0.8.1:
- optimized matchloop further by bypassing GIL
0.8.0:
- added multithreading to matchloop
- tweaked user log
0.7.0:
- finished implementing main function
0.6.2:
- integrated get_team_rankings.py as get_team_metrics() function
- integrated visualize_pit.py as graph_pit_histogram() function
0.0.6.001:
0.6.1:
- bug fixes with analysis.Metric() calls
- modified metric functions to use config.json defined default values
0.0.6.000:
0.6.0:
- removed main function
- changed load_config function
- added save_config function
@@ -26,66 +34,66 @@ __changelog__ = """changelog:
- renamed metricsloop to metricloop
- split push to database functions among push_match, push_metric, push_pit
- moved
0.0.5.002:
0.5.2:
- made changes due to refactoring of analysis
0.0.5.001:
0.5.1:
- text fixes
- removed matplotlib requirement
0.0.5.000:
0.5.0:
- improved user interface
0.0.4.002:
0.4.2:
- removed unnecessary code
0.0.4.001:
0.4.1:
- fixed bug where X range for regression was determined before sanitization
- better sanitized data
0.0.4.000:
0.4.0:
- fixed spelling issue in __changelog__
- addressed nan bug in regression
- fixed errors on line 335 with metrics calling incorrect key "glicko2"
- fixed errors in metrics computing
0.0.3.000:
0.3.0:
- added analysis to pit data
0.0.2.001:
0.2.1:
- minor stability patches
- implemented db syncing for timestamps
- fixed bugs
0.0.2.000:
0.2.0:
- finalized testing and small fixes
0.0.1.004:
0.1.4:
- finished metrics implement, trueskill is bugged
0.0.1.003:
0.1.3:
- working
0.0.1.002:
0.1.2:
- started implement of metrics
0.0.1.001:
0.1.1:
- cleaned up imports
0.0.1.000:
0.1.0:
- tested working, can push to database
0.0.0.009:
0.0.9:
- tested working
- prints out stats for the time being, will push to database later
0.0.0.008:
0.0.8:
- added data import
- removed tba import
- finished main method
0.0.0.007:
0.0.7:
- added load_config
- optimized simpleloop for readability
- added __all__ entries
- added simplestats engine
- pending testing
0.0.0.006:
0.0.6:
- fixes
0.0.0.005:
0.0.5:
- imported pickle
- created custom database object
0.0.0.004:
0.0.4:
- fixed simpleloop to actually return a vector
0.0.0.003:
0.0.3:
- added metricsloop which is unfinished
0.0.0.002:
0.0.2:
- added simpleloop which is untested until data is provided
0.0.0.001:
0.0.1:
- created script
- added analysis, numba, numpy imports
"""
@@ -114,14 +122,99 @@ __all__ = [
from tra_analysis import analysis as an
import data as d
from collections import defaultdict
import json
import math
import numpy as np
import os
from os import system, name
from pathlib import Path
from multiprocessing import Pool
import matplotlib.pyplot as plt
from concurrent.futures import ThreadPoolExecutor
import time
import warnings
global exec_threads
def main():
global exec_threads
warnings.filterwarnings("ignore")
while (True):
current_time = time.time()
print("[OK] time: " + str(current_time))
config = load_config("config.json")
competition = config["competition"]
match_tests = config["statistics"]["match"]
pit_tests = config["statistics"]["pit"]
metrics_tests = config["statistics"]["metric"]
print("[OK] configs loaded")
print("[OK] starting threads")
cfg_max_threads = config["max-threads"]
sys_max_threads = os.cpu_count()
if cfg_max_threads > -sys_max_threads and cfg_max_threads < 0:
alloc_processes = sys_max_threads + cfg_max_threads
elif cfg_max_threads > 0 and cfg_max_threads < 1:
alloc_processes = math.floor(cfg_max_threads * sys_max_threads)
elif cfg_max_threads >= 1 and cfg_max_threads <= sys_max_threads:
alloc_processes = cfg_max_threads
elif cfg_max_threads == 0:
alloc_processes = sys_max_threads
else:
print("[Err] Invalid number of processes, must be between -" + str(sys_max_threads) + " and " + str(sys_max_threads))
exit()
exec_threads = Pool(processes = alloc_processes)
print("[OK] " + str(alloc_processes) + " threads started")
apikey = config["key"]["database"]
tbakey = config["key"]["tba"]
print("[OK] loaded keys")
previous_time = get_previous_time(apikey)
print("[OK] analysis backtimed to: " + str(previous_time))
print("[OK] loading data")
start = time.time()
match_data = load_match(apikey, competition)
pit_data = load_pit(apikey, competition)
print("[OK] loaded data in " + str(time.time() - start) + " seconds")
print("[OK] running match stats")
start = time.time()
matchloop(apikey, competition, match_data, match_tests)
print("[OK] finished match stats in " + str(time.time() - start) + " seconds")
print("[OK] running team metrics")
start = time.time()
metricloop(tbakey, apikey, competition, previous_time, metrics_tests)
print("[OK] finished team metrics in " + str(time.time() - start) + " seconds")
print("[OK] running pit analysis")
start = time.time()
pitloop(apikey, competition, pit_data, pit_tests)
print("[OK] finished pit analysis in " + str(time.time() - start) + " seconds")
set_current_time(apikey, current_time)
print("[OK] finished all tests, looping")
clear()
def clear():
# for windows
if name == 'nt':
_ = system('cls')
# for mac and linux (here, os.name is 'posix')
else:
_ = system('clear')
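The "max-threads" config value supports several shapes: a negative integer leaves that many cores free, a fraction in (0, 1) takes that share of cores, 0 takes all cores, and an integer >= 1 is used directly. A standalone sketch of that branch logic (pulled out as a hypothetical helper for clarity; not a function in the source):

```python
import math

def alloc_processes(cfg_max_threads, sys_max_threads):
	"""Sketch of the config["max-threads"] semantics in main():
	negative = leave that many cores free, fraction = share of cores,
	0 = all cores, integer >= 1 = exactly that many (capped at core count)."""
	if -sys_max_threads < cfg_max_threads < 0:
		return sys_max_threads + cfg_max_threads
	elif 0 < cfg_max_threads < 1:
		return math.floor(cfg_max_threads * sys_max_threads)
	elif 1 <= cfg_max_threads <= sys_max_threads:
		return int(cfg_max_threads)
	elif cfg_max_threads == 0:
		return sys_max_threads
	return None  # invalid configuration

print(alloc_processes(0.5, 8))  # 4
print(alloc_processes(-2, 8))   # 6
print(alloc_processes(0, 8))    # 8
```

In the source, the invalid case prints an error and exits instead of returning None.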
def load_config(file):
config_vector = {}
@@ -150,52 +243,86 @@ def get_previous_time(apikey):
return previous_time
def set_current_time(apikey, current_time):
d.set_analysis_flags(apikey, "latest_update", {"latest_update":current_time})
def load_match(apikey, competition):
return d.get_match_data_formatted(apikey, competition)
def simplestats(data_test):
data = np.array(data_test[0])
data = data[np.isfinite(data)]
ranges = list(range(len(data)))
test = data_test[1]
if test == "basic_stats":
return an.basic_stats(data)
if test == "historical_analysis":
return an.histo_analysis([ranges, data])
if test == "regression_linear":
return an.regression(ranges, data, ['lin'])
if test == "regression_logarithmic":
return an.regression(ranges, data, ['log'])
if test == "regression_exponential":
return an.regression(ranges, data, ['exp'])
if test == "regression_polynomial":
return an.regression(ranges, data, ['ply'])
if test == "regression_sigmoidal":
return an.regression(ranges, data, ['sig'])
def matchloop(apikey, competition, data, tests): # expects 3D array with [Team][Variable][Match]
def simplestats(data, test):
global exec_threads
data = np.array(data)
data = data[np.isfinite(data)]
ranges = list(range(len(data)))
if test == "basic_stats":
return an.basic_stats(data)
if test == "historical_analysis":
return an.histo_analysis([ranges, data])
if test == "regression_linear":
return an.regression(ranges, data, ['lin'])
if test == "regression_logarithmic":
return an.regression(ranges, data, ['log'])
if test == "regression_exponential":
return an.regression(ranges, data, ['exp'])
if test == "regression_polynomial":
return an.regression(ranges, data, ['ply'])
if test == "regression_sigmoidal":
return an.regression(ranges, data, ['sig'])
class AutoVivification(dict):
def __getitem__(self, item):
try:
return dict.__getitem__(self, item)
except KeyError:
value = self[item] = type(self)()
return value
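AutoVivification lets the threaded matchloop write `return_vector[team][variable][test] = result` without creating the intermediate dicts first. A quick standalone demonstration (the keys below are hypothetical):

```python
class AutoVivification(dict):
	"""dict subclass from the listing above: missing keys create nested dicts on access."""
	def __getitem__(self, item):
		try:
			return dict.__getitem__(self, item)
		except KeyError:
			value = self[item] = type(self)()
			return value

rv = AutoVivification()
rv["team2022"]["auto_score"]["basic_stats"] = 1.0  # no intermediate setup needed
```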
return_vector = {}
team_filtered = []
variable_filtered = []
variable_data = []
test_filtered = []
result_filtered = []
return_vector = AutoVivification()
for team in data:
variable_vector = {}
for variable in data[team]:
test_vector = {}
variable_data = data[team][variable]
if variable in tests:
for test in tests[variable]:
test_vector[test] = simplestats(variable_data, test)
else:
pass
variable_vector[variable] = test_vector
return_vector[team] = variable_vector
team_filtered.append(team)
variable_filtered.append(variable)
variable_data.append((data[team][variable], test))
test_filtered.append(test)
result_filtered = exec_threads.map(simplestats, variable_data)
i = 0
result_filtered = list(result_filtered)
for result in result_filtered:
return_vector[team_filtered[i]][variable_filtered[i]][test_filtered[i]] = result
i += 1
push_match(apikey, competition, return_vector)
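The multithreaded matchloop flattens the nested loops into parallel lists of (data, test) tuples, maps a worker over them with the pool, then re-associates results by index. The same fan-out can be sketched with ThreadPoolExecutor (already imported in this file); the stub worker below stands in for simplestats:

```python
from concurrent.futures import ThreadPoolExecutor

def simplestats_stub(data_test):
	"""Hypothetical stand-in for simplestats: takes a (data, test) tuple."""
	data, test = data_test
	return (test, sum(data) / len(data))

# Flattened work items, as matchloop builds in variable_data
work = [([1.0, 2.0, 3.0], "basic_stats"), ([4.0, 5.0], "basic_stats")]
with ThreadPoolExecutor(max_workers=2) as pool:
	results = list(pool.map(simplestats_stub, work))  # order matches `work`
```

Note that multiprocessing.Pool (as used in the source) additionally requires the mapped function to be picklable, i.e. defined at module top level.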
@@ -404,4 +531,6 @@ def graph_pit_histogram(apikey, competition, figsize=(80,15)):
i+=1
plt.show()
plt.show()
main()


@@ -1,55 +0,0 @@
import threading
from multiprocessing import Process, Queue
import time
from os import system
class testcls():
i = 0
j = 0
t1_en = True
t2_en = True
def main(self):
t1 = Process(name = "task1", target = self.task1)
t2 = Process(name = "task2", target = self.task2)
t1.start()
t2.start()
#print(self.i)
#print(self.j)
def task1(self):
self.i += 1
time.sleep(1)
if(self.i < 10):
t1 = Process(name = "task1", target = self.task1)
t1.start()
def task2(self):
self.j -= 1
time.sleep(1)
if(self.j > -10):
t2 = Process(name = "task2", target = self.task2)
t2.start()
"""
if __name__ == "__main__":
tmain = threading.Thread(name = "main", target = main)
tmain.start()
t = 0
while(True):
system("clear")
for thread in threading.enumerate():
if thread.getName() != "MainThread":
print(thread.getName())
print(str(len(threading.enumerate())))
print(i)
print(j)
time.sleep(0.1)
t += 1
if(t == 100):
t1_en = False
t2_en = False
"""


@@ -1,91 +0,0 @@
import json
import superscript as su
import threading
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
)
match = False
metric = False
pit = False
match_enable = True
metric_enable = True
pit_enable = True
config = {}
def main():
global match
global metric
global pit
global match_enable
global metric_enable
global pit_enable
global config
config = su.load_config("config.json")
while(True):
if match_enable == True and match == False:
def target():
apikey = config["key"]["database"]
competition = config["competition"]
tests = config["statistics"]["match"]
data = su.load_match(apikey, competition)
su.matchloop(apikey, competition, data, tests)
match = False
return
match = True
task = threading.Thread(name = "match", target=target)
task.start()
if metric_enable == True and metric == False:
def target():
apikey = config["key"]["database"]
tbakey = config["key"]["tba"]
competition = config["competition"]
metric = config["statistics"]["metric"]
timestamp = su.get_previous_time(apikey)
su.metricloop(tbakey, apikey, competition, timestamp, metric)
metric = False
return
metric = True
task = threading.Thread(name = "metric", target=target)
task.start()
if pit_enable == True and pit == False:
def target():
apikey = config["key"]["database"]
competition = config["competition"]
tests = config["statistics"]["pit"]
data = su.load_pit(apikey, competition)
su.pitloop(apikey, competition, data, tests)
pit = False
return
pit = True
task = threading.Thread(name = "pit", target=target)
task.start()
task = threading.Thread(name = "main", target=main)
task.start()