812 Commits

Author SHA1 Message Date
Arthur Lu
5153fc3f82 Merge pull request #81 from titanscouting/fix-publish
removed problematic classifier
2021-04-30 20:54:16 -07:00
Arthur Lu
8bf754f382 removed problematic classifier 2021-05-01 03:47:10 +00:00
Arthur Lu
dde3a448c2 Merge pull request #79 from titanscouting/fix-publish
attempt 2 to fix publish-analysis
2021-04-30 20:36:20 -07:00
Arthur Lu
1e234efcba attempt 2 to fix publish-analysis 2021-04-30 22:59:34 +00:00
Dev Singh
34a834c7bc Merge pull request #78 from titanscouting/fix-publish
Install deps on publish-analysis
2021-04-30 17:43:20 -05:00
Arthur Lu
4910c335f1 install deps on publish-analysis 2021-04-30 22:40:49 +00:00
Arthur Lu
9f71ab3aad tra-analysis v 3.0.0 aggregate PR (#73)
* reflected doc changes to README.md

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* tra_analysis v 2.1.0-alpha.1

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* changed setup.py to use __version__ from source
added Topic and keywords

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* updated Supported Platforms in README.md

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* moved required files back to parent

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* moved security back to parent

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* moved security back to parent
moved contributing back to parent

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* add PR template

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* moved to parent folder

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* moved meta files to .github folder

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* Analysis.py v 3.0.1

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* updated test_analysis for submodules, and added missing numpy import in Sort.py

* fixed item one of Issue #58

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* readded cache searching in postCreateCommand

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* added myself as an author

* feat: created kivy gui boilerplate

* added Kivy to requirements.txt

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* feat: gui with placeholders

* fix: changed config.json path

* migrated docker base image to debian

Signed-off-by: ltcptgeneral <learthurgo@gmail.com>

* style: spaces to tabs

* migrated to ubuntu

Signed-off-by: ltcptgeneral <learthurgo@gmail.com>

* fixed issues

Signed-off-by: ltcptgeneral <learthurgo@gmail.com>

* fix: docker build?

* fix: use ubuntu bionic

* fix: get kivy installed

* @ltcptgeneral can't spell

* optim dockerfile for not installing unused packages

* install basic stuff while building the container

* use prebuilt image for development

* install pylint on base image

* rename and use new kivy

* tests: added tests for Array and CorrelationTest

Neither is working yet due to errors

* use new thing

* use 20.04 base

* symlink pip3 to pip

* use pip instead of pip3

* equation.Expression.py v 0.0.1-alpha
added corresponding .pyc to .gitignore

* parser.py v 0.0.2-alpha

* added pyparsing to requirements.txt

* parser v 0.0.4-alpha

* Equation v 0.0.1-alpha

* added Equation to tra_analysis imports

* tests: New unit tests for submoduling (#66)

* feat: created kivy gui boilerplate

* migrated docker base image to debian

Signed-off-by: ltcptgeneral <learthurgo@gmail.com>

* migrated to ubuntu

Signed-off-by: ltcptgeneral <learthurgo@gmail.com>

* fixed issues

Signed-off-by: ltcptgeneral <learthurgo@gmail.com>

* fix: docker build?

* fix: use ubuntu bionic

* fix: get kivy installed

* @ltcptgeneral can't spell

* optim dockerfile for not installing unused packages

* install basic stuff while building the container

* use prebuilt image for development

* install pylint on base image

* rename and use new kivy

* tests: added tests for Array and CorrelationTest

Neither is working yet due to errors

* fix: Array no longer has *args and CorrelationTest functions no longer have self in the arguments

* use new thing

* use 20.04 base

* symlink pip3 to pip

* use pip instead of pip3

* tra_analysis v 2.1.0-alpha.2
SVM v 1.0.1
added unvalidated SVM unit tests

Signed-off-by: ltcptgeneral <learthurgo@gmail.com>

* fixed version number

Signed-off-by: ltcptgeneral <learthurgo@gmail.com>

* tests: added tests for ClassificationMetric

* partially fixed and commented out svm unit tests

* fixed some SVM unit tests

* added installing pytest to devcontainer.json

* fix: small fixes to KNN

Namely, removing self from parameters and passing correct arguments to KNeighborsClassifier constructor

* fix, test: Added tests for KNN and NaiveBayes.

Also made some small fixes in KNN, NaiveBayes, and RegressionMetric

* test: finished unit tests except for StatisticalTest

Also made various small fixes and style changes

* StatisticalTest v 1.0.1

* fixed RegressionMetric unit test
temporarily disabled CorrelationTest unit tests

* tra_analysis v 2.1.0-alpha.3

* readded __all__

* fix: floating point issues in unit tests for CorrelationTest

Co-authored-by: AGawde05 <agawde05@gmail.com>
Co-authored-by: ltcptgeneral <learthurgo@gmail.com>
Co-authored-by: Dev Singh <dev@devksingh.com>
Co-authored-by: jzpan1 <panzhenyu2014@gmail.com>

* fixed deprecated escape sequences

* fixed tests, indent, import in test_analysis

* changed version to 3.0.0
added backwards compatibility

* fixed pytest install in container

* removed GUI changes

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* incremented version to rc.1 (release candidate 1)

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* fixed NaiveBayes __changelog__

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* fix: __setitem__  == to single =

* Array v 1.0.1

* Revert "Array v 1.0.1"

This reverts commit 59783b79f7.

* Array v 1.0.1

* Array.py v 1.0.2
added more Array unit tests

* cleaned .gitignore
tra_analysis v 3.0.0-rc2

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* added *.pyc to gitignore
finished subdividing test_analysis

* feat: gui layout + basic func

* Froze and removed superscript (data-analysis)

* remove data-analysis deps install for devcontainer

* tukey pairwise comparison and multicomparison but no critical q-values

* quick patch for devcontainer.json

* better fix for devcontainer.json

* fixed some styling in StatisticalTest
removed print statement in StatisticalTest unit tests

* update analysis tests to be more efficient

* don't use loop for test_nativebayes

* removed useless secondary docker files

* tra-analysis v 3.0.0

Co-authored-by: James Pan <panzhenyu2014@gmail.com>
Co-authored-by: AGawde05 <agawde05@gmail.com>
Co-authored-by: zpan1 <72054510+zpan1@users.noreply.github.com>
Co-authored-by: Dev Singh <dev@devksingh.com>
Co-authored-by: = <=>
Co-authored-by: Dev Singh <dsingh@imsa.edu>
Co-authored-by: zpan1 <zpan@imsa.edu>
2021-04-28 19:33:50 -05:00
Arthur Lu
764dab01f6 reflected doc changes to README.md (#48)
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-10-05 09:49:39 -05:00
Dev Singh
56f5e5262c deps: remove dnspython (#47)
Signed-off-by: Dev Singh <dev@devksingh.com>

Co-authored-by: Arthur Lu <learthurgo@gmail.com>
2020-09-28 18:53:32 -05:00
Arthur Lu
56a5578f35 Merge pull request #46 from titanscouting/multithread-testing
Implement Multithreading in Superscript
2020-09-28 17:46:29 -05:00
Dev Singh
c48c512cf6 Implement fitting to circle using LSC and HyperFit (#45)
* chore: add pylint to devcontainer

Signed-off-by: Dev Singh <dev@devksingh.com>

* feat: init LSC fitting

cuda and cpu-based LSC fitting using cupy and numpy

Signed-off-by: Dev Singh <dev@devksingh.com>

* docs: add changelog entry and module to class list

Signed-off-by: Dev Singh <dev@devksingh.com>

* docs: fix typo in comment

Signed-off-by: Dev Singh <dev@devksingh.com>

* fix: only import cupy if cuda available

Signed-off-by: Dev Singh <dev@devksingh.com>

* fix: move to own file, abandon cupy

Signed-off-by: Dev Singh <dev@devksingh.com>

* fix: remove numba dep

Signed-off-by: Dev Singh <dev@devksingh.com>

* deps: remove cupy dep

Signed-off-by: Dev Singh <dev@devksingh.com>

* feat: add tests

Signed-off-by: Dev Singh <dev@devksingh.com>

* fix: correct indentation

Signed-off-by: Dev Singh <dev@devksingh.com>

* fix: variable names

Signed-off-by: Dev Singh <dev@devksingh.com>

* fix: add self when referring to coords

Signed-off-by: Dev Singh <dev@devksingh.com>

* fix: numpy ordering

Signed-off-by: Dev Singh <dev@devksingh.com>

* docs: remove version bump, nomaintain

add notice that module is not actively maintained, may be removed in future release

Signed-off-by: Dev Singh <dev@devksingh.com>

* fix: remove hyperfit as not being impled

Signed-off-by: Dev Singh <dev@devksingh.com>
2020-09-24 21:06:30 -05:00
Dev Singh
d15aa045de docs: create security reporting guidelines (#44)
Signed-off-by: Dev Singh <dev@devksingh.com>
2020-09-24 13:09:34 -05:00
Arthur Lu
b32083c6da added tra-analysis to data-analysis requirements
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-24 13:14:13 +00:00
Arthur Lu
a999c755a1 Merge branch 'multithread-testing' of https://github.com/titanscouting/red-alliance-analysis into multithread-testing 2020-09-26 20:57:55 +00:00
Arthur Lu
e3241fa34d superscript.py v 0.8.2
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-26 20:57:39 +00:00
Dev Singh
97f3271de3 Merge branch 'master' into multithread-testing 2020-09-26 15:28:14 -05:00
Arthur Lu
2804d03593 superscript.py v 0.8.1
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-21 07:38:18 +00:00
Arthur Lu
adbc749c47 added max-threads key in config
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-21 07:21:59 +00:00
Arthur Lu
ec9bac7830 superscript.py v 0.8.0
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-21 05:59:15 +00:00
Arthur Lu
b9a2e680bc Merge pull request #43 from titanscouting/master-staged
Pull changes from master staged to master for release
2020-09-19 21:06:42 -05:00
Arthur Lu
467444ed9b Merge branch 'master' into master-staged 2020-09-19 20:05:33 -05:00
Arthur Lu
fa7216d4e0 modified setup.py for analysis package v 2.1.0
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-20 00:50:14 +00:00
Arthur Lu
27a86e568b deprecated nonfunctional scripts in data-analysis
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-20 00:47:33 +00:00
Arthur Lu
16502c5259 superscript.py v 0.7.0
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-20 00:45:38 +00:00
Arthur Lu
ff9ad078e5 analysis.py v 2.3.1
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-19 23:14:46 +00:00
Arthur Lu
97334d1f66 edited README.md
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-19 22:40:20 +00:00
Arthur Lu
f566f4ec71 Merge pull request #42 from titanscouting/devksingh4-patch-1
docs: add documentation links
2020-09-19 17:07:57 -05:00
Arthur Lu
cd869c0a8e analysis.py v 2.3.0
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-19 22:04:24 +00:00
Arthur Lu
f1982eb93d analysis.py v 2.2.3
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-18 21:55:59 +00:00
Arthur Lu
3763cb041f analysis.py v 2.2.2
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-17 02:11:44 +00:00
Dev Singh
2a201a61c7 docs: add documentation links 2020-09-16 16:54:49 -05:00
Arthur Lu
73a16b8397 added deprecated config files to gitignore
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-16 21:24:50 +00:00
Arthur Lu
0e7255ab99 changed && to ; in devcontainer.json
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-09-15 23:24:50 +00:00
Arthur Lu
5efaee5176 Merge pull request #41 from titanscouting/master-staged
merge eol fix in master-staged to master
2020-08-13 12:04:54 -05:00
Arthur Lu
1a1be8ee6a fixed eol issue with docker in gitattributes
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-08-13 17:01:08 +00:00
Arthur Lu
cab05fbc63 Merge commit '4b664acffb5777614043a83ef8e08368e21303ce' into master-staged 2020-08-13 17:00:31 +00:00
Dev Singh
4b664acffb Modernize VSCode extensions in dev env, set correct copyright assignment (#40)
* modernize extensions

Signed-off-by: Dev Singh <dev@devksingh.com>

* copyright assignment should be to titan scouting

Signed-off-by: Dev Singh <dev@devksingh.com>
2020-08-12 21:59:04 -05:00
Arthur Lu
292f9faeef Merge pull request #39 from titanscouting/master-staged
merge README changes from master-staged to master
2020-08-10 20:49:01 -05:00
Arthur Lu
468bd48b07 fixed readme with proper pip installation
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-08-11 01:36:30 +00:00
Arthur Lu
4c3f16f13b Merge pull request #38 from titanscouting/master
pull master into master-staged
2020-08-10 20:33:28 -05:00
Arthur Lu
8545a0d984 Merge pull request #36 from titanscouting/tra-service
merge changes from tra-service to master
2020-08-10 19:40:28 -05:00
Arthur Lu
6debc07786 modified README
simplified devcontainer.json

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-08-11 00:29:23 +00:00
Arthur Lu
bc5b07bb8d readded old config files
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-08-10 23:32:50 +00:00
Arthur Lu
9b73147c4d fixed analysis reference in superscript_old
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-08-10 23:20:43 +00:00
Arthur Lu
2f84debda7 removed old bins under analysis-master/dist/
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-08-10 21:37:41 +00:00
Arthur Lu
c803208eb8 analysis.py v 2.2.1
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-08-10 21:25:25 +00:00
Arthur Lu
135350293c Merge branch 'master' into tra-service 2020-08-10 16:11:38 -05:00
Arthur Lu
9a3181a92b renamed analysis folder to tra_analysis
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-08-10 21:01:50 +00:00
Dev Singh
73da5fa68b docs
Signed-off-by: Dev Singh <dev@devksingh.com>
2020-08-10 14:53:22 -05:00
Dev Singh
7be57f7e7e build v2.0.3
Signed-off-by: Dev Singh <dev@devksingh.com>
2020-08-10 14:52:49 -05:00
Dev Singh
7d64e67ad3 run on publish
Signed-off-by: Dev Singh <dev@devksingh.com>
2020-08-10 14:46:07 -05:00
Dev Singh
def639284f remove bad if statement
Signed-off-by: Dev Singh <dev@devksingh.com>
2020-08-10 14:43:16 -05:00
Dev Singh
18430208ff Merge branch 'master' of https://github.com/titanscout2022/red-alliance-analysis 2020-08-10 14:42:58 -05:00
Dev Singh
ba5fb2d72b build on release only (#35)
Signed-off-by: Dev Singh <dev@devksingh.com>
2020-08-10 14:40:22 -05:00
Dev Singh
f34452d584 build on release only
Signed-off-by: Dev Singh <dev@devksingh.com>
2020-08-10 14:35:38 -05:00
Dev Singh
5fd5e32cb1 Implement CD with building on tags to PyPI (#34)
* Create python-publish.yml

* populated publish-analysis.yml
moved legacy versions of analysis to separate subfolder

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* attempt to fix issue with publish action

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* another attempt to fix publish-analysis.yml

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* this should work now

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* pypa can't take just one package so i'm trying all

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* this should totally work now

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* trying removing custom dir

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* rename analysis to tra_analysis, bump version to 2.0.0

* remove old packages which are already on github releases

* remove pycache

* removed ipynb_checkpoints

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* build

* do the dir thing

* trying removing custom dir

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
Signed-off-by: Dev Singh <dev@devksingh.com>

* rename analysis to tra_analysis, bump version to 2.0.0

Signed-off-by: Dev Singh <dev@devksingh.com>

* remove old packages which are already on github releases

Signed-off-by: Dev Singh <dev@devksingh.com>

* remove pycache

Signed-off-by: Dev Singh <dev@devksingh.com>

* build

Signed-off-by: Dev Singh <dev@devksingh.com>

* removed ipynb_checkpoints

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
Signed-off-by: Dev Singh <dev@devksingh.com>

* do the dir thing

Signed-off-by: Dev Singh <dev@devksingh.com>

* Revert "do the dir thing"

This reverts commit 2eb7ffca8d.

* correct dir

* set correct yaml positions

Signed-off-by: Dev Singh <dev@devksingh.com>

* attempt to set correct dir

Signed-off-by: Dev Singh <dev@devksingh.com>

* run on tags only

Signed-off-by: Dev Singh <dev@devksingh.com>

* remove all caches from vcs

Signed-off-by: Dev Singh <dev@devksingh.com>

* bump version for testing

Signed-off-by: Dev Singh <dev@devksingh.com>

* remove broke build

Signed-off-by: Dev Singh <dev@devksingh.com>

* dont upload dists to github

Signed-off-by: Dev Singh <dev@devksingh.com>

* bump to 2.0.2 for testing

Signed-off-by: Dev Singh <dev@devksingh.com>

* fix yaml

Signed-off-by: Dev Singh <dev@devksingh.com>

* update docs

Signed-off-by: Dev Singh <dev@devksingh.com>

* add to readme

Signed-off-by: Dev Singh <dev@devksingh.com>

* run only on master

Signed-off-by: Dev Singh <dev@devksingh.com>

Co-authored-by: Arthur Lu <learthurgo@gmail.com>
Co-authored-by: Dev Singh <dsingh@CentaurusRidge.localdomain>
2020-08-10 14:29:51 -05:00
Arthur Lu
3db3dda315 Merge pull request #33 from titanscout2022/Demo-for-Issue#32
Merge Changes Proposed in Issue#32
2020-08-02 17:27:26 -05:00
Arthur Lu
a59e509bc8 made changes described in Issue#32
changed setup.py to also reflect versioning changes

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-07-30 19:05:07 +00:00
Arthur Lu
ad521368bd filled out Contributing section in README.md
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-07-20 19:07:32 -05:00
Dev Singh
edbfa5184a Update MAINTAINERS (#29)
Signed-off-by: Dev Singh <dev@devksingh.com>
2020-07-19 11:52:11 -05:00
Arthur Lu
5e52155fd0 Merge pull request #31 from titanscout2022/master
merge changes from master into tra-service
2020-07-18 23:25:55 -05:00
Arthur Lu
635f736a69 Merge pull request #28 from titanscout2022/master-staged
Merge analysis.py updates to master
2020-07-12 18:26:03 -05:00
Arthur Lu
16fb21001a added negatives to analysis unit tests
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-07-12 13:57:24 -05:00
Arthur Lu
69559e9e4a Merge branch 'master' into master-staged 2020-07-11 17:03:50 -05:00
Arthur Lu
430822cdeb added unit tests for analysis.Sort algorithms
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-07-11 21:53:16 +00:00
Arthur Lu
daa5b48426 readded old superscript.py (v 0.0.5.002)
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-07-11 21:21:56 +00:00
Arthur Lu
648ac945ac analysis v 1.2.2.000
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-07-05 05:30:48 +00:00
Arthur Lu
b2cf594869 readded tra.py as a fallback option
made changes to tra-cli.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-06-10 23:15:34 +00:00
Arthur Lu
bcd6c66a08 fixed more bugs with tra-cli.py
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-06-10 21:47:54 +00:00
Arthur Lu
b646e22378 fixed bugs with tra-cli.py
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-06-10 21:32:43 +00:00
Arthur Lu
51f14de0d2 fixed latest.whl to follow format for wheel files
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-06-10 20:56:13 +00:00
Arthur Lu
266caf78c3 started on tra-cli.py
modified tasks.py to work properly

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-06-10 20:23:53 +00:00
Arthur Lu
fa478314da added data-analysis requirements to devcontainer build
added auto pip install latest analysis.py whl

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-06-10 18:25:41 +00:00
Arthur Lu
8a212a21df moved core functions in tasks.py to class Tasker
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-06-10 18:19:58 +00:00
Arthur Lu
236c28c3be renamed tra.py to tasks.py
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-06-10 17:46:40 +00:00
Arthur Lu
7c2f058feb added help message to status command
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-26 01:34:47 +00:00
Arthur Lu
e84783ee44 populated tra.py to be a CLI application
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-25 22:17:08 +00:00
Arthur Lu
09b703d2a7 removed extra words
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-24 17:56:00 +00:00
Arthur Lu
098326584a removed more extra lines
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-24 17:54:48 +00:00
Arthur Lu
e5c7718f10 fixed extra hline
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-24 17:52:25 +00:00
Arthur Lu
a3ffdd89d0 fixed line breaks
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-24 17:51:57 +00:00
Arthur Lu
2fc11285ba fixed Prerequisites in README.md
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-24 17:35:02 +00:00
Arthur Lu
9dd38fcec8 added OS and python versions supported
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-24 17:30:01 +00:00
Arthur Lu
d59d069943 analysis.py v 1.2.1.004 (#27)
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-24 11:49:04 -05:00
Arthur Lu
90f747f3fc revamped README.md
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-24 16:42:58 +00:00
Arthur Lu
869d7c288b fixed naming in tra.py
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-23 22:51:58 -05:00
Arthur Lu
dc4f5ab40e another bug fix
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-23 22:49:38 -05:00
Arthur Lu
a739007222 quick bug fix to tra.py
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-23 22:48:50 -05:00
Arthur Lu
ba06b9293e added test.py to .gitignore
prepared tra.py for threading implement

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-23 19:43:59 -05:00
Arthur Lu
1d5a67c4f7 analysis.py v 1.2.1.004
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-22 00:37:39 +00:00
Arthur Lu
e4ab0487d0 Merge pull request #26 from titanscout2022/master
Merge master into master-staged
2020-05-21 19:36:56 -05:00
Arthur Lu
4f439d6094 Merge service-dev changes with master (#24)
* added config.json
removed old config files

Signed-off-by: Arthur <learthurgo@gmail.com>

* superscript.py v 0.0.6.000

Signed-off-by: Arthur <learthurgo@gmail.com>

* changed data.py

Signed-off-by: Arthur <learthurgo@gmail.com>

* changes to config.json

Signed-off-by: Arthur <learthurgo@gmail.com>

* removed cells from visualize_pit.py

Signed-off-by: Arthur <learthurgo@gmail.com>

* more changes to visualize_pit.py

Signed-off-by: Arthur <learthurgo@gmail.com>

* added analysis-master/metrics/__pycache__ to git ignore
moved pit configs in config.json to the bottom
superscript.py v 0.0.6.001

Signed-off-by: Arthur <learthurgo@gmail.com>

* removed old database key

Signed-off-by: Arthur <learthurgo@gmail.com>

* adjusted config files

Signed-off-by: Arthur <learthurgo@gmail.com>

* Delete config-pop.json

* fixed .gitignore

Signed-off-by: Arthur <learthurgo@gmail.com>

* analysis.py 1.2.1.003
added team kv pair to config.json

Signed-off-by: Arthur <learthurgo@gmail.com>

* superscript.py v 0.0.6.002

Signed-off-by: Arthur <learthurgo@gmail.com>

* finished app.py API
made minute changes to parentheses use in various packages

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* bug fixes in app.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* bug fixes in app.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* made changes to .gitignore

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* made changes to .gitignore

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* deleted a __pycache__ folder from metrics

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* more changes to .gitignore

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* additions to app.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* renamed app.py to api.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* removed extraneous files

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* renamed api.py to tra.py
removed rest api calls from tra.py

* renamed api.py to tra.py
removed rest api calls from tra.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* removed flask import from tra.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* changes to devcontainer.json

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* fixed unit tests to be correct
removed some regression tests because of potential function overflow
removed trueskill unit test because of slight chance of deviation

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-20 08:52:38 -05:00
Arthur Lu
ae64c7f10e Merge pull request #25 from titanscout2022/master-staged
fixed bug in analysis unit tests
2020-05-19 13:19:47 -05:00
Arthur Lu
d1dfe3b01b Merge branch 'master' into master-staged 2020-05-19 13:19:40 -05:00
Arthur Lu
3dd24dcd30 fixed bug in analysis unit tests
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-19 18:19:02 +00:00
Arthur Lu
2be67b2cc3 Merge pull request #23 from titanscout2022/master-staged
Merge minor .gitignore changes
2020-05-18 16:31:50 -05:00
Arthur Lu
f91159c49c added data-analysis/.pytest_cache/ to .gitignore
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 16:28:42 -05:00
Arthur Lu
df046d4806 Merge pull request #22 from titanscout2022/master
Reflect master to master-staged
2020-05-18 16:28:05 -05:00
Arthur Lu
c838c4fc15 Merge pull request #21 from titanscout2022/CI-tools
CI tools
2020-05-18 16:18:48 -05:00
Arthur Lu
cbf5d18332 i swear its working now
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 16:14:16 -05:00
Arthur Lu
641905e87a finally fixed issues
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 16:12:22 -05:00
Arthur Lu
3daa12a3da changes
superscript testing still refuses to collect any tests

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 16:07:02 -05:00
Arthur Lu
3c4fe7ab46 still not working
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 16:01:02 -05:00
Arthur Lu
4e3f6b4480 readded pytest install
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:59:34 -05:00
Arthur Lu
414ffdf96c removed flake8 import from unit tests
fixed superscript unit tests

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:58:17 -05:00
Arthur Lu
6296f78ff5 removed lint checks because it was the stupid
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:54:15 -05:00
Arthur Lu
7ae64d5dbf lint refused to exclude metrics
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:51:51 -05:00
Arthur Lu
fd2ac12dad excluded imports
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:49:52 -05:00
Arthur Lu
0f2bbd1a16 more fixes
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:44:39 -05:00
Arthur Lu
83bc7fa743 Merge branch 'CI-tools' of https://github.com/titanscout2022/red-alliance-analysis into CI-tools
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:44:20 -05:00
Arthur Lu
83eabce8cd also ignored regression.py
added temporary unit test for superscript.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:43:53 -05:00
Arthur Lu
e2e73986a2 also ignored regression.py
added temporary unit test for superscript.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:43:36 -05:00
Arthur Lu
91ae1c0df6 attempted fixes by excluding titanlearn
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:39:59 -05:00
Arthur Lu
efad5bd71c maybe its a versioning issue?
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:32:24 -05:00
Arthur Lu
3d5e0aac59 Revert "trying python3 and pip3"
This reverts commit 7937fb6ee6.
2020-05-18 15:29:51 -05:00
Arthur Lu
7937fb6ee6 trying python3 and pip3
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:27:56 -05:00
Arthur Lu
871ecb5561 attempt to fix working directory issue
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:25:19 -05:00
Arthur Lu
7d738ca51e another attempt
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:11:24 -05:00
Arthur Lu
eeee957d23 attempt to fix working directory issues
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:07:42 -05:00
Arthur Lu
f55f3cb7d1 populated analysis unit test
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 14:59:24 -05:00
Arthur Lu
dd11689c8c reverted indentation to github default
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-16 20:15:43 -05:00
Arthur Lu
1c4b1d1971 more indentation fixes
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-16 20:12:15 -05:00
Arthur Lu
94a7aae491 changed indentation to spaces
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-16 20:09:29 -05:00
Arthur Lu
26f4224caa fixed indents
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-16 20:07:44 -05:00
Arthur Lu
386b7c75ee added items to .gitignore
renamed pythonpackage.yml to ut-analysis.yml
populated ut-analysis.yml
fixed spelling
added ut-superscript.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-16 20:04:31 -05:00
Arthur Lu
27feb0bf93 moved unit-test.py outside the analysis folder
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-16 19:41:19 -05:00
Arthur Lu
233440f03d removed pythonapp because it is redundant
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-16 19:40:35 -05:00
Arthur Lu
37c247aa46 created unit-test.py
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-16 19:33:56 -05:00
Arthur Lu
eeb5e46814 Merge pull request #19 from titanscout2022/CI-package
merge
2020-05-16 19:31:25 -05:00
Arthur Lu
4739c439f0 Create pythonpackage.yml 2020-05-16 19:30:52 -05:00
Arthur Lu
2e41326373 Create pythonapp.yml 2020-05-16 19:29:14 -05:00
Arthur Lu
e8ba8e1008 Merge pull request #18 from titanscout2022/master-staged
analysis.py v 1.2.1.003
2020-05-15 16:06:02 -05:00
Arthur Lu
dd49f6724f Merge branch 'master' into master-staged 2020-05-15 16:05:52 -05:00
Arthur Lu
b376f7c0c5 Merge pull request #17 from titanscout2022/equation.py-testing
merge equation.py-testing with master
2020-05-15 16:01:41 -05:00
Arthur Lu
4213386035 Merge branch 'master' into master-staged 2020-05-15 14:54:24 -05:00
Arthur Lu
3fdae646b8 analysis.py v 1.2.1.003
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-15 14:48:26 -05:00
Arthur Lu
8f8fb6c156 analysis.py v 1.2.2.000
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-14 23:36:28 -05:00
Arthur Lu
30b39aafff Merge pull request #16 from titanscout2022/master
pull recent changes into equation.py-testing
2020-05-14 23:22:03 -05:00
ltcptgeneral
77353c87e3 Merge pull request #15 from titanscout2022/master-staged
mirrored .gitignore changes from gui-dev
2020-05-14 19:29:44 -05:00
ltcptgeneral
ca2ebe5f6d Merge branch 'master' into master-staged 2020-05-14 19:18:34 -05:00
Arthur
55c7589c7d mirrored .gitignore changes from gui-dev
Signed-off-by: Arthur <learthurgo@gmail.com>
2020-05-14 19:17:26 -05:00
ltcptgeneral
6cff61cbe4 Merge pull request #13 from titanscout2022/devksingh4-bsd-license
Switch to BSD License
2020-05-13 13:19:10 -05:00
Dev Singh
5474081523 Update LICENSE 2020-05-13 12:04:59 -05:00
Dev Singh
4c25a5ce09 Contributing guideline changes (#11)
* changes

* more changes

* more changes

* contributing.md: sync with other contributor docs

Signed-off-by: Dev Singh <dev@singhk.dev>

* Create MAINTAINERS

Signed-off-by: Dev Singh <dev@singhk.dev>

* arthur's changes

* changes

Signed-off-by: ltcptgeneral <35508619+ltcptgeneral@users.noreply.github.com>

* more changes

Signed-off-by: ltcptgeneral <35508619+ltcptgeneral@users.noreply.github.com>

* more changes

Signed-off-by: ltcptgeneral <35508619+ltcptgeneral@users.noreply.github.com>

* contributing.md: sync with other contributor docs

Signed-off-by: Dev Singh <dev@singhk.dev>
Signed-off-by: ltcptgeneral <35508619+ltcptgeneral@users.noreply.github.com>

* Create MAINTAINERS

Signed-off-by: Dev Singh <dev@singhk.dev>
Signed-off-by: ltcptgeneral <35508619+ltcptgeneral@users.noreply.github.com>

* arthur's changes

Signed-off-by: ltcptgeneral <35508619+ltcptgeneral@users.noreply.github.com>

Co-authored-by: ltcptgeneral <35508619+ltcptgeneral@users.noreply.github.com>
2020-05-13 11:56:52 -05:00
ltcptgeneral
3451bac6f5 Merge pull request #12 from titanscout2022/master-staged
analysis.py v 1.2.1.002
2020-05-13 11:44:25 -05:00
ltcptgeneral
7e37dd72bb analysis.py v 1.2.1.002
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-13 11:35:46 -05:00
ltcptgeneral
a9014c5d34 changed data analysis folder to data-analysis
added egg-info and build folders to git ignore
deleted egg-info and build folders
2020-05-12 20:54:19 -05:00
ltcptgeneral
230e98a745 9 2020-05-12 20:48:45 -05:00
ltcptgeneral
1c6ecb149b Merge branch 'equation.py-testing' of https://github.com/titanscout2022/tr2022-strategy into equation.py-testing 2020-05-12 20:46:51 -05:00
ltcptgeneral
6d544a434e readded equation.ipynb 2020-05-12 20:46:42 -05:00
ltcptgeneral
5a1aa780ff readded equation.ipynb 2020-05-12 20:43:31 -05:00
ltcptgeneral
952981cdb9 bug fixes 2020-05-12 20:39:23 -05:00
ltcptgeneral
6fee42f6d2 bug fixes 2020-05-12 20:21:11 -05:00
ltcptgeneral
24f8961500 analysis.py v 1.2.1.001 2020-05-12 20:19:58 -05:00
ltcptgeneral
db8fbbf068 visualization.py v 1.0.0.001 2020-05-05 22:37:32 -05:00
ltcptgeneral
64ae1b7026 analysis.py v 1.2.1.000 2020-05-04 14:50:36 -05:00
ltcptgeneral
4498387ac5 analysis.py v 1.2.0.006 2020-05-04 11:59:25 -05:00
ltcptgeneral
7a362476c9 fixed indent part 2 2020-05-01 23:16:32 -05:00
ltcptgeneral
b79cedae68 fixed indentation 2020-05-01 23:14:19 -05:00
ltcptgeneral
2bcd4236bb moved equation.ipynb to correct folder 2020-05-01 23:06:32 -05:00
ltcptgeneral
0cc35dc02d Merge pull request #10 from titanscout2022/master
merge file changes from master into equation.py-testing
2020-05-01 23:04:33 -05:00
ltcptgeneral
43bb9ef2bb analysis.py v 1.2.0.005 2020-05-01 22:59:54 -05:00
ltcptgeneral
3ab1d0f50a converted space indentation to tab indentation 2020-05-01 16:15:07 -05:00
ltcptgeneral
88e7c52c8b fixes 2020-05-01 16:07:57 -05:00
ltcptgeneral
b345bfb95b reconsolidated arm64 and amd64 versions 2020-05-01 15:52:27 -05:00
ltcptgeneral
aeb4990c81 analysis pkg v 1.0.0.12
analysis.py v 1.2.0.004
2020-04-30 16:03:37 -05:00
ltcptgeneral
0a721ca500 8 2020-04-30 15:22:37 -05:00
ltcptgeneral
37a4a0085e 7 2020-04-29 23:02:02 -05:00
ltcptgeneral
429d3eb42c 6 2020-04-29 22:34:43 -05:00
ltcptgeneral
60ffe7645b 5 2020-04-29 19:01:55 -05:00
ltcptgeneral
adfa6f5cc0 4 2020-04-29 17:24:59 -05:00
ltcptgeneral
f9c25dad09 3 2020-04-29 12:58:44 -05:00
ltcptgeneral
b1d5834ff1 2 2020-04-29 00:35:19 -05:00
ltcptgeneral
357d4977eb 1 2020-04-29 00:34:16 -05:00
ltcptgeneral
4545f5721a analysis.py v 1.2.0.003 2020-04-28 04:00:19 +00:00
ltcptgeneral
8d703b10b3 analysis.py v 1.2.0.002 2020-04-22 03:32:34 +00:00
ltcptgeneral
df305f30f0 analysis.py v 1.2.0.001 2020-04-21 04:08:00 +00:00
ltcptgeneral
a123b71ac9 Merge pull request #9 from titanscout2022/fix
testing release 1.2 of analysis.py
2020-04-20 00:10:24 -05:00
ltcptgeneral
a02668e59c Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2020-04-20 05:07:17 +00:00
ltcptgeneral
4d6372f620 removed deprecated files to separate repository 2020-04-20 05:07:07 +00:00
ltcptgeneral
9d0b6e68d8 Update README.md 2020-04-20 00:02:35 -05:00
ltcptgeneral
b8d51811e0 testing release 1.2 of analysis.py 2020-04-13 19:58:04 +00:00
ltcptgeneral
7a58cd08e2 analysis pkg v 1.0.0.11
analysis.py v 1.1.13.009
superscript.py v 0.0.5.002
2020-04-12 02:51:40 +00:00
ltcptgeneral
337fae68ee analysis pkg v 1.0.0.10
analysis.py v 1.1.13.008
superscript.py v 0.0.5.001
2020-04-09 22:16:26 -05:00
art
5e71d05626 removed app from dep 2020-04-05 21:42:12 +00:00
art
01df42aa49 added gitgraph to vscode container 2020-04-05 21:36:12 +00:00
ltcptgeneral
33eea153c1 Merge pull request #8 from titanscout2022/containerization-testing
Containerization testing
2020-04-05 16:32:40 -05:00
art
114eee5d57 finalized changes to docker implements 2020-04-05 21:29:16 +00:00
ltcptgeneral
06f008746a Merge pull request #7 from titanscout2022/master
merge
2020-04-05 14:57:56 -05:00
art
4f9c4e0dbb verified and tested docker files 2020-04-05 19:53:01 +00:00
art
5697e8b79e created dockerfiles 2020-04-05 19:04:07 +00:00
ltcptgeneral
e054e66743 started on dockerfile 2020-04-05 12:46:21 -05:00
ltcptgeneral
c914bd3754 removed unnecessary comment 2020-04-04 11:59:19 -05:00
ltcptgeneral
6c08885a53 created two new analysis variants
the existing amd64
new unpopulated arm64
2020-04-04 00:09:40 -05:00
ltcptgeneral
375befd0c4 analysis pkg v 1.0.0.9 2020-03-17 20:03:49 -05:00
ltcptgeneral
893d1fb1d0 analysis.py v 1.1.13.007 2020-03-16 22:05:52 -05:00
ltcptgeneral
6a426ae4cd a 2020-03-10 00:45:42 -05:00
ltcptgeneral
50c064ffa4 worked 2020-03-09 22:58:51 -05:00
ltcptgeneral
1b0a9967c8 test1 2020-03-09 22:58:11 -05:00
ltcptgeneral
2605f7c29f Merge pull request #6 from titanscout2022/testing
Testing
2020-03-09 20:42:30 -05:00
ltcptgeneral
6f5a3edd88 superscript.py v 0.0.5.000 2020-03-09 20:35:11 -05:00
ltcptgeneral
457146b0e4 working 2020-03-09 20:29:44 -05:00
ltcptgeneral
f7fd8ffcf9 working 2020-03-09 20:18:30 -05:00
art
77bc792426 removed unnecessary stuff 2020-03-09 10:29:59 -05:00
ltcptgeneral
39146cc555 Merge pull request #5 from titanscout2022/comp-edits
Comp edits
2020-03-09 10:28:48 -05:00
ltcptgeneral
04141bbec8 analysis.py v 1.1.13.006
regression.py v 1.0.0.003
analysis pkg v 1.0.0.8
2020-03-08 16:48:19 -05:00
ltcptgeneral
40e5899972 added get_team_rakings.py 2020-03-08 14:26:21 -05:00
ltcptgeneral
025c7f9b3c a 2020-03-06 21:39:46 -06:00
Dev Singh
2daa09c040 hi 2020-03-06 21:21:37 -06:00
ltcptgeneral
9776136649 superscript.py v 0.0.4.002 2020-03-06 21:09:16 -06:00
Dev Singh
68d27a6302 add reqs 2020-03-06 20:44:40 -06:00
Dev Singh
7fc18b7c35 add Procfile 2020-03-06 20:41:53 -06:00
ltcptgeneral
9b412b51a8 analysis pkg v 1.0.0.7 2020-03-06 20:32:41 -06:00
ltcptgeneral
b6ac05a66e Merge pull request #4 from titanscout2022/comp-edits
Comp edits merge
2020-03-06 20:29:50 -06:00
Dev Singh
435c8a7bc6 tiny brain fix 2020-03-06 14:52:41 -06:00
Dev Singh
a69b18354b ultimate carl the fat kid brain working 2020-03-06 14:50:54 -06:00
Dev Singh
7b9e6921d0 ultra galaxybrain working 2020-03-06 14:44:13 -06:00
Dev Singh
fb2800cf9e fix 2020-03-06 13:12:01 -06:00
Dev Singh
12cbb21077 super ultra working 2020-03-06 12:43:01 -06:00
Dev Singh
46d1a48999 even more working 2020-03-06 12:21:17 -06:00
Dev Singh
ad0a761d53 more working 2020-03-06 12:18:42 -06:00
Dev Singh
43f503a38d working 2020-03-06 12:15:35 -06:00
Dev Singh
d38744438b working 2020-03-06 11:50:07 -06:00
Dev Singh
eb8914aa26 maybe working 2020-03-06 11:27:32 -06:00
Dev Singh
283140094f a 2020-03-06 11:18:02 -06:00
Dev Singh
66ac1c304e testing part 2 better electric boogaloo 2020-03-06 11:16:24 -06:00
Dev Singh
0eb9e07711 testing 2020-03-06 11:14:10 -06:00
Dev Singh
f56c85b298 10:57 2020-03-06 10:57:39 -06:00
Dev Singh
6a9a17c5b4 10:43 2020-03-06 10:43:45 -06:00
Dev Singh
e24c49bedb 10:25 2020-03-06 10:25:20 -06:00
Dev Singh
2daed73aaa 10:21 unverified 2020-03-06 10:21:23 -06:00
art
8ebdb3b89b superscript.py v 0.0.3.000 2020-03-05 22:52:02 -06:00
art
a0e1293361 analysis.py v 1.1.13.001
analysis pkg v 1.0.0.006
2020-03-05 13:18:33 -06:00
art
b669e55283 analysis pkg v 1.0.0.005 2020-03-05 12:44:09 -06:00
art
3e38446eae analysis pkg v 1.0.0.004 2020-03-05 12:29:58 -06:00
art
dac0a4a0cd analysis.py v 1.1.13.000 2020-03-05 12:28:16 -06:00
art
897ba03078 removed unnecessary folders and files 2020-03-05 11:17:49 -06:00
ltcptgeneral
e815a2fbf7 superscript.py v 0.0.2.001 2020-03-04 23:59:50 -06:00
ltcptgeneral
941383de4b analysis.py v 1.1.12.006
analysis pkg v 1.0.0.003
2020-03-04 21:20:00 -06:00
ltcptgeneral
5771c7957e a 2020-03-04 20:15:03 -06:00
ltcptgeneral
72c233649d superscript.py v 0.0.1.004 2020-03-04 20:12:09 -06:00
ltcptgeneral
c7031361b0 analysis.py v 1.1.12.005
analysis pkg v 1.0.0.002
2020-03-04 18:55:45 -06:00
ltcptgeneral
59508574c9 analysis pkg 1.0.0.001 2020-03-04 17:54:30 -06:00
ltcptgeneral
d57d1ebc6d analysis.py v 1.1.12.004 2020-03-04 17:52:07 -06:00
ltcptgeneral
70b2ff1151 superscript.py v 0.0.1.003 2020-03-04 16:53:25 -06:00
ltcptgeneral
c3746539b3 superscript.py v 0.0.1.002 2020-03-04 15:57:20 -06:00
ltcptgeneral
405ab3ac74 a 2020-03-04 13:47:56 -06:00
ltcptgeneral
94dd51566a refactors 2020-03-04 13:42:54 -06:00
ltcptgeneral
b5718a500a a 2020-03-04 12:58:57 -06:00
ltcptgeneral
2eaa390f2f d 2020-03-04 12:37:58 -06:00
art
9c666b95be moved analysis-master out of data analysis 2020-03-03 22:38:34 -06:00
art
dfc01a13bd c 2020-03-03 21:04:47 -06:00
art
d4328e6027 changed setup.py back 2020-03-03 21:04:17 -06:00
art
f9a3150438 b 2020-03-03 21:00:26 -06:00
art
6decf183dd a 2020-03-03 20:59:52 -06:00
art
67f940eadb made license explicit in setup.py 2020-03-03 20:55:46 -06:00
art
56d0578d86 recompiled analysis.py 2020-03-03 20:48:50 -06:00
art
5e9e90507b packagefied analysis (finally) 2020-03-03 20:30:54 -06:00
art
8b4c50827c added setup.py 2020-03-03 20:24:49 -06:00
art
f8cdd73655 created __init__.py for analysis package 2020-03-03 20:17:05 -06:00
art
74dc02ca99 superscript.py v 0.0.1.001 2020-03-03 20:10:29 -06:00
art
5915827d15 superscript.py v 0.0.1.000 2020-03-03 19:39:58 -06:00
art
f9b0343aa1 moved app in dep 2020-03-03 18:48:17 -06:00
art
938caa75d1 superscript.py v 0.0.0.009
changes to config.csv
2020-03-03 18:40:35 -06:00
art
df66d28959 changes, testing 2020-03-03 18:13:03 -06:00
art
2710642f15 superscript.py v 0.0.0.008
data.py created
2020-03-03 18:02:24 -06:00
art
51b3dd91b5 removed \n s 2020-03-03 16:27:30 -06:00
art
d00cf142c0 superscript.py v 0.0.0.007 2020-03-03 16:01:07 -06:00
art
ae8706ac08 superscript.py v 0.0.0.006 2020-03-03 15:42:37 -06:00
ltcptgeneral
5305e4a30f Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2020-02-29 01:06:06 -06:00
ltcptgeneral
908a1cd368 a 2020-02-29 01:05:58 -06:00
art
19e0044e0e a 2020-02-26 08:58:27 -06:00
Dev Singh
7ad43e970f Create LICENSE 2020-02-23 13:19:40 -06:00
Dev Singh
fbb3fde754 why are we unlicense? 2020-02-23 13:18:37 -06:00
art
81c81bed94 a 2020-02-20 19:29:21 -06:00
art
f3fc4cefd0 something changed 2020-02-20 19:27:09 -06:00
art
376ea248a4 a 2020-02-20 19:22:06 -06:00
art
9824f9349d fixed jacob 2020-02-20 19:19:20 -06:00
art
eb90582db8 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2020-02-20 19:12:48 -06:00
art
bad9e497b1 a 2020-02-20 19:12:33 -06:00
jlevine18
c3b993cfce tba_match_result_request.py v0.0.1 2020-02-19 21:50:56 -06:00
art
2cb5c54d8b dep 2020-02-19 19:54:59 -06:00
art
7f705915f0 fixes 2020-02-19 19:53:23 -06:00
art
2a8a21b82a something 2020-02-19 19:52:31 -06:00
art
2e09cba94e superscript v 0.0.0.005 2020-02-19 19:51:45 -06:00
art
de9d151ad6 superscript.py v 0.0.0.004 2020-02-19 19:21:48 -06:00
art
452b55ac6f fix 2020-02-18 20:38:12 -06:00
art
fe31db07f9 analysis.py v 1.1.12.003 2020-02-18 20:32:35 -06:00
art
52d79ea25e analysis.py v 1.1.12.002, superscript.py
v 0.0.0.003
2020-02-18 20:29:22 -06:00
art
20833b29c1 superscript.py v 0.0.0.002 2020-02-18 19:54:09 -06:00
art
978a9a9a25 doc 2020-02-18 16:16:57 -06:00
art
9da4322aa9 analysis.py v 1.1.12.000 2020-02-18 15:25:23 -06:00
art
5bdd77ddc6 superscript v 0.0.0.001 2020-02-18 11:31:20 -06:00
art
2782dc006c fix 2020-01-17 10:21:15 -06:00
art
de6c582b8f analysis.py v 1.1.11.010 2020-01-17 10:18:28 -06:00
art
32bc329e91 something changed idk what 2020-01-08 15:01:33 -06:00
art
4e50a79614 analysis.py v 1.1.11.009 2020-01-07 15:55:49 -06:00
ltcptgeneral
190fbf6cac analysis.py v 1.1.11.008 2020-01-06 23:48:28 -06:00
art
a8bf4e46e9 created superscript.py 2020-01-06 14:55:36 -06:00
ltcptgeneral
478c793917 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2020-01-05 19:08:06 -06:00
ltcptgeneral
4b44c7978a whatever 2020-01-05 19:06:54 -06:00
art
0fbb958dd9 regression v 1.0.0.003 2020-01-04 10:19:31 -06:00
art
031e45ac19 analysis.py v 1.1.11.007 2020-01-04 10:13:25 -06:00
art
96bf376b70 analysis.py v 1.1.11.006 2020-01-04 10:04:20 -06:00
art
eca8d4efc1 quick fix 2020-01-04 09:57:06 -06:00
art
d5a7f52b83 spelling 2019-12-23 12:49:38 -06:00
art
ae4ecbd67c analysis.py v 1.1.11.005 2019-12-23 12:48:13 -06:00
ltcptgeneral
0ba3a56ea7 analysis.py v 1.1.11.004 2019-11-16 16:21:06 -06:00
art
1717cc17a1 analysis.py 1.1.11.003 2019-11-11 10:04:12 -06:00
ltcptgeneral
947f7422dc spelling fix 2019-11-10 13:59:59 -06:00
ltcptgeneral
cf14005b67 analysis.py v 1.1.11.002 2019-11-10 02:04:48 -06:00
ltcptgeneral
08ff6aec8e analysis.py v 1.1.11.001 2019-11-10 01:38:39 -06:00
art
234f54ef5d analysis.py v 1.1.11.000 2019-11-08 13:20:38 -06:00
art
df42ae734e analysis.py v 1.1.10.00 2019-11-08 12:41:37 -06:00
art
4979c4b414 analysis.py v 1.1.9.002 2019-11-08 12:26:42 -06:00
art
d6cc419c40 test 2019-11-08 09:50:54 -06:00
ltcptgeneral
a73ce4080c quick fix 2019-11-06 15:33:56 -06:00
ltcptgeneral
456836bdb8 analysis.py 1.1.9.001 2019-11-06 15:32:21 -06:00
ltcptgeneral
a51f1f134d analysis.py v 1.1.9.000 2019-11-06 15:26:13 -06:00
art
db9ce0c25a quick fix 2019-11-05 16:25:53 -06:00
art
92c8b9c8c3 fixed indentation 2019-11-05 13:45:35 -06:00
art
06b0acb9f8 analysis.py v 1.1.8.000 2019-11-05 13:38:49 -06:00
art
7c957d9ddc analysis.py v 1.1.7.000 2019-11-05 13:14:08 -06:00
art
efab5bfde8 analysis.py v 1.1.6.002 2019-11-05 12:56:53 -06:00
art
c886ca8e3f analysis.py v 1.1.6.001 2019-11-05 12:53:39 -06:00
art
2cf7d73c9c analysis.py v 1.1.6.000 2019-11-05 12:47:04 -06:00
art
f12cbcc847 f 2019-11-04 10:14:28 -06:00
art
df6c184b84 quick fix 2019-11-04 10:10:29 -06:00
art
1ea7306eeb __all__ fixes 2019-11-04 10:08:28 -06:00
art
bb41c26531 something changed, idk 2019-11-01 13:12:01 -05:00
art
1d4b2bd49d visualization v 1.0.0.000, titanlearn v 2.0.1.001 2019-11-01 13:08:32 -05:00
art
8dd2440f08 analysis.py v 1.1.5.001 2019-10-31 11:03:52 -05:00
art
ab9b38da95 titanlearn v 2.0.1.000 2019-10-29 14:21:53 -05:00
art
dacf12f8a4 quick fix 2019-10-29 12:27:16 -05:00
art
3894eb481c fixes 2019-10-29 12:25:18 -05:00
art
0198d6896b restructured file management part 3 2019-10-29 10:53:11 -05:00
art
6902521d6b restructured file management part 2 2019-10-29 10:50:10 -05:00
art
590e8424e7 restructured file management 2019-10-29 10:37:23 -05:00
art
bc6916ab15 quick fix 2019-10-29 10:07:56 -05:00
art
2590a40827 deprecated files, titanlearn v 2.0.0.001 2019-10-29 10:04:56 -05:00
art
68006de8c0 titanlearn.py v 2.0.0.000 2019-10-29 09:41:49 -05:00
art
9f0d366408 deprecated 2019 superscripts and company 2019-10-29 09:23:00 -05:00
art
2bdb15a2b3 analysis.py v 1.1.5.001 2019-10-25 09:50:02 -05:00
art
56b575a753 analysis.py v 1.1.5.001 2019-10-25 09:19:18 -05:00
ltcptgeneral
ff2f0787ae analysis.py v 1.1.5.000 2019-10-09 23:58:08 -05:00
jlevine18
7c121d48fc fix PolyRegKernel 2019-10-09 22:23:56 -05:00
art
8eac3d5af1 ok fixed half of it 2019-10-08 13:49:19 -05:00
art
f47be637a0 jacob fix poly regression! 2019-10-08 13:35:32 -05:00
art
c824087335 removed extra import 2019-10-08 12:58:04 -05:00
art
a92dacc7ff added import math 2019-10-08 09:30:07 -05:00
art
37c3430433 removed regression import 2019-10-07 12:58:57 -05:00
ltcptgeneral
3bcf832db0 fix 2019-10-06 19:12:58 -05:00
art
591ddbde9d refactor 2019-10-05 16:53:03 -05:00
art
eaa0bcd5d8 quick fixes 2019-10-05 16:51:11 -05:00
art
45abb9e24d analysis.py v 1.1.4.000 2019-10-05 16:18:49 -05:00
art
a853e9b02b quick change 2019-10-04 10:37:29 -05:00
art
af20fb0fa7 comments 2019-10-04 10:36:44 -05:00
art
3a17ac5154 analysis.py v 1.1.3.002 2019-10-04 10:34:31 -05:00
art
1cdeab4b6b quick fix 2019-10-04 09:28:25 -05:00
art
b2ce781961 quick refactor of glicko2() 2019-10-04 09:12:12 -05:00
art
400b5bb81e upload trueskill for testing purposes 2019-10-04 09:02:46 -05:00
art
fd7ab3a598 analysis.py v 1.1.3.001 2019-10-04 08:13:28 -05:00
ltcptgeneral
9175c2921a analysis.py v 1.1.3.000 2019-10-04 00:26:21 -05:00
ltcptgeneral
1d3de02763 Merge pull request #3 from titanscout2022/elo
Elo
2019-10-03 11:22:57 -05:00
art
b6299ce397 analysis.py v 1.1.2.003 2019-10-03 10:48:56 -05:00
art
8801a300c4 analysis.py v 1.1.2.002 2019-10-03 10:42:05 -05:00
art
acdcb42e6d quick tests 2019-10-02 20:57:09 -05:00
art
484adfcda8 stuff 2019-10-02 20:56:06 -05:00
art
4d01067a57 analysis.py v 1.1.2.001 2019-10-01 08:59:04 -05:00
ltcptgeneral
0991757ddb reduced random blank lines 2019-09-30 16:09:31 -05:00
ltcptgeneral
de0cb1a4e3 analysis.py v 1.1.2.000, quick fixes 2019-09-30 16:02:32 -05:00
ltcptgeneral
bca13420b2 fixes 2019-09-30 15:49:15 -05:00
ltcptgeneral
236ca3bcfd quick fix 2019-09-30 13:41:15 -05:00
ltcptgeneral
b2aa6357d8 analysis.py v 1.1.1.001 2019-09-30 13:37:19 -05:00
ltcptgeneral
941dd4838a analysis.py v 1.1.1.000 2019-09-30 10:11:53 -05:00
ltcptgeneral
91d727b6ad jacob forgot self.scal_mult 2019-09-27 10:13:17 -05:00
ltcptgeneral
2c00f5b26e Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-09-27 09:49:40 -05:00
jlevine18
4f981df7bb Add files via upload 2019-09-27 09:48:05 -05:00
ltcptgeneral
c24e51e2b6 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-09-27 09:41:07 -05:00
ltcptgeneral
f565744867 added testing files to gitignore 2019-09-27 09:40:50 -05:00
ltcptgeneral
d3ee8621f0 spelling fix 2019-09-26 19:22:44 -05:00
jlevine18
e38c12f765 cudaregress v 1.0.0.002 2019-09-26 13:35:37 -05:00
jlevine18
d71b45a8e9 wait arthur moved this 2019-09-26 13:34:42 -05:00
jlevine18
6f9527c726 cudaregress 1.0.0.002 2019-09-26 13:31:22 -05:00
ltcptgeneral
9a99b8de2a quick fix 2019-09-25 14:14:17 -05:00
ltcptgeneral
c32b0150bd analysis.py v 1.1.0.007 2019-09-25 14:11:20 -05:00
ltcptgeneral
86327e97f9 moved and renamed cudaregress.py to regression.py 2019-09-23 09:58:08 -05:00
jlevine18
4fd18ec7fe global vars to bugfix 2019-09-23 09:28:35 -05:00
jlevine18
dc6f896071 Set device bc I apparently forgot to do that 2019-09-23 00:01:31 -05:00
jlevine18
c5d087dada don't need the testing notebook up here anymore 2019-09-22 23:23:29 -05:00
jlevine18
bda2db7003 Add files via upload 2019-09-22 23:22:21 -05:00
jlevine18
53d4a0ecde added cudaregress.py package 2019-09-22 23:19:46 -05:00
ltcptgeneral
db19127d28 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-09-22 23:10:24 -05:00
jlevine18
3ec7e5fed5 added cuda to cudaregress notebook 2019-09-22 23:05:49 -05:00
ltcptgeneral
8bd07cbd32 quick fix 2019-09-22 21:54:28 -05:00
jlevine18
f5b9a678fc fix cuda regress testing notebook 2019-09-22 21:38:12 -05:00
jlevine18
1c8f8fdfe7 added cudaRegress testing notebook 2019-09-21 13:35:51 -05:00
ltcptgeneral
f63c473166 analysis.py v 1.1.0.006 2019-09-17 12:21:44 -05:00
ltcptgeneral
936354a1a2 analysis.py v 1.1.0.005 2019-09-17 08:46:47 -05:00
ltcptgeneral
43d059b477 analysis.py v 1.1.0.004 2019-09-16 11:11:27 -05:00
ltcptgeneral
173f9b3460 benchmarked 2019-09-13 15:09:33 -05:00
ltcptgeneral
eb51d876a5 analysis.py v 1.1.0.003 2019-09-13 14:38:24 -05:00
ltcptgeneral
bee1edbf25 quick fixes 2019-09-13 14:29:22 -05:00
ltcptgeneral
13c17b092a analysis.py v 1.1.0.002 2019-09-13 13:59:13 -05:00
ltcptgeneral
800601121e moved files to subfolder dep 2019-09-13 13:50:12 -05:00
ltcptgeneral
79e77af304 analysis.py v 1.1.0.001 2019-09-13 12:33:02 -05:00
ltcptgeneral
4d6273fa05 analysis.py v 1.1.0.000 2019-09-13 11:14:13 -05:00
ltcptgeneral
c9567f0d7c Rename analysis-better.py to analysis.py 2019-09-12 11:05:33 -05:00
ltcptgeneral
37d3c2b1d2 Rename analysis.py to analysis-dep.py 2019-09-12 11:04:54 -05:00
ltcptgeneral
b689dada3d analysis-better.py v 1.0.9.000
changelog:
    - refactored
    - numpyed everything
    - removed stats in favor of numpy functions
2019-04-09 09:43:42 -05:00
ltcptgeneral
e914d32b37 Create analysis-better.py 2019-04-09 09:30:37 -05:00
ltcptgeneral
5dc3fa344c Delete temp.txt 2019-04-08 09:38:27 -05:00
ltcptgeneral
c7859bf681 Update .gitignore 2019-04-08 09:34:49 -05:00
ltcptgeneral
620b6de028 quick fixes 2019-04-08 09:26:32 -05:00
ltcptgeneral
c1635f79fe Merge branch 'c' 2019-04-08 09:17:26 -05:00
ltcptgeneral
a9d3ef2b51 Create analysis.cp37-win_amd64.pyd 2019-04-08 09:17:16 -05:00
ltcptgeneral
aa107249fd cython working 2019-04-08 09:16:26 -05:00
ltcptgeneral
0c47283dd5 analysis in c working 2019-04-05 21:01:17 -05:00
ltcptgeneral
f49bb58215 started c-ifying analysis 2019-04-05 17:24:24 -05:00
ltcptgeneral
b91ad29ae4 Delete uuh.png 2019-04-03 14:43:59 -05:00
ltcptgeneral
8a869e037b fixed superscript 2019-04-03 14:39:22 -05:00
ltcptgeneral
20f082b760 beautified 2019-04-03 13:34:31 -05:00
ltcptgeneral
ef81273d4a Delete keytemp.json 2019-04-02 14:07:24 -05:00
ltcptgeneral
3761274ee3 Update .gitignore 2019-04-02 13:43:08 -05:00
ltcptgeneral
506c779d82 Merge branch 'multithread' 2019-04-02 13:40:02 -05:00
ltcptgeneral
892b57a1eb whatever 2019-04-01 13:22:37 -05:00
jlevine18
94cc4adbf9 teams for wisconsin regional 2019-03-28 07:54:08 -05:00
ltcptgeneral
2e189bcfa2 teams added 2019-03-27 23:40:05 -05:00
ltcptgeneral
a21d0b5ec6 Update tbarequest.cpython-37.pyc 2019-03-22 19:40:17 -05:00
ltcptgeneral
ebb5f3b09e Update scores.csv 2019-03-22 19:11:11 -05:00
ltcptgeneral
5c4bed42d6 Merge branch 'master' of https://github.com/ltcptgeneral/tr2022-strategy 2019-03-22 15:22:05 -05:00
ltcptgeneral
c15d037109 something changed 2019-03-22 15:21:58 -05:00
jlevine18
56f704c464 Update tbarequest.py 2019-03-22 15:09:52 -05:00
jlevine18
14a0414265 add req_team_info 2019-03-22 14:54:55 -05:00
Jacob Levine
6dbdfe00fc fixed textArea bug 2019-03-22 12:39:16 -05:00
ltcptgeneral
00c9df4239 Merge branch 'master' of https://github.com/ltcptgeneral/tr2022-strategy 2019-03-22 11:54:43 -05:00
ltcptgeneral
9c725887c5 created nishant only script 2019-03-22 11:54:40 -05:00
Jacob Levine
9562cc594f fixed another bug 2019-03-22 11:53:15 -05:00
Jacob Levine
56f6752ff7 fixed textArea bug 2019-03-22 11:50:03 -05:00
Jacob Levine
31c8c9ee86 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-03-22 11:28:45 -05:00
Jacob Levine
0f671daf30 added fields that Arthur needed 2019-03-22 11:28:22 -05:00
Archan Das
e1027a9562 Update teams.csv 2019-03-22 11:12:51 -05:00
Jacob Levine
628dac5835 web3\ 2019-03-22 09:18:59 -05:00
Jacob Levine
fed48bc999 web3\ 2019-03-22 09:16:17 -05:00
Jacob Levine
7be5e15e9a web3 2019-03-22 09:14:41 -05:00
Jacob Levine
365b9e1882 web4 2019-03-22 09:13:44 -05:00
Jacob Levine
e5bb5b6ef7 web3 2019-03-22 09:12:29 -05:00
Jacob Levine
f9e4a6c53d web3 2019-03-22 08:51:42 -05:00
Jacob Levine
e0099aab60 web2 2019-03-22 08:50:37 -05:00
Jacob Levine
925087886c web 2019-03-22 08:49:50 -05:00
Jacob Levine
35f8cd693e archan needs to import! 2019-03-22 08:46:04 -05:00
Jacob Levine
a795a89c2d final fixes 2019-03-22 08:44:42 -05:00
Jacob Levine
92602b3122 change letter 2019-03-22 08:39:01 -05:00
Jacob Levine
aa86a2af7b final fixes 2019-03-22 08:36:33 -05:00
Jacob Levine
6f1cf1828a update archan's script 2019-03-22 08:27:53 -05:00
Jacob Levine
bb5c38fbfe don't sort matches alphabetically, sort them numerically 2019-03-22 07:48:44 -05:00
Jacob Levine
169c1737b2 testing mistakes 2019-03-22 07:37:05 -05:00
Jacob Levine
8244efa09b ok seriously what is going on? 2019-03-22 07:34:00 -05:00
Jacob Levine
4c1abeb200 testing mistakes 2019-03-22 07:32:05 -05:00
Jacob Levine
b41683eaa9 testing mistakes 2019-03-22 07:29:47 -05:00
Jacob Levine
1f29718795 testing mistakes 2019-03-22 07:28:55 -05:00
Jacob Levine
5716a7957e ok 2019-03-22 07:28:11 -05:00
Jacob Levine
7c21c277dd testing mistakes 2019-03-22 07:26:08 -05:00
Jacob Levine
91ddbb5531 wtf 2019-03-22 07:24:27 -05:00
Jacob Levine
21be310e1f testing mistakes 2019-03-22 07:24:15 -05:00
Jacob Levine
0a687648e0 Revert "ok seriously what is going on?"
This reverts commit 8de7078240.
2019-03-22 07:17:26 -05:00
Jacob Levine
80c6b9ba67 Revert "testing mistakes"
This reverts commit 1f20ad7f37.
2019-03-22 07:16:36 -05:00
Jacob Levine
1f20ad7f37 testing mistakes 2019-03-22 07:15:44 -05:00
Jacob Levine
8de7078240 ok seriously what is going on? 2019-03-22 07:05:45 -05:00
Jacob Levine
b88a7f7aa8 wtf 2019-03-22 07:03:31 -05:00
Jacob Levine
313d627fa8 testing mistakes 2019-03-22 07:01:19 -05:00
Jacob Levine
cbe1d9a015 wtf 2019-03-22 06:57:27 -05:00
Jacob Levine
a4288e2a0d move so it doesn't crash 2019-03-22 06:53:28 -05:00
Jacob Levine
4f631a4b79 fix add script for text areas 2019-03-22 06:48:45 -05:00
Jacob Levine
e992483f35 dont be stupid 2019-03-22 01:06:05 -05:00
Jacob Levine
4671eacb6e chrome is still horrible 2019-03-22 01:04:18 -05:00
Jacob Levine
c76a5ddb5e case sensitive 2019-03-22 00:50:20 -05:00
Jacob Levine
1989ec5ad4 chrome is still horrible 2019-03-22 00:49:19 -05:00
Jacob Levine
0124d9db97 chrome is horrible 2019-03-22 00:46:19 -05:00
Jacob Levine
02ce675a0b dont be stupid 2019-03-22 00:40:27 -05:00
Jacob Levine
64da97bfdc dont be stupid 2019-03-22 00:38:37 -05:00
Jacob Levine
23d6eebff1 dont be stupid 2019-03-22 00:36:12 -05:00
Jacob Levine
8dcb59a15c bugfix 23 2019-03-22 00:33:50 -05:00
Jacob Levine
ebe25312b5 af 2019-03-22 00:31:31 -05:00
Jacob Levine
28b5c6868e bugfix 22 2019-03-22 00:30:36 -05:00
Jacob Levine
0c09631813 st 2019-03-22 00:29:34 -05:00
Jacob Levine
23d821b773 bugfix 21 2019-03-22 00:27:44 -05:00
Jacob Levine
cc958e0927 bugfix 20 2019-03-22 00:24:23 -05:00
Jacob Levine
7e23641591 bugfix 19 2019-03-22 00:21:46 -05:00
Jacob Levine
dc8bc17324 bugfix 18 2019-03-22 00:20:01 -05:00
Jacob Levine
23b16d2e92 bugfix 16,17 2019-03-22 00:16:32 -05:00
Jacob Levine
5200dbc4d7 ian stopped naming his questions 2019-03-22 00:09:24 -05:00
Jacob Levine
3fd42c46c9 bugfix 15 2019-03-22 00:08:00 -05:00
Jacob Levine
19015d79e6 bugfix 14 2019-03-22 00:05:35 -05:00
Jacob Levine
3eef220768 ESCAPE STRINGS 2019-03-22 00:03:25 -05:00
Jacob Levine
2f088898f8 minor fixes 2019-03-22 00:01:01 -05:00
Jacob Levine
f091dd9113 bugfix 10 2019-03-21 23:58:46 -05:00
Jacob Levine
2a32386a9e dont be stupid 2019-03-21 23:53:16 -05:00
Jacob Levine
45055b1505 minor fixes 2019-03-21 23:45:25 -05:00
Jacob Levine
afcda88760 dont be stupid 2019-03-21 23:40:46 -05:00
Jacob Levine
3ad0dcd851 fix fix bugfix 6 2019-03-21 23:36:29 -05:00
Jacob Levine
ac32743210 fix bugfix 6 2019-03-21 23:35:14 -05:00
Jacob Levine
978342c480 bugfix 6 2019-03-21 23:33:11 -05:00
Jacob Levine
8e6a927032 bugfix 5 2019-03-21 23:31:50 -05:00
Jacob Levine
0bf52e1c29 bugfix 4 2019-03-21 23:30:38 -05:00
Jacob Levine
06242f0b2a remove random 'm' 2019-03-21 23:29:01 -05:00
Jacob Levine
4398de71ba website for peoria 2019-03-21 23:28:18 -05:00
Jacob Levine
15f504ecc3 typo! 2019-03-21 23:26:45 -05:00
Jacob Levine
06be451456 website for peoria 2019-03-21 23:25:24 -05:00
Jacob Levine
e498f4275e readded css 2019-03-21 23:17:24 -05:00
Jacob Levine
1633ef7862 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-03-21 23:13:12 -05:00
Jacob Levine
2cff74aa54 website for peoria 2019-03-21 23:12:49 -05:00
Archan Das
040b4dc52a Add files via upload 2019-03-21 22:52:08 -05:00
Archan Das
35d8e5ff77 Add files via upload 2019-03-21 22:50:27 -05:00
ltcptgeneral
d3b39d8167 Delete test.py 2019-03-21 22:17:33 -05:00
ltcptgeneral
c7b3d7e9a3 superscript v 1.0.6.001
changelog:
- fixed multiple bugs
- works now
2019-03-21 18:02:51 -05:00
ltcptgeneral
10f8839bbd WORKING 2019-03-21 17:52:59 -05:00
ltcptgeneral
1eb568c807 Revert "beautified"
This reverts commit 0d8780b3c1.
2019-03-21 17:50:52 -05:00
ltcptgeneral
12cf4a55d7 Revert "yeeted"
This reverts commit 1f2edeba51.
2019-03-21 17:50:46 -05:00
ltcptgeneral
e81f6052e3 Revert "stuff"
This reverts commit 268b01fc93.
2019-03-21 17:50:37 -05:00
ltcptgeneral
bbebc4350c Revert "no"
This reverts commit ac7c169a27.
2019-03-21 17:50:32 -05:00
ltcptgeneral
ac7c169a27 no 2019-03-21 17:43:36 -05:00
ltcptgeneral
268b01fc93 stuff 2019-03-21 17:34:27 -05:00
ltcptgeneral
9c7647aba9 Merge branch 'master' of https://github.com/ltcptgeneral/tr2022-strategy 2019-03-21 17:28:23 -05:00
ltcptgeneral
1f2edeba51 yeeted 2019-03-21 17:28:16 -05:00
jlevine18
b3781ada45 Delete Untitled.ipynb 2019-03-21 17:28:04 -05:00
Jacob Levine
0d8780b3c1 beautified 2019-03-21 17:27:31 -05:00
ltcptgeneral
64a89cc58f WORKING!!!! 2019-03-21 17:25:16 -05:00
ltcptgeneral
f092bd3cb1 Update superscript.py 2019-03-21 17:00:38 -05:00
ltcptgeneral
c4309f5679 Update superscript.py 2019-03-21 16:59:29 -05:00
ltcptgeneral
e19bb8dcc1 1 2019-03-21 16:58:37 -05:00
ltcptgeneral
c9436f15f8 Merge branch 'master' of https://github.com/ltcptgeneral/tr2022-strategy 2019-03-21 16:57:02 -05:00
jlevine18
8c867dcf95 Update superscript.py 2019-03-21 16:55:04 -05:00
ltcptgeneral
d3e98391d4 Create superscript.py 2019-03-21 16:52:37 -05:00
ltcptgeneral
ef336eb454 a 2019-03-21 16:52:22 -05:00
ltcptgeneral
12f5536026 wtf2 2019-03-21 16:50:32 -05:00
Jacob Levine
6a0d8f4144 fixed null removal script 2019-03-21 16:48:02 -05:00
ltcptgeneral
7f80339fb4 working 2019-03-21 16:17:45 -05:00
ltcptgeneral
9ea074c99c WTF 2019-03-21 15:59:47 -05:00
ltcptgeneral
4188b4b1c3 Merge branch 'master' of https://github.com/ltcptgeneral/tr2022-strategy 2019-03-21 15:25:46 -05:00
ltcptgeneral
41ea4e9ed8 test 2019-03-21 15:25:36 -05:00
jlevine18
9fe9084341 add opr request 2019-03-21 15:23:24 -05:00
ltcptgeneral
9f894428c1 Update test.py 2019-03-21 15:14:24 -05:00
ltcptgeneral
4227106b4f Update superscript.py 2019-03-21 15:07:24 -05:00
ltcptgeneral
82754ede58 wtf 2019-03-21 15:06:54 -05:00
ltcptgeneral
a1d0cd37b7 test 2019-03-21 14:38:53 -05:00
ltcptgeneral
5e13ca3b5e Update test.py 2019-03-20 22:15:31 -05:00
ltcptgeneral
7c96233f5b Update test.py 2019-03-20 21:36:49 -05:00
ltcptgeneral
3ecf08cf9b too much iteration 2019-03-20 20:18:55 -05:00
ltcptgeneral
04c561baea Update issue templates 2019-03-20 18:14:59 -05:00
ltcptgeneral
5eaf733651 Update superscript.py 2019-03-20 18:14:32 -05:00
ltcptgeneral
d0435a5528 Create LICENSE 2019-03-20 17:41:10 -05:00
ltcptgeneral
55cc572d5c Create CONTRIBUTING.md 2019-03-20 17:37:38 -05:00
ltcptgeneral
8577d4dafa Update README.md 2019-03-20 17:33:47 -05:00
ltcptgeneral
08aec2537e fix 0 2019-03-20 17:23:41 -05:00
ltcptgeneral
975db73aae key fix? 2019-03-20 16:53:53 -05:00
ltcptgeneral
6cb09240ab Update superscript.py 2019-03-20 16:38:42 -05:00
ltcptgeneral
c74b0f34a6 superscript.py - v 1.0.6.000
changelog:
- added pulldata function
- service now pulls in, computes data, and outputs data as planned
2019-03-20 16:16:48 -05:00
ltcptgeneral
3e47a232cc 1234567890 2019-03-20 14:10:47 -05:00
Jacob Levine
2e356405e1 bugfix 16 2019-03-18 21:06:13 -05:00
Jacob Levine
f59d94282d bugfix 15 2019-03-18 21:02:23 -05:00
Jacob Levine
b0ad3bdf9c bugfix 14 2019-03-18 20:47:16 -05:00
Jacob Levine
733c7cbfe7 bugfix 13 2019-03-18 19:20:27 -05:00
Jacob Levine
a95684213c bugfix 12 2019-03-18 19:16:17 -05:00
Jacob Levine
76b4107999 bugfix 11 2019-03-18 19:15:17 -05:00
Jacob Levine
b2bb2df3f0 bugfix 10, now with template literals 2019-03-18 19:10:18 -05:00
Jacob Levine
3ec4de4fb1 bugfix 9 2019-03-18 18:57:43 -05:00
Jacob Levine
bf1572765c bugfix 8 2019-03-18 18:53:41 -05:00
Jacob Levine
0717ed4979 bugfix 7 2019-03-18 18:41:20 -05:00
Jacob Levine
ab421e4170 bugfix 4 2019-03-18 18:38:45 -05:00
Jacob Levine
c5f6ecae68 bugfix 5 2019-03-18 18:35:59 -05:00
Jacob Levine
7dbffc940a bugfix 4 2019-03-18 18:28:47 -05:00
Jacob Levine
8f4e6e3510 bugfix 3 2019-03-18 18:27:46 -05:00
Jacob Levine
86325e7d2b bugfixes 2 2019-03-18 18:06:11 -05:00
Jacob Levine
cf6c6180d3 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-03-18 17:55:56 -05:00
Jacob Levine
da315ac908 bugfix 1 2019-03-18 17:54:34 -05:00
jlevine18
3b95963eb1 Merge pull request #1 from titanscout2022/signUps
Sign ups demo
2019-03-18 17:21:40 -05:00
Jacob Levine
1fdd80e31b multiform demo mk 1 2019-03-18 17:13:45 -05:00
Jacob Levine
926db38db9 continue with multi-form 2019-03-17 23:27:46 -05:00
Jacob Levine
f483cbbcfb Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-03-16 15:49:49 -05:00
Jacob Levine
0d111296af changed to signups. not complete yet 2019-03-16 15:47:56 -05:00
ltcptgeneral
9fb53f4297 Update titanlearn.py 2019-03-16 13:12:59 -05:00
ltcptgeneral
69ef08bfd4 1234567890 2019-03-10 11:42:43 -05:00
ltcptgeneral
0159f116c1 12345678 2019-03-09 16:27:36 -06:00
Jacob Levine
da6f2ce044 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-03-09 14:08:38 -06:00
Jacob Levine
053001186e added frc elo notebook 2019-03-09 14:05:47 -06:00
jlevine18
177e8ad783 Delete pullmatches.py 2019-03-08 22:19:11 -06:00
Jacob Levine
047f682030 added scoreboard 2019-03-08 22:05:35 -06:00
Jacob Levine
041db246b1 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-03-08 21:56:15 -06:00
Jacob Levine
54888a3988 added day 1 processing 2019-03-08 21:55:52 -06:00
ltcptgeneral
c726551ec7 Update superscript.py 2019-03-08 19:00:02 -06:00
ltcptgeneral
a36ba0413a superscript v 1.0.5.003
changelog:
- hotfix: actually pushes data correctly now
2019-03-08 17:43:38 -06:00
Jacob Levine
79d0bda1ef fix defaults 2019-03-08 12:54:41 -06:00
Jacob Levine
a7def3c367 reworked questions to comply with Ian's app 2019-03-08 12:48:10 -06:00
Jacob Levine
1ee9867ea6 fix typo 2019-03-08 10:54:14 -06:00
Jacob Levine
44f209f331 added strat options 2019-03-08 10:47:49 -06:00
Jacob Levine
274017806f sets timeout for reload 2019-03-07 23:37:54 -06:00
Jacob Levine
90adb6539a final fix for the night! 2019-03-07 23:33:58 -06:00
Jacob Levine
be4ec9ea51 bugfix 2019-03-07 23:30:33 -06:00
Jacob Levine
b89fab51c3 fix typo 2019-03-07 23:29:16 -06:00
Jacob Levine
6247c7997f added full functionality to scout 2019-03-07 23:26:30 -06:00
Jacob Levine
9baa4450b0 stylinh 2019-03-07 21:25:32 -06:00
Jacob Levine
2a449eba1a one of these times im going to actually catch it 2019-03-07 21:22:04 -06:00
Jacob Levine
dfd5366112 fix typo 2019-03-07 21:21:12 -06:00
Jacob Levine
dc180862df fix typo 2019-03-07 21:20:07 -06:00
Jacob Levine
9d9dcbbb71 fix typo 2019-03-07 21:18:13 -06:00
Jacob Levine
ed151f1707 sections 2019-03-07 21:16:54 -06:00
Jacob Levine
302f6b794d bugfix 2019-03-07 20:55:49 -06:00
Jacob Levine
1925943660 start scout 2019-03-07 20:54:55 -06:00
Jacob Levine
0e358a9a14 final fixes (hopefully this time) 2019-03-07 20:21:05 -06:00
Jacob Levine
2c9e553b57 fix typo 2019-03-07 20:19:58 -06:00
Jacob Levine
ee4ee316dd final page fix 2019-03-07 20:18:54 -06:00
Jacob Levine
12e39ecc84 fix typo 2019-03-07 20:17:10 -06:00
Jacob Levine
eb20ad907e fix mistake 2019-03-07 20:16:14 -06:00
Jacob Levine
61b286c258 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-03-07 20:14:05 -06:00
Jacob Levine
77231d00cc now you can leave teams 2019-03-07 20:13:32 -06:00
jlevine18
4322396088 arthur don't be stupid 2019-03-07 20:03:20 -06:00
Jacob Levine
c5dc49f442 final profile fix 2019-03-07 19:57:20 -06:00
Jacob Levine
0684f982b7 fix structure 2019-03-07 19:55:30 -06:00
Jacob Levine
b5d8851c44 fix data structure 2019-03-07 19:48:50 -06:00
Jacob Levine
b0782ed74e test bugfix 2019-03-07 19:47:35 -06:00
Jacob Levine
3e76c55801 testing... 2019-03-07 19:46:01 -06:00
Jacob Levine
834068244e test bugfix 2019-03-07 19:43:50 -06:00
Jacob Levine
d833d0a183 fix typo 2019-03-07 19:38:05 -06:00
Jacob Levine
1f50c6dd16 test bugfix 2019-03-07 19:37:06 -06:00
Jacob Levine
9ca336934a Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-03-07 19:32:14 -06:00
Jacob Levine
251390fddf fixed teamlogic 2019-03-07 19:31:34 -06:00
ltcptgeneral
aaa548fb65 hotfix 2000 2019-03-07 09:14:20 -06:00
ltcptgeneral
7710da503b 12 2019-03-06 20:05:50 -06:00
ltcptgeneral
18969b4179 Update superscript.py 2019-03-05 13:36:47 -06:00
ltcptgeneral
ecb6400b06 lotta bug fixes 2019-03-04 16:38:40 -06:00
ltcptgeneral
67393e0e09 1 2019-03-03 22:50:29 -06:00
ltcptgeneral
442d9a9682 Update analysis.py 2019-03-02 20:18:51 -06:00
ltcptgeneral
7434263165 titanscouting app v 1.0.0.003
simple bug fix
2019-03-02 19:58:00 -06:00
ltcptgeneral
d20d0e4e7a titanscouting app v 1.0.0.002 2019-03-02 19:47:31 -06:00
ltcptgeneral
836abc427a ryiop 2019-03-02 16:34:48 -06:00
ltcptgeneral
8cc6b2774e Create README.md 2019-03-02 16:34:12 -06:00
jlevine18
e98e66bdf0 tl.py 2019-03-02 08:18:28 -06:00
ltcptgeneral
791c4e82a5 Merge branch 'master' of https://github.com/ltcptgeneral/tr2022-strategy 2019-03-01 13:49:36 -06:00
ltcptgeneral
110da31d50 Update titanlearn.py 2019-03-01 13:49:33 -06:00
jlevine18
0e9a706904 Update titanlearn.py 2019-03-01 12:25:41 -06:00
ltcptgeneral
28b5f9d6a2 dumb 2019-03-01 12:18:38 -06:00
ltcptgeneral
00af69a3f5 Update superscript.py 2019-02-28 13:39:35 -06:00
ltcptgeneral
e61403174d sfasf 2019-02-28 13:28:29 -06:00
ltcptgeneral
632a2472a2 bassbsabjasb 2019-02-28 13:13:52 -06:00
ltcptgeneral
d62a07a69e Update superscript.py 2019-02-28 09:04:37 -06:00
ltcptgeneral
85d4a29cf2 Update superscript.py 2019-02-27 14:01:25 -06:00
ltcptgeneral
6678e49cbf superscript.py - v 1.0.5.002
changelog:
- more information given
- performance improvements
2019-02-27 14:00:29 -06:00
ltcptgeneral
839c5d2943 superscript.py - v 1.0.5.001
changelog:
- grammar
2019-02-27 13:43:33 -06:00
ltcptgeneral
79b4cf1158 superscript.py - v 1.0.5.000
changelog:
- service now iterates forever
- ready for production other than pulling json data
2019-02-27 13:38:24 -06:00
ltcptgeneral
9b9d6bcd23 superscript.py - v 1.0.4.001
changelog:
- grammar fixes
2019-02-26 23:18:26 -06:00
ltcptgeneral
2b1dd3ed9b superscript.py - v 1.0.4.000
changelog:
- actually pushes to firebase
2019-02-26 19:39:56 -06:00
ltcptgeneral
7afe68e315 Update .gitignore 2019-02-26 19:10:53 -06:00
ltcptgeneral
0f58ce0fd7 security patch 2019-02-22 12:23:49 -06:00
ltcptgeneral
badcb373ae Update bdata.csv 2019-02-21 12:33:13 -06:00
ltcptgeneral
e5cf8a43d4 superscript.py - v 1.
changelog:
- processes data more efficiently
2019-02-20 22:59:17 -06:00
ltcptgeneral
aba4b44da4 superscript.py - v 1.0.3.000
changelog:
- actually processes data
2019-02-20 11:44:11 -06:00
ltcptgeneral
c4fa9c5f23 qwertyuiop 2019-02-19 13:21:06 -06:00
ltcptgeneral
22688de9e8 Merge branch 'master' of https://github.com/ltcptgeneral/tr2022-strategy 2019-02-19 09:44:55 -06:00
ltcptgeneral
042efb2b5a superscript.py - v 1.0.2.000
changelog:
- added data reading from folder
- nearly crashed computer reading from 20 GiB of data
2019-02-19 09:44:51 -06:00
Jacob Levine
060a77f4b7 fix more typos 2019-02-12 21:00:43 -06:00
Jacob Levine
ffd64eb3d2 fix typos 2019-02-12 21:00:00 -06:00
Jacob Levine
4822be0ece fix typos 2019-02-12 20:55:56 -06:00
Jacob Levine
d3b71287c4 squash bugh 2019-02-12 20:52:03 -06:00
Jacob Levine
67ac98b9ab fix more typos 2019-02-12 20:49:23 -06:00
Jacob Levine
9e0c6e36ee can i set the world record for most typos 2019-02-12 20:48:35 -06:00
Jacob Levine
d0d431fb54 fix even more typos 2019-02-12 20:46:23 -06:00
Jacob Levine
718ca83a1d fix more typos 2019-02-12 20:44:21 -06:00
Jacob Levine
e0c159de00 fix typos 2019-02-12 20:42:49 -06:00
Jacob Levine
6652918ae8 I apparently don't know how to js 2019-02-12 20:41:43 -06:00
Jacob Levine
4f3ecf4361 fix more typos 2019-02-12 20:37:50 -06:00
Jacob Levine
dd5da3b1e8 fix typos 2019-02-12 20:34:05 -06:00
Jacob Levine
45a4387c68 started teams page 2019-02-12 20:20:30 -06:00
Jacob Levine
c6b2840e07 last style fixed before i do something else, for real this time 2019-02-09 15:53:39 -06:00
Jacob Levine
6362f50fd3 last style fixed before i do something else, for real this time 2019-02-09 15:50:34 -06:00
Jacob Levine
d5622c8672 last style fixed before i do something eks 2019-02-09 15:49:21 -06:00
Jacob Levine
3abc50cf7a js dom terms aren't very consistent 2019-02-09 15:44:46 -06:00
Jacob Levine
0f68468f14 fix style inconsistencies 2019-02-09 15:42:16 -06:00
Jacob Levine
6d45200ca3 other style 2019-02-09 15:36:59 -06:00
Jacob Levine
80aee80548 other style 2019-02-09 15:30:27 -06:00
Jacob Levine
3d27f3c127 margins aren't for tables 2019-02-09 15:29:21 -06:00
Jacob Levine
9fd7966c55 other style updates 2019-02-09 15:27:17 -06:00
Jacob Levine
4529ee32e2 no but this ugly html hack should 2019-02-09 15:25:25 -06:00
Jacob Levine
3a5629f0ba does making everything auto fix it? 2019-02-09 15:19:14 -06:00
Jacob Levine
fe74aea4de maybe we can fix it in js 2019-02-09 15:12:17 -06:00
Jacob Levine
76ac58dbab maybe we can fix it in js 2019-02-09 15:10:24 -06:00
Jacob Levine
db0ddec2c6 overflow-x 2019-02-09 14:57:55 -06:00
Jacob Levine
c6980ff71d time to actually start making this look legit 2019-02-09 14:54:03 -06:00
Jacob Levine
a4840003f5 what was i thinking? 2019-02-09 14:46:59 -06:00
Jacob Levine
aad41e57a9 even more styling, if you can call it that 2019-02-09 14:43:14 -06:00
Jacob Levine
24a8500588 more styling, if you can call it that 2019-02-09 14:41:31 -06:00
Jacob Levine
63c69ecc14 styling, if you can call it that 2019-02-09 14:39:32 -06:00
Jacob Levine
1c775fca2c you can now actually see the profile update page 2019-02-09 14:34:01 -06:00
Jacob Levine
1073bc458a typo fix 2019-02-09 14:32:52 -06:00
Jacob Levine
f8dafe61f8 revamped profile page 2019-02-09 14:30:58 -06:00
Jacob Levine
c97e51d9bd even more bugfix 2019-02-09 14:01:32 -06:00
Jacob Levine
2e779a95d2 more bugfix 2019-02-09 14:00:50 -06:00
Jacob Levine
0c609064a6 bugfix 2019-02-09 13:59:23 -06:00
Jacob Levine
059509e018 revamped sign-in, now that we have working checks 2019-02-09 13:57:48 -06:00
Jacob Levine
2c9951d2c9 ok this should fix 2019-02-09 13:33:14 -06:00
Jacob Levine
290110274b even more of a last-ditch effort to make js not multithread everything 2019-02-09 13:32:06 -06:00
Jacob Levine
7d02c6373c even more of a last-ditch effort to make js not multithread everything 2019-02-09 13:04:12 -06:00
Jacob Levine
0b0d36d660 last-ditch effort to make js not multithread everything 2019-02-09 13:01:14 -06:00
Jacob Levine
807c66dd3a ok this should fix 2019-02-09 12:46:20 -06:00
Jacob Levine
f0c0d646b5 ok this should fix 2019-02-09 12:41:52 -06:00
Jacob Levine
390f3d9c4d rephrased check script. are you happy now, JS? 2019-02-09 12:31:25 -06:00
Jacob Levine
19a9995875 i apparently can't type 2019-02-09 12:17:47 -06:00
Jacob Levine
95eab24247 adding standalone profile page 2019-02-09 12:14:55 -06:00
Jacob Levine
3da5a0cbd7 adding timeout 2019-02-09 11:43:47 -06:00
Jacob Levine
447e3e12a3 apperently window loads too fast for firebase 2019-02-09 11:38:57 -06:00
Jacob Levine
5b922fc10b squashing bugs 2019-02-09 11:33:24 -06:00
Jacob Levine
e661af1add Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-02-09 11:29:03 -06:00
Jacob Levine
192d023325 testing signout logic 2019-02-09 11:27:46 -06:00
ltcptgeneral
6b91fe9819 fixed copy paste oppsie 2019-02-08 15:42:33 -06:00
Jacob Levine
82231cb04b styling fixes 2019-02-06 18:20:31 -06:00
Jacob Levine
39dc72add2 onload scripts 2019-02-06 18:19:18 -06:00
Jacob Levine
ac158bf0a9 bugfixes 2019-02-06 18:12:39 -06:00
Jacob Levine
7b2915f4f2 styling fixes 2019-02-06 18:09:47 -06:00
Jacob Levine
64354dbe19 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-02-06 17:52:37 -06:00
Jacob Levine
901c8d25f8 added 3 other pages 2019-02-06 17:51:58 -06:00
ltcptgeneral
b346b01223 android app v 1.0.0.001 2019-02-06 17:43:38 -06:00
ltcptgeneral
73b419dfd6 android app v 1.0.0.000
finished android app
published source code
2019-02-06 17:06:25 -06:00
Jacob Levine
48f34f0472 revert some changes 2019-02-06 16:50:39 -06:00
Jacob Levine
e1769235f3 more styling 2019-02-06 16:45:56 -06:00
Jacob Levine
ac00138ca8 styling 2019-02-06 16:42:15 -06:00
Jacob Levine
28b5801bcc added sidebar 2019-02-06 16:21:41 -06:00
Jacob Levine
f2ed8ab04c sizing 2019-02-06 16:17:07 -06:00
Jacob Levine
781b4dc8b5 bugfix 2019-02-06 16:14:39 -06:00
Jacob Levine
19a236251a added sidebar 2019-02-06 16:08:28 -06:00
Jacob Levine
0d481b01df bugfix 2019-02-06 15:55:22 -06:00
jlevine18
5de2528d34 more bugfix 2019-02-06 15:37:27 -06:00
Jacob Levine
317ca72377 added info change functionality 2019-02-06 15:35:51 -06:00
Jacob Levine
c6e719240a bugfix 2019-02-06 15:25:15 -06:00
Jacob Levine
e554a1df99 reworked fix profile info 2019-02-06 15:22:09 -06:00
Jacob Levine
d9e7a1ed1e testing bugs 2019-02-06 15:04:31 -06:00
Jacob Levine
d968f10737 bugfix 2019-02-06 14:56:17 -06:00
Jacob Levine
dc80127dee bugfix 2019-02-06 14:51:31 -06:00
Jacob Levine
c591c84c75 added info change functionality 2019-02-06 14:46:41 -06:00
Jacob Levine
e290f5ae11 layout changes 2019-02-06 14:15:59 -06:00
Jacob Levine
b8d209b283 new fixes 2019-02-06 13:57:29 -06:00
Jacob Levine
f195b81974 added profile change functionality 2019-02-06 13:24:56 -06:00
ltcptgeneral
1293de346e analysis.py v 1.0.8.005, superscript.py v 1.0.1.000
changelog analysis.py:
- minor fixes
changelog superscript.py:
- added data reading from file
- added superstructure to code
2019-02-05 09:50:10 -06:00
ltcptgeneral
1b41c409cc created superscript.py, tbarequest.py v 1.0.1.000, edited repack_json.py
changelog tbarequest.py:
- fixed a simple error
2019-02-05 09:42:00 -06:00
ltcptgeneral
38d471113f Update .gitignore 2019-02-05 09:02:04 -06:00
ltcptgeneral
b31beb25be oof^2 2019-02-04 12:33:25 -06:00
ltcptgeneral
e3db22d262 Delete temp.txt 2019-02-04 10:50:43 -06:00
ltcptgeneral
e2d2e6687f oof 2019-02-04 10:50:07 -06:00
ltcptgeneral
b64ec05134 removed app bc jacob did fancy shit 2019-01-26 10:45:19 -06:00
ltcptgeneral
511e627899 Update workspace.xml 2019-01-26 10:40:35 -06:00
ltcptgeneral
ab0b2b9992 initialized app project 2019-01-26 10:32:00 -06:00
ltcptgeneral
0021eed5fb analysis.py - v 1.0.8.004
changelog
- removed a few unused dependencies
2019-01-26 10:11:54 -06:00
ltcptgeneral
8c35d8a3f6 yeeted histo_analysis_old() due to depreciation 2019-01-23 09:09:14 -06:00
ltcptgeneral
e5420844de yeeted useless comments 2019-01-22 22:42:37 -06:00
jlevine18
0fca5f58db ApiKey now changed and hidden-don't be stupid jake 2019-01-06 13:41:15 -06:00
Jacob Levine
07880038b0 folder move fix 2019-01-06 13:18:01 -06:00
Jacob Levine
d2d5d4c04e push all website files 2019-01-06 13:14:45 -06:00
jlevine18
d7301e26c3 Add files via upload 2019-01-06 13:02:35 -06:00
jlevine18
752b981e37 Rename website/functions/acorn to website/functions/node_modules/.bin/acorn 2019-01-06 12:57:46 -06:00
jlevine18
5f2db375f3 Add files via upload 2019-01-06 12:56:49 -06:00
jlevine18
cac1b4fba4 Add files via upload 2019-01-06 12:55:50 -06:00
jlevine18
236c4d02b6 Create index.js 2019-01-06 12:55:31 -06:00
jlevine18
8645eace5b Delete style.css 2019-01-06 12:54:41 -06:00
jlevine18
47cce54b3b Delete scripts.js 2019-01-06 12:54:35 -06:00
jlevine18
5a0fe35f86 Delete index.html 2019-01-06 12:54:29 -06:00
jlevine18
d3f8b474d0 upload website 2019-01-06 12:54:08 -06:00
ltcptgeneral
27145495e7 Update analysis.docs 2018-12-30 16:49:44 -06:00
ltcptgeneral
1a8da3fdd5 analysis.py - v 1.0.8.003
changelog:
- added p_value function
2018-12-29 16:28:41 -06:00
ltcptgeneral
444bfb5945 stuff 2018-12-26 17:08:04 -06:00
ltcptgeneral
cfee240e9c pineapple 2018-12-26 12:37:49 -06:00
ltcptgeneral
83a1dd5ced orange 2018-12-26 12:22:31 -06:00
ltcptgeneral
bf75e804cc bannana 2018-12-26 12:22:17 -06:00
ltcptgeneral
83e4f60a37 apple 2018-12-26 12:21:44 -06:00
bearacuda13
ae11605013 Add files via upload 2018-12-26 12:18:40 -06:00
bearacuda13
08b336cf15 Add files via upload 2018-12-26 12:14:05 -06:00
ltcptgeneral
eeeec86be6 temp 2018-12-26 12:06:42 -06:00
ltcptgeneral
9dbd897323 analysis.py - v 1.0.8.002
changelog:
- updated __all__ correctly to contain changes made in v 1.0.8.000 and v 1.0.8.001
2018-12-24 16:44:03 -06:00
jlevine18
71337c0fd5 fix other stupid mistakes 2018-12-24 14:50:04 -06:00
jlevine18
4e015180b6 fix syntax error 2018-12-24 14:42:54 -06:00
jlevine18
70591bc581 started ML module 2018-12-24 09:32:25 -06:00
jlevine18
288f97a3fd visualizer.py is now visualization.py 2018-12-21 11:10:18 -06:00
jlevine18
1126373bf2 Update tbarequest.py 2018-12-21 11:07:21 -06:00
jlevine18
fd0d43d29c added TBA requests module 2018-12-21 11:04:46 -06:00
jlevine18
cc6a7697cf Update visualization.py 2018-12-20 22:01:28 -06:00
jlevine18
2140ea8f77 started visualization module 2018-12-20 21:45:05 -06:00
ltcptgeneral
9dd5cc76f6 analysis.py - v 1.0.8.001
changelog:
- refactors
- bugfixes
2018-12-20 20:49:09 -06:00
ltcptgeneral
7b1e54eed8 refactor analysis.py 2018-12-20 15:05:43 -06:00
ltcptgeneral
188a7bbf1f Update data.csv 2018-12-20 12:21:26 -06:00
ltcptgeneral
b7a0c5286a analysis.py - v 1.0.8.000
changelog:
- depreciated histo_analysis_old
- depreciated debug
- altered basic_analysis to take array data instead of filepath
- refactor
- optimization
2018-12-20 12:21:22 -06:00
ltcptgeneral
32a2d6321c no change 2018-12-13 08:57:19 -06:00
ltcptgeneral
d2f6961693 Update analysis.cpython-37.pyc 2018-12-07 16:56:09 -06:00
ltcptgeneral
107076ac35 added visualizer.py, reorganized folders 2018-12-05 11:31:38 -06:00
ltcptgeneral
0b73460446 Update analysis.cpython-37.pyc 2018-12-04 19:05:13 -06:00
ltcptgeneral
39d5522650 Update analysis_docs.txt 2018-12-01 22:34:30 -06:00
ltcptgeneral
68d6c87589 Update analysis_docs.txt 2018-12-01 22:13:19 -06:00
ltcptgeneral
222c536631 created docs 2018-12-01 21:02:53 -06:00
ltcptgeneral
bd3f695938 a 2018-12-01 14:51:50 -06:00
ltcptgeneral
1b1a7c45bf Update analysis.cpython-37.pyc 2018-12-01 14:51:38 -06:00
ltcptgeneral
8a58fe28fa analysis.py - v 1.0.7.002
changelog:
	- bug fixes
2018-11-29 12:58:53 -06:00
ltcptgeneral
9c67e6f927 analysis.py - v 1.0.7.001
changelog:
	- bug fixes
2018-11-29 12:36:25 -06:00
ltcptgeneral
8d2dedc5a2 update analysis.py 2018-11-29 09:33:18 -06:00
ltcptgeneral
944cb31883 Update analysis.py
a quick update
2018-11-29 09:32:27 -06:00
ltcptgeneral
b38ffe1f08 Update requirements.txt 2018-11-29 09:31:55 -06:00
ltcptgeneral
19f89d3f35 updated stuff 2018-11-29 09:27:08 -06:00
ltcptgeneral
504fc92feb Create analysis.cpython-37.pyc 2018-11-29 09:04:17 -06:00
ltcptgeneral
5eb5e5ed8e removes stuff 2018-11-29 09:00:47 -06:00
ltcptgeneral
88be42de45 removed generate_data.py 2018-11-29 08:53:41 -06:00
ltcptgeneral
704a2d5808 analysis.py - v 1.0.7.000
changelog:
        - added tanh_regression (logistical regression)
	- bug fixes
2018-11-28 16:35:47 -06:00
ltcptgeneral
e915fe538e analysis.py - v 1.0.6.005
changelog:
        - added z_normalize function to normalize dataset
	- bug fixes
2018-11-28 14:29:32 -06:00
ltcptgeneral
5295bef18b Update analysis.cpython-37.pyc 2018-11-28 11:35:21 -06:00
ltcptgeneral
ae69eb7a40 Merge branch 'master' of https://github.com/ltcptgeneral/tr2022-strategy 2018-11-28 11:12:53 -06:00
jlevine18
46f434b815 started website 2018-11-28 11:10:38 -06:00
jlevine18
cce111bd6a Create index.html 2018-11-28 11:06:04 -06:00
64 changed files with 5996 additions and 1084 deletions

.devcontainer/Dockerfile Normal file

@@ -0,0 +1,7 @@
FROM ubuntu:20.04
WORKDIR /
RUN apt-get -y update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends tzdata
RUN apt-get install -y python3 python3-dev git python3-pip python3-kivy python-is-python3 libgl1-mesa-dev build-essential
RUN ln -s $(which pip3) /usr/bin/pip
RUN pip install pymongo pandas numpy scipy scikit-learn matplotlib pylint kivy


@@ -0,0 +1,2 @@
FROM titanscout2022/tra-analysis-base:latest
WORKDIR /


@@ -0,0 +1,28 @@
{
"name": "TRA Analysis Development Environment",
"build": {
"dockerfile": "dev-dockerfile",
},
"settings": {
"terminal.integrated.shell.linux": "/bin/bash",
"python.pythonPath": "/usr/local/bin/python",
"python.linting.enabled": true,
"python.linting.pylintEnabled": true,
"python.formatting.autopep8Path": "/usr/local/py-utils/bin/autopep8",
"python.formatting.blackPath": "/usr/local/py-utils/bin/black",
"python.formatting.yapfPath": "/usr/local/py-utils/bin/yapf",
"python.linting.banditPath": "/usr/local/py-utils/bin/bandit",
"python.linting.flake8Path": "/usr/local/py-utils/bin/flake8",
"python.linting.mypyPath": "/usr/local/py-utils/bin/mypy",
"python.linting.pycodestylePath": "/usr/local/py-utils/bin/pycodestyle",
"python.linting.pydocstylePath": "/usr/local/py-utils/bin/pydocstyle",
"python.linting.pylintPath": "/usr/local/py-utils/bin/pylint",
"python.testing.pytestPath": "/usr/local/py-utils/bin/pytest"
},
"extensions": [
"mhutchie.git-graph",
"ms-python.python",
"waderyan.gitblame"
],
"postCreateCommand": "/usr/bin/pip3 install -r ${containerWorkspaceFolder}/analysis-master/requirements.txt && /usr/bin/pip3 install --no-cache-dir pylint && /usr/bin/pip3 install pytest"
}

.gitattributes vendored

@@ -1,2 +1,4 @@
# Auto detect text files and perform LF normalization
* text=auto
* text=auto eol=lf
*.{cmd,[cC][mM][dD]} text eol=crlf
*.{bat,[bB][aA][tT]} text eol=crlf

.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,38 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.


@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

.github/PULL_REQUEST_TEMPLATE.md vendored Normal file

@@ -0,0 +1,7 @@
Fixes #
## Proposed Changes
-
-
-

.github/workflows/publish-analysis.yml vendored Normal file

@@ -0,0 +1,40 @@
# This workflow will upload a Python package using Twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries
name: Upload Analysis Package
on:
release:
types: [published, edited]
jobs:
deploy:
runs-on: ubuntu-latest
env:
working-directory: ./analysis-master/
steps:
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.x'
- name: Install dependencies
working-directory: ${{env.working-directory}}
run: |
python -m pip install --upgrade pip
pip install setuptools wheel twine
- name: Install package deps
working-directory: ${{env.working-directory}}
run: |
pip install -r requirements.txt
- name: Build package
working-directory: ${{env.working-directory}}
run: |
python setup.py sdist bdist_wheel
- name: Publish package to PyPI
uses: pypa/gh-action-pypi-publish@master
with:
user: __token__
password: ${{ secrets.PYPI_TOKEN }}
packages_dir: analysis-master/dist/

.github/workflows/ut-analysis.yml vendored Normal file

@@ -0,0 +1,38 @@
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: Analysis Unit Tests
on:
push:
branches: [ master ]
pull_request:
branches: [ master ]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7, 3.8]
env:
working-directory: ./analysis-master/
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install pytest
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
working-directory: ${{ env.working-directory }}
- name: Test with pytest
run: |
pytest
working-directory: ${{ env.working-directory }}

.gitignore vendored

@@ -1,2 +1,9 @@
/.vscode/
benchmark_data.csv
**/__pycache__/
**/.pytest_cache/
**/*.pyc
**/build/
**/*.egg-info/
**/dist/

CONTRIBUTING.md Normal file

@@ -0,0 +1,66 @@
# Contributing Guidelines
This project accepts contributions via GitHub pull requests.
This document outlines some of the
conventions on development workflow, commit message formatting, contact points,
and other resources to make it easier to get your contribution accepted.
## Certificate of Origin
By contributing to this project, you agree to the [Developer Certificate of
Origin (DCO)](https://developercertificate.org/). This document was created by the Linux Kernel community and is a
simple statement that you, as a contributor, have the legal right to make the
contribution.
To show your agreement with the DCO, you should include the following line at the end of the commit message, using your real name: `Signed-off-by: John Doe <john.doe@example.com>`.
This can be done easily using the [`-s`](https://github.com/git/git/blob/b2c150d3aa82f6583b9aadfecc5f8fa1c74aca09/Documentation/git-commit.txt#L154-L161) flag on `git commit`.
Visual Studio Code also has a setting to enable sign-off on commits.
If you find you have pushed a few commits without `Signed-off-by`, you can still add it afterwards. Read this for help: [fix-DCO.md](https://github.com/src-d/guide/blob/master/developer-community/fix-DCO.md).
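The trailer format above is easy to check mechanically. A minimal sketch, not part of this repository (`SIGNOFF` and `has_signoff` are illustrative names):

```python
import re

# Matches the "Signed-off-by: Name <email>" trailer described above,
# anywhere in a multi-line commit message.
SIGNOFF = re.compile(r"^Signed-off-by: .+ <.+@.+>$", re.MULTILINE)

def has_signoff(message: str) -> bool:
    return bool(SIGNOFF.search(message))

print(has_signoff("fix typo\n\nSigned-off-by: John Doe <john.doe@example.com>"))  # True
print(has_signoff("fix typo"))  # False
```

A check like this could run in a CI step or a commit-msg hook.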
## Support Channels
The official support channel, for both users and contributors, is:
- GitHub issues: each repository has its own list of issues.
*Before opening a new issue or submitting a new pull request, it's helpful to
search the project - it's likely that another user has already reported the
issue you're facing, or it's a known issue that we're already aware of.*
## How to Contribute
In general, please use conventional approaches to development and contribution such as:
* Create branches for additions or deletions, and or side projects
* Do not commit to master!
* Use Pull Requests (PRs) to indicate that an addition is ready to merge.
PRs are the main and exclusive way to contribute code to this project.
In order for a PR to be accepted it needs to pass this list of requirements:
- The contribution must be correctly explained with natural language and providing a minimum working example that reproduces it.
- All PRs must be written idiomatically:
- for Node: formatted according to [AirBnB standards](https://github.com/airbnb/javascript), and no warnings from `eslint` using the AirBnB style guide
- for other languages, similar constraints apply.
- They should in general include tests, and those shall pass.
- In any case, all the PRs have to pass the personal evaluation of at least one of the [maintainers](MAINTAINERS) of the project.
### Format of the commit message
Every commit message should describe what was changed, in which context and, if applicable, the GitHub issue it relates to.
For small changes, or changes to a testing or personal branch, the commit message should be a short changelog entry.
For larger changes, or for changes on branches that are more widely used, the commit message should simply reference an entry in some other changelog system. It is encouraged to use some sort of versioning system to log changes. Example commit message:
```
superscript.py v 2.0.5.006
```
The format can be described more formally as follows:
```
<package> v <version number>
```
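A hedged sketch of validating that format, assuming dot-separated numeric version fields as in the example (`FORMAT` and `is_valid_subject` are illustrative names, not project code):

```python
import re

# "<package> v <version number>", e.g. "superscript.py v 2.0.5.006"
FORMAT = re.compile(r"\S+ v \d+(?:\.\d+)*")

def is_valid_subject(subject: str) -> bool:
    return bool(FORMAT.fullmatch(subject))

print(is_valid_subject("superscript.py v 2.0.5.006"))  # True
print(is_valid_subject("fixed some stuff"))            # False
```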

LICENSE Normal file

@@ -0,0 +1,29 @@
BSD 3-Clause License
Copyright (c) 2020, Titan Scouting
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

MAINTAINERS Normal file

@@ -0,0 +1,3 @@
Arthur Lu <learthurgo@gmail.com>
Jacob Levine <jacoblevine18@gmail.com>
Dev Singh <dev@devksingh.com>

README.md Normal file

@@ -0,0 +1,70 @@
# Red Alliance Analysis &middot; ![GitHub release (latest by date)](https://img.shields.io/github/v/release/titanscout2022/red-alliance-analysis)
Titan Robotics 2022 Strategy Team Repository for Data Analysis Tools. Included with these tools are the backend data analysis engine formatted as a python package, associated binaries for the analysis package, and premade scripts that can be pulled directly from this repository and will integrate with other Red Alliance applications to quickly deploy FRC scouting tools.
---
# `tra-analysis`
`tra-analysis` is a higher level package for data processing and analysis. It is a python library that combines popular data science tools like numpy, scipy, and sklearn along with other tools to create an easy-to-use data analysis engine. tra-analysis includes analysis in all ranges of complexity from basic statistics like mean, median, mode to complex kernel based classifiers and allows users to more quickly deploy these algorithms. The package also includes performance metrics for score based applications including elo, glicko2, and trueskill ranking systems.
At the core of the tra-analysis package is the modularity of each analytical tool. The package encapsulates the setup code for the included data science tools. For example, there are many packages that allow users to generate many different types of regressions. With the tra-analysis package, one function can be called to generate many regressions and sort them by accuracy.
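As a rough illustration of that idea (not the actual tra-analysis API; `rank_regressions` is a hypothetical name), one can fit several candidate regressions on the same data and sort them by R²:

```python
import numpy as np

def rank_regressions(x, y, degrees=(1, 2, 3)):
    # fit one polynomial regression per degree and score each with R^2
    results = []
    for d in degrees:
        coeffs = np.polyfit(x, y, d)
        pred = np.polyval(coeffs, x)
        ss_res = float(np.sum((y - pred) ** 2))
        ss_tot = float(np.sum((y - np.mean(y)) ** 2))
        results.append((d, 1.0 - ss_res / ss_tot))
    # best-scoring fit first
    return sorted(results, key=lambda t: t[1], reverse=True)

x = np.linspace(0.0, 5.0, 20)
y = x ** 2 + 1.0  # quadratic data, so degree >= 2 should fit almost exactly
ranked = rank_regressions(x, y)
print(ranked)
```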
## Prerequisites
---
* Python >= 3.6
* Pip which can be installed by running\
`curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py`\
`python get-pip.py`\
after installing python, or with a package manager on linux. Refer to the [pip installation instructions](https://pip.pypa.io/en/stable/installing/) for more information.
## Installing
---
#### Standard Platforms
For the latest version of tra-analysis, run `pip install tra-analysis` or `pip install tra_analysis`. The requirements for tra-analysis should be automatically installed.
#### Exotic Platforms (Android)
[Termux](https://termux.com/) is recommended for a linux environment on Android. Consult the [documentation](https://titanscouting.github.io/analysis/general/installation#exotic-platforms-android) for advice on installing the prerequisites. After installing the prerequisites, the package should be installed normally with `pip install tra-analysis` or `pip install tra_analysis`.
## Use
---
tra-analysis operates like any other python package. Consult the [documentation](https://titanscouting.github.io/analysis/tra_analysis/) for more information.
## Supported Platforms
---
Although any modern 64 bit platform should be supported, the following platforms have been tested to be working:
* AMD64 (Tested on Zen, Zen+, and Zen 2)
* Intel 64/x86_64/x64 (Tested on Kaby Lake, Ice Lake)
* ARM64 (Tested on Broadcom BCM2836 SoC, Broadcom BCM2711 SoC)
The following OSes have been tested and confirmed working:
* Linux Kernel 3.16, 4.4, 4.15, 4.19, 5.4
* Ubuntu 16.04, 18.04, 20.04
* Debian (and Debian derivatives) Jessie, Buster
* Windows 7, 10
The following python versions are supported:
* python 3.6 (not tested)
* python 3.7
* python 3.8
---
# `data-analysis`
Data analysis has been separated into its own [repository](https://github.com/titanscouting/tra-data-analysis).
# Contributing
Read our included contributing guidelines (`CONTRIBUTING.md`) for more information and feel free to reach out to any current maintainer for more information.
# Build Statuses
![Analysis Unit Tests](https://github.com/titanscout2022/red-alliance-analysis/workflows/Analysis%20Unit%20Tests/badge.svg)

SECURITY.md Normal file

@@ -0,0 +1,6 @@
# Security Policy
## Reporting a Vulnerability
Please email `titanscout2022@gmail.com` to report a vulnerability.

analysis-master/build.sh Normal file

@@ -0,0 +1 @@
python setup.py sdist bdist_wheel || python3 setup.py sdist bdist_wheel


@@ -0,0 +1,6 @@
numpy
scipy
scikit-learn
six
matplotlib
pyparsing

analysis-master/setup.py Normal file

@@ -0,0 +1,28 @@
import setuptools
import tra_analysis
requirements = []
with open("requirements.txt", 'r') as file:
	for line in file:
		line = line.strip()
		if line:  # skip blank lines; strip trailing newlines before passing to install_requires
			requirements.append(line)
setuptools.setup(
name="tra_analysis",
version=tra_analysis.__version__,
author="The Titan Scouting Team",
author_email="titanscout2022@gmail.com",
description="Analysis package developed by Titan Scouting for The Red Alliance",
long_description=open("../README.md", 'r').read(),
long_description_content_type="text/markdown",
url="https://github.com/titanscout2022/tr2022-strategy",
packages=setuptools.find_packages(),
install_requires=requirements,
license = "BSD 3-Clause License",
classifiers=[
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
],
python_requires='>=3.6',
keywords="data analysis tools"
)


@@ -0,0 +1,233 @@
import numpy as np
import sklearn
from sklearn import metrics
from tra_analysis import Analysis as an
from tra_analysis import Array
from tra_analysis import ClassificationMetric
from tra_analysis import CorrelationTest
from tra_analysis import Fit
from tra_analysis import KNN
from tra_analysis import NaiveBayes
from tra_analysis import RandomForest
from tra_analysis import RegressionMetric
from tra_analysis import Sort
from tra_analysis import StatisticalTest
from tra_analysis import SVM
from tra_analysis.equation.parser import BNF
test_data_linear = [1, 3, 6, 7, 9]
test_data_linear2 = [2, 2, 5, 7, 13]
test_data_linear3 = [2, 5, 8, 6, 14]
test_data_array = Array(test_data_linear)
x_data_circular = []
y_data_circular = []
y_data_ccu = [1, 3, 7, 14, 21]
y_data_ccd = [1, 5, 7, 8.5, 8.66]
test_data_scrambled = [-32, 34, 19, 72, -65, -11, -43, 6, 85, -17, -98, -26, 12, 20, 9, -92, -40, 98, -78, 17, -20, 49, 93, -27, -24, -66, 40, 84, 1, -64, -68, -25, -42, -46, -76, 43, -3, 30, -14, -34, -55, -13, 41, -30, 0, -61, 48, 23, 60, 87, 80, 77, 53, 73, 79, 24, -52, 82, 8, -44, 65, 47, -77, 94, 7, 37, -79, 36, -94, 91, 59, 10, 97, -38, -67, 83, 54, 31, -95, -63, 16, -45, 21, -12, 66, -48, -18, -96, -90, -21, -83, -74, 39, 64, 69, -97, 13, 55, 27, -39]
test_data_sorted = [-98, -97, -96, -95, -94, -92, -90, -83, -79, -78, -77, -76, -74, -68, -67, -66, -65, -64, -63, -61, -55, -52, -48, -46, -45, -44, -43, -42, -40, -39, -38, -34, -32, -30, -27, -26, -25, -24, -21, -20, -18, -17, -14, -13, -12, -11, -3, 0, 1, 6, 7, 8, 9, 10, 12, 13, 16, 17, 19, 20, 21, 23, 24, 27, 30, 31, 34, 36, 37, 39, 40, 41, 43, 47, 48, 49, 53, 54, 55, 59, 60, 64, 65, 66, 69, 72, 73, 77, 79, 80, 82, 83, 84, 85, 87, 91, 93, 94, 97, 98]
test_data_2D_pairs = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
test_data_2D_positive = np.array([[23, 51], [21, 32], [15, 25], [17, 31]])
test_output = np.array([1, 3, 4, 5])
test_labels_2D_pairs = np.array([1, 1, 2, 2])
validation_data_2D_pairs = np.array([[-0.8, -1], [0.8, 1.2]])
validation_labels_2D_pairs = np.array([1, 2])
def test_basicstats():
assert an.basic_stats(test_data_linear) == {"mean": 5.2, "median": 6.0, "standard-deviation": 2.85657137141714, "variance": 8.16, "minimum": 1.0, "maximum": 9.0}
assert an.z_score(3.2, 6, 1.5) == -1.8666666666666665
assert an.z_normalize([test_data_linear], 1).tolist() == [[0.07537783614444091, 0.22613350843332272, 0.45226701686664544, 0.5276448530110863, 0.6784005252999682]]
def test_regression():
assert all(isinstance(item, str) for item in an.regression(test_data_linear, y_data_ccu, ["lin"])) == True
#assert all(isinstance(item, str) for item in an.regression(test_data_linear, y_data_ccd, ["log"])) == True
#assert all(isinstance(item, str) for item in an.regression(test_data_linear, y_data_ccu, ["exp"])) == True
#assert all(isinstance(item, str) for item in an.regression(test_data_linear, y_data_ccu, ["ply"])) == True
#assert all(isinstance(item, str) for item in an.regression(test_data_linear, y_data_ccd, ["sig"])) == True
def test_metrics():
assert an.Metric().elo(1500, 1500, [1, 0], 400, 24) == 1512.0
assert an.Metric().glicko2(1500, 250, 0.06, [1500, 1400], [250, 240], [1, 0]) == (1478.864307445517, 195.99122679202452, 0.05999602937563585)
#assert an.Metric().trueskill([[(25, 8.33), (24, 8.25), (32, 7.5)], [(25, 8.33), (25, 8.33), (21, 6.5)]], [1, 0]) == [(metrics.trueskill.Rating(mu=21.346, sigma=7.875), metrics.trueskill.Rating(mu=20.415, sigma=7.808), metrics.trueskill.Rating(mu=29.037, sigma=7.170)), (metrics.trueskill.Rating(mu=28.654, sigma=7.875), metrics.trueskill.Rating(mu=28.654, sigma=7.875), metrics.trueskill.Rating(mu=23.225, sigma=6.287))]
def test_array():
assert test_data_array.elementwise_mean() == 5.2
assert test_data_array.elementwise_median() == 6.0
assert test_data_array.elementwise_stdev() == 2.85657137141714
assert test_data_array.elementwise_variance() == 8.16
assert test_data_array.elementwise_npmin() == 1
assert test_data_array.elementwise_npmax() == 9
assert test_data_array.elementwise_stats() == (5.2, 6.0, 2.85657137141714, 8.16, 1, 9)
for i in range(len(test_data_array)):
assert test_data_array[i] == test_data_linear[i]
test_data_array[0] = 100
expected = [100, 3, 6, 7, 9]
for i in range(len(test_data_array)):
assert test_data_array[i] == expected[i]
def test_classifmetric():
classif_metric = ClassificationMetric(test_data_linear2, test_data_linear)
assert classif_metric[0].all() == metrics.confusion_matrix(test_data_linear, test_data_linear2).all()
assert classif_metric[1] == metrics.classification_report(test_data_linear, test_data_linear2)
def test_correlationtest():
assert all(np.isclose(list(CorrelationTest.anova_oneway(test_data_linear, test_data_linear2).values()), [0.05825242718446602, 0.8153507906592907], rtol=1e-10))
assert all(np.isclose(list(CorrelationTest.pearson(test_data_linear, test_data_linear2).values()), [0.9153061540753287, 0.02920895440940868], rtol=1e-10))
assert all(np.isclose(list(CorrelationTest.spearman(test_data_linear, test_data_linear2).values()), [0.9746794344808964, 0.004818230468198537], rtol=1e-10))
assert all(np.isclose(list(CorrelationTest.point_biserial(test_data_linear, test_data_linear2).values()), [0.9153061540753287, 0.02920895440940868], rtol=1e-10))
assert all(np.isclose(list(CorrelationTest.kendall(test_data_linear, test_data_linear2).values()), [0.9486832980505137, 0.022977401503206086], rtol=1e-10))
assert all(np.isclose(list(CorrelationTest.kendall_weighted(test_data_linear, test_data_linear2).values()), [0.9750538072369643, np.nan], rtol=1e-10, equal_nan=True))
def test_fit():
assert Fit.CircleFit(x=[0,0,-1,1], y=[1, -1, 0, 0]).LSC() == (0.0, 0.0, 1.0, 0.0)
def test_knn():
model, metric = KNN.knn_classifier(test_data_2D_pairs, test_labels_2D_pairs, 2)
assert isinstance(model, sklearn.neighbors.KNeighborsClassifier)
assert np.array([[0,0], [2,0]]).all() == metric[0].all()
assert ' precision recall f1-score support\n\n 1 0.00 0.00 0.00 0.0\n 2 0.00 0.00 0.00 2.0\n\n accuracy 0.00 2.0\n macro avg 0.00 0.00 0.00 2.0\nweighted avg 0.00 0.00 0.00 2.0\n' == metric[1]
model, metric = KNN.knn_regressor(test_data_2D_pairs, test_output, 2)
assert isinstance(model, sklearn.neighbors.KNeighborsRegressor)
assert (-25.0, 6.5, 2.5495097567963922) == metric
def test_naivebayes():
model, metric = NaiveBayes.gaussian(test_data_2D_pairs, test_labels_2D_pairs)
assert isinstance(model, sklearn.naive_bayes.GaussianNB)
assert metric[0].all() == np.array([[0, 0], [2, 0]]).all()
model, metric = NaiveBayes.multinomial(test_data_2D_positive, test_labels_2D_pairs)
assert isinstance(model, sklearn.naive_bayes.MultinomialNB)
assert metric[0].all() == np.array([[0, 0], [2, 0]]).all()
model, metric = NaiveBayes.bernoulli(test_data_2D_pairs, test_labels_2D_pairs)
assert isinstance(model, sklearn.naive_bayes.BernoulliNB)
assert metric[0].all() == np.array([[0, 0], [2, 0]]).all()
model, metric = NaiveBayes.complement(test_data_2D_positive, test_labels_2D_pairs)
assert isinstance(model, sklearn.naive_bayes.ComplementNB)
assert metric[0].all() == np.array([[0, 0], [2, 0]]).all()
def test_randomforest():
model, metric = RandomForest.random_forest_classifier(test_data_2D_pairs, test_labels_2D_pairs, 0.3, 2)
assert isinstance(model, sklearn.ensemble.RandomForestClassifier)
assert metric[0].all() == np.array([[0, 0], [2, 0]]).all()
model, metric = RandomForest.random_forest_regressor(test_data_2D_pairs, test_labels_2D_pairs, 0.3, 2)
assert isinstance(model, sklearn.ensemble.RandomForestRegressor)
assert metric == (0.0, 1.0, 1.0)
def test_regressionmetric():
assert RegressionMetric(test_data_linear, test_data_linear2) == (0.7705314009661837, 3.8, 1.9493588689617927)
def test_sort():
sorts = [Sort.quicksort, Sort.mergesort, Sort.heapsort, Sort.introsort, Sort.insertionsort, Sort.timsort, Sort.selectionsort, Sort.shellsort, Sort.bubblesort, Sort.cyclesort, Sort.cocktailsort]
for sort in sorts:
assert all(a == b for a, b in zip(sort(test_data_scrambled), test_data_sorted))
def test_statisticaltest():
#print(StatisticalTest.tukey_multicomparison([test_data_linear, test_data_linear2, test_data_linear3]))
assert StatisticalTest.tukey_multicomparison([test_data_linear, test_data_linear2, test_data_linear3]) == \
{'group 1 and group 2': [0.32571517201527916, False], 'group 1 and group 3': [0.977145516045838, False], 'group 2 and group 3': [0.6514303440305589, False]}
#assert all(np.isclose([i[0] for i in list(StatisticalTest.tukey_multicomparison([test_data_linear, test_data_linear2, test_data_linear3]).values],
# [0.32571517201527916, 0.977145516045838, 0.6514303440305589]))
#assert [i[1] for i in StatisticalTest.tukey_multicomparison([test_data_linear, test_data_linear2, test_data_linear3]).values] == \
# [False, False, False]
def test_svm():
data = test_data_2D_pairs
labels = test_labels_2D_pairs
test_data = validation_data_2D_pairs
test_labels = validation_labels_2D_pairs
lin_kernel = SVM.PrebuiltKernel.Linear()
#ply_kernel = SVM.PrebuiltKernel.Polynomial(3, 0)
rbf_kernel = SVM.PrebuiltKernel.RBF('scale')
sig_kernel = SVM.PrebuiltKernel.Sigmoid(0)
lin_kernel = SVM.fit(lin_kernel, data, labels)
#ply_kernel = SVM.fit(ply_kernel, data, labels)
rbf_kernel = SVM.fit(rbf_kernel, data, labels)
sig_kernel = SVM.fit(sig_kernel, data, labels)
for i in range(len(test_data)):
assert lin_kernel.predict([test_data[i]]).tolist() == [test_labels[i]]
#for i in range(len(test_data)):
# assert ply_kernel.predict([test_data[i]]).tolist() == [test_labels[i]]
for i in range(len(test_data)):
assert rbf_kernel.predict([test_data[i]]).tolist() == [test_labels[i]]
for i in range(len(test_data)):
assert sig_kernel.predict([test_data[i]]).tolist() == [test_labels[i]]
def test_equation():
parser = BNF()
correctParse = {
"9": 9.0,
"-9": -9.0,
"--9": 9.0,
"-E": -2.718281828459045,
"9 + 3 + 6": 18.0,
"9 + 3 / 11": 9.272727272727273,
"(9 + 3)": 12.0,
"(9+3) / 11": 1.0909090909090908,
"9 - 12 - 6": -9.0,
"9 - (12 - 6)": 3.0,
"2*3.14159": 6.28318,
"3.1415926535*3.1415926535 / 10": 0.9869604400525172,
"PI * PI / 10": 0.9869604401089358,
"PI*PI/10": 0.9869604401089358,
"PI^2": 9.869604401089358,
"round(PI^2)": 10,
"6.02E23 * 8.048": 4.844896e+24,
"e / 3": 0.9060939428196817,
"sin(PI/2)": 1.0,
"10+sin(PI/4)^2": 10.5,
"trunc(E)": 2,
"trunc(-E)": -2,
"round(E)": 3,
"round(-E)": -3,
"E^PI": 23.140692632779263,
"exp(0)": 1.0,
"exp(1)": 2.718281828459045,
"2^3^2": 512.0,
"(2^3)^2": 64.0,
"2^3+2": 10.0,
"2^3+5": 13.0,
"2^9": 512.0,
"sgn(-2)": -1,
"sgn(0)": 0,
"sgn(0.1)": 1,
"sgn(cos(PI/4))": 1,
"sgn(cos(PI/2))": 0,
"sgn(cos(PI*3/4))": -1,
"+(sgn(cos(PI/4)))": 1,
"-(sgn(cos(PI/4)))": -1,
}
for key in list(correctParse.keys()):
assert parser.eval(key) == correctParse[key]


@@ -0,0 +1,628 @@
# Titan Robotics Team 2022: Analysis Module
# Written by Arthur Lu
# Notes:
# this should be imported as a python module using 'from tra_analysis import Analysis'
# this should be included in the local directory or environment variable
# this module has been optimized for multithreaded computing
# current benchmark of optimization: 1.33 times faster
# setup:
__version__ = "3.0.2"
# changelog should be viewed using print(analysis.__changelog__)
__changelog__ = """changelog:
3.0.2:
- fixed __all__
3.0.1:
- removed numba dependency and calls
3.0.0:
- exported several submodules to their own files while preserving backwards compatibility:
- Array
- ClassificationMetric
- CorrelationTest
- KNN
- NaiveBayes
- RandomForest
- RegressionMetric
- Sort
- StatisticalTest
- SVM
- note: above listed submodules will not be supported in the future
- future changes to all submodules will be held in their respective changelogs
- future changes altering the parent package will be held in the __changelog__ of the parent package (in __init__.py)
- changed reference to module name to Analysis
2.3.1:
- fixed bugs in Array class
2.3.0:
- overhauled Array class
2.2.3:
- fixed spelling of RandomForest
- made n_neighbors required for KNN
- made n_classifiers required for SVM
2.2.2:
- fixed 2.2.1 changelog entry
- changed regression to return dictionary
2.2.1:
- changed all references to parent package analysis to tra_analysis
2.2.0:
- added Sort class
- added several array sorting functions to Sort class including:
- quick sort
- merge sort
- intro(spective) sort
- heap sort
- insertion sort
- tim sort
- selection sort
- bubble sort
- cycle sort
- cocktail sort
- tested all sorting algorithms with both lists and numpy arrays
- deprecated sort function from Array class
- added warnings as an import
2.1.4:
- added sort and search functions to Array class
2.1.3:
- changed output of basic_stats and histo_analysis to dictionaries
- fixed __all__
2.1.2:
- renamed ArrayTest class to Array
2.1.1:
- added add, mul, neg, and inv functions to ArrayTest class
- added normalize function to ArrayTest class
- added dot and cross functions to ArrayTest class
2.1.0:
- added ArrayTest class
- added elementwise mean, median, standard deviation, variance, min, max functions to ArrayTest class
- added elementwise_stats to ArrayTest which encapsulates elementwise statistics
- appended to __all__ to reflect changes
2.0.6:
- renamed func functions in regression to lin, log, exp, and sig
2.0.5:
- moved random_forrest_regressor and random_forrest_classifier to RandomForrest class
- renamed Metrics to Metric
- renamed RegressionMetrics to RegressionMetric
- renamed ClassificationMetrics to ClassificationMetric
- renamed CorrelationTests to CorrelationTest
- renamed StatisticalTests to StatisticalTest
- reflected refactoring in all mentions of the above classes/functions
2.0.4:
- fixed __all__ to reflect the correct functions and classes
- fixed CorrelationTests and StatisticalTests class functions to require self invocation
- added missing math import
- fixed KNN class functions to require self invocation
- fixed Metrics class functions to require self invocation
- various spelling fixes in CorrelationTests and StatisticalTests
2.0.3:
- bug fixes with CorrelationTests and StatisticalTests
- moved glicko2 and trueskill to the metrics subpackage
- moved elo to a new metrics subpackage
2.0.2:
- fixed docs
2.0.1:
- fixed docs
2.0.0:
- cleaned up wild card imports with scipy and sklearn
- added CorrelationTests class
- added StatisticalTests class
- added several correlation tests to CorrelationTests
- added several statistical tests to StatisticalTests
1.13.9:
- moved elo, glicko2, trueskill functions under class Metrics
1.13.8:
- moved Glicko2 to a separate package
1.13.7:
- fixed bug with trueskill
1.13.6:
- cleaned up imports
1.13.5:
- cleaned up package
1.13.4:
- small fixes to regression to improve performance
1.13.3:
- filtered nans from regression
1.13.2:
- removed torch requirement, and moved Regression back to regression.py
1.13.1:
- bug fix with linear regression not returning a proper value
- cleaned up regression
- fixed bug with polynomial regressions
1.13.0:
- fixed all regressions to now properly work
1.12.6:
- fixed bug with a division by zero in histo_analysis
1.12.5:
- fixed numba issues by removing numba from elo, glicko2 and trueskill
1.12.4:
- renamed gliko to glicko
1.12.3:
- removed deprecated code
1.12.2:
- removed team first time trueskill instantiation in favor of integration in superscript.py
1.12.1:
- improved readability of regression outputs by stripping tensor data
- used map with lambda to achieve the improved readability
- lost numba jit support with regression, and generated_jit hangs at execution
- TODO: reimplement correct numba integration in regression
1.12.0:
- temporarily fixed polynomial regressions by using sklearn's PolynomialFeatures
1.11.010:
- alphabetically ordered import lists
1.11.9:
- bug fixes
1.11.8:
- bug fixes
1.11.7:
- bug fixes
1.11.6:
- tested min and max
- bug fixes
1.11.5:
- added min and max in basic_stats
1.11.4:
- bug fixes
1.11.3:
- bug fixes
1.11.2:
- consolidated metrics
- fixed __all__
1.11.1:
- added test/train split to RandomForestClassifier and RandomForestRegressor
1.11.0:
- added RandomForestClassifier and RandomForestRegressor
- note: untested
1.10.0:
- added numba.jit to remaining functions
1.9.2:
- kernelized PCA and KNN
1.9.1:
- fixed bugs with SVM and NaiveBayes
1.9.0:
- added SVM class, subclasses, and functions
- note: untested
1.8.0:
- added NaiveBayes classification engine
- note: untested
1.7.0:
- added knn()
- added confusion matrix to decisiontree()
1.6.2:
- changed layout of __changelog to be vscode friendly
1.6.1:
- added additional hyperparameters to decisiontree()
1.6.0:
- fixed __version__
- fixed __all__ order
- added decisiontree()
1.5.3:
- added pca
1.5.2:
- reduced import list
- added kmeans clustering engine
1.5.1:
- simplified regression by using .to(device)
1.5.0:
- added polynomial regression to regression(); untested
1.4.0:
- added trueskill()
1.3.2:
- renamed regression class to Regression, regression_engine() to regression, gliko2_engine class to Gliko2
1.3.1:
- changed glicko2() to return tuple instead of array
1.3.0:
- added glicko2_engine class and glicko()
- verified glicko2() accuracy
1.2.3:
- fixed elo()
1.2.2:
- added elo()
- elo() has bugs to be fixed
1.2.1:
- re-added regression import
1.2.0:
- integrated regression.py as regression class
- removed regression import
- fixed metadata for regression class
- fixed metadata for analysis class
1.1.1:
- regression_engine() bug fixes, now actually regresses
1.1.0:
- added regression_engine()
- added all regressions except polynomial
1.0.7:
- updated _init_device()
1.0.6:
- removed useless try statements
1.0.5:
- removed impossible outcomes
1.0.4:
- added performance metrics (r^2, mse, rms)
1.0.3:
- resolved nopython mode for mean, median, stdev, variance
1.0.2:
- snapped (removed) majority of unneeded imports
- forced object mode (bad) on all jit
- TODO: stop numba complaining about not being able to compile in nopython mode
1.0.1:
- removed from sklearn import * to resolve unneeded wildcard imports
1.0.0:
- removed c_entities,nc_entities,obstacles,objectives from __all__
- applied numba.jit to all functions
- deprecated and removed stdev_z_split
- cleaned up histo_analysis to include numpy and numba.jit optimizations
- deprecated and removed all regression functions in favor of future pytorch optimizer
- deprecated and removed all nonessential functions (basic_analysis, benchmark, strip_data)
- optimized z_normalize using sklearn.preprocessing.normalize
- TODO: implement kernel/function based pytorch regression optimizer
0.9.0:
- refactored
- numpyed everything
- removed stats in favor of numpy functions
0.8.5:
- minor fixes
0.8.4:
- removed a few unused dependencies
0.8.3:
- added p_value function
0.8.2:
- updated __all__ correctly to contain changes made in v 0.8.0 and v 0.8.1
0.8.1:
- refactors
- bugfixes
0.8.0:
- deprecated histo_analysis_old
- deprecated debug
- altered basic_analysis to take array data instead of filepath
- refactor
- optimization
0.7.2:
- bug fixes
0.7.1:
- bug fixes
0.7.0:
- added tanh_regression (logistical regression)
- bug fixes
0.6.5:
- added z_normalize function to normalize dataset
- bug fixes
0.6.4:
- bug fixes
0.6.3:
- bug fixes
0.6.2:
- bug fixes
0.6.1:
- corrected __all__ to contain all of the functions
0.6.0:
- added calc_overfit, which calculates two measures of overfit, error and performance
- added calculating overfit to optimize_regression
0.5.0:
- added optimize_regression function, which is a sample function to find the optimal regressions
- optimize_regression function filters out some overfit functions (functions with r^2 = 1)
- planned addition: overfit detection in the optimize_regression function
0.4.2:
- added __changelog__
- updated debug function with log and exponential regressions
0.4.1:
- added log regressions
- added exponential regressions
- added log_regression and exp_regression to __all__
0.3.8:
- added debug function to further consolidate functions
0.3.7:
- added builtin benchmark function
- added builtin random (linear) data generation function
- added device initialization (_init_device)
0.3.6:
- reorganized the imports list to be in alphabetical order
- added search and regurgitate functions to c_entities, nc_entities, obstacles, objectives
0.3.5:
- major bug fixes
- updated historical analysis
- deprecated old historical analysis
0.3.4:
- added __version__, __author__, __all__
- added polynomial regression
- added root mean squared function
- added r squared function
0.3.3:
- bug fixes
- added c_entities
0.3.2:
- bug fixes
- added nc_entities, obstacles, objectives
- consolidated statistics.py to analysis.py
0.3.1:
- compiled 1d, column, and row basic stats into basic stats function
0.3.0:
- added historical analysis function
0.2.x:
- added z score test
0.1.x:
- major bug fixes
0.0.x:
- added loading csv
- added 1d, column, row basic stats
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
)
__all__ = [
'load_csv',
'basic_stats',
'z_score',
'z_normalize',
'histo_analysis',
'regression',
'Metric',
'kmeans',
'pca',
'decisiontree',
# all statistics functions left out due to integration in other functions
]
# now back to your regularly scheduled programming:
# imports (now in alphabetical order! v 0.3.006):
import csv
from tra_analysis.metrics import elo as Elo
from tra_analysis.metrics import glicko2 as Glicko2
import math
import numpy as np
import scipy
from scipy import optimize, stats
import sklearn
from sklearn import preprocessing, pipeline, linear_model, metrics, cluster, decomposition, tree, neighbors, naive_bayes, svm, model_selection, ensemble
from tra_analysis.metrics import trueskill as Trueskill
import warnings
# import submodules
from .Array import Array
from .ClassificationMetric import ClassificationMetric
from .CorrelationTest_obj import CorrelationTest
from .KNN_obj import KNN
from .NaiveBayes_obj import NaiveBayes
from .RandomForest_obj import RandomForest
from .RegressionMetric import RegressionMetric
from .Sort_obj import Sort
from .StatisticalTest_obj import StatisticalTest
from . import SVM
class error(ValueError):
pass
def load_csv(filepath):
with open(filepath, newline='') as csvfile:
file_array = np.array(list(csv.reader(csvfile)))
csvfile.close()
return file_array
# expects 1d array
def basic_stats(data):
data_t = np.array(data).astype(float)
_mean = mean(data_t)
_median = median(data_t)
_stdev = stdev(data_t)
_variance = variance(data_t)
_min = npmin(data_t)
_max = npmax(data_t)
return {"mean": _mean, "median": _median, "standard-deviation": _stdev, "variance": _variance, "minimum": _min, "maximum": _max}
# returns z score with inputs of point, mean and standard deviation of spread
def z_score(point, mean, stdev):
score = (point - mean) / stdev
return score
# expects 2d array, normalizes across all axes
def z_normalize(array, *args):
array = np.array(array)
for arg in args:
array = sklearn.preprocessing.normalize(array, axis = arg)
return array
# expects 2d array of [x,y]
def histo_analysis(hist_data):
if len(hist_data[0]) > 2:
hist_data = np.array(hist_data)
t = np.diff(hist_data)
derivative = t[1] / t[0]
return {"mean": basic_stats(derivative)["mean"], "deviation": basic_stats(derivative)["standard-deviation"]}
else:
return None
def regression(inputs, outputs, args): # inputs, outputs expects N-D array
X = np.array(inputs)
y = np.array(outputs)
regressions = {}
if 'lin' in args: # formula: ax + b
try:
def lin(x, a, b):
return a * x + b
popt, pcov = scipy.optimize.curve_fit(lin, X, y)
coeffs = popt.flatten().tolist()
regressions["lin"] = (str(coeffs[0]) + "*x+" + str(coeffs[1]))
except Exception as e:
pass
if 'log' in args: # formula: a log (b(x + c)) + d
try:
def log(x, a, b, c, d):
return a * np.log(b*(x + c)) + d
popt, pcov = scipy.optimize.curve_fit(log, X, y)
coeffs = popt.flatten().tolist()
regressions["log"] = (str(coeffs[0]) + "*log(" + str(coeffs[1]) + "*(x+" + str(coeffs[2]) + "))+" + str(coeffs[3]))
except Exception as e:
pass
if 'exp' in args: # formula: a e ^ (b(x + c)) + d
try:
def exp(x, a, b, c, d):
return a * np.exp(b*(x + c)) + d
popt, pcov = scipy.optimize.curve_fit(exp, X, y)
coeffs = popt.flatten().tolist()
regressions["exp"] = (str(coeffs[0]) + "*e^(" + str(coeffs[1]) + "*(x+" + str(coeffs[2]) + "))+" + str(coeffs[3]))
except Exception as e:
pass
if 'ply' in args: # formula: a + bx^1 + cx^2 + dx^3 + ...
inputs = np.array([inputs])
outputs = np.array([outputs])
plys = {}
limit = len(outputs[0])
for i in range(2, limit):
model = sklearn.preprocessing.PolynomialFeatures(degree = i)
model = sklearn.pipeline.make_pipeline(model, sklearn.linear_model.LinearRegression())
model = model.fit(np.rot90(inputs), np.rot90(outputs))
params = model.steps[1][1].intercept_.tolist()
params = np.append(params, model.steps[1][1].coef_[0].tolist()[1::])
params = params.flatten().tolist()
temp = ""
counter = 0
for param in params:
temp += "(" + str(param) + "*x^" + str(counter) + ")"
counter += 1
plys["x^" + str(i)] = (temp)
regressions["ply"] = (plys)
if 'sig' in args: # formula: a tanh (b(x + c)) + d
try:
def sig(x, a, b, c, d):
return a * np.tanh(b*(x + c)) + d
popt, pcov = scipy.optimize.curve_fit(sig, X, y)
coeffs = popt.flatten().tolist()
regressions["sig"] = (str(coeffs[0]) + "*tanh(" + str(coeffs[1]) + "*(x+" + str(coeffs[2]) + "))+" + str(coeffs[3]))
except Exception as e:
pass
return regressions
class Metric:
def elo(self, starting_score, opposing_score, observed, N, K):
return Elo.calculate(starting_score, opposing_score, observed, N, K)
def glicko2(self, starting_score, starting_rd, starting_vol, opposing_score, opposing_rd, observations):
player = Glicko2.Glicko2(rating = starting_score, rd = starting_rd, vol = starting_vol)
player.update_player([x for x in opposing_score], [x for x in opposing_rd], observations)
return (player.rating, player.rd, player.vol)
def trueskill(self, teams_data, observations): # teams_data is array of array of tuples ie. [[(mu, sigma), (mu, sigma), (mu, sigma)], [(mu, sigma), (mu, sigma), (mu, sigma)]]
team_ratings = []
for team in teams_data:
team_temp = ()
for player in team:
player = Trueskill.Rating(player[0], player[1])
team_temp = team_temp + (player,)
team_ratings.append(team_temp)
return Trueskill.rate(team_ratings, ranks=observations)
def mean(data):
return np.mean(data)
def median(data):
return np.median(data)
def stdev(data):
return np.std(data)
def variance(data):
return np.var(data)
def npmin(data):
return np.amin(data)
def npmax(data):
return np.amax(data)
def kmeans(data, n_clusters=8, init="k-means++", n_init=10, max_iter=300, tol=0.0001, precompute_distances="auto", verbose=0, random_state=None, copy_x=True, n_jobs=None, algorithm="auto"):
kernel = sklearn.cluster.KMeans(n_clusters = n_clusters, init = init, n_init = n_init, max_iter = max_iter, tol = tol, precompute_distances = precompute_distances, verbose = verbose, random_state = random_state, copy_x = copy_x, n_jobs = n_jobs, algorithm = algorithm)
kernel.fit(data)
predictions = kernel.predict(data)
centers = kernel.cluster_centers_
return centers, predictions
def pca(data, n_components = None, copy = True, whiten = False, svd_solver = "auto", tol = 0.0, iterated_power = "auto", random_state = None):
kernel = sklearn.decomposition.PCA(n_components = n_components, copy = copy, whiten = whiten, svd_solver = svd_solver, tol = tol, iterated_power = iterated_power, random_state = random_state)
return kernel.fit_transform(data)
def decisiontree(data, labels, test_size = 0.3, criterion = "gini", splitter = "best", max_depth = None): # expects 2d data and 1d labels; sklearn only accepts "best" or "random" for splitter
data_train, data_test, labels_train, labels_test = sklearn.model_selection.train_test_split(data, labels, test_size=test_size, random_state=1)
model = sklearn.tree.DecisionTreeClassifier(criterion = criterion, splitter = splitter, max_depth = max_depth)
model = model.fit(data_train,labels_train)
predictions = model.predict(data_test)
metrics = ClassificationMetric(predictions, labels_test)
return model, metrics


@@ -0,0 +1,164 @@
# Titan Robotics Team 2022: Array submodule
# Written by Arthur Lu
# Notes:
# this should be imported as a python module using 'from tra_analysis import Array'
# setup:
__version__ = "1.0.3"
__changelog__ = """changelog:
1.0.3:
- fixed __all__
1.0.2:
- fixed several implementation bugs with magic methods
1.0.1:
- removed search and __search functions
1.0.0:
- ported analysis.Array() here
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
)
__all__ = [
"Array",
]
import numpy as np
import warnings
class Array(): # tests on nd arrays independent of basic_stats
def __init__(self, narray):
self.array = np.array(narray)
def __str__(self):
return str(self.array)
def __repr__(self):
return str(self.array)
def elementwise_mean(self, axis = 0): # expects arrays that are size normalized
return np.mean(self.array, axis = axis)
def elementwise_median(self, axis = 0):
return np.median(self.array, axis = axis)
def elementwise_stdev(self, axis = 0):
return np.std(self.array, axis = axis)
def elementwise_variance(self, axis = 0):
return np.var(self.array, axis = axis)
def elementwise_npmin(self, axis = 0):
return np.amin(self.array, axis = axis)
def elementwise_npmax(self, axis = 0):
return np.amax(self.array, axis = axis)
def elementwise_stats(self, axis = 0):
_mean = self.elementwise_mean(axis = axis)
_median = self.elementwise_median(axis = axis)
_stdev = self.elementwise_stdev(axis = axis)
_variance = self.elementwise_variance(axis = axis)
_min = self.elementwise_npmin(axis = axis)
_max = self.elementwise_npmax(axis = axis)
return _mean, _median, _stdev, _variance, _min, _max
def __getitem__(self, key):
return self.array[key]
def __setitem__(self, key, value):
self.array[key] = value
def __len__(self):
return len(self.array)
def normalize(self):
a = np.atleast_1d(np.linalg.norm(self.array))
a[a==0] = 1
return Array(self.array / np.expand_dims(a, -1))
def __add__(self, other):
return Array(self.array + other.array)
def __sub__(self, other):
return Array(self.array - other.array)
def __neg__(self):
return Array(-self.array)
def __abs__(self):
return Array(abs(self.array))
def __invert__(self):
return Array(1/self.array)
def __mul__(self, other):
if(isinstance(other, Array)):
return Array(self.array.dot(other.array))
elif(isinstance(other, (int, float))):
return Array(other * self.array)
else:
raise Exception("unsupported multiplication between Array and " + str(type(other)))
def __rmul__(self, other):
return self.__mul__(other)
def cross(self, other):
return np.cross(self.array, other.array)
def transpose(self):
return Array(np.transpose(self.array))
def sort(self, array): # deprecated
warnings.warn("Array.sort has been deprecated in favor of Sort")
array_length = len(array)
if array_length <= 1:
return array
middle_index = int(array_length / 2)
left = array[0:middle_index]
right = array[middle_index:]
left = self.sort(left)
right = self.sort(right)
return self.__merge(left, right)
def __merge(self, left, right):
sorted_list = []
left = left[:]
right = right[:]
while len(left) > 0 or len(right) > 0:
if len(left) > 0 and len(right) > 0:
if left[0] <= right[0]:
sorted_list.append(left.pop(0))
else:
sorted_list.append(right.pop(0))
elif len(left) > 0:
sorted_list.append(left.pop(0))
elif len(right) > 0:
sorted_list.append(right.pop(0))
return sorted_list
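The elementwise helpers above are thin wrappers over numpy reductions; a minimal standalone sketch of the same computation (plain numpy, sample values illustrative, no tra_analysis import assumed):

```python
import numpy as np

# Two size-normalized sample rows; axis=0 reduces across rows,
# mirroring Array.elementwise_mean / elementwise_stdev above.
data = np.array([[1.0, 2.0, 3.0],
                 [3.0, 4.0, 5.0]])

mean = np.mean(data, axis=0)    # [2. 3. 4.]
stdev = np.std(data, axis=0)    # [1. 1. 1.]
```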

View File

@@ -0,0 +1,39 @@
# Titan Robotics Team 2022: ClassificationMetric submodule
# Written by Arthur Lu
# Notes:
# this should be imported as a python module using 'from tra_analysis import ClassificationMetric'
# setup:
__version__ = "1.0.1"
__changelog__ = """changelog:
1.0.1:
- fixed __all__
1.0.0:
- ported analysis.ClassificationMetric() here
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
)
__all__ = [
"ClassificationMetric",
]
import sklearn
from sklearn import metrics
class ClassificationMetric():
def __new__(cls, predictions, targets):
return cls.cm(cls, predictions, targets), cls.cr(cls, predictions, targets)
def cm(self, predictions, targets):
return sklearn.metrics.confusion_matrix(targets, predictions)
def cr(self, predictions, targets):
return sklearn.metrics.classification_report(targets, predictions)

View File

@@ -0,0 +1,67 @@
# Titan Robotics Team 2022: CorrelationTest submodule
# Written by Arthur Lu
# Notes:
# this should be imported as a python module using 'from tra_analysis import CorrelationTest'
# setup:
__version__ = "1.0.1"
__changelog__ = """changelog:
1.0.1:
- fixed __all__
1.0.0:
- ported analysis.CorrelationTest() here
- removed classness
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
)
__all__ = [
"anova_oneway",
"pearson",
"spearman",
"point_biserial",
"kendall",
"kendall_weighted",
"mgc",
]
import scipy
from scipy import stats
def anova_oneway(*args): #expects arrays of samples
results = scipy.stats.f_oneway(*args)
return {"f-value": results[0], "p-value": results[1]}
def pearson(x, y):
results = scipy.stats.pearsonr(x, y)
return {"r-value": results[0], "p-value": results[1]}
def spearman(a, b = None, axis = 0, nan_policy = 'propagate'):
results = scipy.stats.spearmanr(a, b = b, axis = axis, nan_policy = nan_policy)
return {"r-value": results[0], "p-value": results[1]}
def point_biserial(x, y):
results = scipy.stats.pointbiserialr(x, y)
return {"r-value": results[0], "p-value": results[1]}
def kendall(x, y, initial_lexsort = None, nan_policy = 'propagate', method = 'auto'):
results = scipy.stats.kendalltau(x, y, initial_lexsort = initial_lexsort, nan_policy = nan_policy, method = method)
return {"tau": results[0], "p-value": results[1]}
def kendall_weighted(x, y, rank = True, weigher = None, additive = True):
results = scipy.stats.weightedtau(x, y, rank = rank, weigher = weigher, additive = additive)
return {"tau": results[0], "p-value": results[1]}
def mgc(x, y, compute_distance = None, reps = 1000, workers = 1, is_twosamp = False, random_state = None):
results = scipy.stats.multiscale_graphcorr(x, y, compute_distance = compute_distance, reps = reps, workers = workers, is_twosamp = is_twosamp, random_state = random_state)
return {"k-value": results[0], "p-value": results[1], "data": results[2]} # results[0] is the MGC test statistic
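Each function here simply repackages a scipy.stats result tuple into a labeled dict; a minimal sketch of the same pattern using `pearsonr` (sample data illustrative):

```python
from scipy import stats

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 6.0, 8.0, 10.0]  # perfectly linear in x

# Same repackaging as CorrelationTest.pearson above.
r, p = stats.pearsonr(x, y)
result = {"r-value": r, "p-value": p}
```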

View File

@@ -0,0 +1,41 @@
# Only included for backwards compatibility! Do not update, CorrelationTest is preferred and supported.
import scipy
from scipy import stats
class CorrelationTest:
def anova_oneway(self, *args): #expects arrays of samples
results = scipy.stats.f_oneway(*args)
return {"f-value": results[0], "p-value": results[1]}
def pearson(self, x, y):
results = scipy.stats.pearsonr(x, y)
return {"r-value": results[0], "p-value": results[1]}
def spearman(self, a, b = None, axis = 0, nan_policy = 'propagate'):
results = scipy.stats.spearmanr(a, b = b, axis = axis, nan_policy = nan_policy)
return {"r-value": results[0], "p-value": results[1]}
def point_biserial(self, x,y):
results = scipy.stats.pointbiserialr(x, y)
return {"r-value": results[0], "p-value": results[1]}
def kendall(self, x, y, initial_lexsort = None, nan_policy = 'propagate', method = 'auto'):
results = scipy.stats.kendalltau(x, y, initial_lexsort = initial_lexsort, nan_policy = nan_policy, method = method)
return {"tau": results[0], "p-value": results[1]}
def kendall_weighted(self, x, y, rank = True, weigher = None, additive = True):
results = scipy.stats.weightedtau(x, y, rank = rank, weigher = weigher, additive = additive)
return {"tau": results[0], "p-value": results[1]}
def mgc(self, x, y, compute_distance = None, reps = 1000, workers = 1, is_twosamp = False, random_state = None):
results = scipy.stats.multiscale_graphcorr(x, y, compute_distance = compute_distance, reps = reps, workers = workers, is_twosamp = is_twosamp, random_state = random_state)
return {"k-value": results[0], "p-value": results[1], "data": results[2]} # unsure if MGC test returns a k value

View File

@@ -0,0 +1,87 @@
# Titan Robotics Team 2022: CPU fitting models
# Written by Dev Singh
# Notes:
# this module is intended to be cuda-optimized via cupy (not yet implemented) and vectorized (except for one small part)
# setup:
__version__ = "0.0.2"
# changelog should be viewed using print(analysis.fits.__changelog__)
__changelog__ = """changelog:
0.0.2:
- renamed module to Fit
0.0.1:
- initial release, add circle fitting with LSC
"""
__author__ = (
"Dev Singh <dev@devksingh.com>"
)
__all__ = [
'CircleFit'
]
import numpy as np
class CircleFit:
"""Class to fit data to a circle using the Least Square Circle (LSC) method"""
# For more information on the LSC method, see:
# http://www.dtcenter.org/sites/default/files/community-code/met/docs/write-ups/circle_fit.pdf
def __init__(self, x, y, xy=None):
self.ournp = np #todo: implement cupy correctly
if type(x) == list:
x = np.array(x)
if type(y) == list:
y = np.array(y)
if type(xy) == list:
xy = np.array(xy)
if xy is not None:
self.coords = xy
else:
# following block combines x and y into one array if not already done
self.coords = self.ournp.vstack(([x.T], [y.T])).T
def calc_R(self, x, y, xc, yc):
"""Returns distance between center and point"""
return self.ournp.sqrt((x-xc)**2 + (y-yc)**2)
def f(self, c, x, y):
"""Returns distance between point and circle at c"""
Ri = self.calc_R(x, y, *c)
return Ri - Ri.mean()
def LSC(self):
"""Fits given data to a circle and returns the center, radius, and variance"""
x = self.coords[:, 0]
y = self.coords[:, 1]
# guessing at a center
x_m = self.ournp.mean(x)
y_m = self.ournp.mean(y)
# calculation of the reduced coordinates
u = x - x_m
v = y - y_m
# linear system defining the center (uc, vc) in reduced coordinates:
# Suu * uc + Suv * vc = (Suuu + Suvv)/2
# Suv * uc + Svv * vc = (Suuv + Svvv)/2
Suv = self.ournp.sum(u*v)
Suu = self.ournp.sum(u**2)
Svv = self.ournp.sum(v**2)
Suuv = self.ournp.sum(u**2 * v)
Suvv = self.ournp.sum(u * v**2)
Suuu = self.ournp.sum(u**3)
Svvv = self.ournp.sum(v**3)
# Solving the linear system
A = self.ournp.array([ [ Suu, Suv ], [Suv, Svv]])
B = self.ournp.array([ Suuu + Suvv, Svvv + Suuv ])/2.0
uc, vc = self.ournp.linalg.solve(A, B)
xc_1 = x_m + uc
yc_1 = y_m + vc
# Calculate the distances from center (xc_1, yc_1)
Ri_1 = self.ournp.sqrt((x-xc_1)**2 + (y-yc_1)**2)
R_1 = self.ournp.mean(Ri_1)
# calculate residual error
residu_1 = self.ournp.sum((Ri_1-R_1)**2)
return (xc_1, yc_1, R_1, residu_1)
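As a sanity check of the LSC linear system above, the same algebra can be run standalone on points sampled exactly on a known circle (numpy only; circle parameters illustrative), and it should recover the center and radius:

```python
import numpy as np

# Points sampled exactly on a circle of radius 2 centered at (1, -1).
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
x = 1 + 2 * np.cos(theta)
y = -1 + 2 * np.sin(theta)

# Reduced coordinates and the same linear system as LSC above.
u, v = x - x.mean(), y - y.mean()
Suu, Svv, Suv = (u**2).sum(), (v**2).sum(), (u*v).sum()
Suuu, Svvv = (u**3).sum(), (v**3).sum()
Suuv, Suvv = (u**2 * v).sum(), (u * v**2).sum()
A = np.array([[Suu, Suv], [Suv, Svv]])
B = np.array([Suuu + Suvv, Svvv + Suuv]) / 2.0
uc, vc = np.linalg.solve(A, B)
xc, yc = x.mean() + uc, y.mean() + vc
R = np.sqrt((x - xc)**2 + (y - yc)**2).mean()
```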

View File

@@ -0,0 +1,45 @@
# Titan Robotics Team 2022: KNN submodule
# Written by Arthur Lu
# Notes:
# this should be imported as a python module using 'from tra_analysis import KNN'
# setup:
__version__ = "1.0.0"
__changelog__ = """changelog:
1.0.0:
- ported analysis.KNN() here
- removed classness
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
"James Pan <zpan@imsa.edu>"
)
__all__ = [
'knn_classifier',
'knn_regressor'
]
import sklearn
from sklearn import model_selection, neighbors
from . import ClassificationMetric, RegressionMetric
def knn_classifier(data, labels, n_neighbors = 5, test_size = 0.3, algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, p=2, weights='uniform'): #expects *2d data and 1d labels post-scaling
data_train, data_test, labels_train, labels_test = sklearn.model_selection.train_test_split(data, labels, test_size=test_size, random_state=1)
model = sklearn.neighbors.KNeighborsClassifier(n_neighbors = n_neighbors, weights = weights, algorithm = algorithm, leaf_size = leaf_size, p = p, metric = metric, metric_params = metric_params, n_jobs = n_jobs)
model.fit(data_train, labels_train)
predictions = model.predict(data_test)
return model, ClassificationMetric(predictions, labels_test)
def knn_regressor(data, outputs, n_neighbors = 5, test_size = 0.3, weights = "uniform", algorithm = "auto", leaf_size = 30, p = 2, metric = "minkowski", metric_params = None, n_jobs = None):
data_train, data_test, outputs_train, outputs_test = sklearn.model_selection.train_test_split(data, outputs, test_size=test_size, random_state=1)
model = sklearn.neighbors.KNeighborsRegressor(n_neighbors = n_neighbors, weights = weights, algorithm = algorithm, leaf_size = leaf_size, p = p, metric = metric, metric_params = metric_params, n_jobs = n_jobs)
model.fit(data_train, outputs_train)
predictions = model.predict(data_test)
return model, RegressionMetric.RegressionMetric(predictions, outputs_test)
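The split-fit-predict flow used by `knn_classifier` can be sketched end to end on synthetic data (cluster centers and sizes are illustrative, not from the original):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Two well-separated 2-D clusters as a synthetic stand-in for real data.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)

# Same split-fit-predict flow as knn_classifier above.
d_train, d_test, l_train, l_test = train_test_split(data, labels, test_size=0.3, random_state=1)
model = KNeighborsClassifier(n_neighbors=5).fit(d_train, l_train)
accuracy = model.score(d_test, l_test)  # clusters are well separated, so near 1.0
```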

View File

@@ -0,0 +1,25 @@
# Only included for backwards compatibility! Do not update, KNN is preferred and supported.
import sklearn
from sklearn import model_selection, neighbors
from . import ClassificationMetric, RegressionMetric
class KNN:
def knn_classifier(self, data, labels, n_neighbors, test_size = 0.3, algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, p=2, weights='uniform'): #expects *2d data and 1d labels post-scaling
data_train, data_test, labels_train, labels_test = sklearn.model_selection.train_test_split(data, labels, test_size=test_size, random_state=1)
model = sklearn.neighbors.KNeighborsClassifier()
model.fit(data_train, labels_train)
predictions = model.predict(data_test)
return model, ClassificationMetric(predictions, labels_test)
def knn_regressor(self, data, outputs, n_neighbors, test_size = 0.3, weights = "uniform", algorithm = "auto", leaf_size = 30, p = 2, metric = "minkowski", metric_params = None, n_jobs = None):
data_train, data_test, outputs_train, outputs_test = sklearn.model_selection.train_test_split(data, outputs, test_size=test_size, random_state=1)
model = sklearn.neighbors.KNeighborsRegressor(n_neighbors = n_neighbors, weights = weights, algorithm = algorithm, leaf_size = leaf_size, p = p, metric = metric, metric_params = metric_params, n_jobs = n_jobs)
model.fit(data_train, outputs_train)
predictions = model.predict(data_test)
return model, RegressionMetric(predictions, outputs_test)

View File

@@ -0,0 +1,64 @@
# Titan Robotics Team 2022: NaiveBayes submodule
# Written by Arthur Lu
# Notes:
# this should be imported as a python module using 'from tra_analysis import NaiveBayes'
# setup:
__version__ = "1.0.0"
__changelog__ = """changelog:
1.0.0:
- ported analysis.NaiveBayes() here
- removed classness
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
)
__all__ = [
'gaussian',
'multinomial',
'bernoulli',
'complement'
]
import sklearn
from sklearn import model_selection, naive_bayes
from . import ClassificationMetric, RegressionMetric
def gaussian(data, labels, test_size = 0.3, priors = None, var_smoothing = 1e-09):
data_train, data_test, labels_train, labels_test = sklearn.model_selection.train_test_split(data, labels, test_size=test_size, random_state=1)
model = sklearn.naive_bayes.GaussianNB(priors = priors, var_smoothing = var_smoothing)
model.fit(data_train, labels_train)
predictions = model.predict(data_test)
return model, ClassificationMetric(predictions, labels_test)
def multinomial(data, labels, test_size = 0.3, alpha=1.0, fit_prior=True, class_prior=None):
data_train, data_test, labels_train, labels_test = sklearn.model_selection.train_test_split(data, labels, test_size=test_size, random_state=1)
model = sklearn.naive_bayes.MultinomialNB(alpha = alpha, fit_prior = fit_prior, class_prior = class_prior)
model.fit(data_train, labels_train)
predictions = model.predict(data_test)
return model, ClassificationMetric(predictions, labels_test)
def bernoulli(data, labels, test_size = 0.3, alpha=1.0, binarize=0.0, fit_prior=True, class_prior=None):
data_train, data_test, labels_train, labels_test = sklearn.model_selection.train_test_split(data, labels, test_size=test_size, random_state=1)
model = sklearn.naive_bayes.BernoulliNB(alpha = alpha, binarize = binarize, fit_prior = fit_prior, class_prior = class_prior)
model.fit(data_train, labels_train)
predictions = model.predict(data_test)
return model, ClassificationMetric(predictions, labels_test)
def complement(data, labels, test_size = 0.3, alpha=1.0, fit_prior=True, class_prior=None, norm=False):
data_train, data_test, labels_train, labels_test = sklearn.model_selection.train_test_split(data, labels, test_size=test_size, random_state=1)
model = sklearn.naive_bayes.ComplementNB(alpha = alpha, fit_prior = fit_prior, class_prior = class_prior, norm = norm)
model.fit(data_train, labels_train)
predictions = model.predict(data_test)
return model, ClassificationMetric(predictions, labels_test)

View File

@@ -0,0 +1,43 @@
# Only included for backwards compatibility! Do not update, NaiveBayes is preferred and supported.
import sklearn
from sklearn import model_selection, naive_bayes
from . import ClassificationMetric, RegressionMetric
class NaiveBayes:
def guassian(self, data, labels, test_size = 0.3, priors = None, var_smoothing = 1e-09):
data_train, data_test, labels_train, labels_test = sklearn.model_selection.train_test_split(data, labels, test_size=test_size, random_state=1)
model = sklearn.naive_bayes.GaussianNB(priors = priors, var_smoothing = var_smoothing)
model.fit(data_train, labels_train)
predictions = model.predict(data_test)
return model, ClassificationMetric(predictions, labels_test)
def multinomial(self, data, labels, test_size = 0.3, alpha=1.0, fit_prior=True, class_prior=None):
data_train, data_test, labels_train, labels_test = sklearn.model_selection.train_test_split(data, labels, test_size=test_size, random_state=1)
model = sklearn.naive_bayes.MultinomialNB(alpha = alpha, fit_prior = fit_prior, class_prior = class_prior)
model.fit(data_train, labels_train)
predictions = model.predict(data_test)
return model, ClassificationMetric(predictions, labels_test)
def bernoulli(self, data, labels, test_size = 0.3, alpha=1.0, binarize=0.0, fit_prior=True, class_prior=None):
data_train, data_test, labels_train, labels_test = sklearn.model_selection.train_test_split(data, labels, test_size=test_size, random_state=1)
model = sklearn.naive_bayes.BernoulliNB(alpha = alpha, binarize = binarize, fit_prior = fit_prior, class_prior = class_prior)
model.fit(data_train, labels_train)
predictions = model.predict(data_test)
return model, ClassificationMetric(predictions, labels_test)
def complement(self, data, labels, test_size = 0.3, alpha=1.0, fit_prior=True, class_prior=None, norm=False):
data_train, data_test, labels_train, labels_test = sklearn.model_selection.train_test_split(data, labels, test_size=test_size, random_state=1)
model = sklearn.naive_bayes.ComplementNB(alpha = alpha, fit_prior = fit_prior, class_prior = class_prior, norm = norm)
model.fit(data_train, labels_train)
predictions = model.predict(data_test)
return model, ClassificationMetric(predictions, labels_test)

View File

@@ -0,0 +1,46 @@
# Titan Robotics Team 2022: RandomForest submodule
# Written by Arthur Lu
# Notes:
# this should be imported as a python module using 'from tra_analysis import RandomForest'
# setup:
__version__ = "1.0.1"
__changelog__ = """changelog:
1.0.1:
- fixed __all__
1.0.0:
- ported analysis.RandomForest() here
- removed classness
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
)
__all__ = [
"random_forest_classifier",
"random_forest_regressor",
]
import sklearn
from sklearn import ensemble, model_selection
from . import ClassificationMetric, RegressionMetric
def random_forest_classifier(data, labels, test_size, n_estimators, criterion="gini", max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features="auto", max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None):
data_train, data_test, labels_train, labels_test = sklearn.model_selection.train_test_split(data, labels, test_size=test_size, random_state=1)
kernel = sklearn.ensemble.RandomForestClassifier(n_estimators = n_estimators, criterion = criterion, max_depth = max_depth, min_samples_split = min_samples_split, min_samples_leaf = min_samples_leaf, min_weight_fraction_leaf = min_weight_fraction_leaf, max_features = max_features, max_leaf_nodes = max_leaf_nodes, min_impurity_decrease = min_impurity_decrease, bootstrap = bootstrap, oob_score = oob_score, n_jobs = n_jobs, random_state = random_state, verbose = verbose, warm_start = warm_start, class_weight = class_weight)
kernel.fit(data_train, labels_train)
predictions = kernel.predict(data_test)
return kernel, ClassificationMetric(predictions, labels_test)
def random_forest_regressor(data, outputs, test_size, n_estimators, criterion="mse", max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features="auto", max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False):
data_train, data_test, outputs_train, outputs_test = sklearn.model_selection.train_test_split(data, outputs, test_size=test_size, random_state=1)
kernel = sklearn.ensemble.RandomForestRegressor(n_estimators = n_estimators, criterion = criterion, max_depth = max_depth, min_samples_split = min_samples_split, min_samples_leaf = min_samples_leaf, min_weight_fraction_leaf = min_weight_fraction_leaf, max_features = max_features, max_leaf_nodes = max_leaf_nodes, min_impurity_decrease = min_impurity_decrease, min_impurity_split = min_impurity_split, bootstrap = bootstrap, oob_score = oob_score, n_jobs = n_jobs, random_state = random_state, verbose = verbose, warm_start = warm_start)
kernel.fit(data_train, outputs_train)
predictions = kernel.predict(data_test)
return kernel, RegressionMetric.RegressionMetric(predictions, outputs_test)

View File

@@ -0,0 +1,25 @@
# Only included for backwards compatibility! Do not update, RandomForest is preferred and supported.
import sklearn
from sklearn import ensemble, model_selection
from . import ClassificationMetric, RegressionMetric
class RandomForest:
def random_forest_classifier(self, data, labels, test_size, n_estimators, criterion="gini", max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features="auto", max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None):
data_train, data_test, labels_train, labels_test = sklearn.model_selection.train_test_split(data, labels, test_size=test_size, random_state=1)
kernel = sklearn.ensemble.RandomForestClassifier(n_estimators = n_estimators, criterion = criterion, max_depth = max_depth, min_samples_split = min_samples_split, min_samples_leaf = min_samples_leaf, min_weight_fraction_leaf = min_weight_fraction_leaf, max_leaf_nodes = max_leaf_nodes, min_impurity_decrease = min_impurity_decrease, bootstrap = bootstrap, oob_score = oob_score, n_jobs = n_jobs, random_state = random_state, verbose = verbose, warm_start = warm_start, class_weight = class_weight)
kernel.fit(data_train, labels_train)
predictions = kernel.predict(data_test)
return kernel, ClassificationMetric(predictions, labels_test)
def random_forest_regressor(self, data, outputs, test_size, n_estimators, criterion="mse", max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features="auto", max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False):
data_train, data_test, outputs_train, outputs_test = sklearn.model_selection.train_test_split(data, outputs, test_size=test_size, random_state=1)
kernel = sklearn.ensemble.RandomForestRegressor(n_estimators = n_estimators, criterion = criterion, max_depth = max_depth, min_samples_split = min_samples_split, min_weight_fraction_leaf = min_weight_fraction_leaf, max_features = max_features, max_leaf_nodes = max_leaf_nodes, min_impurity_decrease = min_impurity_decrease, min_impurity_split = min_impurity_split, bootstrap = bootstrap, oob_score = oob_score, n_jobs = n_jobs, random_state = random_state, verbose = verbose, warm_start = warm_start)
kernel.fit(data_train, outputs_train)
predictions = kernel.predict(data_test)
return kernel, RegressionMetric(predictions, outputs_test)

View File

@@ -0,0 +1,42 @@
# Titan Robotics Team 2022: RegressionMetric submodule
# Written by Arthur Lu
# Notes:
# this should be imported as a python module using 'from tra_analysis import RegressionMetric'
# setup:
__version__ = "1.0.0"
__changelog__ = """changelog:
1.0.0:
- ported analysis.RegressionMetric() here
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
)
__all__ = [
'RegressionMetric'
]
import numpy as np
import sklearn
from sklearn import metrics
class RegressionMetric():
def __new__(cls, predictions, targets):
return cls.r_squared(cls, predictions, targets), cls.mse(cls, predictions, targets), cls.rms(cls, predictions, targets)
def r_squared(self, predictions, targets): # assumes equal size inputs
return sklearn.metrics.r2_score(targets, predictions)
def mse(self, predictions, targets):
return sklearn.metrics.mean_squared_error(targets, predictions)
def rms(self, predictions, targets):
return np.sqrt(sklearn.metrics.mean_squared_error(targets, predictions))
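The three metrics above reduce to short closed-form expressions; a numpy-only sketch (sample values illustrative) that matches what the sklearn calls compute:

```python
import numpy as np

predictions = np.array([2.0, 4.0, 6.0])
targets = np.array([2.0, 4.0, 8.0])

# mse and rms, as in RegressionMetric.mse / RegressionMetric.rms above.
mse = np.mean((targets - predictions) ** 2)
rms = np.sqrt(mse)

# r_squared via the residual / total sum-of-squares definition,
# the same value sklearn.metrics.r2_score returns.
ss_res = np.sum((targets - predictions) ** 2)
ss_tot = np.sum((targets - targets.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
```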

View File

@@ -0,0 +1,88 @@
# Titan Robotics Team 2022: SVM submodule
# Written by Arthur Lu
# Notes:
# this should be imported as a python module using 'from tra_analysis import SVM'
# setup:
__version__ = "1.0.2"
__changelog__ = """changelog:
1.0.2:
- fixed __all__
1.0.1:
- removed unnecessary self calls
- removed classness
1.0.0:
- ported analysis.SVM() here
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
)
__all__ = [
"CustomKernel",
"StandardKernel",
"PrebuiltKernel",
"fit",
"eval_classification",
"eval_regression",
]
import sklearn
from sklearn import svm
from . import ClassificationMetric, RegressionMetric
class CustomKernel:
def __new__(cls, C, kernel, degree, gamma, coef0, shrinking, probability, tol, cache_size, class_weight, verbose, max_iter, decision_function_shape, random_state):
return sklearn.svm.SVC(C = C, kernel = kernel, degree = degree, gamma = gamma, coef0 = coef0, shrinking = shrinking, probability = probability, tol = tol, cache_size = cache_size, class_weight = class_weight, verbose = verbose, max_iter = max_iter, decision_function_shape = decision_function_shape, random_state = random_state)
class StandardKernel:
def __new__(cls, kernel, C=1.0, degree=3, gamma='auto_deprecated', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape='ovr', random_state=None):
return sklearn.svm.SVC(C = C, kernel = kernel, degree = degree, gamma = gamma, coef0 = coef0, shrinking = shrinking, probability = probability, tol = tol, cache_size = cache_size, class_weight = class_weight, verbose = verbose, max_iter = max_iter, decision_function_shape = decision_function_shape, random_state = random_state)
class PrebuiltKernel:
class Linear:
def __new__(cls):
return sklearn.svm.SVC(kernel = 'linear')
class Polynomial:
def __new__(cls, power, r_bias):
return sklearn.svm.SVC(kernel = 'poly', degree = power, coef0 = r_bias)
class RBF:
def __new__(cls, gamma):
return sklearn.svm.SVC(kernel = 'rbf', gamma = gamma)
class Sigmoid:
def __new__(cls, r_bias):
return sklearn.svm.SVC(kernel = 'sigmoid', coef0 = r_bias)
def fit(kernel, train_data, train_outputs): # expects *2d data, 1d labels or outputs
return kernel.fit(train_data, train_outputs)
def eval_classification(kernel, test_data, test_outputs):
predictions = kernel.predict(test_data)
return ClassificationMetric(predictions, test_outputs)
def eval_regression(kernel, test_data, test_outputs):
predictions = kernel.predict(test_data)
return RegressionMetric.RegressionMetric(predictions, test_outputs)

View File

@@ -0,0 +1,424 @@
# Titan Robotics Team 2022: Sort submodule
# Written by Arthur Lu and James Pan
# Notes:
# this should be imported as a python module using 'from tra_analysis import Sort'
# setup:
__version__ = "1.0.1"
__changelog__ = """changelog:
1.0.1:
- fixed __all__
1.0.0:
- ported analysis.Sort() here
- removed classness
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
"James Pan <zpan@imsa.edu>"
)
__all__ = [
"quicksort",
"mergesort",
"introsort",
"heapsort",
"insertionsort",
"timsort",
"selectionsort",
"shellsort",
"bubblesort",
"cyclesort",
"cocktailsort",
]
import numpy as np
def quicksort(a):
def sort(array):
less = []
equal = []
greater = []
if len(array) > 1:
pivot = array[0]
for x in array:
if x < pivot:
less.append(x)
elif x == pivot:
equal.append(x)
elif x > pivot:
greater.append(x)
return sort(less)+equal+sort(greater)
else:
return array
return np.array(sort(a))
def mergesort(a):
def sort(array):
array = array
if len(array) >1:
middle = len(array) // 2
L = array[:middle]
R = array[middle:]
sort(L)
sort(R)
i = j = k = 0
while i < len(L) and j < len(R):
if L[i] < R[j]:
array[k] = L[i]
i+= 1
else:
array[k] = R[j]
j+= 1
k+= 1
while i < len(L):
array[k] = L[i]
i+= 1
k+= 1
while j < len(R):
array[k] = R[j]
j+= 1
k+= 1
return array
return sort(a)
def introsort(a):
def sort(array, start, end, maxdepth):
array = array
if end - start <= 1:
return
elif maxdepth == 0:
heapsort(array, start, end)
else:
p = partition(array, start, end)
sort(array, start, p + 1, maxdepth - 1)
sort(array, p + 1, end, maxdepth - 1)
return array
def partition(array, start, end):
pivot = array[start]
i = start - 1
j = end
while True:
i = i + 1
while array[i] < pivot:
i = i + 1
j = j - 1
while array[j] > pivot:
j = j - 1
if i >= j:
return j
swap(array, i, j)
def swap(array, i, j):
array[i], array[j] = array[j], array[i]
def heapsort(array, start, end):
build_max_heap(array, start, end)
for i in range(end - 1, start, -1):
swap(array, start, i)
max_heapify(array, index=0, start=start, end=i)
def build_max_heap(array, start, end):
def parent(i):
return (i - 1)//2
length = end - start
index = parent(length - 1)
while index >= 0:
max_heapify(array, index, start, end)
index = index - 1
def max_heapify(array, index, start, end):
def left(i):
return 2*i + 1
def right(i):
return 2*i + 2
size = end - start
l = left(index)
r = right(index)
if (l < size and array[start + l] > array[start + index]):
largest = l
else:
largest = index
if (r < size and array[start + r] > array[start + largest]):
largest = r
if largest != index:
swap(array, start + largest, start + index)
max_heapify(array, largest, start, end)
maxdepth = (len(a).bit_length() - 1)*2
return sort(a, 0, len(a), maxdepth)
def heapsort(a):
def sort(array):
array = array
n = len(array)
for i in range(n//2 - 1, -1, -1):
heapify(array, n, i)
for i in range(n-1, 0, -1):
array[i], array[0] = array[0], array[i]
heapify(array, i, 0)
return array
def heapify(array, n, i):
array = array
largest = i
l = 2 * i + 1
r = 2 * i + 2
if l < n and array[i] < array[l]:
largest = l
if r < n and array[largest] < array[r]:
largest = r
if largest != i:
array[i],array[largest] = array[largest],array[i]
heapify(array, n, largest)
return array
return sort(a)
def insertionsort(a):
def sort(array):
array = array
for i in range(1, len(array)):
key = array[i]
j = i-1
while j >=0 and key < array[j] :
array[j+1] = array[j]
j -= 1
array[j+1] = key
return array
return sort(a)
def timsort(a, block = 32):
BLOCK = block
def sort(array, n):
array = array
for i in range(0, n, BLOCK):
insertionsort(array, i, min((i + BLOCK - 1), (n-1)))
size = BLOCK
while size < n:
for left in range(0, n, 2*size):
mid = left + size - 1
right = min((left + 2*size - 1), (n-1))
merge(array, left, mid, right)
size = 2*size
return array
def insertionsort(array, left, right):
array = array
for i in range(left + 1, right+1):
temp = array[i]
j = i - 1
while j >= left and array[j] > temp :
array[j+1] = array[j]
j -= 1
array[j+1] = temp
return array
def merge(array, l, m, r):
len1, len2 = m - l + 1, r - m
left, right = [], []
for i in range(0, len1):
left.append(array[l + i])
for i in range(0, len2):
right.append(array[m + 1 + i])
i, j, k = 0, 0, l
while i < len1 and j < len2:
if left[i] <= right[j]:
array[k] = left[i]
i += 1
else:
array[k] = right[j]
j += 1
k += 1
while i < len1:
array[k] = left[i]
k += 1
i += 1
while j < len2:
array[k] = right[j]
k += 1
j += 1
return sort(a, len(a))
def selectionsort(a):
array = a
for i in range(len(array)):
min_idx = i
for j in range(i+1, len(array)):
if array[min_idx] > array[j]:
min_idx = j
array[i], array[min_idx] = array[min_idx], array[i]
return array
def shellsort(a):
array = a
n = len(array)
gap = n//2
while gap > 0:
for i in range(gap,n):
temp = array[i]
j = i
while j >= gap and array[j-gap] >temp:
array[j] = array[j-gap]
j -= gap
array[j] = temp
gap //= 2
return array
def bubblesort(a):
def sort(array):
for i, num in enumerate(array):
try:
if array[i+1] < num:
array[i] = array[i+1]
array[i+1] = num
sort(array)
except IndexError:
pass
return array
return sort(a)
def cyclesort(a):
def sort(array):
array = array
writes = 0
for cycleStart in range(0, len(array) - 1):
item = array[cycleStart]
pos = cycleStart
for i in range(cycleStart + 1, len(array)):
if array[i] < item:
pos += 1
if pos == cycleStart:
continue
while item == array[pos]:
pos += 1
array[pos], item = item, array[pos]
writes += 1
while pos != cycleStart:
pos = cycleStart
for i in range(cycleStart + 1, len(array)):
if array[i] < item:
pos += 1
while item == array[pos]:
pos += 1
array[pos], item = item, array[pos]
writes += 1
return array
return sort(a)
def cocktailsort(a):
def sort(array):
array = array
n = len(array)
swapped = True
start = 0
end = n-1
while (swapped == True):
swapped = False
for i in range (start, end):
if (array[i] > array[i + 1]) :
array[i], array[i + 1]= array[i + 1], array[i]
swapped = True
if (swapped == False):
break
swapped = False
end = end-1
for i in range(end-1, start-1, -1):
if (array[i] > array[i + 1]):
array[i], array[i + 1] = array[i + 1], array[i]
swapped = True
start = start + 1
return array
return sort(a)
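For a quick sanity check of the sorting logic above, the mergesort can be extracted as a standalone sketch (same top-down merge as `Sort.mergesort`, operating in place on a plain list; input values illustrative):

```python
def mergesort(array):
    # Split, sort each half recursively, then merge back in place.
    if len(array) > 1:
        middle = len(array) // 2
        L, R = array[:middle], array[middle:]
        mergesort(L)
        mergesort(R)
        i = j = k = 0
        while i < len(L) and j < len(R):
            if L[i] < R[j]:
                array[k] = L[i]; i += 1
            else:
                array[k] = R[j]; j += 1
            k += 1
        while i < len(L):
            array[k] = L[i]; i += 1; k += 1
        while j < len(R):
            array[k] = R[j]; j += 1; k += 1
    return array

result = mergesort([5, 2, 9, 1, 5, 6])  # [1, 2, 5, 5, 6, 9]
```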

View File

@@ -0,0 +1,391 @@
# Only included for backwards compatibility! Do not update, Sort is preferred and supported.
class Sort: # if you haven't used a sort, then you've never lived
def quicksort(self, a):
def sort(array):
less = []
equal = []
greater = []
if len(array) > 1:
pivot = array[0]
for x in array:
if x < pivot:
less.append(x)
elif x == pivot:
equal.append(x)
elif x > pivot:
greater.append(x)
return sort(less)+equal+sort(greater)
else:
return array
return np.array(sort(a))
def mergesort(self, a):
def sort(array):
if len(array) >1:
middle = len(array) // 2
L = array[:middle]
R = array[middle:]
sort(L)
sort(R)
i = j = k = 0
while i < len(L) and j < len(R):
if L[i] < R[j]:
array[k] = L[i]
i+= 1
else:
array[k] = R[j]
j+= 1
k+= 1
while i < len(L):
array[k] = L[i]
i+= 1
k+= 1
while j < len(R):
array[k] = R[j]
j+= 1
k+= 1
return array
return sort(a)
def introsort(self, a):
def sort(array, start, end, maxdepth):
if end - start <= 1:
return array
elif maxdepth == 0:
heapsort(array, start, end)
else:
p = partition(array, start, end)
sort(array, start, p + 1, maxdepth - 1)
sort(array, p + 1, end, maxdepth - 1)
return array
def partition(array, start, end):
pivot = array[start]
i = start - 1
j = end
while True:
i = i + 1
while array[i] < pivot:
i = i + 1
j = j - 1
while array[j] > pivot:
j = j - 1
if i >= j:
return j
swap(array, i, j)
def swap(array, i, j):
array[i], array[j] = array[j], array[i]
def heapsort(array, start, end):
build_max_heap(array, start, end)
for i in range(end - 1, start, -1):
swap(array, start, i)
max_heapify(array, index=0, start=start, end=i)
def build_max_heap(array, start, end):
def parent(i):
return (i - 1)//2
length = end - start
index = parent(length - 1)
while index >= 0:
max_heapify(array, index, start, end)
index = index - 1
def max_heapify(array, index, start, end):
def left(i):
return 2*i + 1
def right(i):
return 2*i + 2
size = end - start
l = left(index)
r = right(index)
if (l < size and array[start + l] > array[start + index]):
largest = l
else:
largest = index
if (r < size and array[start + r] > array[start + largest]):
largest = r
if largest != index:
swap(array, start + largest, start + index)
max_heapify(array, largest, start, end)
maxdepth = (len(a).bit_length() - 1)*2
return sort(a, 0, len(a), maxdepth)
def heapsort(self, a):
def sort(array):
n = len(array)
for i in range(n//2 - 1, -1, -1):
heapify(array, n, i)
for i in range(n-1, 0, -1):
array[i], array[0] = array[0], array[i]
heapify(array, i, 0)
return array
def heapify(array, n, i):
largest = i
l = 2 * i + 1
r = 2 * i + 2
if l < n and array[i] < array[l]:
largest = l
if r < n and array[largest] < array[r]:
largest = r
if largest != i:
array[i],array[largest] = array[largest],array[i]
heapify(array, n, largest)
return array
return sort(a)
def insertionsort(self, a):
def sort(array):
for i in range(1, len(array)):
key = array[i]
j = i-1
while j >=0 and key < array[j] :
array[j+1] = array[j]
j -= 1
array[j+1] = key
return array
return sort(a)
def timsort(self, a, block = 32):
BLOCK = block
def sort(array, n):
for i in range(0, n, BLOCK):
insertionsort(array, i, min((i + BLOCK - 1), (n - 1)))
size = BLOCK
while size < n:
for left in range(0, n, 2*size):
mid = left + size - 1
right = min((left + 2*size - 1), (n-1))
merge(array, left, mid, right)
size = 2*size
return array
def insertionsort(array, left, right):
for i in range(left + 1, right+1):
temp = array[i]
j = i - 1
while j >= left and array[j] > temp :
array[j+1] = array[j]
j -= 1
array[j+1] = temp
return array
def merge(array, l, m, r):
len1, len2 = m - l + 1, r - m
left, right = [], []
for i in range(0, len1):
left.append(array[l + i])
for i in range(0, len2):
right.append(array[m + 1 + i])
i, j, k = 0, 0, l
while i < len1 and j < len2:
if left[i] <= right[j]:
array[k] = left[i]
i += 1
else:
array[k] = right[j]
j += 1
k += 1
while i < len1:
array[k] = left[i]
k += 1
i += 1
while j < len2:
array[k] = right[j]
k += 1
j += 1
return sort(a, len(a))
def selectionsort(self, a):
array = a
for i in range(len(array)):
min_idx = i
for j in range(i+1, len(array)):
if array[min_idx] > array[j]:
min_idx = j
array[i], array[min_idx] = array[min_idx], array[i]
return array
def shellsort(self, a):
array = a
n = len(array)
gap = n//2
while gap > 0:
for i in range(gap,n):
temp = array[i]
j = i
while j >= gap and array[j-gap] > temp:
array[j] = array[j-gap]
j -= gap
array[j] = temp
gap //= 2
return array
def bubblesort(self, a):
def sort(array):
for i, num in enumerate(array):
try:
if array[i+1] < num:
array[i] = array[i+1]
array[i+1] = num
sort(array)
except IndexError:
pass
return array
return sort(a)
def cyclesort(self, a):
def sort(array):
writes = 0
for cycleStart in range(0, len(array) - 1):
item = array[cycleStart]
pos = cycleStart
for i in range(cycleStart + 1, len(array)):
if array[i] < item:
pos += 1
if pos == cycleStart:
continue
while item == array[pos]:
pos += 1
array[pos], item = item, array[pos]
writes += 1
while pos != cycleStart:
pos = cycleStart
for i in range(cycleStart + 1, len(array)):
if array[i] < item:
pos += 1
while item == array[pos]:
pos += 1
array[pos], item = item, array[pos]
writes += 1
return array
return sort(a)
def cocktailsort(self, a):
def sort(array):
n = len(array)
swapped = True
start = 0
end = n-1
while (swapped == True):
swapped = False
for i in range (start, end):
if (array[i] > array[i + 1]) :
array[i], array[i + 1]= array[i + 1], array[i]
swapped = True
if (swapped == False):
break
swapped = False
end = end-1
for i in range(end-1, start-1, -1):
if (array[i] > array[i + 1]):
array[i], array[i + 1] = array[i + 1], array[i]
swapped = True
start = start + 1
return array
return sort(a)
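cyclesort above tracks a `writes` counter but discards it. A self-contained sketch of the same cycle sort that also surfaces the write count (the return tuple and sample data are illustrative, not from the source):

```python
def cyclesort(a):
	# Cycle sort: place each item directly into its final position,
	# counting the number of array writes performed.
	array = list(a)  # copy so the caller's list is not mutated
	writes = 0
	for cycle_start in range(len(array) - 1):
		item = array[cycle_start]
		# Find where item belongs by counting smaller elements after it.
		pos = cycle_start
		for i in range(cycle_start + 1, len(array)):
			if array[i] < item:
				pos += 1
		if pos == cycle_start:
			continue  # already in place
		while item == array[pos]:
			pos += 1  # skip past duplicates of item
		array[pos], item = item, array[pos]
		writes += 1
		# Rotate the rest of the cycle until it closes at cycle_start.
		while pos != cycle_start:
			pos = cycle_start
			for i in range(cycle_start + 1, len(array)):
				if array[i] < item:
					pos += 1
			while item == array[pos]:
				pos += 1
			array[pos], item = item, array[pos]
			writes += 1
	return array, writes

result, writes = cyclesort([3, 1, 2])
```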


@@ -0,0 +1,314 @@
# Titan Robotics Team 2022: StatisticalTest submodule
# Written by Arthur Lu
# Notes:
# this should be imported as a python module using 'from tra_analysis import StatisticalTest'
# setup:
__version__ = "1.0.2"
__changelog__ = """changelog:
1.0.2:
- added tukey_multicomparison
- fixed styling
1.0.1:
- fixed typo in __all__
1.0.0:
- ported analysis.StatisticalTest() here
- removed classness
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
"James Pan <zpan@imsa.edu>",
)
__all__ = [
'ttest_onesample',
'ttest_independent',
'ttest_statistic',
'ttest_related',
'ks_fitness',
'chisquare',
'powerdivergence',
'ks_twosample',
'es_twosample',
'mw_rank',
'mw_tiecorrection',
'rankdata',
'wilcoxon_ranksum',
'wilcoxon_signedrank',
'kw_htest',
'friedman_chisquare',
'bm_wtest',
'combine_pvalues',
'jb_fitness',
'ab_equality',
'bartlett_variance',
'levene_variance',
'sw_normality',
'shapiro',
'ad_onesample',
'ad_ksample',
'binomial',
'fk_variance',
'mood_mediantest',
'mood_equalscale',
'skewtest',
'kurtosistest',
'normaltest',
'tukey_multicomparison'
]
import numpy as np
import scipy
from scipy import stats, interpolate
def ttest_onesample(a, popmean, axis = 0, nan_policy = 'propagate'):
results = scipy.stats.ttest_1samp(a, popmean, axis = axis, nan_policy = nan_policy)
return {"t-value": results[0], "p-value": results[1]}
def ttest_independent(a, b, equal = True, nan_policy = 'propagate'):
results = scipy.stats.ttest_ind(a, b, equal_var = equal, nan_policy = nan_policy)
return {"t-value": results[0], "p-value": results[1]}
def ttest_statistic(o1, o2, equal = True):
results = scipy.stats.ttest_ind_from_stats(o1["mean"], o1["std"], o1["nobs"], o2["mean"], o2["std"], o2["nobs"], equal_var = equal)
return {"t-value": results[0], "p-value": results[1]}
def ttest_related(a, b, axis = 0, nan_policy='propagate'):
results = scipy.stats.ttest_rel(a, b, axis = axis, nan_policy = nan_policy)
return {"t-value": results[0], "p-value": results[1]}
def ks_fitness(rvs, cdf, args = (), N = 20, alternative = 'two-sided', mode = 'approx'):
results = scipy.stats.kstest(rvs, cdf, args = args, N = N, alternative = alternative, mode = mode)
return {"ks-value": results[0], "p-value": results[1]}
def chisquare(f_obs, f_exp = None, ddof = None, axis = 0):
results = scipy.stats.chisquare(f_obs, f_exp = f_exp, ddof = ddof, axis = axis)
return {"chisquared-value": results[0], "p-value": results[1]}
def powerdivergence(f_obs, f_exp = None, ddof = None, axis = 0, lambda_ = None):
results = scipy.stats.power_divergence(f_obs, f_exp = f_exp, ddof = ddof, axis = axis, lambda_ = lambda_)
return {"powerdivergence-value": results[0], "p-value": results[1]}
def ks_twosample(x, y, alternative = 'two-sided', mode = 'auto'):
results = scipy.stats.ks_2samp(x, y, alternative = alternative, mode = mode)
return {"ks-value": results[0], "p-value": results[1]}
def es_twosample(x, y, t = (0.4, 0.8)):
results = scipy.stats.epps_singleton_2samp(x, y, t = t)
return {"es-value": results[0], "p-value": results[1]}
def mw_rank(x, y, use_continuity = True, alternative = None):
results = scipy.stats.mannwhitneyu(x, y, use_continuity = use_continuity, alternative = alternative)
return {"u-value": results[0], "p-value": results[1]}
def mw_tiecorrection(rank_values):
results = scipy.stats.tiecorrect(rank_values)
return {"correction-factor": results}
def rankdata(a, method = 'average'):
results = scipy.stats.rankdata(a, method = method)
return results
def wilcoxon_ranksum(a, b): # this seems to be superseded by Mann Whitney Wilcoxon U Test
results = scipy.stats.ranksums(a, b)
return {"u-value": results[0], "p-value": results[1]}
def wilcoxon_signedrank(x, y = None, zero_method = 'wilcox', correction = False, alternative = 'two-sided'):
results = scipy.stats.wilcoxon(x, y = y, zero_method = zero_method, correction = correction, alternative = alternative)
return {"t-value": results[0], "p-value": results[1]}
def kw_htest(*args, nan_policy = 'propagate'):
results = scipy.stats.kruskal(*args, nan_policy = nan_policy)
return {"h-value": results[0], "p-value": results[1]}
def friedman_chisquare(*args):
results = scipy.stats.friedmanchisquare(*args)
return {"chisquared-value": results[0], "p-value": results[1]}
def bm_wtest(x, y, alternative = 'two-sided', distribution = 't', nan_policy = 'propagate'):
results = scipy.stats.brunnermunzel(x, y, alternative = alternative, distribution = distribution, nan_policy = nan_policy)
return {"w-value": results[0], "p-value": results[1]}
def combine_pvalues(pvalues, method = 'fisher', weights = None):
results = scipy.stats.combine_pvalues(pvalues, method = method, weights = weights)
return {"combined-statistic": results[0], "p-value": results[1]}
def jb_fitness(x):
results = scipy.stats.jarque_bera(x)
return {"jb-value": results[0], "p-value": results[1]}
def ab_equality(x, y):
results = scipy.stats.ansari(x, y)
return {"ab-value": results[0], "p-value": results[1]}
def bartlett_variance(*args):
results = scipy.stats.bartlett(*args)
return {"t-value": results[0], "p-value": results[1]}
def levene_variance(*args, center = 'median', proportiontocut = 0.05):
results = scipy.stats.levene(*args, center = center, proportiontocut = proportiontocut)
return {"w-value": results[0], "p-value": results[1]}
def sw_normality(x):
results = scipy.stats.shapiro(x)
return {"w-value": results[0], "p-value": results[1]}
def shapiro(x):
return "destroyed by facts and logic"
def ad_onesample(x, dist = 'norm'):
results = scipy.stats.anderson(x, dist = dist)
return {"d-value": results[0], "critical-values": results[1], "significance-value": results[2]}
def ad_ksample(samples, midrank = True):
results = scipy.stats.anderson_ksamp(samples, midrank = midrank)
return {"d-value": results[0], "critical-values": results[1], "significance-value": results[2]}
def binomial(x, n = None, p = 0.5, alternative = 'two-sided'):
results = scipy.stats.binom_test(x, n = n, p = p, alternative = alternative)
return {"p-value": results}
def fk_variance(*args, center = 'median', proportiontocut = 0.05):
results = scipy.stats.fligner(*args, center = center, proportiontocut = proportiontocut)
return {"h-value": results[0], "p-value": results[1]} # unknown if the statistic is an h value
def mood_mediantest(*args, ties = 'below', correction = True, lambda_ = 1, nan_policy = 'propagate'):
results = scipy.stats.median_test(*args, ties = ties, correction = correction, lambda_ = lambda_, nan_policy = nan_policy)
return {"chisquared-value": results[0], "p-value": results[1], "m-value": results[2], "table": results[3]}
def mood_equalscale(x, y, axis = 0):
results = scipy.stats.mood(x, y, axis = axis)
return {"z-score": results[0], "p-value": results[1]}
def skewtest(a, axis = 0, nan_policy = 'propagate'):
results = scipy.stats.skewtest(a, axis = axis, nan_policy = nan_policy)
return {"z-score": results[0], "p-value": results[1]}
def kurtosistest(a, axis = 0, nan_policy = 'propagate'):
results = scipy.stats.kurtosistest(a, axis = axis, nan_policy = nan_policy)
return {"z-score": results[0], "p-value": results[1]}
def normaltest(a, axis = 0, nan_policy = 'propagate'):
results = scipy.stats.normaltest(a, axis = axis, nan_policy = nan_policy)
return {"z-score": results[0], "p-value": results[1]}
def get_tukeyQcrit(k, df, alpha=0.05):
'''
From statsmodels.sandbox.stats.multicomp
return critical values for Tukey's HSD (Q)
Parameters
----------
k : int in {2, ..., 10}
number of tests
df : int
degrees of freedom of error term
alpha : {0.05, 0.01}
type 1 error, 1-confidence level
not enough error checking for limitations
'''
# qtable from statsmodels.sandbox.stats.multicomp
qcrit = '''
2 3 4 5 6 7 8 9 10
5 3.64 5.70 4.60 6.98 5.22 7.80 5.67 8.42 6.03 8.91 6.33 9.32 6.58 9.67 6.80 9.97 6.99 10.24
6 3.46 5.24 4.34 6.33 4.90 7.03 5.30 7.56 5.63 7.97 5.90 8.32 6.12 8.61 6.32 8.87 6.49 9.10
7 3.34 4.95 4.16 5.92 4.68 6.54 5.06 7.01 5.36 7.37 5.61 7.68 5.82 7.94 6.00 8.17 6.16 8.37
8 3.26 4.75 4.04 5.64 4.53 6.20 4.89 6.62 5.17 6.96 5.40 7.24 5.60 7.47 5.77 7.68 5.92 7.86
9 3.20 4.60 3.95 5.43 4.41 5.96 4.76 6.35 5.02 6.66 5.24 6.91 5.43 7.13 5.59 7.33 5.74 7.49
10 3.15 4.48 3.88 5.27 4.33 5.77 4.65 6.14 4.91 6.43 5.12 6.67 5.30 6.87 5.46 7.05 5.60 7.21
11 3.11 4.39 3.82 5.15 4.26 5.62 4.57 5.97 4.82 6.25 5.03 6.48 5.20 6.67 5.35 6.84 5.49 6.99
12 3.08 4.32 3.77 5.05 4.20 5.50 4.51 5.84 4.75 6.10 4.95 6.32 5.12 6.51 5.27 6.67 5.39 6.81
13 3.06 4.26 3.73 4.96 4.15 5.40 4.45 5.73 4.69 5.98 4.88 6.19 5.05 6.37 5.19 6.53 5.32 6.67
14 3.03 4.21 3.70 4.89 4.11 5.32 4.41 5.63 4.64 5.88 4.83 6.08 4.99 6.26 5.13 6.41 5.25 6.54
15 3.01 4.17 3.67 4.84 4.08 5.25 4.37 5.56 4.59 5.80 4.78 5.99 4.94 6.16 5.08 6.31 5.20 6.44
16 3.00 4.13 3.65 4.79 4.05 5.19 4.33 5.49 4.56 5.72 4.74 5.92 4.90 6.08 5.03 6.22 5.15 6.35
17 2.98 4.10 3.63 4.74 4.02 5.14 4.30 5.43 4.52 5.66 4.70 5.85 4.86 6.01 4.99 6.15 5.11 6.27
18 2.97 4.07 3.61 4.70 4.00 5.09 4.28 5.38 4.49 5.60 4.67 5.79 4.82 5.94 4.96 6.08 5.07 6.20
19 2.96 4.05 3.59 4.67 3.98 5.05 4.25 5.33 4.47 5.55 4.65 5.73 4.79 5.89 4.92 6.02 5.04 6.14
20 2.95 4.02 3.58 4.64 3.96 5.02 4.23 5.29 4.45 5.51 4.62 5.69 4.77 5.84 4.90 5.97 5.01 6.09
24 2.92 3.96 3.53 4.55 3.90 4.91 4.17 5.17 4.37 5.37 4.54 5.54 4.68 5.69 4.81 5.81 4.92 5.92
30 2.89 3.89 3.49 4.45 3.85 4.80 4.10 5.05 4.30 5.24 4.46 5.40 4.60 5.54 4.72 5.65 4.82 5.76
40 2.86 3.82 3.44 4.37 3.79 4.70 4.04 4.93 4.23 5.11 4.39 5.26 4.52 5.39 4.63 5.50 4.73 5.60
60 2.83 3.76 3.40 4.28 3.74 4.59 3.98 4.82 4.16 4.99 4.31 5.13 4.44 5.25 4.55 5.36 4.65 5.45
120 2.80 3.70 3.36 4.20 3.68 4.50 3.92 4.71 4.10 4.87 4.24 5.01 4.36 5.12 4.47 5.21 4.56 5.30
infinity 2.77 3.64 3.31 4.12 3.63 4.40 3.86 4.60 4.03 4.76 4.17 4.88 4.29 4.99 4.39 5.08 4.47 5.16
'''
res = [line.split() for line in qcrit.replace('infinity','9999').split('\n')]
c=np.array(res[2:-1]).astype(float)
#c[c==9999] = np.inf
ccols = np.arange(2,11)
crows = c[:,0]
cv005 = c[:, 1::2]
cv001 = c[:, 2::2]
if alpha == 0.05:
intp = interpolate.interp1d(crows, cv005[:,k-2])
elif alpha == 0.01:
intp = interpolate.interp1d(crows, cv001[:,k-2])
else:
raise ValueError('only implemented for alpha equal to 0.01 and 0.05')
return intp(df)
def tukey_multicomparison(groups, alpha=0.05):
#formulas according to https://astatsa.com/OneWay_Anova_with_TukeyHSD/
k = len(groups)
df = 0
means = []
MSE = 0
for group in groups:
df+= len(group)
mean = sum(group)/len(group)
means.append(mean)
MSE += sum([(i-mean)**2 for i in group])
df -= k
MSE /= df
q_dict = {}
crit_q = get_tukeyQcrit(k, df, alpha)
for i in range(k-1):
for j in range(i+1, k):
numerator = abs(means[i] - means[j])
denominator = np.sqrt( MSE / ( 2/(1/len(groups[i]) + 1/len(groups[j])) ))
q = numerator/denominator
q_dict["group "+ str(i+1) + " and group " + str(j+1)] = [q, q>crit_q]
return q_dict
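The degrees-of-freedom and pooled-MSE bookkeeping inside tukey_multicomparison can be checked in isolation. A minimal sketch of that arithmetic without the scipy dependency (the function name and group data are illustrative, not from the source):

```python
def pooled_mse(groups):
	# Within-group sum of squares pooled over all groups, as in
	# tukey_multicomparison: df = total observations minus number of groups.
	k = len(groups)
	df = sum(len(g) for g in groups) - k
	sse = 0.0
	for group in groups:
		mean = sum(group) / len(group)
		sse += sum((x - mean) ** 2 for x in group)
	return sse / df, df

# Group 1 has SSE 2 about mean 2; group 2 has SSE 8 about mean 4.
mse, df = pooled_mse([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]])  # → (2.5, 4)
```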


@@ -0,0 +1,170 @@
# Only included for backwards compatibility! Do not update, StatisticalTest is preferred and supported.
import scipy
from scipy import stats
class StatisticalTest:
def ttest_onesample(self, a, popmean, axis = 0, nan_policy = 'propagate'):
results = scipy.stats.ttest_1samp(a, popmean, axis = axis, nan_policy = nan_policy)
return {"t-value": results[0], "p-value": results[1]}
def ttest_independent(self, a, b, equal = True, nan_policy = 'propagate'):
results = scipy.stats.ttest_ind(a, b, equal_var = equal, nan_policy = nan_policy)
return {"t-value": results[0], "p-value": results[1]}
def ttest_statistic(self, o1, o2, equal = True):
results = scipy.stats.ttest_ind_from_stats(o1["mean"], o1["std"], o1["nobs"], o2["mean"], o2["std"], o2["nobs"], equal_var = equal)
return {"t-value": results[0], "p-value": results[1]}
def ttest_related(self, a, b, axis = 0, nan_policy='propagate'):
results = scipy.stats.ttest_rel(a, b, axis = axis, nan_policy = nan_policy)
return {"t-value": results[0], "p-value": results[1]}
def ks_fitness(self, rvs, cdf, args = (), N = 20, alternative = 'two-sided', mode = 'approx'):
results = scipy.stats.kstest(rvs, cdf, args = args, N = N, alternative = alternative, mode = mode)
return {"ks-value": results[0], "p-value": results[1]}
def chisquare(self, f_obs, f_exp = None, ddof = None, axis = 0):
results = scipy.stats.chisquare(f_obs, f_exp = f_exp, ddof = ddof, axis = axis)
return {"chisquared-value": results[0], "p-value": results[1]}
def powerdivergence(self, f_obs, f_exp = None, ddof = None, axis = 0, lambda_ = None):
results = scipy.stats.power_divergence(f_obs, f_exp = f_exp, ddof = ddof, axis = axis, lambda_ = lambda_)
return {"powerdivergence-value": results[0], "p-value": results[1]}
def ks_twosample(self, x, y, alternative = 'two-sided', mode = 'auto'):
results = scipy.stats.ks_2samp(x, y, alternative = alternative, mode = mode)
return {"ks-value": results[0], "p-value": results[1]}
def es_twosample(self, x, y, t = (0.4, 0.8)):
results = scipy.stats.epps_singleton_2samp(x, y, t = t)
return {"es-value": results[0], "p-value": results[1]}
def mw_rank(self, x, y, use_continuity = True, alternative = None):
results = scipy.stats.mannwhitneyu(x, y, use_continuity = use_continuity, alternative = alternative)
return {"u-value": results[0], "p-value": results[1]}
def mw_tiecorrection(self, rank_values):
results = scipy.stats.tiecorrect(rank_values)
return {"correction-factor": results}
def rankdata(self, a, method = 'average'):
results = scipy.stats.rankdata(a, method = method)
return results
def wilcoxon_ranksum(self, a, b): # this seems to be superseded by Mann Whitney Wilcoxon U Test
results = scipy.stats.ranksums(a, b)
return {"u-value": results[0], "p-value": results[1]}
def wilcoxon_signedrank(self, x, y = None, zero_method = 'wilcox', correction = False, alternative = 'two-sided'):
results = scipy.stats.wilcoxon(x, y = y, zero_method = zero_method, correction = correction, alternative = alternative)
return {"t-value": results[0], "p-value": results[1]}
def kw_htest(self, *args, nan_policy = 'propagate'):
results = scipy.stats.kruskal(*args, nan_policy = nan_policy)
return {"h-value": results[0], "p-value": results[1]}
def friedman_chisquare(self, *args):
results = scipy.stats.friedmanchisquare(*args)
return {"chisquared-value": results[0], "p-value": results[1]}
def bm_wtest(self, x, y, alternative = 'two-sided', distribution = 't', nan_policy = 'propagate'):
results = scipy.stats.brunnermunzel(x, y, alternative = alternative, distribution = distribution, nan_policy = nan_policy)
return {"w-value": results[0], "p-value": results[1]}
def combine_pvalues(self, pvalues, method = 'fisher', weights = None):
results = scipy.stats.combine_pvalues(pvalues, method = method, weights = weights)
return {"combined-statistic": results[0], "p-value": results[1]}
def jb_fitness(self, x):
results = scipy.stats.jarque_bera(x)
return {"jb-value": results[0], "p-value": results[1]}
def ab_equality(self, x, y):
results = scipy.stats.ansari(x, y)
return {"ab-value": results[0], "p-value": results[1]}
def bartlett_variance(self, *args):
results = scipy.stats.bartlett(*args)
return {"t-value": results[0], "p-value": results[1]}
def levene_variance(self, *args, center = 'median', proportiontocut = 0.05):
results = scipy.stats.levene(*args, center = center, proportiontocut = proportiontocut)
return {"w-value": results[0], "p-value": results[1]}
def sw_normality(self, x):
results = scipy.stats.shapiro(x)
return {"w-value": results[0], "p-value": results[1]}
def shapiro(self, x):
return "destroyed by facts and logic"
def ad_onesample(self, x, dist = 'norm'):
results = scipy.stats.anderson(x, dist = dist)
return {"d-value": results[0], "critical-values": results[1], "significance-value": results[2]}
def ad_ksample(self, samples, midrank = True):
results = scipy.stats.anderson_ksamp(samples, midrank = midrank)
return {"d-value": results[0], "critical-values": results[1], "significance-value": results[2]}
def binomial(self, x, n = None, p = 0.5, alternative = 'two-sided'):
results = scipy.stats.binom_test(x, n = n, p = p, alternative = alternative)
return {"p-value": results}
def fk_variance(self, *args, center = 'median', proportiontocut = 0.05):
results = scipy.stats.fligner(*args, center = center, proportiontocut = proportiontocut)
return {"h-value": results[0], "p-value": results[1]} # unknown if the statistic is an h value
def mood_mediantest(self, *args, ties = 'below', correction = True, lambda_ = 1, nan_policy = 'propagate'):
results = scipy.stats.median_test(*args, ties = ties, correction = correction, lambda_ = lambda_, nan_policy = nan_policy)
return {"chisquared-value": results[0], "p-value": results[1], "m-value": results[2], "table": results[3]}
def mood_equalscale(self, x, y, axis = 0):
results = scipy.stats.mood(x, y, axis = axis)
return {"z-score": results[0], "p-value": results[1]}
def skewtest(self, a, axis = 0, nan_policy = 'propagate'):
results = scipy.stats.skewtest(a, axis = axis, nan_policy = nan_policy)
return {"z-score": results[0], "p-value": results[1]}
def kurtosistest(self, a, axis = 0, nan_policy = 'propagate'):
results = scipy.stats.kurtosistest(a, axis = axis, nan_policy = nan_policy)
return {"z-score": results[0], "p-value": results[1]}
def normaltest(self, a, axis = 0, nan_policy = 'propagate'):
results = scipy.stats.normaltest(a, axis = axis, nan_policy = nan_policy)
return {"z-score": results[0], "p-value": results[1]}


@@ -0,0 +1,68 @@
# Titan Robotics Team 2022: tra_analysis package
# Written by Arthur Lu, Jacob Levine, Dev Singh, and James Pan
# Notes:
# this should be imported as a python package using 'import tra_analysis'
# this should be included in the local directory or environment variable
# this module has been optimized for multithreaded computing
# current benchmark of optimization: 1.33 times faster
# setup:
__version__ = "3.0.0"
# changelog should be viewed using print(analysis.__changelog__)
__changelog__ = """changelog:
3.0.0:
- incremented version to release 3.0.0
3.0.0-rc2:
- fixed __changelog__
- fixed __all__ of Analysis, Array, ClassificationMetric, CorrelationTest, RandomForest, Sort, SVM
- populated __all__
3.0.0-alpha.4:
- changed version to 3 because of significant changes
- added backwards compatibility import of analysis
2.1.0-alpha.3:
- fixed indentation in meta data
2.1.0-alpha.2:
- updated SVM import
2.1.0-alpha.1:
- moved multiple submodules under analysis to their own modules/files
- added header, __version__, __changelog__, __author__, __all__ (unpopulated)
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
"Jacob Levine <jlevine@imsa.edu>",
"Dev Singh <dev@devksingh.com>",
"James Pan <zpan@imsa.edu>"
)
__all__ = [
"Analysis",
"Array",
"ClassificationMetric",
"CorrelationTest",
"Expression",
"Fit",
"KNN",
"NaiveBayes",
"RandomForest",
"RegressionMetric",
"Sort",
"StatisticalTest",
"SVM"
]
from . import Analysis as Analysis
from . import Analysis as analysis
from .Array import Array
from .ClassificationMetric import ClassificationMetric
from . import CorrelationTest
from .equation import Expression
from . import Fit
from . import KNN
from . import NaiveBayes
from . import RandomForest
from .RegressionMetric import RegressionMetric
from . import Sort
from . import StatisticalTest
from . import SVM


@@ -0,0 +1,37 @@
# Titan Robotics Team 2022: Expression submodule
# Written by Arthur Lu
# Notes:
# this should be imported as a python module using 'from tra_analysis.Equation import Expression'
# TODO:
# - add option to pick parser backend
# - fix unit tests
# setup:
__version__ = "0.0.1-alpha"
__changelog__ = """changelog:
0.0.1-alpha:
- used the HybridExpressionParser as backend for Expression
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
)
__all__ = [
"Expression"
]
import re
from .parser import BNF, RegexInplaceParser, HybridExpressionParser, Core, equation_base
class Expression(HybridExpressionParser):
expression = None
core = None
def __init__(self,expression,argorder=[],*args,**kwargs):
self.core = Core()
equation_base.equation_extend(self.core)
self.core.recalculateFMatch()
super().__init__(self.core, expression, argorder=argorder, *args, **kwargs)


@@ -0,0 +1,22 @@
# Titan Robotics Team 2022: Expression submodule
# Written by Arthur Lu
# Notes:
# this should be imported as a python module using 'from tra_analysis import Equation'
# setup:
__version__ = "0.0.1-alpha"
__changelog__ = """changelog:
0.0.1-alpha:
- made first prototype of Expression
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
)
__all__ = [
"Expression"
]
from .Expression import Expression


@@ -0,0 +1,97 @@
from __future__ import division
from pyparsing import (Literal, CaselessLiteral, Word, Combine, Group, Optional, ZeroOrMore, Forward, nums, alphas, oneOf)
from . import py2
import math
import operator
class BNF(object):
def pushFirst(self, strg, loc, toks):
self.exprStack.append(toks[0])
def pushUMinus(self, strg, loc, toks):
if toks and toks[0] == '-':
self.exprStack.append('unary -')
def __init__(self):
"""
expop :: '^'
multop :: '*' | '/'
addop :: '+' | '-'
integer :: ['+' | '-'] '0'..'9'+
atom :: PI | E | real | fn '(' expr ')' | '(' expr ')'
factor :: atom [ expop factor ]*
term :: factor [ multop factor ]*
expr :: term [ addop term ]*
"""
point = Literal(".")
e = CaselessLiteral("E")
fnumber = Combine(Word("+-" + nums, nums) +
Optional(point + Optional(Word(nums))) +
Optional(e + Word("+-" + nums, nums)))
ident = Word(alphas, alphas + nums + "_$")
plus = Literal("+")
minus = Literal("-")
mult = Literal("*")
div = Literal("/")
lpar = Literal("(").suppress()
rpar = Literal(")").suppress()
addop = plus | minus
multop = mult | div
expop = Literal("^")
pi = CaselessLiteral("PI")
expr = Forward()
atom = ((Optional(oneOf("- +")) +
(ident + lpar + expr + rpar | pi | e | fnumber).setParseAction(self.pushFirst))
| Optional(oneOf("- +")) + Group(lpar + expr + rpar)
).setParseAction(self.pushUMinus)
factor = Forward()
factor << atom + \
ZeroOrMore((expop + factor).setParseAction(self.pushFirst))
term = factor + \
ZeroOrMore((multop + factor).setParseAction(self.pushFirst))
expr << term + \
ZeroOrMore((addop + term).setParseAction(self.pushFirst))
self.bnf = expr
epsilon = 1e-12
self.opn = {"+": operator.add,
"-": operator.sub,
"*": operator.mul,
"/": operator.truediv,
"^": operator.pow}
self.fn = {"sin": math.sin,
"cos": math.cos,
"tan": math.tan,
"exp": math.exp,
"abs": abs,
"trunc": lambda a: int(a),
"round": round,
"sgn": lambda a: abs(a) > epsilon and py2.cmp(a, 0) or 0}
def evaluateStack(self, s):
op = s.pop()
if op == 'unary -':
return -self.evaluateStack(s)
if op in "+-*/^":
op2 = self.evaluateStack(s)
op1 = self.evaluateStack(s)
return self.opn[op](op1, op2)
elif op == "PI":
return math.pi
elif op == "E":
return math.e
elif op in self.fn:
return self.fn[op](self.evaluateStack(s))
elif op[0].isalpha():
return 0
else:
return float(op)
def eval(self, num_string, parseAll=True):
self.exprStack = []
results = self.bnf.parseString(num_string, parseAll)
val = self.evaluateStack(self.exprStack[:])
return val
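The BNF class pushes tokens onto an RPN stack during parsing and then evaluates by popping from it. The core of evaluateStack can be sketched standalone, without pyparsing (the `OPS` table and the sample token list are illustrative; the real class also handles PI, E, and named functions):

```python
import math
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv, "^": operator.pow}

def evaluate(stack):
	# stack holds tokens in RPN order; pop from the end, recursing for operands.
	op = stack.pop()
	if op == "unary -":
		return -evaluate(stack)
	if op in OPS:
		right = evaluate(stack)  # operands come off the stack in reverse order
		left = evaluate(stack)
		return OPS[op](left, right)
	if op == "PI":
		return math.pi
	return float(op)

# "2 + 3 * 4" parses to the RPN token list below; evaluation pops from the right.
result = evaluate(["2", "3", "4", "*", "+"])  # → 14.0
```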


@@ -0,0 +1,521 @@
from .Hybrid_Utils import Core, ExpressionFunction, ExpressionVariable, ExpressionValue
import sys
if sys.version_info >= (3,):
xrange = range
basestring = str
class HybridExpressionParser(object):
def __init__(self,core,expression,argorder=[],*args,**kwargs):
super(HybridExpressionParser,self).__init__(*args,**kwargs)
if isinstance(expression,type(self)): # clone the object
self.core = core
self.__args = list(expression.__args)
self.__vars = dict(expression.__vars) # internal array of preset variables
self.__argsused = set(expression.__argsused)
self.__expr = list(expression.__expr)
self.variables = {} # call variables
else:
self.__expression = expression
self.__args = argorder
self.__vars = {} # internal array of preset variables
self.__argsused = set()
self.__expr = [] # compiled equation tokens
self.variables = {} # call variables
self.__compile()
del self.__expression
def __getitem__(self, name):
if name in self.__argsused:
if name in self.__vars:
return self.__vars[name]
else:
return None
else:
raise KeyError(name)
def __setitem__(self,name,value):
if name in self.__argsused:
self.__vars[name] = value
else:
raise KeyError(name)
def __delitem__(self,name):
if name in self.__argsused:
if name in self.__vars:
del self.__vars[name]
else:
raise KeyError(name)
def __contains__(self, name):
return name in self.__argsused
def __call__(self,*args,**kwargs):
if len(self.__expr) == 0:
return None
self.variables = {}
self.variables.update(self.core.constants)
self.variables.update(self.__vars)
if len(args) > len(self.__args):
raise TypeError("<{0:s}.{1:s}({2:s}) object at {3:0=#10x}>() takes at most {4:d} arguments ({5:d} given)".format(
type(self).__module__,type(self).__name__,repr(self),id(self),len(self.__args),len(args)))
for i in xrange(len(args)):
if i < len(self.__args):
if self.__args[i] in kwargs:
raise TypeError("<{0:s}.{1:s}({2:s}) object at {3:0=#10x}>() got multiple values for keyword argument '{4:s}'".format(
type(self).__module__,type(self).__name__,repr(self),id(self),self.__args[i]))
self.variables[self.__args[i]] = args[i]
self.variables.update(kwargs)
for arg in self.__argsused:
if arg not in self.variables:
min_args = len(self.__argsused - (set(self.__vars.keys()) | set(self.core.constants.keys())))
raise TypeError("<{0:s}.{1:s}({2:s}) object at {3:0=#10x}>() takes at least {4:d} arguments ({5:d} given) '{6:s}' not defined".format(
type(self).__module__,type(self).__name__,repr(self),id(self),min_args,len(args)+len(kwargs),arg))
expr = self.__expr[::-1]
args = []
while len(expr) > 0:
t = expr.pop()
r = t(args,self)
args.append(r)
if len(args) > 1:
return args
else:
return args[0]
def __next(self,__expect_op):
if __expect_op:
m = self.core.gematch.match(self.__expression)
if m != None:
self.__expression = self.__expression[m.end():]
g = m.groups()
return g[0],'CLOSE'
m = self.core.smatch.match(self.__expression)
if m != None:
self.__expression = self.__expression[m.end():]
return ",",'SEP'
m = self.core.omatch.match(self.__expression)
if m != None:
self.__expression = self.__expression[m.end():]
g = m.groups()
return g[0],'OP'
else:
m = self.core.gsmatch.match(self.__expression)
if m != None:
self.__expression = self.__expression[m.end():]
g = m.groups()
return g[0],'OPEN'
m = self.core.vmatch.match(self.__expression)
if m != None:
self.__expression = self.__expression[m.end():]
g = m.groupdict(0)
if g['dec']:
if g["ivalue"]:
return complex(int(g["rsign"]+"1")*float(g["rvalue"])*10**int(g["rexpoent"]),int(g["isign"]+"1")*float(g["ivalue"])*10**int(g["iexpoent"])),'VALUE'
elif g["rexpoent"] or g["rvalue"].find('.')>=0:
return int(g["rsign"]+"1")*float(g["rvalue"])*10**int(g["rexpoent"]),'VALUE'
else:
return int(g["rsign"]+"1")*int(g["rvalue"]),'VALUE'
elif g["hex"]:
return int(g["hexsign"]+"1")*int(g["hexvalue"],16),'VALUE'
elif g["oct"]:
return int(g["octsign"]+"1")*int(g["octvalue"],8),'VALUE'
elif g["bin"]:
return int(g["binsign"]+"1")*int(g["binvalue"],2),'VALUE'
else:
raise NotImplementedError("'{0:s}' Values Not Implemented Yet".format(m.string))
m = self.core.nmatch.match(self.__expression)
if m != None:
self.__expression = self.__expression[m.end():]
g = m.groups()
return g[0],'NAME'
m = self.core.fmatch.match(self.__expression)
if m != None:
self.__expression = self.__expression[m.end():]
g = m.groups()
return g[0],'FUNC'
m = self.core.umatch.match(self.__expression)
if m != None:
self.__expression = self.__expression[m.end():]
g = m.groups()
return g[0],'UNARY'
return None
def show(self):
"""Show RPN tokens
This will print out the internal token list (RPN) of the expression
one token per line.
"""
for expr in self.__expr:
print(expr)
def __str__(self):
"""str(fn)
Generates a Printable version of the Expression
Returns
-------
str
LaTeX string representation of the Expression, suitable for rendering the equation
"""
expr = self.__expr[::-1]
if len(expr) == 0:
return ""
args = [];
while len(expr) > 0:
t = expr.pop()
r = t.toStr(args,self)
args.append(r)
if len(args) > 1:
return args
else:
return args[0]
def __repr__(self):
"""repr(fn)
Generates a string that correctly represents the equation
Returns
-------
str
A string that, when passed to the constructor, will construct
an identical equation object (in terms of sequence of tokens, and token type/value)
"""
expr = self.__expr[::-1]
if len(expr) == 0:
return ""
args = [];
while len(expr) > 0:
t = expr.pop()
r = t.toRepr(args,self)
args.append(r)
if len(args) > 1:
return args
else:
return args[0]
def __iter__(self):
return iter(self.__argsused)
def __lt__(self, other):
if isinstance(other, Expression):
return repr(self) < repr(other)
else:
raise TypeError("{0:s} (of type {1}) can't be compared to an Expression Object".format(repr(other), type(other)))
def __eq__(self, other):
if isinstance(other, Expression):
return repr(self) == repr(other)
else:
raise TypeError("{0:s} (of type {1}) can't be compared to an Expression Object".format(repr(other), type(other)))
def __combine(self,other,op):
if op not in self.core.ops or not isinstance(other,(int,float,complex,type(self),basestring)):
return NotImplemented
else:
obj = type(self)(self)
if isinstance(other,(int,float,complex)):
obj.__expr.append(ExpressionValue(other))
else:
if isinstance(other,basestring):
try:
other = type(self)(other)
except:
raise SyntaxError("Can't Convert string, \"{0:s}\" to an Expression Object".format(other))
obj.__expr += other.__expr
obj.__argsused |= other.__argsused
for v in other.__args:
if v not in obj.__args:
obj.__args.append(v)
for k,v in other.__vars.items():
if k not in obj.__vars:
obj.__vars[k] = v
elif v != obj.__vars[k]:
raise RuntimeError("Predefined Variable Conflict in '{0:s}': two differing values defined".format(k))
fn = self.core.ops[op]
obj.__expr.append(ExpressionFunction(fn['func'],fn['args'],fn['str'],fn['latex'],op,False))
return obj
def __rcombine(self,other,op):
if op not in self.core.ops or not isinstance(other,(int,float,complex,type(self),basestring)):
return NotImplemented
else:
obj = type(self)(self)
if isinstance(other,(int,float,complex)):
obj.__expr.insert(0,ExpressionValue(other))
else:
if isinstance(other,basestring):
try:
other = type(self)(other)
except:
raise SyntaxError("Can't Convert string, \"{0:s}\" to an Expression Object".format(other))
obj.__expr = other.__expr + self.__expr
obj.__argsused = other.__argsused | self.__argsused
__args = other.__args
for v in obj.__args:
if v not in __args:
__args.append(v)
obj.__args = __args
for k,v in other.__vars.items():
if k not in obj.__vars:
obj.__vars[k] = v
elif v != obj.__vars[k]:
raise RuntimeError("Predefined Variable Conflict in '{0:s}': two differing values defined".format(k))
fn = self.core.ops[op]
obj.__expr.append(ExpressionFunction(fn['func'],fn['args'],fn['str'],fn['latex'],op,False))
return obj
def __icombine(self,other,op):
if op not in self.core.ops or not isinstance(other,(int,float,complex,type(self),basestring)):
return NotImplemented
else:
obj = self
if isinstance(other,(int,float,complex)):
obj.__expr.append(ExpressionValue(other))
else:
if isinstance(other,basestring):
try:
other = type(self)(other)
except:
raise SyntaxError("Can't Convert string, \"{0:s}\" to an Expression Object".format(other))
obj.__expr += other.__expr
obj.__argsused |= other.__argsused
for v in other.__args:
if v not in obj.__args:
obj.__args.append(v)
for k,v in other.__vars.items():
if k not in obj.__vars:
obj.__vars[k] = v
elif v != obj.__vars[k]:
raise RuntimeError("Predefined Variable Conflict in '{0:s}': two differing values defined".format(k))
fn = self.core.ops[op]
obj.__expr.append(ExpressionFunction(fn['func'],fn['args'],fn['str'],fn['latex'],op,False))
return obj
def __apply(self,op):
fn = self.core.unary_ops[op]
obj = type(self)(self)
obj.__expr.append(ExpressionFunction(fn['func'],1,fn['str'],fn['latex'],op,False))
return obj
def __applycall(self,op):
fn = self.core.functions[op]
args = fn['args']
if args != 1 and args != '*' and not (isinstance(args, (list, tuple, set)) and (1 in args or '*' in args)):
raise RuntimeError("Can't Apply {0:s} function, doesn't accept exactly 1 argument".format(op))
obj = type(self)(self)
obj.__expr.append(ExpressionFunction(fn['func'],1,fn['str'],fn['latex'],op,False))
return obj
def __add__(self,other):
return self.__combine(other,'+')
def __sub__(self,other):
return self.__combine(other,'-')
def __mul__(self,other):
return self.__combine(other,'*')
def __div__(self,other):
return self.__combine(other,'/')
def __truediv__(self,other):
return self.__combine(other,'/')
def __pow__(self,other):
return self.__combine(other,'^')
def __mod__(self,other):
return self.__combine(other,'%')
def __and__(self,other):
return self.__combine(other,'&')
def __or__(self,other):
return self.__combine(other,'|')
def __xor__(self,other):
return self.__combine(other,'</>')
def __radd__(self,other):
return self.__rcombine(other,'+')
def __rsub__(self,other):
return self.__rcombine(other,'-')
def __rmul__(self,other):
return self.__rcombine(other,'*')
def __rdiv__(self,other):
return self.__rcombine(other,'/')
def __rtruediv__(self,other):
return self.__rcombine(other,'/')
def __rpow__(self,other):
return self.__rcombine(other,'^')
def __rmod__(self,other):
return self.__rcombine(other,'%')
def __rand__(self,other):
return self.__rcombine(other,'&')
def __ror__(self,other):
return self.__rcombine(other,'|')
def __rxor__(self,other):
return self.__rcombine(other,'</>')
def __iadd__(self,other):
return self.__icombine(other,'+')
def __isub__(self,other):
return self.__icombine(other,'-')
def __imul__(self,other):
return self.__icombine(other,'*')
def __idiv__(self,other):
return self.__icombine(other,'/')
def __itruediv__(self,other):
return self.__icombine(other,'/')
def __ipow__(self,other):
return self.__icombine(other,'^')
def __imod__(self,other):
return self.__icombine(other,'%')
def __iand__(self,other):
return self.__icombine(other,'&')
def __ior__(self,other):
return self.__icombine(other,'|')
def __ixor__(self,other):
return self.__icombine(other,'</>')
def __neg__(self):
return self.__apply('-')
def __invert__(self):
return self.__apply('!')
def __abs__(self):
return self.__applycall('abs')
def __getfunction(self,op):
if op[1] == 'FUNC':
fn = self.core.functions[op[0]]
fn['type'] = 'FUNC'
elif op[1] == 'UNARY':
fn = self.core.unary_ops[op[0]]
fn['type'] = 'UNARY'
fn['args'] = 1
elif op[1] == 'OP':
fn = self.core.ops[op[0]]
fn['type'] = 'OP'
return fn
def __compile(self):
self.__expr = []
stack = []
argc = []
__expect_op = False
v = self.__next(__expect_op)
while v != None:
if not __expect_op and v[1] == "OPEN":
stack.append(v)
__expect_op = False
elif __expect_op and v[1] == "CLOSE":
op = stack.pop()
while op[1] != "OPEN":
fs = self.__getfunction(op)
self.__expr.append(ExpressionFunction(fs['func'],fs['args'],fs['str'],fs['latex'],op[0],False))
op = stack.pop()
if len(stack) > 0 and stack[-1][0] in self.core.functions:
op = stack.pop()
fs = self.core.functions[op[0]]
args = argc.pop()
if fs['args'] != '+' and (args != fs['args'] and args not in fs['args']):
raise SyntaxError("Invalid number of arguments for {0:s} function".format(op[0]))
self.__expr.append(ExpressionFunction(fs['func'],args,fs['str'],fs['latex'],op[0],True))
__expect_op = True
elif __expect_op and v[0] == ",":
argc[-1] += 1
op = stack.pop()
while op[1] != "OPEN":
fs = self.__getfunction(op)
self.__expr.append(ExpressionFunction(fs['func'],fs['args'],fs['str'],fs['latex'],op[0],False))
op = stack.pop()
stack.append(op)
__expect_op = False
elif __expect_op and v[0] in self.core.ops:
fn = self.core.ops[v[0]]
if len(stack) == 0:
stack.append(v)
__expect_op = False
v = self.__next(__expect_op)
continue
op = stack.pop()
if op[0] == "(":
stack.append(op)
stack.append(v)
__expect_op = False
v = self.__next(__expect_op)
continue
fs = self.__getfunction(op)
while True:
if (fn['prec'] >= fs['prec']):
self.__expr.append(ExpressionFunction(fs['func'],fs['args'],fs['str'],fs['latex'],op[0],False))
if len(stack) == 0:
stack.append(v)
break
op = stack.pop()
if op[0] == "(":
stack.append(op)
stack.append(v)
break
fs = self.__getfunction(op)
else:
stack.append(op)
stack.append(v)
break
__expect_op = False
elif not __expect_op and v[0] in self.core.unary_ops:
fn = self.core.unary_ops[v[0]]
stack.append(v)
__expect_op = False
elif not __expect_op and v[0] in self.core.functions:
stack.append(v)
argc.append(1)
__expect_op = False
elif not __expect_op and v[1] == 'NAME':
self.__argsused.add(v[0])
if v[0] not in self.__args:
self.__args.append(v[0])
self.__expr.append(ExpressionVariable(v[0]))
__expect_op = True
elif not __expect_op and v[1] == 'VALUE':
self.__expr.append(ExpressionValue(v[0]))
__expect_op = True
else:
raise SyntaxError("Invalid Token \"{0:s}\" in Expression, Expected {1:s}".format(v,"Op" if __expect_op else "Value"))
v = self.__next(__expect_op)
if len(stack) > 0:
op = stack.pop()
while op[0] != "(":
fs = self.__getfunction(op)
self.__expr.append(ExpressionFunction(fs['func'],fs['args'],fs['str'],fs['latex'],op[0],False))
if len(stack) > 0:
op = stack.pop()
else:
break
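The __compile method above is a shunting-yard conversion from infix tokens to the RPN list stored in __expr, where a *lower* precedence value binds tighter (the core ops table assigns '^' prec 1, '*' prec 2, '+' prec 3). A minimal standalone sketch of the same idea, with a hypothetical simplified token set (values and binary operators only, no functions or unary ops):

```python
# Minimal shunting-yard sketch: infix token list -> RPN token list.
# Simplified model of Expression.__compile; PREC mirrors the ops table,
# where a smaller value means tighter binding.
PREC = {'+': 3, '-': 3, '*': 2, '/': 2, '^': 1}

def to_rpn(tokens):
    out, stack = [], []
    for t in tokens:
        if t == '(':
            stack.append(t)
        elif t == ')':
            while stack[-1] != '(':
                out.append(stack.pop())
            stack.pop()  # discard the '('
        elif t in PREC:
            # pop operators that bind at least as tightly (smaller or equal value)
            while stack and stack[-1] != '(' and PREC[stack[-1]] <= PREC[t]:
                out.append(stack.pop())
            stack.append(t)
        else:
            out.append(t)  # value or variable name
    while stack:
        out.append(stack.pop())
    return out
```

For example, `to_rpn(['a', '+', 'b', '*', 'c'])` yields `['a', 'b', 'c', '*', '+']`, matching the evaluation order __call__ replays from __expr.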

View File

@@ -0,0 +1,237 @@
import math
import sys
import re
if sys.version_info >= (3,):
xrange = range
basestring = str
class ExpressionObject(object):
def __init__(self,*args,**kwargs):
super(ExpressionObject,self).__init__(*args,**kwargs)
def toStr(self,args,expression):
return ""
def toRepr(self,args,expression):
return ""
def __call__(self,args,expression):
pass
class ExpressionValue(ExpressionObject):
def __init__(self,value,*args,**kwargs):
super(ExpressionValue,self).__init__(*args,**kwargs)
self.value = value
def toStr(self,args,expression):
if (isinstance(self.value,complex)):
V = [self.value.real,self.value.imag]
E = [0,0]
B = [0,0]
out = ["",""]
for i in xrange(2):
if V[i] == 0:
E[i] = 0
B[i] = 0
else:
E[i] = int(math.floor(math.log10(abs(V[i]))))
B[i] = V[i]*10**-E[i]
if E[i] in [0,1,2,3] and str(V[i])[-2:] == ".0":
B[i] = int(V[i])
E[i] = 0
if E[i] in [-1,-2] and len(str(V[i])) <= 7:
B[i] = V[i]
E[i] = 0
if i == 1:
fmt = "{{0:+{0:s}}}"
else:
fmt = "{{0:-{0:s}}}"
if type(B[i]) == int:
out[i] += fmt.format('d').format(B[i])
else:
out[i] += fmt.format('.5f').format(B[i]).rstrip("0").rstrip(".")
if i == 1:
out[i] += "\\imath"
if E[i] != 0:
out[i] += "\\times10^{{{0:d}}}".format(E[i])
return "\\left(" + ''.join(out) + "\\right)"
elif (isinstance(self.value,float)):
V = self.value
E = 0
B = 0
out = ""
if V == 0:
E = 0
B = 0
else:
E = int(math.floor(math.log10(abs(V))))
B = V*10**-E
if E in [0,1,2,3] and str(V)[-2:] == ".0":
B = int(V)
E = 0
if E in [-1,-2] and len(str(V)) <= 7:
B = V
E = 0
if type(B) == int:
out += "{0:-d}".format(B)
else:
out += "{0:-.5f}".format(B).rstrip("0").rstrip(".")
if E != 0:
out += "\\times10^{{{0:d}}}".format(E)
return "\\left(" + out + "\\right)"
else:
return out
else:
return str(self.value)
def toRepr(self,args,expression):
return str(self.value)
def __call__(self,args,expression):
return self.value
def __repr__(self):
return "<{0:s}.{1:s}({2:s}) object at {3:0=#10x}>".format(type(self).__module__,type(self).__name__,str(self.value),id(self))
class ExpressionFunction(ExpressionObject):
def __init__(self,function,nargs,form,display,id,isfunc,*args,**kwargs):
super(ExpressionFunction,self).__init__(*args,**kwargs)
self.function = function
self.nargs = nargs
self.form = form
self.display = display
self.id = id
self.isfunc = isfunc
def toStr(self,args,expression):
params = []
for i in xrange(self.nargs):
params.append(args.pop())
if self.isfunc:
return str(self.display.format(','.join(params[::-1])))
else:
return str(self.display.format(*params[::-1]))
def toRepr(self,args,expression):
params = []
for i in xrange(self.nargs):
params.append(args.pop())
if self.isfunc:
return str(self.form.format(','.join(params[::-1])))
else:
return str(self.form.format(*params[::-1]))
def __call__(self,args,expression):
params = []
for i in xrange(self.nargs):
params.append(args.pop())
return self.function(*params[::-1])
def __repr__(self):
return "<{0:s}.{1:s}({2:s},{3:d}) object at {4:0=#10x}>".format(type(self).__module__,type(self).__name__,str(self.id),self.nargs,id(self))
class ExpressionVariable(ExpressionObject):
def __init__(self,name,*args,**kwargs):
super(ExpressionVariable,self).__init__(*args,**kwargs)
self.name = name
def toStr(self,args,expression):
return str(self.name)
def toRepr(self,args,expression):
return str(self.name)
def __call__(self,args,expression):
if self.name in expression.variables:
return expression.variables[self.name]
else:
return 0 # Default variables to return 0
def __repr__(self):
return "<{0:s}.{1:s}({2:s}) object at {3:0=#10x}>".format(type(self).__module__,type(self).__name__,str(self.name),id(self))
class Core():
constants = {}
unary_ops = {}
ops = {}
functions = {}
smatch = re.compile(r"\s*,")
vmatch = re.compile(r"\s*"
"(?:"
"(?P<oct>"
"(?P<octsign>[+-]?)"
r"\s*0o"
"(?P<octvalue>[0-7]+)"
")|(?P<hex>"
"(?P<hexsign>[+-]?)"
r"\s*0x"
"(?P<hexvalue>[0-9a-fA-F]+)"
")|(?P<bin>"
"(?P<binsign>[+-]?)"
r"\s*0b"
"(?P<binvalue>[01]+)"
")|(?P<dec>"
"(?P<rsign>[+-]?)"
r"\s*"
r"(?P<rvalue>(?:\d+\.\d+|\d+\.|\.\d+|\d+))"
"(?:"
"[Ee]"
r"(?P<rexpoent>[+-]?\d+)"
")?"
"(?:"
r"\s*"
r"(?P<sep>(?(rvalue)\+|))?"
r"\s*"
"(?P<isign>(?(rvalue)(?(sep)[+-]?|[+-])|[+-]?)?)"
r"\s*"
r"(?P<ivalue>(?:\d+\.\d+|\d+\.|\.\d+|\d+))"
"(?:"
"[Ee]"
r"(?P<iexpoent>[+-]?\d+)"
")?"
"[ij]"
")?"
")"
")")
nmatch = re.compile(r"\s*([a-zA-Z_][a-zA-Z0-9_]*)")
gsmatch = re.compile(r'\s*(\()')
gematch = re.compile(r'\s*(\))')
def recalculateFMatch(self):
fks = sorted(self.functions.keys(), key=len, reverse=True)
oks = sorted(self.ops.keys(), key=len, reverse=True)
uks = sorted(self.unary_ops.keys(), key=len, reverse=True)
self.fmatch = re.compile(r'\s*(' + '|'.join(map(re.escape,fks)) + ')')
self.omatch = re.compile(r'\s*(' + '|'.join(map(re.escape,oks)) + ')')
self.umatch = re.compile(r'\s*(' + '|'.join(map(re.escape,uks)) + ')')
def addFn(self,id,str,latex,args,func):
self.functions[id] = {
'str': str,
'latex': latex,
'args': args,
'func': func}
def addOp(self,id,str,latex,single,prec,func):
if single:
raise RuntimeError("Single Ops Not Yet Supported")
self.ops[id] = {
'str': str,
'latex': latex,
'args': 2,
'prec': prec,
'func': func}
def addUnaryOp(self,id,str,latex,func):
self.unary_ops[id] = {
'str': str,
'latex': latex,
'args': 1,
'prec': 0,
'func': func}
def addConst(self,name,value):
self.constants[name] = value
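The vmatch pattern above recognizes signed decimal, hex, octal, and binary literals via named groups, which __next then converts with the `int(sign + "1") * value` idiom. A cut-down, hypothetical version (hex and real decimals only) showing how the named groups drive the conversion:

```python
import re

# Simplified version of Core.vmatch: hex or decimal literals with optional sign.
NUM = re.compile(
    r"\s*(?:"
    r"(?P<hex>(?P<hexsign>[+-]?)\s*0x(?P<hexvalue>[0-9a-fA-F]+))"
    r"|(?P<dec>(?P<rsign>[+-]?)\s*(?P<rvalue>\d+\.\d+|\d+\.|\.\d+|\d+))"
    r")")

def parse_number(text):
    m = NUM.match(text)
    g = m.groupdict()
    if g['hex']:
        # int(sign + "1") gives +1/-1, the same trick __next uses
        return int(g['hexsign'] + '1') * int(g['hexvalue'], 16)
    if '.' in g['rvalue']:
        return int(g['rsign'] + '1') * float(g['rvalue'])
    return int(g['rsign'] + '1') * int(g['rvalue'])
```

For example, `parse_number("-0x1F")` gives -31 and `parse_number("3.5")` gives 3.5.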

View File

@@ -0,0 +1,2 @@
from . import equation_base as equation_base
from .ExpressionCore import ExpressionValue, ExpressionFunction, ExpressionVariable, Core

View File

@@ -0,0 +1,106 @@
try:
import numpy as np
has_numpy = True
except ImportError:
import math
has_numpy = False
try:
import scipy.constants
has_scipy = True
except ImportError:
has_scipy = False
import operator as op
from functools import reduce
from .similar import sim, nsim, gsim, lsim
def equation_extend(core):
def product(*args):
if len(args) == 1 and has_numpy:
return np.prod(args[0])
else:
return reduce(op.mul,args,1)
def sumargs(*args):
if len(args) == 1:
return sum(args[0])
else:
return sum(args)
core.addOp('+',"({0:s} + {1:s})","\\left({0:s} + {1:s}\\right)",False,3,op.add)
core.addOp('-',"({0:s} - {1:s})","\\left({0:s} - {1:s}\\right)",False,3,op.sub)
core.addOp('*',"({0:s} * {1:s})","\\left({0:s} \\times {1:s}\\right)",False,2,op.mul)
core.addOp('/',"({0:s} / {1:s})","\\frac{{{0:s}}}{{{1:s}}}",False,2,op.truediv)
core.addOp('%',"({0:s} % {1:s})","\\left({0:s} \\bmod {1:s}\\right)",False,2,op.mod)
core.addOp('^',"({0:s} ^ {1:s})","{0:s}^{{{1:s}}}",False,1,op.pow)
core.addOp('**',"({0:s} ^ {1:s})","{0:s}^{{{1:s}}}",False,1,op.pow)
core.addOp('&',"({0:s} & {1:s})","\\left({0:s} \\land {1:s}\\right)",False,4,op.and_)
core.addOp('|',"({0:s} | {1:s})","\\left({0:s} \\lor {1:s}\\right)",False,4,op.or_)
core.addOp('</>',"({0:s} </> {1:s})","\\left({0:s} \\oplus {1:s}\\right)",False,4,op.xor)
core.addOp('&|',"({0:s} </> {1:s})","\\left({0:s} \\oplus {1:s}\\right)",False,4,op.xor)
core.addOp('|&',"({0:s} </> {1:s})","\\left({0:s} \\oplus {1:s}\\right)",False,4,op.xor)
core.addOp('==',"({0:s} == {1:s})","\\left({0:s} = {1:s}\\right)",False,5,op.eq)
core.addOp('=',"({0:s} == {1:s})","\\left({0:s} = {1:s}\\right)",False,5,op.eq)
core.addOp('~',"({0:s} ~ {1:s})","\\left({0:s} \\approx {1:s}\\right)",False,5,sim)
core.addOp('!~',"({0:s} !~ {1:s})","\\left({0:s} \\not\\approx {1:s}\\right)",False,5,nsim)
core.addOp('!=',"({0:s} != {1:s})","\\left({0:s} \\neq {1:s}\\right)",False,5,op.ne)
core.addOp('<>',"({0:s} != {1:s})","\\left({0:s} \\neq {1:s}\\right)",False,5,op.ne)
core.addOp('><',"({0:s} != {1:s})","\\left({0:s} \\neq {1:s}\\right)",False,5,op.ne)
core.addOp('<',"({0:s} < {1:s})","\\left({0:s} < {1:s}\\right)",False,5,op.lt)
core.addOp('>',"({0:s} > {1:s})","\\left({0:s} > {1:s}\\right)",False,5,op.gt)
core.addOp('<=',"({0:s} <= {1:s})","\\left({0:s} \\leq {1:s}\\right)",False,5,op.le)
core.addOp('>=',"({0:s} >= {1:s})","\\left({0:s} \\geq {1:s}\\right)",False,5,op.ge)
core.addOp('=<',"({0:s} <= {1:s})","\\left({0:s} \\leq {1:s}\\right)",False,5,op.le)
core.addOp('=>',"({0:s} >= {1:s})","\\left({0:s} \\geq {1:s}\\right)",False,5,op.ge)
core.addOp('<~',"({0:s} <~ {1:s})","\\left({0:s} \\lessapprox {1:s}\\right)",False,5,lsim)
core.addOp('>~',"({0:s} >~ {1:s})","\\left({0:s} \\gtrapprox {1:s}\\right)",False,5,gsim)
core.addOp('~<',"({0:s} <~ {1:s})","\\left({0:s} \\lessapprox {1:s}\\right)",False,5,lsim)
core.addOp('~>',"({0:s} >~ {1:s})","\\left({0:s} \\gtrapprox {1:s}\\right)",False,5,gsim)
core.addUnaryOp('!',"(!{0:s})","\\neg{0:s}",op.not_)
core.addUnaryOp('-',"-{0:s}","-{0:s}",op.neg)
core.addFn('abs',"abs({0:s})","\\left|{0:s}\\right|",1,op.abs)
core.addFn('sum',"sum({0:s})","\\sum\\left({0:s}\\right)",'+',sumargs)
core.addFn('prod',"prod({0:s})","\\prod\\left({0:s}\\right)",'+',product)
if has_numpy:
core.addFn('floor',"floor({0:s})","\\lfloor {0:s} \\rfloor",1,np.floor)
core.addFn('ceil',"ceil({0:s})","\\lceil {0:s} \\rceil",1,np.ceil)
core.addFn('round',"round({0:s})","\\lfloor {0:s} \\rceil",1,np.round)
core.addFn('sin',"sin({0:s})","\\sin\\left({0:s}\\right)",1,np.sin)
core.addFn('cos',"cos({0:s})","\\cos\\left({0:s}\\right)",1,np.cos)
core.addFn('tan',"tan({0:s})","\\tan\\left({0:s}\\right)",1,np.tan)
core.addFn('re',"re({0:s})","\\Re\\left({0:s}\\right)",1,np.real)
core.addFn('im',"im({0:s})","\\Im\\left({0:s}\\right)",1,np.imag)
core.addFn('sqrt',"sqrt({0:s})","\\sqrt{{{0:s}}}",1,np.sqrt)
core.addConst("pi",np.pi)
core.addConst("e",np.e)
core.addConst("Inf",np.inf)
core.addConst("NaN",np.nan)
else:
core.addFn('floor',"floor({0:s})","\\lfloor {0:s} \\rfloor",1,math.floor)
core.addFn('ceil',"ceil({0:s})","\\lceil {0:s} \\rceil",1,math.ceil)
core.addFn('round',"round({0:s})","\\lfloor {0:s} \\rceil",1,round)
core.addFn('sin',"sin({0:s})","\\sin\\left({0:s}\\right)",1,math.sin)
core.addFn('cos',"cos({0:s})","\\cos\\left({0:s}\\right)",1,math.cos)
core.addFn('tan',"tan({0:s})","\\tan\\left({0:s}\\right)",1,math.tan)
core.addFn('re',"re({0:s})","\\Re\\left({0:s}\\right)",1,lambda x: complex(x).real)
core.addFn('im',"im({0:s})","\\Im\\left({0:s}\\right)",1,lambda x: complex(x).imag)
core.addFn('sqrt',"sqrt({0:s})","\\sqrt{{{0:s}}}",1,math.sqrt)
core.addConst("pi",math.pi)
core.addConst("e",math.e)
core.addConst("Inf",float("Inf"))
core.addConst("NaN",float("NaN"))
if has_scipy:
core.addConst("h",scipy.constants.h)
core.addConst("hbar",scipy.constants.hbar)
core.addConst("m_e",scipy.constants.m_e)
core.addConst("m_p",scipy.constants.m_p)
core.addConst("m_n",scipy.constants.m_n)
core.addConst("c",scipy.constants.c)
core.addConst("N_A",scipy.constants.N_A)
core.addConst("mu_0",scipy.constants.mu_0)
core.addConst("eps_0",scipy.constants.epsilon_0)
core.addConst("k",scipy.constants.k)
core.addConst("G",scipy.constants.G)
core.addConst("g",scipy.constants.g)
core.addConst("q",scipy.constants.e)
core.addConst("R",scipy.constants.R)
core.addConst("sigma",scipy.constants.sigma)
core.addConst("Rb",scipy.constants.Rydberg)

View File

@@ -0,0 +1,49 @@
_tol = 1e-5
def sim(a,b):
if (a==b):
return True
elif a == 0 or b == 0:
return False
if (a<b):
return (1-a/b)<=_tol
else:
return (1-b/a)<=_tol
def nsim(a,b):
if (a==b):
return False
elif a == 0 or b == 0:
return True
if (a<b):
return (1-a/b)>_tol
else:
return (1-b/a)>_tol
def gsim(a,b):
if a >= b:
return True
return (1-a/b)<=_tol
def lsim(a,b):
if a <= b:
return True
return (1-b/a)<=_tol
def set_tol(value=1e-5):
r"""Set Error Tolerance
Set the tolerance for determining whether two numbers are similar, i.e.
:math:`\left|\frac{a}{b}\right| = 1 \pm tolerance`
Parameters
----------
value: float
The value to set the tolerance to. It should be very small, as it represents the
fraction of acceptable error in determining whether two values are the same.
"""
global _tol
if isinstance(value,float):
_tol = value
else:
raise TypeError(type(value))
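The helpers above treat two nonzero numbers as similar when their ratio is within _tol of 1. A standalone restatement of the sim() criterion with the tolerance passed explicitly (hypothetical name `similar`, not part of the module):

```python
def similar(a, b, tol=1e-5):
    # Same criterion as sim() above: the ratio of the two values is within tol of 1.
    if a == b:
        return True
    if a == 0 or b == 0:
        return False
    if a < b:
        return (1 - a / b) <= tol
    return (1 - b / a) <= tol
```

So `similar(1.0, 1.0000001)` is True while `similar(1.0, 2.0)` is False at the default tolerance.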

View File

@@ -0,0 +1,51 @@
import re
from decimal import Decimal
from functools import reduce
class RegexInplaceParser(object):
def __init__(self, string):
self.string = string
def add(self, string):
while(len(re.findall(r"[+]{1}[-]?", string)) != 0):
string = re.sub(r"[-]?\d+[.]?\d*[+]{1}[-]?\d+[.]?\d*", str("%f" % reduce((lambda x, y: x + y), [Decimal(i) for i in re.split(r"[+]{1}", re.search(r"[-]?\d+[.]?\d*[+]{1}[-]?\d+[.]?\d*", string).group())])), string, 1)
return string
def sub(self, string):
while(len(re.findall(r"\d+[.]?\d*[-]{1,2}\d+[.]?\d*", string)) != 0):
g = re.search(r"\d+[.]?\d*[-]{1,2}\d+[.]?\d*", string).group()
if(re.search(r"[-]{1,2}", g).group() == "-"):
r = re.sub(r"[-]{1}", "+-", g, 1)
string = re.sub(g, r, string, 1)
elif(re.search(r"[-]{1,2}", g).group() == "--"):
r = re.sub(r"[-]{2}", "+", g, 1)
string = re.sub(g, r, string, 1)
else:
pass
return string
def mul(self, string):
while(len(re.findall(r"[*]{1}[-]?", string)) != 0):
string = re.sub(r"[-]?\d+[.]?\d*[*]{1}[-]?\d+[.]?\d*", str("%f" % reduce((lambda x, y: x * y), [Decimal(i) for i in re.split(r"[*]{1}", re.search(r"[-]?\d+[.]?\d*[*]{1}[-]?\d+[.]?\d*", string).group())])), string, 1)
return string
def div(self, string):
while(len(re.findall(r"[/]{1}[-]?", string)) != 0):
string = re.sub(r"[-]?\d+[.]?\d*[/]{1}[-]?\d+[.]?\d*", str("%f" % reduce((lambda x, y: x / y), [Decimal(i) for i in re.split(r"[/]{1}", re.search(r"[-]?\d+[.]?\d*[/]{1}[-]?\d+[.]?\d*", string).group())])), string, 1)
return string
def exp(self, string):
while(len(re.findall(r"[\^]{1}[-]?", string)) != 0):
string = re.sub(r"[-]?\d+[.]?\d*[\^]{1}[-]?\d+[.]?\d*", str("%f" % reduce((lambda x, y: x ** y), [Decimal(i) for i in re.split(r"[\^]{1}", re.search(r"[-]?\d+[.]?\d*[\^]{1}[-]?\d+[.]?\d*", string).group())])), string, 1)
return string
def evaluate(self):
string = self.string
string = self.exp(string)
string = self.div(string)
string = self.mul(string)
string = self.sub(string)
string = self.add(string)
return string
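RegexInplaceParser works by repeatedly finding the leftmost operator pair, evaluating it with Decimal, and substituting the result back into the string, one precedence level at a time (exp, then div, mul, sub, add). A self-contained sketch of that reduction for addition only:

```python
import re
from decimal import Decimal
from functools import reduce

# Same shape of pattern add() above uses to locate the leftmost x+y pair.
PAIR = re.compile(r"[-]?\d+[.]?\d*[+][-]?\d+[.]?\d*")

def reduce_add(string):
    # Repeatedly evaluate the leftmost addition in place, as add() does.
    while re.search(r"[+][-]?", string):
        pair = PAIR.search(string).group()
        total = reduce(lambda x, y: x + y, [Decimal(p) for p in re.split(r"[+]", pair)])
        string = PAIR.sub("%f" % total, string, 1)
    return string
```

For example, `reduce_add("1+2+3")` collapses to "3.000000+3" and then to "6.000000", mirroring how evaluate() leaves a single formatted number behind.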

View File

@@ -0,0 +1,34 @@
# Titan Robotics Team 2022: Expression submodule
# Written by Arthur Lu
# Notes:
# this should be imported as a python module using 'from tra_analysis.Equation import parser'
# setup:
__version__ = "0.0.4-alpha"
__changelog__ = """changelog:
0.0.4-alpha:
- moved individual parsers to their own files
0.0.3-alpha:
- readded old regex based parser as RegexInplaceParser
0.0.2-alpha:
- wrote BNF using pyparsing and uses a BNF metasyntax
- renamed this submodule parser
0.0.1-alpha:
- took items from equation.ipynb and ported here
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
)
__all__ = {
"BNF",
"RegexInplaceParser",
"HybridExpressionParser"
}
from .BNF import BNF as BNF
from .RegexInplaceParser import RegexInplaceParser as RegexInplaceParser
from .Hybrid import HybridExpressionParser
from .Hybrid_Utils import equation_base, Core

View File

@@ -0,0 +1,21 @@
# Titan Robotics Team 2022: py2 module
# Written by Arthur Lu
# Notes:
# this module should only be used internally, contains old python 2.X functions that have been removed.
# setup:
from __future__ import division
__version__ = "1.0.0"
__changelog__ = """changelog:
1.0.0:
- added cmp function
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
)
def cmp(a, b):
return (a > b) - (a < b)

View File

@@ -0,0 +1,7 @@
import numpy as np
def calculate(starting_score, opposing_score, observed, N, K):
expected = 1/(1+10**((np.array(opposing_score) - starting_score)/N))
return starting_score + K*(np.sum(observed) - np.sum(expected))
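calculate() above is the standard Elo update: the expected score against each opponent comes from a logistic curve with scale N, and the rating moves by K times the difference between actual and expected totals. A numpy-free restatement (hypothetical helper name `elo_update`) with the same signature:

```python
def elo_update(starting_score, opposing_scores, observed, N, K):
    # Expected score vs each opponent from the logistic curve, then the
    # rating moves by K * (actual - expected), as in calculate() above.
    expected = [1 / (1 + 10 ** ((opp - starting_score) / N)) for opp in opposing_scores]
    return starting_score + K * (sum(observed) - sum(expected))
```

With the common N=400, K=24, a 1500-rated player beating a 1500-rated opponent moves to 1512, and drawing leaves the rating unchanged.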

View File

@@ -0,0 +1,99 @@
import math
class Glicko2:
_tau = 0.5
def getRating(self):
return (self.__rating * 173.7178) + 1500
def setRating(self, rating):
self.__rating = (rating - 1500) / 173.7178
rating = property(getRating, setRating)
def getRd(self):
return self.__rd * 173.7178
def setRd(self, rd):
self.__rd = rd / 173.7178
rd = property(getRd, setRd)
def __init__(self, rating = 1500, rd = 350, vol = 0.06):
self.setRating(rating)
self.setRd(rd)
self.vol = vol
def _preRatingRD(self):
self.__rd = math.sqrt(math.pow(self.__rd, 2) + math.pow(self.vol, 2))
def update_player(self, rating_list, RD_list, outcome_list):
rating_list = [(x - 1500) / 173.7178 for x in rating_list]
RD_list = [x / 173.7178 for x in RD_list]
v = self._v(rating_list, RD_list)
self.vol = self._newVol(rating_list, RD_list, outcome_list, v)
self._preRatingRD()
self.__rd = 1 / math.sqrt((1 / math.pow(self.__rd, 2)) + (1 / v))
tempSum = 0
for i in range(len(rating_list)):
tempSum += self._g(RD_list[i]) * \
(outcome_list[i] - self._E(rating_list[i], RD_list[i]))
self.__rating += math.pow(self.__rd, 2) * tempSum
def _newVol(self, rating_list, RD_list, outcome_list, v):
i = 0
delta = self._delta(rating_list, RD_list, outcome_list, v)
a = math.log(math.pow(self.vol, 2))
tau = self._tau
x0 = a
x1 = 0
while x0 != x1:
# New iteration, so x(i) becomes x(i-1)
x0 = x1
d = math.pow(self.__rating, 2) + v + math.exp(x0)
h1 = -(x0 - a) / math.pow(tau, 2) - 0.5 * math.exp(x0) \
/ d + 0.5 * math.exp(x0) * math.pow(delta / d, 2)
h2 = -1 / math.pow(tau, 2) - 0.5 * math.exp(x0) * \
(math.pow(self.__rating, 2) + v) \
/ math.pow(d, 2) + 0.5 * math.pow(delta, 2) * math.exp(x0) \
* (math.pow(self.__rating, 2) + v - math.exp(x0)) / math.pow(d, 3)
x1 = x0 - (h1 / h2)
return math.exp(x1 / 2)
def _delta(self, rating_list, RD_list, outcome_list, v):
tempSum = 0
for i in range(len(rating_list)):
tempSum += self._g(RD_list[i]) * (outcome_list[i] - self._E(rating_list[i], RD_list[i]))
return v * tempSum
def _v(self, rating_list, RD_list):
tempSum = 0
for i in range(len(rating_list)):
tempE = self._E(rating_list[i], RD_list[i])
tempSum += math.pow(self._g(RD_list[i]), 2) * tempE * (1 - tempE)
return 1 / tempSum
def _E(self, p2rating, p2RD):
return 1 / (1 + math.exp(-1 * self._g(p2RD) * \
(self.__rating - p2rating)))
def _g(self, RD):
return 1 / math.sqrt(1 + 3 * math.pow(RD, 2) / math.pow(math.pi, 2))
def did_not_compete(self):
self._preRatingRD()
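The _g and _E helpers above are Glicko-2's deviation weighting and expected-score functions: g(phi) discounts opponents with large rating deviation, and E gives the win probability on the internal scale. Restated standalone (assuming the same formulas, outside the class):

```python
import math

def g(phi):
    # Deviation weighting: g(0) == 1, and g shrinks toward 0 as phi grows.
    return 1 / math.sqrt(1 + 3 * phi ** 2 / math.pi ** 2)

def E(mu, mu_j, phi_j):
    # Expected score against an opponent at mu_j with deviation phi_j.
    return 1 / (1 + math.exp(-g(phi_j) * (mu - mu_j)))
```

Sanity checks: an exactly-known opponent is not discounted (g(0) == 1), and equal ratings give an expected score of 0.5.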

View File

@@ -0,0 +1,907 @@
from __future__ import absolute_import
from itertools import chain
import math
from six import iteritems
from six.moves import map, range, zip
from six import iterkeys
import copy
try:
from numbers import Number
except ImportError:
Number = (int, long, float, complex)
inf = float('inf')
class Gaussian(object):
#: Precision, the inverse of the variance.
pi = 0
#: Precision adjusted mean, the precision multiplied by the mean.
tau = 0
def __init__(self, mu=None, sigma=None, pi=0, tau=0):
if mu is not None:
if sigma is None:
raise TypeError('sigma argument is needed')
elif sigma == 0:
raise ValueError('sigma**2 should be greater than 0')
pi = sigma ** -2
tau = pi * mu
self.pi = pi
self.tau = tau
@property
def mu(self):
return self.pi and self.tau / self.pi
@property
def sigma(self):
return math.sqrt(1 / self.pi) if self.pi else inf
def __mul__(self, other):
pi, tau = self.pi + other.pi, self.tau + other.tau
return Gaussian(pi=pi, tau=tau)
def __truediv__(self, other):
pi, tau = self.pi - other.pi, self.tau - other.tau
return Gaussian(pi=pi, tau=tau)
__div__ = __truediv__ # for Python 2
def __eq__(self, other):
return self.pi == other.pi and self.tau == other.tau
def __lt__(self, other):
return self.mu < other.mu
def __le__(self, other):
return self.mu <= other.mu
def __gt__(self, other):
return self.mu > other.mu
def __ge__(self, other):
return self.mu >= other.mu
def __repr__(self):
return 'N(mu={:.3f}, sigma={:.3f})'.format(self.mu, self.sigma)
def _repr_latex_(self):
latex = r'\mathcal{{ N }}( {:.3f}, {:.3f}^2 )'.format(self.mu, self.sigma)
return '$%s$' % latex
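Gaussian above stores each distribution in natural-parameter form (pi = 1/sigma^2, tau = pi * mu), so multiplying two densities in __mul__ is just adding parameters. A standalone check of that product rule (hypothetical free function, not part of the class):

```python
import math

def gaussian_product(mu1, sigma1, mu2, sigma2):
    # Multiply two Gaussian densities in precision form, as Gaussian.__mul__ does,
    # then convert back to (mu, sigma).
    pi1, pi2 = sigma1 ** -2, sigma2 ** -2
    tau1, tau2 = pi1 * mu1, pi2 * mu2
    pi, tau = pi1 + pi2, tau1 + tau2
    return tau / pi, math.sqrt(1 / pi)
```

For example, multiplying N(0, 1) by N(2, 1) gives a mean of 1 and a variance of 1/2, which is why the class keeps pi/tau rather than mu/sigma internally.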
class Matrix(list):
def __init__(self, src, height=None, width=None):
if callable(src):
f, src = src, {}
size = [height, width]
if not height:
def set_height(height):
size[0] = height
size[0] = set_height
if not width:
def set_width(width):
size[1] = width
size[1] = set_width
try:
for (r, c), val in f(*size):
src[r, c] = val
except TypeError:
raise TypeError('A callable src must return an iterable '
'which generates a tuple containing '
'coordinate and value')
height, width = tuple(size)
if height is None or width is None:
raise TypeError('A callable src must call set_height and '
'set_width if the size is non-deterministic')
if isinstance(src, list):
is_number = lambda x: isinstance(x, Number)
unique_col_sizes = set(map(len, src))
all_numbers = all(map(is_number, sum(src, [])))
if len(unique_col_sizes) != 1 or not all_numbers:
raise ValueError('src must be a rectangular array of numbers')
two_dimensional_array = src
elif isinstance(src, dict):
if not height or not width:
w = h = 0
for r, c in iterkeys(src):
if not height:
h = max(h, r + 1)
if not width:
w = max(w, c + 1)
if not height:
height = h
if not width:
width = w
two_dimensional_array = []
for r in range(height):
row = []
two_dimensional_array.append(row)
for c in range(width):
row.append(src.get((r, c), 0))
else:
raise TypeError('src must be a list or dict or callable')
super(Matrix, self).__init__(two_dimensional_array)
@property
def height(self):
return len(self)
@property
def width(self):
return len(self[0])
def transpose(self):
height, width = self.height, self.width
src = {}
for c in range(width):
for r in range(height):
src[c, r] = self[r][c]
return type(self)(src, height=width, width=height)
def minor(self, row_n, col_n):
height, width = self.height, self.width
if not (0 <= row_n < height):
raise ValueError('row_n should be between 0 and %d' % height)
elif not (0 <= col_n < width):
raise ValueError('col_n should be between 0 and %d' % width)
two_dimensional_array = []
for r in range(height):
if r == row_n:
continue
row = []
two_dimensional_array.append(row)
for c in range(width):
if c == col_n:
continue
row.append(self[r][c])
return type(self)(two_dimensional_array)
def determinant(self):
height, width = self.height, self.width
if height != width:
raise ValueError('Only square matrix can calculate a determinant')
tmp, rv = copy.deepcopy(self), 1.
for c in range(width - 1, 0, -1):
pivot, r = max((abs(tmp[r][c]), r) for r in range(c + 1))
pivot = tmp[r][c]
if not pivot:
return 0.
tmp[r], tmp[c] = tmp[c], tmp[r]
if r != c:
rv = -rv
rv *= pivot
fact = -1. / pivot
for r in range(c):
f = fact * tmp[r][c]
for x in range(c):
tmp[r][x] += f * tmp[c][x]
return rv * tmp[0][0]
def adjugate(self):
height, width = self.height, self.width
if height != width:
raise ValueError('Only square matrix can be adjugated')
if height == 2:
a, b = self[0][0], self[0][1]
c, d = self[1][0], self[1][1]
return type(self)([[d, -b], [-c, a]])
src = {}
for r in range(height):
for c in range(width):
sign = -1 if (r + c) % 2 else 1
src[r, c] = self.minor(r, c).determinant() * sign
return type(self)(src, height, width)
def inverse(self):
if self.height == self.width == 1:
return type(self)([[1. / self[0][0]]])
return (1. / self.determinant()) * self.adjugate()
def __add__(self, other):
height, width = self.height, self.width
if (height, width) != (other.height, other.width):
raise ValueError('Must be same size')
src = {}
for r in range(height):
for c in range(width):
src[r, c] = self[r][c] + other[r][c]
return type(self)(src, height, width)
def __mul__(self, other):
if self.width != other.height:
raise ValueError('Bad size')
height, width = self.height, other.width
src = {}
for r in range(height):
for c in range(width):
src[r, c] = sum(self[r][x] * other[x][c]
for x in range(self.width))
return type(self)(src, height, width)
def __rmul__(self, other):
if not isinstance(other, Number):
raise TypeError('The operand should be a number')
height, width = self.height, self.width
src = {}
for r in range(height):
for c in range(width):
src[r, c] = other * self[r][c]
return type(self)(src, height, width)
def __repr__(self):
return '{}({})'.format(type(self).__name__, super(Matrix, self).__repr__())
def _repr_latex_(self):
rows = [' && '.join(['%.3f' % cell for cell in row]) for row in self]
latex = r'\begin{matrix} %s \end{matrix}' % r'\\'.join(rows)
return '$%s$' % latex
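# A standalone sketch of the adjugate-based inverse the Matrix class uses,
# on a plain 2x2 list-of-lists (helper names here are illustrative only):
def _det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]
def _inv2(m):
    d = _det2(m)
    # adjugate of [[a, b], [c, d]] is [[d, -b], [-c, a]]; inverse = adjugate / det
    return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]
# e.g. _inv2([[4., 7.], [2., 6.]]) == [[0.6, -0.7], [-0.2, 0.4]]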
def _gen_erfcinv(erfc, math=math):
def erfcinv(y):
"""The inverse function of erfc."""
if y >= 2:
return -100.
elif y <= 0:
return 100.
zero_point = y < 1
if not zero_point:
y = 2 - y
t = math.sqrt(-2 * math.log(y / 2.))
x = -0.70711 * \
((2.30753 + t * 0.27061) / (1. + t * (0.99229 + t * 0.04481)) - t)
for i in range(2):
err = erfc(x) - y
x += err / (1.12837916709551257 * math.exp(-(x ** 2)) - x * err)
return x if zero_point else -x
return erfcinv
def _gen_ppf(erfc, math=math):
erfcinv = _gen_erfcinv(erfc, math)
def ppf(x, mu=0, sigma=1):
return mu - sigma * math.sqrt(2) * erfcinv(2 * x)
return ppf
def erfc(x):
z = abs(x)
t = 1. / (1. + z / 2.)
r = t * math.exp(-z * z - 1.26551223 + t * (1.00002368 + t * (
0.37409196 + t * (0.09678418 + t * (-0.18628806 + t * (
0.27886807 + t * (-1.13520398 + t * (1.48851587 + t * (
-0.82215223 + t * 0.17087277
)))
)))
)))
return 2. - r if x < 0 else r
def cdf(x, mu=0, sigma=1):
return 0.5 * erfc(-(x - mu) / (sigma * math.sqrt(2)))
def pdf(x, mu=0, sigma=1):
    return (1 / (math.sqrt(2 * math.pi) * abs(sigma)) *
            math.exp(-(((x - mu) / abs(sigma)) ** 2 / 2)))
ppf = _gen_ppf(erfc)
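# The rational erfc approximation above feeds cdf/pdf/ppf; a standalone sanity
# sketch of the same cdf formula using the stdlib's erfc and
# statistics.NormalDist as a reference (names here are illustrative):
import math
import statistics
_nd = statistics.NormalDist()
def _cdf_ref(x, mu=0, sigma=1):
    return 0.5 * math.erfc(-(x - mu) / (sigma * math.sqrt(2)))
# _cdf_ref matches the stdlib normal cdf, and inv_cdf inverts it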
def choose_backend(backend):
if backend is None: # fallback
return cdf, pdf, ppf
elif backend == 'mpmath':
try:
import mpmath
except ImportError:
raise ImportError('Install "mpmath" to use this backend')
return mpmath.ncdf, mpmath.npdf, _gen_ppf(mpmath.erfc, math=mpmath)
elif backend == 'scipy':
try:
from scipy.stats import norm
except ImportError:
raise ImportError('Install "scipy" to use this backend')
return norm.cdf, norm.pdf, norm.ppf
raise ValueError('%r backend is not defined' % backend)
def available_backends():
backends = [None]
for backend in ['mpmath', 'scipy']:
try:
__import__(backend)
except ImportError:
continue
backends.append(backend)
return backends
class Node(object):
pass
class Variable(Node, Gaussian):
def __init__(self):
self.messages = {}
super(Variable, self).__init__()
def set(self, val):
delta = self.delta(val)
self.pi, self.tau = val.pi, val.tau
return delta
def delta(self, other):
pi_delta = abs(self.pi - other.pi)
if pi_delta == inf:
return 0.
return max(abs(self.tau - other.tau), math.sqrt(pi_delta))
def update_message(self, factor, pi=0, tau=0, message=None):
message = message or Gaussian(pi=pi, tau=tau)
old_message, self[factor] = self[factor], message
return self.set(self / old_message * message)
def update_value(self, factor, pi=0, tau=0, value=None):
value = value or Gaussian(pi=pi, tau=tau)
old_message = self[factor]
self[factor] = value * old_message / self
return self.set(value)
def __getitem__(self, factor):
return self.messages[factor]
def __setitem__(self, factor, message):
self.messages[factor] = message
def __repr__(self):
args = (type(self).__name__, super(Variable, self).__repr__(),
len(self.messages), '' if len(self.messages) == 1 else 's')
return '<%s %s with %d connection%s>' % args
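# The update_message/update_value rules above rely on the natural-parameter
# form of a Gaussian (pi = 1 / sigma**2, tau = mu / sigma**2), in which
# multiplication and division are coordinate-wise addition and subtraction.
# A standalone sketch of that convention (tuple-based, illustrative only):
def _g_mul(g1, g2):
    return (g1[0] + g2[0], g1[1] + g2[1])
def _g_div(g1, g2):
    return (g1[0] - g2[0], g1[1] - g2[1])
# multiplying two unit Gaussians in (pi, tau) form: (1.0, 1.0) * (1.0, 1.0)
# -> (2.0, 2.0), i.e. mu = tau / pi = 1.0 and sigma**2 = 1 / pi = 0.5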
class Factor(Node):
def __init__(self, variables):
self.vars = variables
for var in variables:
var[self] = Gaussian()
def down(self):
return 0
def up(self):
return 0
@property
def var(self):
assert len(self.vars) == 1
return self.vars[0]
def __repr__(self):
args = (type(self).__name__, len(self.vars),
'' if len(self.vars) == 1 else 's')
return '<%s with %d connection%s>' % args
class PriorFactor(Factor):
def __init__(self, var, val, dynamic=0):
super(PriorFactor, self).__init__([var])
self.val = val
self.dynamic = dynamic
def down(self):
sigma = math.sqrt(self.val.sigma ** 2 + self.dynamic ** 2)
value = Gaussian(self.val.mu, sigma)
return self.var.update_value(self, value=value)
class LikelihoodFactor(Factor):
def __init__(self, mean_var, value_var, variance):
super(LikelihoodFactor, self).__init__([mean_var, value_var])
self.mean = mean_var
self.value = value_var
self.variance = variance
def calc_a(self, var):
return 1. / (1. + self.variance * var.pi)
def down(self):
# update value.
msg = self.mean / self.mean[self]
a = self.calc_a(msg)
return self.value.update_message(self, a * msg.pi, a * msg.tau)
def up(self):
# update mean.
msg = self.value / self.value[self]
a = self.calc_a(msg)
return self.mean.update_message(self, a * msg.pi, a * msg.tau)
class SumFactor(Factor):
def __init__(self, sum_var, term_vars, coeffs):
super(SumFactor, self).__init__([sum_var] + term_vars)
self.sum = sum_var
self.terms = term_vars
self.coeffs = coeffs
def down(self):
vals = self.terms
msgs = [var[self] for var in vals]
return self.update(self.sum, vals, msgs, self.coeffs)
def up(self, index=0):
coeff = self.coeffs[index]
coeffs = []
for x, c in enumerate(self.coeffs):
try:
if x == index:
coeffs.append(1. / coeff)
else:
coeffs.append(-c / coeff)
except ZeroDivisionError:
coeffs.append(0.)
vals = self.terms[:]
vals[index] = self.sum
msgs = [var[self] for var in vals]
return self.update(self.terms[index], vals, msgs, coeffs)
def update(self, var, vals, msgs, coeffs):
pi_inv = 0
mu = 0
for val, msg, coeff in zip(vals, msgs, coeffs):
div = val / msg
mu += coeff * div.mu
if pi_inv == inf:
continue
try:
# numpy.float64 handles floating-point error by different way.
# For example, it can just warn RuntimeWarning on n/0 problem
# instead of throwing ZeroDivisionError. So div.pi, the
# denominator has to be a built-in float.
pi_inv += coeff ** 2 / float(div.pi)
except ZeroDivisionError:
pi_inv = inf
pi = 1. / pi_inv
tau = pi * mu
return var.update_message(self, pi, tau)
class TruncateFactor(Factor):
def __init__(self, var, v_func, w_func, draw_margin):
super(TruncateFactor, self).__init__([var])
self.v_func = v_func
self.w_func = w_func
self.draw_margin = draw_margin
def up(self):
val = self.var
msg = self.var[self]
div = val / msg
sqrt_pi = math.sqrt(div.pi)
args = (div.tau / sqrt_pi, self.draw_margin * sqrt_pi)
v = self.v_func(*args)
w = self.w_func(*args)
denom = (1. - w)
pi, tau = div.pi / denom, (div.tau + sqrt_pi * v) / denom
return val.update_value(self, pi, tau)
#: Default initial mean of ratings.
MU = 25.
#: Default initial standard deviation of ratings.
SIGMA = MU / 3
#: Default distance that guarantees about 76% chance of winning.
BETA = SIGMA / 2
#: Default dynamic factor.
TAU = SIGMA / 100
#: Default draw probability of the game.
DRAW_PROBABILITY = .10
#: A basis to check reliability of the result.
DELTA = 0.0001
def calc_draw_probability(draw_margin, size, env=None):
if env is None:
env = global_env()
return 2 * env.cdf(draw_margin / (math.sqrt(size) * env.beta)) - 1
def calc_draw_margin(draw_probability, size, env=None):
if env is None:
env = global_env()
return env.ppf((draw_probability + 1) / 2.) * math.sqrt(size) * env.beta
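# calc_draw_margin above pushes the draw probability through the normal ppf;
# a standalone sketch using the stdlib normal instead of a backend
# (beta = 25/6 in the example assumes the default SIGMA / 2 defined below):
import math
import statistics
def _draw_margin(draw_probability, size, beta):
    return (statistics.NormalDist().inv_cdf((draw_probability + 1) / 2.)
            * math.sqrt(size) * beta)
# for the default 10% draw probability and a 1vs1 game (size=2), this gives
# a small positive margin (roughly 0.74)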
def _team_sizes(rating_groups):
team_sizes = [0]
for group in rating_groups:
team_sizes.append(len(group) + team_sizes[-1])
del team_sizes[0]
return team_sizes
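# _team_sizes turns per-team group sizes into cumulative end indices into the
# flattened rating list; a standalone restatement of the same logic:
def _cumulative_sizes(groups):
    out, total = [], 0
    for group in groups:
        total += len(group)
        out.append(total)
    return out
# _cumulative_sizes([(1, 2), (3,), (4, 5, 6)]) == [2, 3, 6]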
def _floating_point_error(env):
if env.backend == 'mpmath':
msg = 'Set "mpmath.mp.dps" to higher'
else:
msg = 'Cannot calculate correctly, set backend to "mpmath"'
return FloatingPointError(msg)
class Rating(Gaussian):
def __init__(self, mu=None, sigma=None):
if isinstance(mu, tuple):
mu, sigma = mu
elif isinstance(mu, Gaussian):
mu, sigma = mu.mu, mu.sigma
if mu is None:
mu = global_env().mu
if sigma is None:
sigma = global_env().sigma
super(Rating, self).__init__(mu, sigma)
def __int__(self):
return int(self.mu)
def __long__(self):
return long(self.mu)
def __float__(self):
return float(self.mu)
def __iter__(self):
return iter((self.mu, self.sigma))
def __repr__(self):
c = type(self)
args = ('.'.join([c.__module__, c.__name__]), self.mu, self.sigma)
return '%s(mu=%.3f, sigma=%.3f)' % args
class TrueSkill(object):
def __init__(self, mu=MU, sigma=SIGMA, beta=BETA, tau=TAU,
draw_probability=DRAW_PROBABILITY, backend=None):
self.mu = mu
self.sigma = sigma
self.beta = beta
self.tau = tau
self.draw_probability = draw_probability
self.backend = backend
if isinstance(backend, tuple):
self.cdf, self.pdf, self.ppf = backend
else:
self.cdf, self.pdf, self.ppf = choose_backend(backend)
def create_rating(self, mu=None, sigma=None):
if mu is None:
mu = self.mu
if sigma is None:
sigma = self.sigma
return Rating(mu, sigma)
def v_win(self, diff, draw_margin):
x = diff - draw_margin
denom = self.cdf(x)
return (self.pdf(x) / denom) if denom else -x
def v_draw(self, diff, draw_margin):
abs_diff = abs(diff)
a, b = draw_margin - abs_diff, -draw_margin - abs_diff
denom = self.cdf(a) - self.cdf(b)
numer = self.pdf(b) - self.pdf(a)
return ((numer / denom) if denom else a) * (-1 if diff < 0 else +1)
def w_win(self, diff, draw_margin):
x = diff - draw_margin
v = self.v_win(diff, draw_margin)
w = v * (v + x)
if 0 < w < 1:
return w
raise _floating_point_error(self)
def w_draw(self, diff, draw_margin):
abs_diff = abs(diff)
a, b = draw_margin - abs_diff, -draw_margin - abs_diff
denom = self.cdf(a) - self.cdf(b)
if not denom:
raise _floating_point_error(self)
v = self.v_draw(abs_diff, draw_margin)
return (v ** 2) + (a * self.pdf(a) - b * self.pdf(b)) / denom
def validate_rating_groups(self, rating_groups):
# check group sizes
if len(rating_groups) < 2:
raise ValueError('Need multiple rating groups')
elif not all(rating_groups):
raise ValueError('Each group must contain multiple ratings')
# check group types
group_types = set(map(type, rating_groups))
if len(group_types) != 1:
raise TypeError('All groups should be same type')
elif group_types.pop() is Rating:
raise TypeError('Rating cannot be a rating group')
# normalize rating_groups
if isinstance(rating_groups[0], dict):
dict_rating_groups = rating_groups
rating_groups = []
keys = []
for dict_rating_group in dict_rating_groups:
rating_group, key_group = [], []
for key, rating in iteritems(dict_rating_group):
rating_group.append(rating)
key_group.append(key)
rating_groups.append(tuple(rating_group))
keys.append(tuple(key_group))
else:
rating_groups = list(rating_groups)
keys = None
return rating_groups, keys
def validate_weights(self, weights, rating_groups, keys=None):
if weights is None:
weights = [(1,) * len(g) for g in rating_groups]
elif isinstance(weights, dict):
weights_dict, weights = weights, []
for x, group in enumerate(rating_groups):
w = []
weights.append(w)
for y, rating in enumerate(group):
if keys is not None:
y = keys[x][y]
w.append(weights_dict.get((x, y), 1))
return weights
def factor_graph_builders(self, rating_groups, ranks, weights):
flatten_ratings = sum(map(tuple, rating_groups), ())
flatten_weights = sum(map(tuple, weights), ())
size = len(flatten_ratings)
group_size = len(rating_groups)
# create variables
rating_vars = [Variable() for x in range(size)]
perf_vars = [Variable() for x in range(size)]
team_perf_vars = [Variable() for x in range(group_size)]
team_diff_vars = [Variable() for x in range(group_size - 1)]
team_sizes = _team_sizes(rating_groups)
# layer builders
def build_rating_layer():
for rating_var, rating in zip(rating_vars, flatten_ratings):
yield PriorFactor(rating_var, rating, self.tau)
def build_perf_layer():
for rating_var, perf_var in zip(rating_vars, perf_vars):
yield LikelihoodFactor(rating_var, perf_var, self.beta ** 2)
def build_team_perf_layer():
for team, team_perf_var in enumerate(team_perf_vars):
if team > 0:
start = team_sizes[team - 1]
else:
start = 0
end = team_sizes[team]
child_perf_vars = perf_vars[start:end]
coeffs = flatten_weights[start:end]
yield SumFactor(team_perf_var, child_perf_vars, coeffs)
def build_team_diff_layer():
for team, team_diff_var in enumerate(team_diff_vars):
yield SumFactor(team_diff_var,
team_perf_vars[team:team + 2], [+1, -1])
def build_trunc_layer():
for x, team_diff_var in enumerate(team_diff_vars):
if callable(self.draw_probability):
# dynamic draw probability
team_perf1, team_perf2 = team_perf_vars[x:x + 2]
args = (Rating(team_perf1), Rating(team_perf2), self)
draw_probability = self.draw_probability(*args)
else:
# static draw probability
draw_probability = self.draw_probability
size = sum(map(len, rating_groups[x:x + 2]))
draw_margin = calc_draw_margin(draw_probability, size, self)
if ranks[x] == ranks[x + 1]: # is a tie?
v_func, w_func = self.v_draw, self.w_draw
else:
v_func, w_func = self.v_win, self.w_win
yield TruncateFactor(team_diff_var,
v_func, w_func, draw_margin)
# build layers
return (build_rating_layer, build_perf_layer, build_team_perf_layer,
build_team_diff_layer, build_trunc_layer)
def run_schedule(self, build_rating_layer, build_perf_layer,
build_team_perf_layer, build_team_diff_layer,
build_trunc_layer, min_delta=DELTA):
if min_delta <= 0:
raise ValueError('min_delta must be greater than 0')
layers = []
def build(builders):
layers_built = [list(build()) for build in builders]
layers.extend(layers_built)
return layers_built
# gray arrows
layers_built = build([build_rating_layer,
build_perf_layer,
build_team_perf_layer])
rating_layer, perf_layer, team_perf_layer = layers_built
for f in chain(*layers_built):
f.down()
# arrow #1, #2, #3
team_diff_layer, trunc_layer = build([build_team_diff_layer,
build_trunc_layer])
team_diff_len = len(team_diff_layer)
for x in range(10):
if team_diff_len == 1:
# only two teams
team_diff_layer[0].down()
delta = trunc_layer[0].up()
else:
# multiple teams
delta = 0
for x in range(team_diff_len - 1):
team_diff_layer[x].down()
delta = max(delta, trunc_layer[x].up())
team_diff_layer[x].up(1) # up to right variable
for x in range(team_diff_len - 1, 0, -1):
team_diff_layer[x].down()
delta = max(delta, trunc_layer[x].up())
team_diff_layer[x].up(0) # up to left variable
# repeat until to small update
if delta <= min_delta:
break
# up both ends
team_diff_layer[0].up(0)
team_diff_layer[team_diff_len - 1].up(1)
# up the remainder of the black arrows
for f in team_perf_layer:
for x in range(len(f.vars) - 1):
f.up(x)
for f in perf_layer:
f.up()
return layers
def rate(self, rating_groups, ranks=None, weights=None, min_delta=DELTA):
rating_groups, keys = self.validate_rating_groups(rating_groups)
weights = self.validate_weights(weights, rating_groups, keys)
group_size = len(rating_groups)
if ranks is None:
ranks = range(group_size)
elif len(ranks) != group_size:
raise ValueError('Wrong ranks')
# sort rating groups by rank
by_rank = lambda x: x[1][1]
sorting = sorted(enumerate(zip(rating_groups, ranks, weights)),
key=by_rank)
sorted_rating_groups, sorted_ranks, sorted_weights = [], [], []
for x, (g, r, w) in sorting:
sorted_rating_groups.append(g)
sorted_ranks.append(r)
# make weights to be greater than 0
sorted_weights.append(max(min_delta, w_) for w_ in w)
# build factor graph
args = (sorted_rating_groups, sorted_ranks, sorted_weights)
builders = self.factor_graph_builders(*args)
args = builders + (min_delta,)
layers = self.run_schedule(*args)
# make result
rating_layer, team_sizes = layers[0], _team_sizes(sorted_rating_groups)
transformed_groups = []
for start, end in zip([0] + team_sizes[:-1], team_sizes):
group = []
for f in rating_layer[start:end]:
group.append(Rating(float(f.var.mu), float(f.var.sigma)))
transformed_groups.append(tuple(group))
by_hint = lambda x: x[0]
unsorting = sorted(zip((x for x, __ in sorting), transformed_groups),
key=by_hint)
if keys is None:
return [g for x, g in unsorting]
# restore the structure with input dictionary keys
return [dict(zip(keys[x], g)) for x, g in unsorting]
def quality(self, rating_groups, weights=None):
rating_groups, keys = self.validate_rating_groups(rating_groups)
weights = self.validate_weights(weights, rating_groups, keys)
flatten_ratings = sum(map(tuple, rating_groups), ())
flatten_weights = sum(map(tuple, weights), ())
length = len(flatten_ratings)
# a vector of all of the skill means
mean_matrix = Matrix([[r.mu] for r in flatten_ratings])
# a matrix whose diagonal values are the variances (sigma ** 2) of each
# of the players.
def variance_matrix(height, width):
variances = (r.sigma ** 2 for r in flatten_ratings)
for x, variance in enumerate(variances):
yield (x, x), variance
variance_matrix = Matrix(variance_matrix, length, length)
# the player-team assignment and comparison matrix
def rotated_a_matrix(set_height, set_width):
t = 0
for r, (cur, _next) in enumerate(zip(rating_groups[:-1],
rating_groups[1:])):
for x in range(t, t + len(cur)):
yield (r, x), flatten_weights[x]
t += 1
x += 1
for x in range(x, x + len(_next)):
yield (r, x), -flatten_weights[x]
set_height(r + 1)
set_width(x + 1)
rotated_a_matrix = Matrix(rotated_a_matrix)
a_matrix = rotated_a_matrix.transpose()
# match quality further derivation
_ata = (self.beta ** 2) * rotated_a_matrix * a_matrix
_atsa = rotated_a_matrix * variance_matrix * a_matrix
start = mean_matrix.transpose() * a_matrix
middle = _ata + _atsa
end = rotated_a_matrix * mean_matrix
# make result
e_arg = (-0.5 * start * middle.inverse() * end).determinant()
s_arg = _ata.determinant() / middle.determinant()
return math.exp(e_arg) * math.sqrt(s_arg)
def expose(self, rating):
k = self.mu / self.sigma
return rating.mu - k * rating.sigma
def make_as_global(self):
return setup(env=self)
def __repr__(self):
c = type(self)
if callable(self.draw_probability):
f = self.draw_probability
draw_probability = '.'.join([f.__module__, f.__name__])
else:
draw_probability = '%.1f%%' % (self.draw_probability * 100)
if self.backend is None:
backend = ''
elif isinstance(self.backend, tuple):
backend = ', backend=...'
else:
backend = ', backend=%r' % self.backend
args = ('.'.join([c.__module__, c.__name__]), self.mu, self.sigma,
self.beta, self.tau, draw_probability, backend)
return ('%s(mu=%.3f, sigma=%.3f, beta=%.3f, tau=%.3f, '
'draw_probability=%s%s)' % args)
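# With the defaults above (MU = 25, SIGMA = MU / 3), expose() uses
# k = mu / sigma = 3, so a brand-new Rating is exposed as mu - 3 * sigma,
# which is 0 up to floating-point rounding -- a conservative skill estimate.
# Standalone arithmetic sketch:
_mu, _sigma = 25., 25. / 3
_k = _mu / _sigma
_exposure = _mu - _k * _sigma
# _k is ~3.0 and _exposure is ~0.0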
def rate_1vs1(rating1, rating2, drawn=False, min_delta=DELTA, env=None):
if env is None:
env = global_env()
ranks = [0, 0 if drawn else 1]
teams = env.rate([(rating1,), (rating2,)], ranks, min_delta=min_delta)
return teams[0][0], teams[1][0]
def quality_1vs1(rating1, rating2, env=None):
if env is None:
env = global_env()
return env.quality([(rating1,), (rating2,)])
def global_env():
try:
global_env.__trueskill__
except AttributeError:
# setup the default environment
setup()
return global_env.__trueskill__
def setup(mu=MU, sigma=SIGMA, beta=BETA, tau=TAU,
draw_probability=DRAW_PROBABILITY, backend=None, env=None):
if env is None:
env = TrueSkill(mu, sigma, beta, tau, draw_probability, backend)
global_env.__trueskill__ = env
return env
def rate(rating_groups, ranks=None, weights=None, min_delta=DELTA):
return global_env().rate(rating_groups, ranks, weights, min_delta)
def quality(rating_groups, weights=None):
return global_env().quality(rating_groups, weights)
def expose(rating):
return global_env().expose(rating)

View File

@@ -0,0 +1,222 @@
# Titan Robotics Team 2022: CUDA-based Regressions Module
# Not actively maintained, may be removed in future release
# Written by Arthur Lu & Jacob Levine
# Notes:
# this module has been automatically integrated into analysis.py, and should be callable as a class from the package
# this module is cuda-optimized (as appropriate) and vectorized (except for one small part)
# setup:
__version__ = "0.0.4"
# changelog should be viewed using print(analysis.regression.__changelog__)
__changelog__ = """
0.0.4:
- bug fixes
- fixed changelog
0.0.3:
- bug fixes
0.0.2:
-Added more parameters to log, exponential, polynomial
-Added SigmoidalRegKernelArthur, because Arthur apparently needs
to train the scaling and shifting of sigmoids
0.0.1:
-initial release, with linear, log, exponential, polynomial, and sigmoid kernels
-already vectorized (except for polynomial generation) and CUDA-optimized
"""
__author__ = (
"Jacob Levine <jlevine@imsa.edu>",
"Arthur Lu <learthurgo@gmail.com>",
)
__all__ = [
'factorial',
'take_all_pwrs',
'num_poly_terms',
'set_device',
'LinearRegKernel',
'SigmoidalRegKernel',
'LogRegKernel',
'PolyRegKernel',
'ExpRegKernel',
'SigmoidalRegKernelArthur',
'SGDTrain',
'CustomTrain',
'CircleFit'
]
import torch
global device
device = "cuda:0" if torch.cuda.is_available() else "cpu"
#todo: document completely
def set_device(new_device):
    global device
    device = new_device
class LinearRegKernel():
parameters= []
weights=None
bias=None
def __init__(self, num_vars):
self.weights=torch.rand(num_vars, requires_grad=True, device=device)
self.bias=torch.rand(1, requires_grad=True, device=device)
self.parameters=[self.weights,self.bias]
def forward(self,mtx):
long_bias=self.bias.repeat([1,mtx.size()[1]])
return torch.matmul(self.weights,mtx)+long_bias
class SigmoidalRegKernel():
parameters= []
weights=None
bias=None
sigmoid=torch.nn.Sigmoid()
def __init__(self, num_vars):
self.weights=torch.rand(num_vars, requires_grad=True, device=device)
self.bias=torch.rand(1, requires_grad=True, device=device)
self.parameters=[self.weights,self.bias]
def forward(self,mtx):
long_bias=self.bias.repeat([1,mtx.size()[1]])
return self.sigmoid(torch.matmul(self.weights,mtx)+long_bias)
class SigmoidalRegKernelArthur():
parameters= []
weights=None
in_bias=None
scal_mult=None
out_bias=None
sigmoid=torch.nn.Sigmoid()
def __init__(self, num_vars):
self.weights=torch.rand(num_vars, requires_grad=True, device=device)
self.in_bias=torch.rand(1, requires_grad=True, device=device)
self.scal_mult=torch.rand(1, requires_grad=True, device=device)
self.out_bias=torch.rand(1, requires_grad=True, device=device)
self.parameters=[self.weights,self.in_bias, self.scal_mult, self.out_bias]
def forward(self,mtx):
long_in_bias=self.in_bias.repeat([1,mtx.size()[1]])
long_out_bias=self.out_bias.repeat([1,mtx.size()[1]])
return (self.scal_mult*self.sigmoid(torch.matmul(self.weights,mtx)+long_in_bias))+long_out_bias
class LogRegKernel():
parameters= []
weights=None
in_bias=None
scal_mult=None
out_bias=None
def __init__(self, num_vars):
self.weights=torch.rand(num_vars, requires_grad=True, device=device)
self.in_bias=torch.rand(1, requires_grad=True, device=device)
self.scal_mult=torch.rand(1, requires_grad=True, device=device)
self.out_bias=torch.rand(1, requires_grad=True, device=device)
self.parameters=[self.weights,self.in_bias, self.scal_mult, self.out_bias]
def forward(self,mtx):
long_in_bias=self.in_bias.repeat([1,mtx.size()[1]])
long_out_bias=self.out_bias.repeat([1,mtx.size()[1]])
return (self.scal_mult*torch.log(torch.matmul(self.weights,mtx)+long_in_bias))+long_out_bias
class ExpRegKernel():
parameters= []
weights=None
in_bias=None
scal_mult=None
out_bias=None
def __init__(self, num_vars):
self.weights=torch.rand(num_vars, requires_grad=True, device=device)
self.in_bias=torch.rand(1, requires_grad=True, device=device)
self.scal_mult=torch.rand(1, requires_grad=True, device=device)
self.out_bias=torch.rand(1, requires_grad=True, device=device)
self.parameters=[self.weights,self.in_bias, self.scal_mult, self.out_bias]
def forward(self,mtx):
long_in_bias=self.in_bias.repeat([1,mtx.size()[1]])
long_out_bias=self.out_bias.repeat([1,mtx.size()[1]])
return (self.scal_mult*torch.exp(torch.matmul(self.weights,mtx)+long_in_bias))+long_out_bias
class PolyRegKernel():
parameters= []
weights=None
bias=None
power=None
def __init__(self, num_vars, power):
self.power=power
num_terms=self.num_poly_terms(num_vars, power)
self.weights=torch.rand(num_terms, requires_grad=True, device=device)
self.bias=torch.rand(1, requires_grad=True, device=device)
self.parameters=[self.weights,self.bias]
def num_poly_terms(self,num_vars, power):
if power == 0:
return 0
return int(self.factorial(num_vars+power-1) / self.factorial(power) / self.factorial(num_vars-1)) + self.num_poly_terms(num_vars, power-1)
def factorial(self,n):
if n==0:
return 1
else:
return n*self.factorial(n-1)
def take_all_pwrs(self, vec, pwr):
#todo: vectorize (kinda)
combins=torch.combinations(vec, r=pwr, with_replacement=True)
out=torch.ones(combins.size()[0]).to(device).to(torch.float)
for i in torch.t(combins).to(device).to(torch.float):
out *= i
if pwr == 1:
return out
else:
return torch.cat((out,self.take_all_pwrs(vec, pwr-1)))
def forward(self,mtx):
#TODO: Vectorize the last part
cols=[]
for i in torch.t(mtx):
cols.append(self.take_all_pwrs(i,self.power))
new_mtx=torch.t(torch.stack(cols))
long_bias=self.bias.repeat([1,mtx.size()[1]])
return torch.matmul(self.weights,new_mtx)+long_bias
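# num_poly_terms above counts the monomials of degree 1..power in num_vars
# variables, i.e. the sum of binomial coefficients C(num_vars + p - 1, p);
# a standalone cross-check using math.comb:
import math
def _n_poly_terms(num_vars, power):
    return sum(math.comb(num_vars + p - 1, p) for p in range(1, power + 1))
# _n_poly_terms(3, 2) == 3 + 6 == 9 (three linear terms plus six quadratics)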
def SGDTrain(kernel, data, ground, loss=torch.nn.MSELoss(), iterations=1000, learning_rate=.1, return_losses=False):
optim=torch.optim.SGD(kernel.parameters, lr=learning_rate)
data_cuda=data.to(device)
ground_cuda=ground.to(device)
if (return_losses):
losses=[]
for i in range(iterations):
with torch.set_grad_enabled(True):
optim.zero_grad()
pred=kernel.forward(data_cuda)
ls=loss(pred,ground_cuda)
losses.append(ls.item())
ls.backward()
optim.step()
return [kernel,losses]
else:
for i in range(iterations):
with torch.set_grad_enabled(True):
optim.zero_grad()
pred=kernel.forward(data_cuda)
ls=loss(pred,ground_cuda)
ls.backward()
optim.step()
return kernel
def CustomTrain(kernel, optim, data, ground, loss=torch.nn.MSELoss(), iterations=1000, return_losses=False):
data_cuda=data.to(device)
ground_cuda=ground.to(device)
if (return_losses):
losses=[]
for i in range(iterations):
with torch.set_grad_enabled(True):
optim.zero_grad()
                pred=kernel.forward(data_cuda)
                ls=loss(pred,ground_cuda)
losses.append(ls.item())
ls.backward()
optim.step()
return [kernel,losses]
else:
for i in range(iterations):
with torch.set_grad_enabled(True):
optim.zero_grad()
pred=kernel.forward(data_cuda)
ls=loss(pred,ground_cuda)
ls.backward()
optim.step()
return kernel
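# The SGDTrain/CustomTrain loops above are plain gradient descent; a
# framework-free sketch fitting y = w * x on toy data (no torch dependency,
# names illustrative):
def _fit_linear(xs, ys, lr=0.01, iterations=500):
    w = 0.0
    for _ in range(iterations):
        # gradient of the mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w
# _fit_linear([1., 2., 3.], [2., 4., 6.]) converges to roughly 2.0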

View File

@@ -0,0 +1,122 @@
# Titan Robotics Team 2022: ML Module
# Written by Arthur Lu & Jacob Levine
# Notes:
# this should be imported as a python module using 'import titanlearn'
# this should be included in the local directory or environment variable
# this module is optimized for multithreaded computing
# this module learns from its mistakes far faster than 2022's captains
# setup:
__version__ = "1.1.1"
#changelog should be viewed using print(analysis.__changelog__)
__changelog__ = """changelog:
1.1.1:
- removed matplotlib import
- removed graphloss()
1.1.0:
- added net, dataset, dataloader, and stdtrain template definitions
- added graphloss function
1.0.1:
- added clear functions
1.0.0:
- complete rewrite planned
- deprecated 1.0.0.xxx versions
- added simple training loop
0.0.x:
-added generation of ANNS, basic SGD training
"""
__author__ = (
"Arthur Lu <arthurlu@ttic.edu>,"
"Jacob Levine <jlevine@ttic.edu>,"
)
__all__ = [
'clear',
'net',
'dataset',
'dataloader',
'train',
'stdtrainer',
]
import torch
from os import system, name
import numpy as np
def clear():
if name == 'nt':
_ = system('cls')
else:
_ = system('clear')
class net(torch.nn.Module): #template for standard neural net
def __init__(self):
        super(net, self).__init__()
def forward(self, input):
pass
class dataset(torch.utils.data.Dataset): #template for standard dataset
def __init__(self):
        super(dataset, self).__init__()
def __getitem__(self, index):
pass
def __len__(self):
pass
def dataloader(dataset, batch_size, num_workers, shuffle = True):
return torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=shuffle, num_workers=num_workers)
def train(device, net, epochs, trainloader, optimizer, criterion): #expects standard dataloader, which returns (inputs, labels)
dataset_len = trainloader.dataset.__len__()
iter_count = 0
running_loss = 0
running_loss_list = []
for epoch in range(epochs): # loop over the dataset multiple times
for i, data in enumerate(trainloader, 0):
inputs = data[0].to(device)
labels = data[1].to(device)
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, labels.to(torch.float))
loss.backward()
optimizer.step()
# monitoring steps below
iter_count += 1
running_loss += loss.item()
running_loss_list.append(running_loss)
clear()
print("training on: " + device)
print("iteration: " + str(i) + "/" + str(int(dataset_len / trainloader.batch_size)) + " | " + "epoch: " + str(epoch) + "/" + str(epochs))
            print("current batch loss: " + str(loss.item()))
print("running loss: " + str(running_loss / iter_count))
    print("finished training")
    return net, running_loss_list
def stdtrainer(net, criterion, optimizer, dataloader, epochs, batch_size):
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net = net.to(device)
criterion = criterion.to(device)
    # torch optimizers hold references to the model's parameters and do not implement .to()
trainloader = dataloader
return train(device, net, epochs, trainloader, optimizer, criterion)

View File

@@ -0,0 +1,58 @@
# Titan Robotics Team 2022: Visualization Module
# Written by Arthur Lu & Jacob Levine
# Notes:
# this should be imported as a python module using 'import visualization'
# this should be included in the local directory or environment variable
# fancy
# setup:
__version__ = "0.0.1"
#changelog should be viewed using print(analysis.__changelog__)
__changelog__ = """changelog:
0.0.1:
- added graphhistogram function as a fragment of visualize_pit.py
0.0.0:
- created visualization.py
- added graphloss()
- added imports
"""
__author__ = (
"Arthur Lu <arthurlu@ttic.edu>,"
"Jacob Levine <jlevine@ttic.edu>,"
)
__all__ = [
'graphloss',
]
import matplotlib.pyplot as plt
import numpy as np
def graphloss(losses):
x = range(0, len(losses))
plt.plot(x, losses)
plt.show()
def graphhistogram(data, figsize, sharey = True): # expects dict with key as variable and contents as occurrences
fig, ax = plt.subplots(1, len(data), sharey=sharey, figsize=figsize)
i = 0
for variable in data:
ax[i].hist(data[variable])
ax[i].invert_xaxis()
ax[i].set_xlabel('Variable')
ax[i].set_ylabel('Frequency')
ax[i].set_title(variable)
plt.yticks(np.arange(len(data[variable])))
i+=1
plt.show()

File diff suppressed because it is too large

Binary file not shown.

View File

@@ -1,16 +0,0 @@
import random
def generate(filename, x, y, low, high):
file = open(filename, "w")
for i in range (0, y, 1):
temp = ""
for j in range (0, x - 1, 1):
temp = str(random.uniform(low, high)) + "," + temp
temp = temp + str(random.uniform(low, high))
        file.write(temp + "\n")
    file.close()

View File

@@ -1,28 +0,0 @@
import os
import json
import ordereddict
import collections
import unicodecsv
content = open("realtimeDatabaseExport2018.json").read()
dict_content = json.loads(content)
list_of_new_data = []
for datak, datav in dict_content.iteritems():
for teamk, teamv in datav["teams"].iteritems():
for matchk, matchv in teamv.iteritems():
for detailk, detailv in matchv.iteritems():
new_data = collections.OrderedDict(detailv)
new_data["uuid"] = detailk
new_data["match"] = matchk
new_data["team"] = teamk
list_of_new_data.append(new_data)
allkey = reduce(lambda x, y: x.union(y.keys()), list_of_new_data, set())
output_file = open('realtimeDatabaseExport2018.csv', 'wb')
dict_writer = unicodecsv.DictWriter(csvfile=output_file, fieldnames=allkey)
dict_writer.writerow(dict((fn,fn) for fn in dict_writer.fieldnames))
dict_writer.writerows(list_of_new_data)
output_file.close()