736 Commits

Author SHA1 Message Date
Dev Singh
7d64e67ad3 run on publish
Signed-off-by: Dev Singh <dev@devksingh.com>
2020-08-10 14:46:07 -05:00
Dev Singh
def639284f remove bad if statement
Signed-off-by: Dev Singh <dev@devksingh.com>
2020-08-10 14:43:16 -05:00
Dev Singh
18430208ff Merge branch 'master' of https://github.com/titanscout2022/red-alliance-analysis 2020-08-10 14:42:58 -05:00
Dev Singh
ba5fb2d72b build on release only (#35)
Signed-off-by: Dev Singh <dev@devksingh.com>
2020-08-10 14:40:22 -05:00
Dev Singh
f34452d584 build on release only
Signed-off-by: Dev Singh <dev@devksingh.com>
2020-08-10 14:35:38 -05:00
Dev Singh
5fd5e32cb1 Implement CD with building on tags to PyPI (#34)
* Create python-publish.yml

* populated publish-analysis.yml
moved legacy versions of analysis to separate subfolder

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* attempt to fix issue with publish action

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* another attempt to fix publish-analysis.yml

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* this should work now

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* pypa can't take just one package so i'm trying all

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* this should totally work now

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* trying removing custom dir

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* rename analysis to tra_analysis, bump version to 2.0.0

* remove old packages which are already on github releases

* remove pycache

* removed ipynb_checkpoints

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* build

* do the dir thing

* trying removing custom dir

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
Signed-off-by: Dev Singh <dev@devksingh.com>

* rename analysis to tra_analysis, bump version to 2.0.0

Signed-off-by: Dev Singh <dev@devksingh.com>

* remove old packages which are already on github releases

Signed-off-by: Dev Singh <dev@devksingh.com>

* remove pycache

Signed-off-by: Dev Singh <dev@devksingh.com>

* build

Signed-off-by: Dev Singh <dev@devksingh.com>

* removed ipynb_checkpoints

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
Signed-off-by: Dev Singh <dev@devksingh.com>

* do the dir thing

Signed-off-by: Dev Singh <dev@devksingh.com>

* Revert "do the dir thing"

This reverts commit 2eb7ffca8d.

* correct dir

* set correct yaml positions

Signed-off-by: Dev Singh <dev@devksingh.com>

* attempt to set correct dir

Signed-off-by: Dev Singh <dev@devksingh.com>

* run on tags only

Signed-off-by: Dev Singh <dev@devksingh.com>

* remove all caches from vcs

Signed-off-by: Dev Singh <dev@devksingh.com>

* bump version for testing

Signed-off-by: Dev Singh <dev@devksingh.com>

* remove broken build

Signed-off-by: Dev Singh <dev@devksingh.com>

* don't upload dists to GitHub

Signed-off-by: Dev Singh <dev@devksingh.com>

* bump to 2.0.2 for testing

Signed-off-by: Dev Singh <dev@devksingh.com>

* fix yaml

Signed-off-by: Dev Singh <dev@devksingh.com>

* update docs

Signed-off-by: Dev Singh <dev@devksingh.com>

* add to readme

Signed-off-by: Dev Singh <dev@devksingh.com>

* run only on master

Signed-off-by: Dev Singh <dev@devksingh.com>

Co-authored-by: Arthur Lu <learthurgo@gmail.com>
Co-authored-by: Dev Singh <dsingh@CentaurusRidge.localdomain>
2020-08-10 14:29:51 -05:00
Dev Singh
edbfa5184a Update MAINTAINERS (#29)
Signed-off-by: Dev Singh <dev@devksingh.com>
2020-07-19 11:52:11 -05:00
Arthur Lu
635f736a69 Merge pull request #28 from titanscout2022/master-staged
Merge analysis.py updates to master
2020-07-12 18:26:03 -05:00
Arthur Lu
16fb21001a added negatives to analysis unit tests
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-07-12 13:57:24 -05:00
Arthur Lu
69559e9e4a Merge branch 'master' into master-staged 2020-07-11 17:03:50 -05:00
Arthur Lu
430822cdeb added unit tests for analysis.Sort algorithms
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-07-11 21:53:16 +00:00
Arthur Lu
648ac945ac analysis v 1.2.2.000
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-07-05 05:30:48 +00:00
Arthur Lu
d59d069943 analysis.py v 1.2.1.004 (#27)
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-24 11:49:04 -05:00
Arthur Lu
1d5a67c4f7 analysis.py v 1.2.1.004
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-22 00:37:39 +00:00
Arthur Lu
e4ab0487d0 Merge pull request #26 from titanscout2022/master
Merge master into master-staged
2020-05-21 19:36:56 -05:00
Arthur Lu
4f439d6094 Merge service-dev changes with master (#24)
* added config.json
removed old config files

Signed-off-by: Arthur <learthurgo@gmail.com>

* superscript.py v 0.0.6.000

Signed-off-by: Arthur <learthurgo@gmail.com>

* changed data.py

Signed-off-by: Arthur <learthurgo@gmail.com>

* changes to config.json

Signed-off-by: Arthur <learthurgo@gmail.com>

* removed cells from visualize_pit.py

Signed-off-by: Arthur <learthurgo@gmail.com>

* more changes to visualize_pit.py

Signed-off-by: Arthur <learthurgo@gmail.com>

* added analysis-master/metrics/__pycache__ to git ignore
moved pit configs in config.json to the bottom
superscript.py v 0.0.6.001

Signed-off-by: Arthur <learthurgo@gmail.com>

* removed old database key

Signed-off-by: Arthur <learthurgo@gmail.com>

* adjusted config files

Signed-off-by: Arthur <learthurgo@gmail.com>

* Delete config-pop.json

* fixed .gitignore

Signed-off-by: Arthur <learthurgo@gmail.com>

* analysis.py 1.2.1.003
added team kv pair to config.json

Signed-off-by: Arthur <learthurgo@gmail.com>

* superscript.py v 0.0.6.002

Signed-off-by: Arthur <learthurgo@gmail.com>

* finished app.py API
made minute changes to parentheses use in various packages

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* bug fixes in app.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* bug fixes in app.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* made changes to .gitignore

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* made changes to .gitignore

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* deleted a __pycache__ folder from metrics

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* more changes to .gitignore

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* additions to app.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* renamed app.py to api.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* removed extraneous files

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* renamed api.py to tra.py
removed rest api calls from tra.py

* renamed api.py to tra.py
removed rest api calls from tra.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* removed flask import from tra.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* changes to devcontainer.json

Signed-off-by: Arthur Lu <learthurgo@gmail.com>

* fixed unit tests to be correct
removed some regression tests because of potential function overflow
removed trueskill unit test because of slight deviation chance

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-20 08:52:38 -05:00
Arthur Lu
ae64c7f10e Merge pull request #25 from titanscout2022/master-staged
fixed bug in analysis unit tests
2020-05-19 13:19:47 -05:00
Arthur Lu
d1dfe3b01b Merge branch 'master' into master-staged 2020-05-19 13:19:40 -05:00
Arthur Lu
3dd24dcd30 fixed bug in analysis unit tests
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-19 18:19:02 +00:00
Arthur Lu
2be67b2cc3 Merge pull request #23 from titanscout2022/master-staged
Merge minor .gitignore changes
2020-05-18 16:31:50 -05:00
Arthur Lu
f91159c49c added data-analysis/.pytest_cache/ to .gitignore
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 16:28:42 -05:00
Arthur Lu
df046d4806 Merge pull request #22 from titanscout2022/master
Reflect master to master-staged
2020-05-18 16:28:05 -05:00
Arthur Lu
c838c4fc15 Merge pull request #21 from titanscout2022/CI-tools
CI tools
2020-05-18 16:18:48 -05:00
Arthur Lu
cbf5d18332 i swear its working now
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 16:14:16 -05:00
Arthur Lu
641905e87a finally fixed issues
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 16:12:22 -05:00
Arthur Lu
3daa12a3da changes
superscript testing still refuses to collect any tests

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 16:07:02 -05:00
Arthur Lu
3c4fe7ab46 still not working
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 16:01:02 -05:00
Arthur Lu
4e3f6b4480 readded pytest install
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:59:34 -05:00
Arthur Lu
414ffdf96c removed flake8 import from unit tests
fixed superscript unit tests

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:58:17 -05:00
Arthur Lu
6296f78ff5 removed lint checks because it was the stupid
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:54:15 -05:00
Arthur Lu
7ae64d5dbf lint refused to exclude metrics
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:51:51 -05:00
Arthur Lu
fd2ac12dad excluded imports
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:49:52 -05:00
Arthur Lu
0f2bbd1a16 more fixes
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:44:39 -05:00
Arthur Lu
83bc7fa743 Merge branch 'CI-tools' of https://github.com/titanscout2022/red-alliance-analysis into CI-tools
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:44:20 -05:00
Arthur Lu
83eabce8cd also ignored regression.py
added temporary unit test for superscript.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:43:53 -05:00
Arthur Lu
e2e73986a2 also ignored regression.py
added temporary unit test for superscript.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:43:36 -05:00
Arthur Lu
91ae1c0df6 attempted fixes by excluding titanlearn
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:39:59 -05:00
Arthur Lu
efad5bd71c maybe its a versioning issue?
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:32:24 -05:00
Arthur Lu
3d5e0aac59 Revert "trying python3 and pip3"
This reverts commit 7937fb6ee6.
2020-05-18 15:29:51 -05:00
Arthur Lu
7937fb6ee6 trying python3 and pip3
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:27:56 -05:00
Arthur Lu
871ecb5561 attempt to fix working directory issue
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:25:19 -05:00
Arthur Lu
7d738ca51e another attempt
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:11:24 -05:00
Arthur Lu
eeee957d23 attempt to fix working directory issues
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 15:07:42 -05:00
Arthur Lu
f55f3cb7d1 populated analysis unit test
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-18 14:59:24 -05:00
Arthur Lu
dd11689c8c reverted indentation to github default
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-16 20:15:43 -05:00
Arthur Lu
1c4b1d1971 more indentation fixes
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-16 20:12:15 -05:00
Arthur Lu
94a7aae491 changed indentation to spaces
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-16 20:09:29 -05:00
Arthur Lu
26f4224caa fixed indents
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-16 20:07:44 -05:00
Arthur Lu
386b7c75ee added items to .gitignore
renamed pythonpackage.yml to ut-analysis.yml
populated ut-analysis.yml
fixed spelling
added ut-superscript.py

Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-16 20:04:31 -05:00
Arthur Lu
27feb0bf93 moved unit-test.py outside the analysis folder
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-16 19:41:19 -05:00
Arthur Lu
233440f03d removed pythonapp because it is redundant
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-16 19:40:35 -05:00
Arthur Lu
37c247aa46 created unit-test.py
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-16 19:33:56 -05:00
Arthur Lu
eeb5e46814 Merge pull request #19 from titanscout2022/CI-package
merge
2020-05-16 19:31:25 -05:00
Arthur Lu
4739c439f0 Create pythonpackage.yml 2020-05-16 19:30:52 -05:00
Arthur Lu
2e41326373 Create pythonapp.yml 2020-05-16 19:29:14 -05:00
Arthur Lu
e8ba8e1008 Merge pull request #18 from titanscout2022/master-staged
analysis.py v 1.2.1.003
2020-05-15 16:06:02 -05:00
Arthur Lu
dd49f6724f Merge branch 'master' into master-staged 2020-05-15 16:05:52 -05:00
Arthur Lu
b376f7c0c5 Merge pull request #17 from titanscout2022/equation.py-testing
merge equation.py-testing with master
2020-05-15 16:01:41 -05:00
Arthur Lu
4213386035 Merge branch 'master' into master-staged 2020-05-15 14:54:24 -05:00
Arthur Lu
3fdae646b8 analysis.py v 1.2.1.003
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-15 14:48:26 -05:00
Arthur Lu
8f8fb6c156 analysis.py v 1.2.2.000
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-14 23:36:28 -05:00
Arthur Lu
30b39aafff Merge pull request #16 from titanscout2022/master
pull recent changes into equation.py-testing
2020-05-14 23:22:03 -05:00
ltcptgeneral
77353c87e3 Merge pull request #15 from titanscout2022/master-staged
mirrored .gitignore changes from gui-dev
2020-05-14 19:29:44 -05:00
ltcptgeneral
ca2ebe5f6d Merge branch 'master' into master-staged 2020-05-14 19:18:34 -05:00
Arthur
55c7589c7d mirrored .gitignore changes from gui-dev
Signed-off-by: Arthur <learthurgo@gmail.com>
2020-05-14 19:17:26 -05:00
ltcptgeneral
6cff61cbe4 Merge pull request #13 from titanscout2022/devksingh4-bsd-license
Switch to BSD License
2020-05-13 13:19:10 -05:00
Dev Singh
5474081523 Update LICENSE 2020-05-13 12:04:59 -05:00
Dev Singh
4c25a5ce09 Contributing guideline changes (#11)
* changes

* more changes

* more changes

* contributing.md: sync with other contributor docs

Signed-off-by: Dev Singh <dev@singhk.dev>

* Create MAINTAINERS

Signed-off-by: Dev Singh <dev@singhk.dev>

* arthur's changes

* changes

Signed-off-by: ltcptgeneral <35508619+ltcptgeneral@users.noreply.github.com>

* more changes

Signed-off-by: ltcptgeneral <35508619+ltcptgeneral@users.noreply.github.com>

* more changes

Signed-off-by: ltcptgeneral <35508619+ltcptgeneral@users.noreply.github.com>

* contributing.md: sync with other contributor docs

Signed-off-by: Dev Singh <dev@singhk.dev>
Signed-off-by: ltcptgeneral <35508619+ltcptgeneral@users.noreply.github.com>

* Create MAINTAINERS

Signed-off-by: Dev Singh <dev@singhk.dev>
Signed-off-by: ltcptgeneral <35508619+ltcptgeneral@users.noreply.github.com>

* arthur's changes

Signed-off-by: ltcptgeneral <35508619+ltcptgeneral@users.noreply.github.com>

Co-authored-by: ltcptgeneral <35508619+ltcptgeneral@users.noreply.github.com>
2020-05-13 11:56:52 -05:00
ltcptgeneral
3451bac6f5 Merge pull request #12 from titanscout2022/master-staged
analysis.py v 1.2.1.002
2020-05-13 11:44:25 -05:00
ltcptgeneral
7e37dd72bb analysis.py v 1.2.1.002
Signed-off-by: Arthur Lu <learthurgo@gmail.com>
2020-05-13 11:35:46 -05:00
ltcptgeneral
a9014c5d34 changed data analysis folder to data-analysis
added egg-info and build folders to git ignore
deleted egg-info and build folders
2020-05-12 20:54:19 -05:00
ltcptgeneral
230e98a745 9 2020-05-12 20:48:45 -05:00
ltcptgeneral
1c6ecb149b Merge branch 'equation.py-testing' of https://github.com/titanscout2022/tr2022-strategy into equation.py-testing 2020-05-12 20:46:51 -05:00
ltcptgeneral
6d544a434e readded equation.ipynb 2020-05-12 20:46:42 -05:00
ltcptgeneral
5a1aa780ff readded equation.ipynb 2020-05-12 20:43:31 -05:00
ltcptgeneral
952981cdb9 bug fixes 2020-05-12 20:39:23 -05:00
ltcptgeneral
6fee42f6d2 bug fixes 2020-05-12 20:21:11 -05:00
ltcptgeneral
24f8961500 analysis.py v 1.2.1.001 2020-05-12 20:19:58 -05:00
ltcptgeneral
db8fbbf068 visualization.py v 1.0.0.001 2020-05-05 22:37:32 -05:00
ltcptgeneral
64ae1b7026 analysis.py v 1.2.1.000 2020-05-04 14:50:36 -05:00
ltcptgeneral
4498387ac5 analysis.py v 1.2.0.006 2020-05-04 11:59:25 -05:00
ltcptgeneral
7a362476c9 fixed indent part 2 2020-05-01 23:16:32 -05:00
ltcptgeneral
b79cedae68 fixed indentation 2020-05-01 23:14:19 -05:00
ltcptgeneral
2bcd4236bb moved equation.ipynb to correct folder 2020-05-01 23:06:32 -05:00
ltcptgeneral
0cc35dc02d Merge pull request #10 from titanscout2022/master
merge file changes from master into equation.py-testing
2020-05-01 23:04:33 -05:00
ltcptgeneral
43bb9ef2bb analysis.py v 1.2.0.005 2020-05-01 22:59:54 -05:00
ltcptgeneral
3ab1d0f50a converted space indentation to tab indentation 2020-05-01 16:15:07 -05:00
ltcptgeneral
88e7c52c8b fixes 2020-05-01 16:07:57 -05:00
ltcptgeneral
b345bfb95b reconsolidated arm64 and amd64 versions 2020-05-01 15:52:27 -05:00
ltcptgeneral
aeb4990c81 analysis pkg v 1.0.0.12
analysis.py v 1.2.0.004
2020-04-30 16:03:37 -05:00
ltcptgeneral
0a721ca500 8 2020-04-30 15:22:37 -05:00
ltcptgeneral
37a4a0085e 7 2020-04-29 23:02:02 -05:00
ltcptgeneral
429d3eb42c 6 2020-04-29 22:34:43 -05:00
ltcptgeneral
60ffe7645b 5 2020-04-29 19:01:55 -05:00
ltcptgeneral
adfa6f5cc0 4 2020-04-29 17:24:59 -05:00
ltcptgeneral
f9c25dad09 3 2020-04-29 12:58:44 -05:00
ltcptgeneral
b1d5834ff1 2 2020-04-29 00:35:19 -05:00
ltcptgeneral
357d4977eb 1 2020-04-29 00:34:16 -05:00
ltcptgeneral
4545f5721a analysis.py v 1.2.0.003 2020-04-28 04:00:19 +00:00
ltcptgeneral
8d703b10b3 analysis.py v 1.2.0.002 2020-04-22 03:32:34 +00:00
ltcptgeneral
df305f30f0 analysis.py v 1.2.0.001 2020-04-21 04:08:00 +00:00
ltcptgeneral
a123b71ac9 Merge pull request #9 from titanscout2022/fix
testing release 1.2 of analysis.py
2020-04-20 00:10:24 -05:00
ltcptgeneral
a02668e59c Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2020-04-20 05:07:17 +00:00
ltcptgeneral
4d6372f620 removed deprecated files to separate repository 2020-04-20 05:07:07 +00:00
ltcptgeneral
9d0b6e68d8 Update README.md 2020-04-20 00:02:35 -05:00
ltcptgeneral
b8d51811e0 testing release 1.2 of analysis.py 2020-04-13 19:58:04 +00:00
ltcptgeneral
7a58cd08e2 analysis pkg v 1.0.0.11
analysis.py v 1.1.13.009
superscript.py v 0.0.5.002
2020-04-12 02:51:40 +00:00
ltcptgeneral
337fae68ee analysis pkg v 1.0.0.10
analysis.py v 1.1.13.008
superscript.py v 0.0.5.001
2020-04-09 22:16:26 -05:00
art
5e71d05626 removed app from dep 2020-04-05 21:42:12 +00:00
art
01df42aa49 added gitgraph to vscode container 2020-04-05 21:36:12 +00:00
ltcptgeneral
33eea153c1 Merge pull request #8 from titanscout2022/containerization-testing
Containerization testing
2020-04-05 16:32:40 -05:00
art
114eee5d57 finalized changes to docker implements 2020-04-05 21:29:16 +00:00
ltcptgeneral
06f008746a Merge pull request #7 from titanscout2022/master
merge
2020-04-05 14:57:56 -05:00
art
4f9c4e0dbb verified and tested docker files 2020-04-05 19:53:01 +00:00
art
5697e8b79e created dockerfiles 2020-04-05 19:04:07 +00:00
ltcptgeneral
e054e66743 started on dockerfile 2020-04-05 12:46:21 -05:00
ltcptgeneral
c914bd3754 removed unnecessary comment 2020-04-04 11:59:19 -05:00
ltcptgeneral
6c08885a53 created two new analysis variants
the existing amd64
new unpopulated arm64
2020-04-04 00:09:40 -05:00
ltcptgeneral
375befd0c4 analysis pkg v 1.0.0.9 2020-03-17 20:03:49 -05:00
ltcptgeneral
893d1fb1d0 analysis.py v 1.1.13.007 2020-03-16 22:05:52 -05:00
ltcptgeneral
6a426ae4cd a 2020-03-10 00:45:42 -05:00
ltcptgeneral
50c064ffa4 worked 2020-03-09 22:58:51 -05:00
ltcptgeneral
1b0a9967c8 test1 2020-03-09 22:58:11 -05:00
ltcptgeneral
2605f7c29f Merge pull request #6 from titanscout2022/testing
Testing
2020-03-09 20:42:30 -05:00
ltcptgeneral
6f5a3edd88 superscript.py v 0.0.5.000 2020-03-09 20:35:11 -05:00
ltcptgeneral
457146b0e4 working 2020-03-09 20:29:44 -05:00
ltcptgeneral
f7fd8ffcf9 working 2020-03-09 20:18:30 -05:00
art
77bc792426 removed unnecessary stuff 2020-03-09 10:29:59 -05:00
ltcptgeneral
39146cc555 Merge pull request #5 from titanscout2022/comp-edits
Comp edits
2020-03-09 10:28:48 -05:00
ltcptgeneral
04141bbec8 analysis.py v 1.1.13.006
regression.py v 1.0.0.003
analysis pkg v 1.0.0.8
2020-03-08 16:48:19 -05:00
ltcptgeneral
40e5899972 added get_team_rakings.py 2020-03-08 14:26:21 -05:00
ltcptgeneral
025c7f9b3c a 2020-03-06 21:39:46 -06:00
Dev Singh
2daa09c040 hi 2020-03-06 21:21:37 -06:00
ltcptgeneral
9776136649 superscript.py v 0.0.4.002 2020-03-06 21:09:16 -06:00
Dev Singh
68d27a6302 add reqs 2020-03-06 20:44:40 -06:00
Dev Singh
7fc18b7c35 add Procfile 2020-03-06 20:41:53 -06:00
ltcptgeneral
9b412b51a8 analysis pkg v 1.0.0.7 2020-03-06 20:32:41 -06:00
ltcptgeneral
b6ac05a66e Merge pull request #4 from titanscout2022/comp-edits
Comp edits merge
2020-03-06 20:29:50 -06:00
Dev Singh
435c8a7bc6 tiny brain fix 2020-03-06 14:52:41 -06:00
Dev Singh
a69b18354b ultimate carl the fat kid brain working 2020-03-06 14:50:54 -06:00
Dev Singh
7b9e6921d0 ultra galaxybrain working 2020-03-06 14:44:13 -06:00
Dev Singh
fb2800cf9e fix 2020-03-06 13:12:01 -06:00
Dev Singh
12cbb21077 super ultra working 2020-03-06 12:43:01 -06:00
Dev Singh
46d1a48999 even more working 2020-03-06 12:21:17 -06:00
Dev Singh
ad0a761d53 more working 2020-03-06 12:18:42 -06:00
Dev Singh
43f503a38d working 2020-03-06 12:15:35 -06:00
Dev Singh
d38744438b working 2020-03-06 11:50:07 -06:00
Dev Singh
eb8914aa26 maybe working 2020-03-06 11:27:32 -06:00
Dev Singh
283140094f a 2020-03-06 11:18:02 -06:00
Dev Singh
66ac1c304e testing part 2 better electric boogaloo 2020-03-06 11:16:24 -06:00
Dev Singh
0eb9e07711 testing 2020-03-06 11:14:10 -06:00
Dev Singh
f56c85b298 10:57 2020-03-06 10:57:39 -06:00
Dev Singh
6a9a17c5b4 10:43 2020-03-06 10:43:45 -06:00
Dev Singh
e24c49bedb 10:25 2020-03-06 10:25:20 -06:00
Dev Singh
2daed73aaa 10:21 unverified 2020-03-06 10:21:23 -06:00
art
8ebdb3b89b superscript.py v 0.0.3.000 2020-03-05 22:52:02 -06:00
art
a0e1293361 analysis.py v 1.1.13.001
analysis pkg v 1.0.0.006
2020-03-05 13:18:33 -06:00
art
b669e55283 analysis pkg v 1.0.0.005 2020-03-05 12:44:09 -06:00
art
3e38446eae analysis pkg v 1.0.0.004 2020-03-05 12:29:58 -06:00
art
dac0a4a0cd analysis.py v 1.1.13.000 2020-03-05 12:28:16 -06:00
art
897ba03078 removed unnecessary folders and files 2020-03-05 11:17:49 -06:00
ltcptgeneral
e815a2fbf7 superscript.py v 0.0.2.001 2020-03-04 23:59:50 -06:00
ltcptgeneral
941383de4b analysis.py v 1.1.12.006
analysis pkg v 1.0.0.003
2020-03-04 21:20:00 -06:00
ltcptgeneral
5771c7957e a 2020-03-04 20:15:03 -06:00
ltcptgeneral
72c233649d superscript.py v 0.0.1.004 2020-03-04 20:12:09 -06:00
ltcptgeneral
c7031361b0 analysis.py v 1.1.12.005
analysis pkg v 1.0.0.002
2020-03-04 18:55:45 -06:00
ltcptgeneral
59508574c9 analysis pkg 1.0.0.001 2020-03-04 17:54:30 -06:00
ltcptgeneral
d57d1ebc6d analysis.py v 1.1.12.004 2020-03-04 17:52:07 -06:00
ltcptgeneral
70b2ff1151 superscript.py v 0.0.1.003 2020-03-04 16:53:25 -06:00
ltcptgeneral
c3746539b3 superscript.py v 0.0.1.002 2020-03-04 15:57:20 -06:00
ltcptgeneral
405ab3ac74 a 2020-03-04 13:47:56 -06:00
ltcptgeneral
94dd51566a refactors 2020-03-04 13:42:54 -06:00
ltcptgeneral
b5718a500a a 2020-03-04 12:58:57 -06:00
ltcptgeneral
2eaa390f2f d 2020-03-04 12:37:58 -06:00
art
9c666b95be moved analysis-master out of data analysis 2020-03-03 22:38:34 -06:00
art
dfc01a13bd c 2020-03-03 21:04:47 -06:00
art
d4328e6027 changed setup.py back 2020-03-03 21:04:17 -06:00
art
f9a3150438 b 2020-03-03 21:00:26 -06:00
art
6decf183dd a 2020-03-03 20:59:52 -06:00
art
67f940eadb made license explit in setup.py 2020-03-03 20:55:46 -06:00
art
56d0578d86 recompiled analysis.py 2020-03-03 20:48:50 -06:00
art
5e9e90507b packagefied analysis (finally) 2020-03-03 20:30:54 -06:00
art
8b4c50827c added setup.py 2020-03-03 20:24:49 -06:00
art
f8cdd73655 created __init__.py for analysis package 2020-03-03 20:17:05 -06:00
art
74dc02ca99 superscript.py v 0.0.1.001 2020-03-03 20:10:29 -06:00
art
5915827d15 superscript.py v 0.0.1.000 2020-03-03 19:39:58 -06:00
art
f9b0343aa1 moved app in dep 2020-03-03 18:48:17 -06:00
art
938caa75d1 superscript.py v 0.0.0.009
changes to config.csv
2020-03-03 18:40:35 -06:00
art
df66d28959 changes, testing 2020-03-03 18:13:03 -06:00
art
2710642f15 superscript.py v 0.0.0.008
data.py created
2020-03-03 18:02:24 -06:00
art
51b3dd91b5 removed \n s 2020-03-03 16:27:30 -06:00
art
d00cf142c0 superscript.py v 0.0.0.007 2020-03-03 16:01:07 -06:00
art
ae8706ac08 superscript.py v 0.0.0.006 2020-03-03 15:42:37 -06:00
ltcptgeneral
5305e4a30f Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2020-02-29 01:06:06 -06:00
ltcptgeneral
908a1cd368 a 2020-02-29 01:05:58 -06:00
art
19e0044e0e a 2020-02-26 08:58:27 -06:00
Dev Singh
7ad43e970f Create LICENSE 2020-02-23 13:19:40 -06:00
Dev Singh
fbb3fde754 why are we unlicense? 2020-02-23 13:18:37 -06:00
art
81c81bed94 a 2020-02-20 19:29:21 -06:00
art
f3fc4cefd0 something changed 2020-02-20 19:27:09 -06:00
art
376ea248a4 a 2020-02-20 19:22:06 -06:00
art
9824f9349d fixed jacob 2020-02-20 19:19:20 -06:00
art
eb90582db8 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2020-02-20 19:12:48 -06:00
art
bad9e497b1 a 2020-02-20 19:12:33 -06:00
jlevine18
c3b993cfce tba_match_result_request.py v0.0.1 2020-02-19 21:50:56 -06:00
art
2cb5c54d8b dep 2020-02-19 19:54:59 -06:00
art
7f705915f0 fixes 2020-02-19 19:53:23 -06:00
art
2a8a21b82a something 2020-02-19 19:52:31 -06:00
art
2e09cba94e superscript v 0.0.0.005 2020-02-19 19:51:45 -06:00
art
de9d151ad6 superscript.py v 0.0.0.004 2020-02-19 19:21:48 -06:00
art
452b55ac6f fix 2020-02-18 20:38:12 -06:00
art
fe31db07f9 analysis.py v 1.1.12.003 2020-02-18 20:32:35 -06:00
art
52d79ea25e analysis.py v 1.1.12.002, superscript.py
v 0.0.0.003
2020-02-18 20:29:22 -06:00
art
20833b29c1 superscript.py v 0.0.0.002 2020-02-18 19:54:09 -06:00
art
978a9a9a25 doc 2020-02-18 16:16:57 -06:00
art
9da4322aa9 analysis.py v 1.1.12.000 2020-02-18 15:25:23 -06:00
art
5bdd77ddc6 superscript v 0.0.0.001 2020-02-18 11:31:20 -06:00
art
2782dc006c fix 2020-01-17 10:21:15 -06:00
art
de6c582b8f analysis.py v 1.1.11.010 2020-01-17 10:18:28 -06:00
art
32bc329e91 something changed idk what 2020-01-08 15:01:33 -06:00
art
4e50a79614 analysis.py v 1.1.11.009 2020-01-07 15:55:49 -06:00
ltcptgeneral
190fbf6cac analysis.py v 1.1.11.008 2020-01-06 23:48:28 -06:00
art
a8bf4e46e9 created superscript.py 2020-01-06 14:55:36 -06:00
ltcptgeneral
478c793917 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2020-01-05 19:08:06 -06:00
ltcptgeneral
4b44c7978a whatever 2020-01-05 19:06:54 -06:00
art
0fbb958dd9 regression v 1.0.0.003 2020-01-04 10:19:31 -06:00
art
031e45ac19 analysis.py v 1.1.11.007 2020-01-04 10:13:25 -06:00
art
96bf376b70 analysis.py v 1.1.11.006 2020-01-04 10:04:20 -06:00
art
eca8d4efc1 quick fix 2020-01-04 09:57:06 -06:00
art
d5a7f52b83 spelling 2019-12-23 12:49:38 -06:00
art
ae4ecbd67c analysis.py v 1.1.11.005 2019-12-23 12:48:13 -06:00
ltcptgeneral
0ba3a56ea7 analysis.py v 1.1.11.004 2019-11-16 16:21:06 -06:00
art
1717cc17a1 analysis.py 1.1.11.003 2019-11-11 10:04:12 -06:00
ltcptgeneral
947f7422dc spelling fix 2019-11-10 13:59:59 -06:00
ltcptgeneral
cf14005b67 analysis.py v 1.1.11.002 2019-11-10 02:04:48 -06:00
ltcptgeneral
08ff6aec8e analysis.py v 1.1.11.001 2019-11-10 01:38:39 -06:00
art
234f54ef5d analysis.py v 1.1.11.000 2019-11-08 13:20:38 -06:00
art
df42ae734e analysis.py v 1.1.10.00 2019-11-08 12:41:37 -06:00
art
4979c4b414 analysis.py v 1.1.9.002 2019-11-08 12:26:42 -06:00
art
d6cc419c40 test 2019-11-08 09:50:54 -06:00
ltcptgeneral
a73ce4080c quick fix 2019-11-06 15:33:56 -06:00
ltcptgeneral
456836bdb8 analysis.py 1.1.9.001 2019-11-06 15:32:21 -06:00
ltcptgeneral
a51f1f134d analysis.py v 1.1.9.000 2019-11-06 15:26:13 -06:00
art
db9ce0c25a quick fix 2019-11-05 16:25:53 -06:00
art
92c8b9c8c3 fixed indentation 2019-11-05 13:45:35 -06:00
art
06b0acb9f8 analysis.py v 1.1.8.000 2019-11-05 13:38:49 -06:00
art
7c957d9ddc analysis.py v 1.1.7.000 2019-11-05 13:14:08 -06:00
art
efab5bfde8 analysis.py v 1.1.6.002 2019-11-05 12:56:53 -06:00
art
c886ca8e3f analysis.py v 1.1.6.001 2019-11-05 12:53:39 -06:00
art
2cf7d73c9c analysis.py v 1.1.6.000 2019-11-05 12:47:04 -06:00
art
f12cbcc847 f 2019-11-04 10:14:28 -06:00
art
df6c184b84 quick fix 2019-11-04 10:10:29 -06:00
art
1ea7306eeb __all__ fixes 2019-11-04 10:08:28 -06:00
art
bb41c26531 something changed, idk 2019-11-01 13:12:01 -05:00
art
1d4b2bd49d visualization v 1.0.0.000, titanlearn v 2.0.1.001 2019-11-01 13:08:32 -05:00
art
8dd2440f08 analysis.py v 1.1.5.001 2019-10-31 11:03:52 -05:00
art
ab9b38da95 titanlearn v 2.0.1.000 2019-10-29 14:21:53 -05:00
art
dacf12f8a4 quick fix 2019-10-29 12:27:16 -05:00
art
3894eb481c fixes 2019-10-29 12:25:18 -05:00
art
0198d6896b restructured file management part 3 2019-10-29 10:53:11 -05:00
art
6902521d6b restructured file management part 2 2019-10-29 10:50:10 -05:00
art
590e8424e7 restructured file management 2019-10-29 10:37:23 -05:00
art
bc6916ab15 quick fix 2019-10-29 10:07:56 -05:00
art
2590a40827 deprecated files, titanlearn v 2.0.0.001 2019-10-29 10:04:56 -05:00
art
68006de8c0 titanlearn.py v 2.0.0.000 2019-10-29 09:41:49 -05:00
art
9f0d366408 deprecated 2019 superscripts and company 2019-10-29 09:23:00 -05:00
art
2bdb15a2b3 analysis.py v 1.1.5.001 2019-10-25 09:50:02 -05:00
art
56b575a753 analysis.py v 1.1.5.001 2019-10-25 09:19:18 -05:00
ltcptgeneral
ff2f0787ae analysis.py v 1.1.5.000 2019-10-09 23:58:08 -05:00
jlevine18
7c121d48fc fix PolyRegKernel 2019-10-09 22:23:56 -05:00
art
8eac3d5af1 ok fixed half of it 2019-10-08 13:49:19 -05:00
art
f47be637a0 jacob fix poly regression! 2019-10-08 13:35:32 -05:00
art
c824087335 removed extra import 2019-10-08 12:58:04 -05:00
art
a92dacc7ff added import math 2019-10-08 09:30:07 -05:00
art
37c3430433 removed regression import 2019-10-07 12:58:57 -05:00
ltcptgeneral
3bcf832db0 fix 2019-10-06 19:12:58 -05:00
art
591ddbde9d refactor 2019-10-05 16:53:03 -05:00
art
eaa0bcd5d8 quick fixes 2019-10-05 16:51:11 -05:00
art
45abb9e24d analysis.py v 1.1.4.000 2019-10-05 16:18:49 -05:00
art
a853e9b02b quick change 2019-10-04 10:37:29 -05:00
art
af20fb0fa7 comments 2019-10-04 10:36:44 -05:00
art
3a17ac5154 analysis.py v 1.1.3.002 2019-10-04 10:34:31 -05:00
art
1cdeab4b6b quick fix 2019-10-04 09:28:25 -05:00
art
b2ce781961 quick refactor of glicko2() 2019-10-04 09:12:12 -05:00
art
400b5bb81e upload trueskill for testing purposes 2019-10-04 09:02:46 -05:00
art
fd7ab3a598 analysis.py v 1.1.3.001 2019-10-04 08:13:28 -05:00
ltcptgeneral
9175c2921a analysis.py v 1.1.3.000 2019-10-04 00:26:21 -05:00
ltcptgeneral
1d3de02763 Merge pull request #3 from titanscout2022/elo
Elo
2019-10-03 11:22:57 -05:00
art
b6299ce397 analysis.py v 1.1.2.003 2019-10-03 10:48:56 -05:00
art
8801a300c4 analysis.py v 1.1.2.002 2019-10-03 10:42:05 -05:00
art
acdcb42e6d quick tests 2019-10-02 20:57:09 -05:00
art
484adfcda8 stuff 2019-10-02 20:56:06 -05:00
art
4d01067a57 analysis.py v 1.1.2.001 2019-10-01 08:59:04 -05:00
ltcptgeneral
0991757ddb reduced random blank lines 2019-09-30 16:09:31 -05:00
ltcptgeneral
de0cb1a4e3 analysis.py v 1.1.2.000, quick fixes 2019-09-30 16:02:32 -05:00
ltcptgeneral
bca13420b2 fixes 2019-09-30 15:49:15 -05:00
ltcptgeneral
236ca3bcfd quick fix 2019-09-30 13:41:15 -05:00
ltcptgeneral
b2aa6357d8 analysis.py v 1.1.1.001 2019-09-30 13:37:19 -05:00
ltcptgeneral
941dd4838a analysis.py v 1.1.1.000 2019-09-30 10:11:53 -05:00
ltcptgeneral
91d727b6ad jacob forgot self.scal_mult 2019-09-27 10:13:17 -05:00
ltcptgeneral
2c00f5b26e Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-09-27 09:49:40 -05:00
jlevine18
4f981df7bb Add files via upload 2019-09-27 09:48:05 -05:00
ltcptgeneral
c24e51e2b6 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-09-27 09:41:07 -05:00
ltcptgeneral
f565744867 added testing files to gitignore 2019-09-27 09:40:50 -05:00
ltcptgeneral
d3ee8621f0 spelling fix 2019-09-26 19:22:44 -05:00
jlevine18
e38c12f765 cudaregress v 1.0.0.002 2019-09-26 13:35:37 -05:00
jlevine18
d71b45a8e9 wait arthur moved this 2019-09-26 13:34:42 -05:00
jlevine18
6f9527c726 cudaregress 1.0.0.002 2019-09-26 13:31:22 -05:00
ltcptgeneral
9a99b8de2a quick fix 2019-09-25 14:14:17 -05:00
ltcptgeneral
c32b0150bd analysis.py v 1.1.0.007 2019-09-25 14:11:20 -05:00
ltcptgeneral
86327e97f9 moved and renamed cudaregress.py to regression.py 2019-09-23 09:58:08 -05:00
jlevine18
4fd18ec7fe global vars to bugfix 2019-09-23 09:28:35 -05:00
jlevine18
dc6f896071 Set device bc I apparently forgot to do that 2019-09-23 00:01:31 -05:00
jlevine18
c5d087dada don't need the testing notebook up here anymore 2019-09-22 23:23:29 -05:00
jlevine18
bda2db7003 Add files via upload 2019-09-22 23:22:21 -05:00
jlevine18
53d4a0ecde added cudaregress.py package 2019-09-22 23:19:46 -05:00
ltcptgeneral
db19127d28 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-09-22 23:10:24 -05:00
jlevine18
3ec7e5fed5 added cuda to cudaregress notebook 2019-09-22 23:05:49 -05:00
ltcptgeneral
8bd07cbd32 quick fix 2019-09-22 21:54:28 -05:00
jlevine18
f5b9a678fc fix cuda regress testing notebook 2019-09-22 21:38:12 -05:00
jlevine18
1c8f8fdfe7 added cudaRegress testing notebook 2019-09-21 13:35:51 -05:00
ltcptgeneral
f63c473166 analysis.py v 1.1.0.006 2019-09-17 12:21:44 -05:00
ltcptgeneral
936354a1a2 analysis.py v 1.1.0.005 2019-09-17 08:46:47 -05:00
ltcptgeneral
43d059b477 analysis.py v 1.1.0.004 2019-09-16 11:11:27 -05:00
ltcptgeneral
173f9b3460 benchmarked 2019-09-13 15:09:33 -05:00
ltcptgeneral
eb51d876a5 analysis.py v 1.1.0.003 2019-09-13 14:38:24 -05:00
ltcptgeneral
bee1edbf25 quick fixes 2019-09-13 14:29:22 -05:00
ltcptgeneral
13c17b092a analysis.py v 1.1.0.002 2019-09-13 13:59:13 -05:00
ltcptgeneral
800601121e moved files to subfolder dep 2019-09-13 13:50:12 -05:00
ltcptgeneral
79e77af304 analysis.py v 1.1.0.001 2019-09-13 12:33:02 -05:00
ltcptgeneral
4d6273fa05 analysis.py v 1.1.0.000 2019-09-13 11:14:13 -05:00
ltcptgeneral
c9567f0d7c Rename analysis-better.py to analysis.py 2019-09-12 11:05:33 -05:00
ltcptgeneral
37d3c2b1d2 Rename analysis.py to analysis-dep.py 2019-09-12 11:04:54 -05:00
ltcptgeneral
b689dada3d analysis-better.py v 1.0.9.000
changelog:
    - refactored
    - numpyed everything
    - removed stats in favor of numpy functions
2019-04-09 09:43:42 -05:00
ltcptgeneral
e914d32b37 Create analysis-better.py 2019-04-09 09:30:37 -05:00
ltcptgeneral
5dc3fa344c Delete temp.txt 2019-04-08 09:38:27 -05:00
ltcptgeneral
c7859bf681 Update .gitignore 2019-04-08 09:34:49 -05:00
ltcptgeneral
620b6de028 quick fixes 2019-04-08 09:26:32 -05:00
ltcptgeneral
c1635f79fe Merge branch 'c' 2019-04-08 09:17:26 -05:00
ltcptgeneral
a9d3ef2b51 Create analysis.cp37-win_amd64.pyd 2019-04-08 09:17:16 -05:00
ltcptgeneral
aa107249fd cython working 2019-04-08 09:16:26 -05:00
ltcptgeneral
0c47283dd5 analysis in c working 2019-04-05 21:01:17 -05:00
ltcptgeneral
f49bb58215 started c-ifying analysis 2019-04-05 17:24:24 -05:00
ltcptgeneral
b91ad29ae4 Delete uuh.png 2019-04-03 14:43:59 -05:00
ltcptgeneral
8a869e037b fixed superscript 2019-04-03 14:39:22 -05:00
ltcptgeneral
20f082b760 beautified 2019-04-03 13:34:31 -05:00
ltcptgeneral
ef81273d4a Delete keytemp.json 2019-04-02 14:07:24 -05:00
ltcptgeneral
3761274ee3 Update .gitignore 2019-04-02 13:43:08 -05:00
ltcptgeneral
506c779d82 Merge branch 'multithread' 2019-04-02 13:40:02 -05:00
ltcptgeneral
892b57a1eb whtever 2019-04-01 13:22:37 -05:00
jlevine18
94cc4adbf9 teams for wisconsin regional 2019-03-28 07:54:08 -05:00
ltcptgeneral
2e189bcfa2 teams added 2019-03-27 23:40:05 -05:00
ltcptgeneral
a21d0b5ec6 Update tbarequest.cpython-37.pyc 2019-03-22 19:40:17 -05:00
ltcptgeneral
ebb5f3b09e Update scores.csv 2019-03-22 19:11:11 -05:00
ltcptgeneral
5c4bed42d6 Merge branch 'master' of https://github.com/ltcptgeneral/tr2022-strategy 2019-03-22 15:22:05 -05:00
ltcptgeneral
c15d037109 something changed 2019-03-22 15:21:58 -05:00
jlevine18
56f704c464 Update tbarequest.py 2019-03-22 15:09:52 -05:00
jlevine18
14a0414265 add req_team_info 2019-03-22 14:54:55 -05:00
Jacob Levine
6dbdfe00fc fixed textArea bug 2019-03-22 12:39:16 -05:00
ltcptgeneral
00c9df4239 Merge branch 'master' of https://github.com/ltcptgeneral/tr2022-strategy 2019-03-22 11:54:43 -05:00
ltcptgeneral
9c725887c5 created nishant only script 2019-03-22 11:54:40 -05:00
Jacob Levine
9562cc594f fixed another bug 2019-03-22 11:53:15 -05:00
Jacob Levine
56f6752ff7 fixed textArea bug 2019-03-22 11:50:03 -05:00
Jacob Levine
31c8c9ee86 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-03-22 11:28:45 -05:00
Jacob Levine
0f671daf30 added fields that Arthut needed 2019-03-22 11:28:22 -05:00
Archan Das
e1027a9562 Update teams.csv 2019-03-22 11:12:51 -05:00
Jacob Levine
628dac5835 web3\ 2019-03-22 09:18:59 -05:00
Jacob Levine
fed48bc999 web3\ 2019-03-22 09:16:17 -05:00
Jacob Levine
7be5e15e9a web3 2019-03-22 09:14:41 -05:00
Jacob Levine
365b9e1882 web4 2019-03-22 09:13:44 -05:00
Jacob Levine
e5bb5b6ef7 web3 2019-03-22 09:12:29 -05:00
Jacob Levine
f9e4a6c53d web3 2019-03-22 08:51:42 -05:00
Jacob Levine
e0099aab60 web2 2019-03-22 08:50:37 -05:00
Jacob Levine
925087886c web 2019-03-22 08:49:50 -05:00
Jacob Levine
35f8cd693e archan needs to import! 2019-03-22 08:46:04 -05:00
Jacob Levine
a795a89c2d final fixes 2019-03-22 08:44:42 -05:00
Jacob Levine
92602b3122 change letter 2019-03-22 08:39:01 -05:00
Jacob Levine
aa86a2af7b final fixes 2019-03-22 08:36:33 -05:00
Jacob Levine
6f1cf1828a update archan's script 2019-03-22 08:27:53 -05:00
Jacob Levine
bb5c38fbfe don't sort matches alphabetically, sort them numerically 2019-03-22 07:48:44 -05:00
Jacob Levine
169c1737b2 testing mistakes 2019-03-22 07:37:05 -05:00
Jacob Levine
8244efa09b ok seriously what is going on? 2019-03-22 07:34:00 -05:00
Jacob Levine
4c1abeb200 testing mistakes 2019-03-22 07:32:05 -05:00
Jacob Levine
b41683eaa9 testing mistakes 2019-03-22 07:29:47 -05:00
Jacob Levine
1f29718795 testing mistakes 2019-03-22 07:28:55 -05:00
Jacob Levine
5716a7957e ok 2019-03-22 07:28:11 -05:00
Jacob Levine
7c21c277dd testing mistakes 2019-03-22 07:26:08 -05:00
Jacob Levine
91ddbb5531 wtf 2019-03-22 07:24:27 -05:00
Jacob Levine
21be310e1f testing mistakes 2019-03-22 07:24:15 -05:00
Jacob Levine
0a687648e0 Revert "ok seriously what is going on?"
This reverts commit 8de7078240.
2019-03-22 07:17:26 -05:00
Jacob Levine
80c6b9ba67 Revert "testing mistakes"
This reverts commit 1f20ad7f37.
2019-03-22 07:16:36 -05:00
Jacob Levine
1f20ad7f37 testing mistakes 2019-03-22 07:15:44 -05:00
Jacob Levine
8de7078240 ok seriously what is going on? 2019-03-22 07:05:45 -05:00
Jacob Levine
b88a7f7aa8 wtf 2019-03-22 07:03:31 -05:00
Jacob Levine
313d627fa8 testing mistakes 2019-03-22 07:01:19 -05:00
Jacob Levine
cbe1d9a015 wtf 2019-03-22 06:57:27 -05:00
Jacob Levine
a4288e2a0d move so it doesnt crash 2019-03-22 06:53:28 -05:00
Jacob Levine
4f631a4b79 fix add script for text areas 2019-03-22 06:48:45 -05:00
Jacob Levine
e992483f35 dont be stupid 2019-03-22 01:06:05 -05:00
Jacob Levine
4671eacb6e chrome is still horrible 2019-03-22 01:04:18 -05:00
Jacob Levine
c76a5ddb5e case sensitive 2019-03-22 00:50:20 -05:00
Jacob Levine
1989ec5ad4 chrome is still horrible 2019-03-22 00:49:19 -05:00
Jacob Levine
0124d9db97 chrome is horrible 2019-03-22 00:46:19 -05:00
Jacob Levine
02ce675a0b dont be stupid 2019-03-22 00:40:27 -05:00
Jacob Levine
64da97bfdc dont be stupid 2019-03-22 00:38:37 -05:00
Jacob Levine
23d6eebff1 dont be stupid 2019-03-22 00:36:12 -05:00
Jacob Levine
8dcb59a15c bugfix 23 2019-03-22 00:33:50 -05:00
Jacob Levine
ebe25312b5 af 2019-03-22 00:31:31 -05:00
Jacob Levine
28b5c6868e bugfix 22 2019-03-22 00:30:36 -05:00
Jacob Levine
0c09631813 st 2019-03-22 00:29:34 -05:00
Jacob Levine
23d821b773 bugfix 21 2019-03-22 00:27:44 -05:00
Jacob Levine
cc958e0927 bugfix 20 2019-03-22 00:24:23 -05:00
Jacob Levine
7e23641591 bugfix 19 2019-03-22 00:21:46 -05:00
Jacob Levine
dc8bc17324 bugfix 18 2019-03-22 00:20:01 -05:00
Jacob Levine
23b16d2e92 bugfix 16,17 2019-03-22 00:16:32 -05:00
Jacob Levine
5200dbc4d7 ian stopped naming his questions 2019-03-22 00:09:24 -05:00
Jacob Levine
3fd42c46c9 bugfix 15 2019-03-22 00:08:00 -05:00
Jacob Levine
19015d79e6 bugfix 14 2019-03-22 00:05:35 -05:00
Jacob Levine
3eef220768 ESCAPE STRINGS 2019-03-22 00:03:25 -05:00
Jacob Levine
2f088898f8 minor fixes 2019-03-22 00:01:01 -05:00
Jacob Levine
f091dd9113 bugfix 10 2019-03-21 23:58:46 -05:00
Jacob Levine
2a32386a9e dont be stupid 2019-03-21 23:53:16 -05:00
Jacob Levine
45055b1505 minor fixes 2019-03-21 23:45:25 -05:00
Jacob Levine
afcda88760 dont be stupid 2019-03-21 23:40:46 -05:00
Jacob Levine
3ad0dcd851 fix fix bugfix 6 2019-03-21 23:36:29 -05:00
Jacob Levine
ac32743210 fix bugfix 6 2019-03-21 23:35:14 -05:00
Jacob Levine
978342c480 bugfix 6 2019-03-21 23:33:11 -05:00
Jacob Levine
8e6a927032 bugfix 5 2019-03-21 23:31:50 -05:00
Jacob Levine
0bf52e1c29 bugfix 4 2019-03-21 23:30:38 -05:00
Jacob Levine
06242f0b2a remove random 'm' 2019-03-21 23:29:01 -05:00
Jacob Levine
4398de71ba website for peoria 2019-03-21 23:28:18 -05:00
Jacob Levine
15f504ecc3 typo! 2019-03-21 23:26:45 -05:00
Jacob Levine
06be451456 website for peoria 2019-03-21 23:25:24 -05:00
Jacob Levine
e498f4275e readded css 2019-03-21 23:17:24 -05:00
Jacob Levine
1633ef7862 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-03-21 23:13:12 -05:00
Jacob Levine
2cff74aa54 website for peoria 2019-03-21 23:12:49 -05:00
Archan Das
040b4dc52a Add files via upload 2019-03-21 22:52:08 -05:00
Archan Das
35d8e5ff77 Add files via upload 2019-03-21 22:50:27 -05:00
ltcptgeneral
d3b39d8167 Delete test.py 2019-03-21 22:17:33 -05:00
ltcptgeneral
c7b3d7e9a3 superscript v 1.0.6.001
changelog:
- fixed multiple bugs
- works now
2019-03-21 18:02:51 -05:00
ltcptgeneral
10f8839bbd WORKING 2019-03-21 17:52:59 -05:00
ltcptgeneral
1eb568c807 Revert "beautified"
This reverts commit 0d8780b3c1.
2019-03-21 17:50:52 -05:00
ltcptgeneral
12cf4a55d7 Revert "yeeted"
This reverts commit 1f2edeba51.
2019-03-21 17:50:46 -05:00
ltcptgeneral
e81f6052e3 Revert "stuff"
This reverts commit 268b01fc93.
2019-03-21 17:50:37 -05:00
ltcptgeneral
bbebc4350c Revert "no"
This reverts commit ac7c169a27.
2019-03-21 17:50:32 -05:00
ltcptgeneral
ac7c169a27 no 2019-03-21 17:43:36 -05:00
ltcptgeneral
268b01fc93 stuff 2019-03-21 17:34:27 -05:00
ltcptgeneral
9c7647aba9 Merge branch 'master' of https://github.com/ltcptgeneral/tr2022-strategy 2019-03-21 17:28:23 -05:00
ltcptgeneral
1f2edeba51 yeeted 2019-03-21 17:28:16 -05:00
jlevine18
b3781ada45 Delete Untitled.ipynb 2019-03-21 17:28:04 -05:00
Jacob Levine
0d8780b3c1 beautified 2019-03-21 17:27:31 -05:00
ltcptgeneral
64a89cc58f WORKING!!!! 2019-03-21 17:25:16 -05:00
ltcptgeneral
f092bd3cb1 Update superscript.py 2019-03-21 17:00:38 -05:00
ltcptgeneral
c4309f5679 Update superscript.py 2019-03-21 16:59:29 -05:00
ltcptgeneral
e19bb8dcc1 1 2019-03-21 16:58:37 -05:00
ltcptgeneral
c9436f15f8 Merge branch 'master' of https://github.com/ltcptgeneral/tr2022-strategy 2019-03-21 16:57:02 -05:00
jlevine18
8c867dcf95 Update superscript.py 2019-03-21 16:55:04 -05:00
ltcptgeneral
d3e98391d4 Create superscript.py 2019-03-21 16:52:37 -05:00
ltcptgeneral
ef336eb454 a 2019-03-21 16:52:22 -05:00
ltcptgeneral
12f5536026 wtf2 2019-03-21 16:50:32 -05:00
Jacob Levine
6a0d8f4144 fixed null removal script 2019-03-21 16:48:02 -05:00
ltcptgeneral
7f80339fb4 working 2019-03-21 16:17:45 -05:00
ltcptgeneral
9ea074c99c WTF 2019-03-21 15:59:47 -05:00
ltcptgeneral
4188b4b1c3 Merge branch 'master' of https://github.com/ltcptgeneral/tr2022-strategy 2019-03-21 15:25:46 -05:00
ltcptgeneral
41ea4e9ed8 test 2019-03-21 15:25:36 -05:00
jlevine18
9fe9084341 add opr request 2019-03-21 15:23:24 -05:00
ltcptgeneral
9f894428c1 Update test.py 2019-03-21 15:14:24 -05:00
ltcptgeneral
4227106b4f Update superscript.py 2019-03-21 15:07:24 -05:00
ltcptgeneral
82754ede58 wtf 2019-03-21 15:06:54 -05:00
ltcptgeneral
a1d0cd37b7 test 2019-03-21 14:38:53 -05:00
ltcptgeneral
5e13ca3b5e Update test.py 2019-03-20 22:15:31 -05:00
ltcptgeneral
7c96233f5b Update test.py 2019-03-20 21:36:49 -05:00
ltcptgeneral
3ecf08cf9b too much iteration 2019-03-20 20:18:55 -05:00
ltcptgeneral
04c561baea Update issue templates 2019-03-20 18:14:59 -05:00
ltcptgeneral
5eaf733651 Update superscript.py 2019-03-20 18:14:32 -05:00
ltcptgeneral
d0435a5528 Create LICENSE 2019-03-20 17:41:10 -05:00
ltcptgeneral
55cc572d5c Create CONTRIBUTING.md 2019-03-20 17:37:38 -05:00
ltcptgeneral
8577d4dafa Update README.md 2019-03-20 17:33:47 -05:00
ltcptgeneral
08aec2537e fix 0 2019-03-20 17:23:41 -05:00
ltcptgeneral
975db73aae key fix? 2019-03-20 16:53:53 -05:00
ltcptgeneral
6cb09240ab Update superscript.py 2019-03-20 16:38:42 -05:00
ltcptgeneral
c74b0f34a6 superscript.py - v 1.0.6.000
changelog:
- added pulldata function
- service now pulls in, computes data, and outputs data as planned
2019-03-20 16:16:48 -05:00
ltcptgeneral
3e47a232cc 1234567890 2019-03-20 14:10:47 -05:00
Jacob Levine
2e356405e1 bugfix 16 2019-03-18 21:06:13 -05:00
Jacob Levine
f59d94282d bugfix 15 2019-03-18 21:02:23 -05:00
Jacob Levine
b0ad3bdf9c bugfix 14 2019-03-18 20:47:16 -05:00
Jacob Levine
733c7cbfe7 bugfix 13 2019-03-18 19:20:27 -05:00
Jacob Levine
a95684213c bugfix 12 2019-03-18 19:16:17 -05:00
Jacob Levine
76b4107999 bugfix 11 2019-03-18 19:15:17 -05:00
Jacob Levine
b2bb2df3f0 bugfix 10, now with template literals 2019-03-18 19:10:18 -05:00
Jacob Levine
3ec4de4fb1 bugfix 9 2019-03-18 18:57:43 -05:00
Jacob Levine
bf1572765c bugfix 8 2019-03-18 18:53:41 -05:00
Jacob Levine
0717ed4979 bugfix 7 2019-03-18 18:41:20 -05:00
Jacob Levine
ab421e4170 bugfix 4 2019-03-18 18:38:45 -05:00
Jacob Levine
c5f6ecae68 bugfix 5 2019-03-18 18:35:59 -05:00
Jacob Levine
7dbffc940a bugfix 4 2019-03-18 18:28:47 -05:00
Jacob Levine
8f4e6e3510 bugfix 3 2019-03-18 18:27:46 -05:00
Jacob Levine
86325e7d2b bugfixes 2 2019-03-18 18:06:11 -05:00
Jacob Levine
cf6c6180d3 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-03-18 17:55:56 -05:00
Jacob Levine
da315ac908 bugfix 1 2019-03-18 17:54:34 -05:00
jlevine18
3b95963eb1 Merge pull request #1 from titanscout2022/signUps
Sign ups demo
2019-03-18 17:21:40 -05:00
Jacob Levine
1fdd80e31b multiform demo mk 1 2019-03-18 17:13:45 -05:00
Jacob Levine
926db38db9 continue with multi-form 2019-03-17 23:27:46 -05:00
Jacob Levine
f483cbbcfb Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-03-16 15:49:49 -05:00
Jacob Levine
0d111296af changed to signups. not complete yet 2019-03-16 15:47:56 -05:00
ltcptgeneral
9fb53f4297 Update titanlearn.py 2019-03-16 13:12:59 -05:00
ltcptgeneral
69ef08bfd4 1234567890 2019-03-10 11:42:43 -05:00
ltcptgeneral
0159f116c1 12345678 2019-03-09 16:27:36 -06:00
Jacob Levine
da6f2ce044 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-03-09 14:08:38 -06:00
Jacob Levine
053001186e added frc elo notebook 2019-03-09 14:05:47 -06:00
jlevine18
177e8ad783 Delete pullmatches.py 2019-03-08 22:19:11 -06:00
Jacob Levine
047f682030 added scoreboard 2019-03-08 22:05:35 -06:00
Jacob Levine
041db246b1 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-03-08 21:56:15 -06:00
Jacob Levine
54888a3988 added day 1 processing 2019-03-08 21:55:52 -06:00
ltcptgeneral
c726551ec7 Update superscript.py 2019-03-08 19:00:02 -06:00
ltcptgeneral
a36ba0413a superscript v 1.0.5.003
changelog:
- hotfix: actually pushes data correctly now
2019-03-08 17:43:38 -06:00
Jacob Levine
79d0bda1ef fix defaults 2019-03-08 12:54:41 -06:00
Jacob Levine
a7def3c367 reworked questions to comply with Ian's app 2019-03-08 12:48:10 -06:00
Jacob Levine
1ee9867ea6 fix typo 2019-03-08 10:54:14 -06:00
Jacob Levine
44f209f331 added strat options 2019-03-08 10:47:49 -06:00
Jacob Levine
274017806f sets timeout for reload 2019-03-07 23:37:54 -06:00
Jacob Levine
90adb6539a final fix for the night! 2019-03-07 23:33:58 -06:00
Jacob Levine
be4ec9ea51 bugfix 2019-03-07 23:30:33 -06:00
Jacob Levine
b89fab51c3 fix typo 2019-03-07 23:29:16 -06:00
Jacob Levine
6247c7997f added full functionality to scout 2019-03-07 23:26:30 -06:00
Jacob Levine
9baa4450b0 stylinh 2019-03-07 21:25:32 -06:00
Jacob Levine
2a449eba1a one of these times im going to actually catch it 2019-03-07 21:22:04 -06:00
Jacob Levine
dfd5366112 fix typo 2019-03-07 21:21:12 -06:00
Jacob Levine
dc180862df fix typo 2019-03-07 21:20:07 -06:00
Jacob Levine
9d9dcbbb71 fix typo 2019-03-07 21:18:13 -06:00
Jacob Levine
ed151f1707 sections 2019-03-07 21:16:54 -06:00
Jacob Levine
302f6b794d bugfix 2019-03-07 20:55:49 -06:00
Jacob Levine
1925943660 start scout 2019-03-07 20:54:55 -06:00
Jacob Levine
0e358a9a14 final fixes (hopefully this time) 2019-03-07 20:21:05 -06:00
Jacob Levine
2c9e553b57 fix typo 2019-03-07 20:19:58 -06:00
Jacob Levine
ee4ee316dd final page fix 2019-03-07 20:18:54 -06:00
Jacob Levine
12e39ecc84 fix typo 2019-03-07 20:17:10 -06:00
Jacob Levine
eb20ad907e fix mistake 2019-03-07 20:16:14 -06:00
Jacob Levine
61b286c258 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-03-07 20:14:05 -06:00
Jacob Levine
77231d00cc now you can leave teams 2019-03-07 20:13:32 -06:00
jlevine18
4322396088 arthur don't be stupid 2019-03-07 20:03:20 -06:00
Jacob Levine
c5dc49f442 final profile fix 2019-03-07 19:57:20 -06:00
Jacob Levine
0684f982b7 fix structure 2019-03-07 19:55:30 -06:00
Jacob Levine
b5d8851c44 fix data structure 2019-03-07 19:48:50 -06:00
Jacob Levine
b0782ed74e test bugfix 2019-03-07 19:47:35 -06:00
Jacob Levine
3e76c55801 testing... 2019-03-07 19:46:01 -06:00
Jacob Levine
834068244e test bugfix 2019-03-07 19:43:50 -06:00
Jacob Levine
d833d0a183 fix typo 2019-03-07 19:38:05 -06:00
Jacob Levine
1f50c6dd16 test bugfix 2019-03-07 19:37:06 -06:00
Jacob Levine
9ca336934a Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-03-07 19:32:14 -06:00
Jacob Levine
251390fddf fixed teamlogic 2019-03-07 19:31:34 -06:00
ltcptgeneral
aaa548fb65 hotfix 2000 2019-03-07 09:14:20 -06:00
ltcptgeneral
7710da503b 12 2019-03-06 20:05:50 -06:00
ltcptgeneral
18969b4179 Update superscript.py 2019-03-05 13:36:47 -06:00
ltcptgeneral
ecb6400b06 lotta bug fixes 2019-03-04 16:38:40 -06:00
ltcptgeneral
67393e0e09 1 2019-03-03 22:50:29 -06:00
ltcptgeneral
442d9a9682 Update analysis.py 2019-03-02 20:18:51 -06:00
ltcptgeneral
7434263165 titanscouting app v 1.0.0.003
simple bug fix
2019-03-02 19:58:00 -06:00
ltcptgeneral
d20d0e4e7a titanscouting app v 1.0.0.002 2019-03-02 19:47:31 -06:00
ltcptgeneral
836abc427a ryiop 2019-03-02 16:34:48 -06:00
ltcptgeneral
8cc6b2774e Create README.md 2019-03-02 16:34:12 -06:00
jlevine18
e98e66bdf0 tl.py 2019-03-02 08:18:28 -06:00
ltcptgeneral
791c4e82a5 Merge branch 'master' of https://github.com/ltcptgeneral/tr2022-strategy 2019-03-01 13:49:36 -06:00
ltcptgeneral
110da31d50 Update titanlearn.py 2019-03-01 13:49:33 -06:00
jlevine18
0e9a706904 Update titanlearn.py 2019-03-01 12:25:41 -06:00
ltcptgeneral
28b5f9d6a2 dumb 2019-03-01 12:18:38 -06:00
ltcptgeneral
00af69a3f5 Update superscript.py 2019-02-28 13:39:35 -06:00
ltcptgeneral
e61403174d sfasf 2019-02-28 13:28:29 -06:00
ltcptgeneral
632a2472a2 bassbsabjasb 2019-02-28 13:13:52 -06:00
ltcptgeneral
d62a07a69e Update superscript.py 2019-02-28 09:04:37 -06:00
ltcptgeneral
85d4a29cf2 Update superscript.py 2019-02-27 14:01:25 -06:00
ltcptgeneral
6678e49cbf superscript.py - v 1.0.5.002
changelog:
- more information given
- performance improvements
2019-02-27 14:00:29 -06:00
ltcptgeneral
839c5d2943 superscript.py - v 1.0.5.001
changelog:
- grammar
2019-02-27 13:43:33 -06:00
ltcptgeneral
79b4cf1158 superscript.py - v 1.0.5.000
changelog:
- service now iterates forever
- ready for production other than pulling json data
2019-02-27 13:38:24 -06:00
ltcptgeneral
9b9d6bcd23 superscript.py - v 1.0.4.001
changelog:
- grammar fixes
2019-02-26 23:18:26 -06:00
ltcptgeneral
2b1dd3ed9b superscript.py - v 1.0.4.000
changelog:
- actually pushes to firebase
2019-02-26 19:39:56 -06:00
ltcptgeneral
7afe68e315 Update .gitignore 2019-02-26 19:10:53 -06:00
ltcptgeneral
0f58ce0fd7 security patch 2019-02-22 12:23:49 -06:00
ltcptgeneral
badcb373ae Update bdata.csv 2019-02-21 12:33:13 -06:00
ltcptgeneral
e5cf8a43d4 superscript.py - v 1.
changelog:
- processes data more efficiently
2019-02-20 22:59:17 -06:00
ltcptgeneral
aba4b44da4 superscript.py - v 1.0.3.000
changelog:
- actually processes data
2019-02-20 11:44:11 -06:00
ltcptgeneral
c4fa9c5f23 qwertyuiop 2019-02-19 13:21:06 -06:00
ltcptgeneral
22688de9e8 Merge branch 'master' of https://github.com/ltcptgeneral/tr2022-strategy 2019-02-19 09:44:55 -06:00
ltcptgeneral
042efb2b5a superscript.py - v 1.0.2.000
changelog:
- added data reading from folder
- nearly crashed computer reading from 20 GiB of data
2019-02-19 09:44:51 -06:00
Jacob Levine
060a77f4b7 fix more typos 2019-02-12 21:00:43 -06:00
Jacob Levine
ffd64eb3d2 fix typos 2019-02-12 21:00:00 -06:00
Jacob Levine
4822be0ece fix typos 2019-02-12 20:55:56 -06:00
Jacob Levine
d3b71287c4 squash bugh 2019-02-12 20:52:03 -06:00
Jacob Levine
67ac98b9ab fix more typos 2019-02-12 20:49:23 -06:00
Jacob Levine
9e0c6e36ee can i set the world record for most typos 2019-02-12 20:48:35 -06:00
Jacob Levine
d0d431fb54 fix even more typos 2019-02-12 20:46:23 -06:00
Jacob Levine
718ca83a1d fix more typos 2019-02-12 20:44:21 -06:00
Jacob Levine
e0c159de00 fix typos 2019-02-12 20:42:49 -06:00
Jacob Levine
6652918ae8 I apparently don't know how to js 2019-02-12 20:41:43 -06:00
Jacob Levine
4f3ecf4361 fix more typos 2019-02-12 20:37:50 -06:00
Jacob Levine
dd5da3b1e8 fix typos 2019-02-12 20:34:05 -06:00
Jacob Levine
45a4387c68 started teams page 2019-02-12 20:20:30 -06:00
Jacob Levine
c6b2840e07 last style fixed before i do something else, for real this time 2019-02-09 15:53:39 -06:00
Jacob Levine
6362f50fd3 last style fixed before i do something else, for real this time 2019-02-09 15:50:34 -06:00
Jacob Levine
d5622c8672 last style fixed before i do something eks 2019-02-09 15:49:21 -06:00
Jacob Levine
3abc50cf7a js dom terms aren't very consistent 2019-02-09 15:44:46 -06:00
Jacob Levine
0f68468f14 fix style inconsistencies 2019-02-09 15:42:16 -06:00
Jacob Levine
6d45200ca3 other style 2019-02-09 15:36:59 -06:00
Jacob Levine
80aee80548 other style 2019-02-09 15:30:27 -06:00
Jacob Levine
3d27f3c127 margins aren't for tables 2019-02-09 15:29:21 -06:00
Jacob Levine
9fd7966c55 other style updates 2019-02-09 15:27:17 -06:00
Jacob Levine
4529ee32e2 no but this ugly html hack should 2019-02-09 15:25:25 -06:00
Jacob Levine
3a5629f0ba does making everything auto fix it? 2019-02-09 15:19:14 -06:00
Jacob Levine
fe74aea4de maybe we can fix it in js 2019-02-09 15:12:17 -06:00
Jacob Levine
76ac58dbab maybe we can fix it in js 2019-02-09 15:10:24 -06:00
Jacob Levine
db0ddec2c6 overflow-x 2019-02-09 14:57:55 -06:00
Jacob Levine
c6980ff71d time to actually start making this look legit 2019-02-09 14:54:03 -06:00
Jacob Levine
a4840003f5 what was i thinking? 2019-02-09 14:46:59 -06:00
Jacob Levine
aad41e57a9 even more styling, if you can call it that 2019-02-09 14:43:14 -06:00
Jacob Levine
24a8500588 more styling, if you can call it that 2019-02-09 14:41:31 -06:00
Jacob Levine
63c69ecc14 styling, if you can call it that 2019-02-09 14:39:32 -06:00
Jacob Levine
1c775fca2c you can now actually see the profile update page 2019-02-09 14:34:01 -06:00
Jacob Levine
1073bc458a typo fix 2019-02-09 14:32:52 -06:00
Jacob Levine
f8dafe61f8 revamped profile page 2019-02-09 14:30:58 -06:00
Jacob Levine
c97e51d9bd even more bugfix 2019-02-09 14:01:32 -06:00
Jacob Levine
2e779a95d2 more bugfix 2019-02-09 14:00:50 -06:00
Jacob Levine
0c609064a6 bugfix 2019-02-09 13:59:23 -06:00
Jacob Levine
059509e018 revamped sign-in, now that we have working checks 2019-02-09 13:57:48 -06:00
Jacob Levine
2c9951d2c9 ok this should fix 2019-02-09 13:33:14 -06:00
Jacob Levine
290110274b even more of a last-ditch effort to make js not multithread everything 2019-02-09 13:32:06 -06:00
Jacob Levine
7d02c6373c even more of a last-ditch effort to make js not multithread everything 2019-02-09 13:04:12 -06:00
Jacob Levine
0b0d36d660 last-ditch effort to make js not multithread everything 2019-02-09 13:01:14 -06:00
Jacob Levine
807c66dd3a ok this should fix 2019-02-09 12:46:20 -06:00
Jacob Levine
f0c0d646b5 ok this should fix 2019-02-09 12:41:52 -06:00
Jacob Levine
390f3d9c4d rephrased check script. are you happy now, JS? 2019-02-09 12:31:25 -06:00
Jacob Levine
19a9995875 i apparently can't type 2019-02-09 12:17:47 -06:00
Jacob Levine
95eab24247 adding standalone profile page 2019-02-09 12:14:55 -06:00
Jacob Levine
3da5a0cbd7 adding timeout 2019-02-09 11:43:47 -06:00
Jacob Levine
447e3e12a3 apperently window loads too fast for firebase 2019-02-09 11:38:57 -06:00
Jacob Levine
5b922fc10b squashing bugs 2019-02-09 11:33:24 -06:00
Jacob Levine
e661af1add Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-02-09 11:29:03 -06:00
Jacob Levine
192d023325 testing signout logic 2019-02-09 11:27:46 -06:00
ltcptgeneral
6b91fe9819 fixed copy paste oppsie 2019-02-08 15:42:33 -06:00
Jacob Levine
82231cb04b styling fixes 2019-02-06 18:20:31 -06:00
Jacob Levine
39dc72add2 onload scripts 2019-02-06 18:19:18 -06:00
Jacob Levine
ac158bf0a9 bugfixes 2019-02-06 18:12:39 -06:00
Jacob Levine
7b2915f4f2 styling fixes 2019-02-06 18:09:47 -06:00
Jacob Levine
64354dbe19 Merge branch 'master' of https://github.com/titanscout2022/tr2022-strategy 2019-02-06 17:52:37 -06:00
Jacob Levine
901c8d25f8 added 3 other pages 2019-02-06 17:51:58 -06:00
ltcptgeneral
b346b01223 android app v 1.0.0.001 2019-02-06 17:43:38 -06:00
ltcptgeneral
73b419dfd6 android app v 1.0.0.000
finished android app
published source code
2019-02-06 17:06:25 -06:00
Jacob Levine
48f34f0472 revert some changes 2019-02-06 16:50:39 -06:00
Jacob Levine
e1769235f3 more styling 2019-02-06 16:45:56 -06:00
Jacob Levine
ac00138ca8 styling 2019-02-06 16:42:15 -06:00
Jacob Levine
28b5801bcc added sidebar 2019-02-06 16:21:41 -06:00
Jacob Levine
f2ed8ab04c sizing 2019-02-06 16:17:07 -06:00
Jacob Levine
781b4dc8b5 bugfix 2019-02-06 16:14:39 -06:00
Jacob Levine
19a236251a added sidebar 2019-02-06 16:08:28 -06:00
Jacob Levine
0d481b01df bugfix 2019-02-06 15:55:22 -06:00
jlevine18
5de2528d34 more bugfix 2019-02-06 15:37:27 -06:00
Jacob Levine
317ca72377 added info change functionality 2019-02-06 15:35:51 -06:00
Jacob Levine
c6e719240a bugfix 2019-02-06 15:25:15 -06:00
Jacob Levine
e554a1df99 reworked fix profile info 2019-02-06 15:22:09 -06:00
Jacob Levine
d9e7a1ed1e testing bugs 2019-02-06 15:04:31 -06:00
Jacob Levine
d968f10737 bugfix 2019-02-06 14:56:17 -06:00
Jacob Levine
dc80127dee bugfix 2019-02-06 14:51:31 -06:00
Jacob Levine
c591c84c75 added info change functionality 2019-02-06 14:46:41 -06:00
Jacob Levine
e290f5ae11 layout changes 2019-02-06 14:15:59 -06:00
Jacob Levine
b8d209b283 new fixes 2019-02-06 13:57:29 -06:00
Jacob Levine
f195b81974 added profile change functionality 2019-02-06 13:24:56 -06:00
ltcptgeneral
1293de346e analysis.py v 1.0.8.005, superscript.py v 1.0.1.000
changelog analysis.py:
- minor fixes
changelog superscript.py:
- added data reading from file
- added superstructure to code
2019-02-05 09:50:10 -06:00
ltcptgeneral
1b41c409cc created superscript.py, tbarequest.py v 1.0.1.000, edited repack_json.py
changelog tbarequest.py:
- fixed a simple error
2019-02-05 09:42:00 -06:00
ltcptgeneral
38d471113f Update .gitignore 2019-02-05 09:02:04 -06:00
ltcptgeneral
b31beb25be oof^2 2019-02-04 12:33:25 -06:00
ltcptgeneral
e3db22d262 Delete temp.txt 2019-02-04 10:50:43 -06:00
ltcptgeneral
e2d2e6687f oof 2019-02-04 10:50:07 -06:00
ltcptgeneral
b64ec05134 removed app bc jacob did fancy shit 2019-01-26 10:45:19 -06:00
ltcptgeneral
511e627899 Update workspace.xml 2019-01-26 10:40:35 -06:00
ltcptgeneral
ab0b2b9992 initialized app project 2019-01-26 10:32:00 -06:00
ltcptgeneral
0021eed5fb analysis.py - v 1.0.8.004
changelog
- removed a few unused dependencies
2019-01-26 10:11:54 -06:00
ltcptgeneral
8c35d8a3f6 yeeted histo_analysis_old() due to depreciation 2019-01-23 09:09:14 -06:00
ltcptgeneral
e5420844de yeeted useless comments 2019-01-22 22:42:37 -06:00
jlevine18
0fca5f58db ApiKey now changed and hidden-don't be stupid jake 2019-01-06 13:41:15 -06:00
Jacob Levine
07880038b0 folder move fix 2019-01-06 13:18:01 -06:00
Jacob Levine
d2d5d4c04e push all website files 2019-01-06 13:14:45 -06:00
jlevine18
d7301e26c3 Add files via upload 2019-01-06 13:02:35 -06:00
jlevine18
752b981e37 Rename website/functions/acorn to website/functions/node_modules/.bin/acorn 2019-01-06 12:57:46 -06:00
jlevine18
5f2db375f3 Add files via upload 2019-01-06 12:56:49 -06:00
jlevine18
cac1b4fba4 Add files via upload 2019-01-06 12:55:50 -06:00
jlevine18
236c4d02b6 Create index.js 2019-01-06 12:55:31 -06:00
jlevine18
8645eace5b Delete style.css 2019-01-06 12:54:41 -06:00
jlevine18
47cce54b3b Delete scripts.js 2019-01-06 12:54:35 -06:00
jlevine18
5a0fe35f86 Delete index.html 2019-01-06 12:54:29 -06:00
jlevine18
d3f8b474d0 upload website 2019-01-06 12:54:08 -06:00
ltcptgeneral
27145495e7 Update analysis.docs 2018-12-30 16:49:44 -06:00
ltcptgeneral
1a8da3fdd5 analysis.py - v 1.0.8.003
changelog:
- added p_value function
2018-12-29 16:28:41 -06:00
ltcptgeneral
444bfb5945 stuff 2018-12-26 17:08:04 -06:00
ltcptgeneral
cfee240e9c pineapple 2018-12-26 12:37:49 -06:00
ltcptgeneral
83a1dd5ced orange 2018-12-26 12:22:31 -06:00
ltcptgeneral
bf75e804cc bannana 2018-12-26 12:22:17 -06:00
ltcptgeneral
83e4f60a37 apple 2018-12-26 12:21:44 -06:00
bearacuda13
ae11605013 Add files via upload 2018-12-26 12:18:40 -06:00
bearacuda13
08b336cf15 Add files via upload 2018-12-26 12:14:05 -06:00
ltcptgeneral
eeeec86be6 temp 2018-12-26 12:06:42 -06:00
ltcptgeneral
9dbd897323 analysis.py - v 1.0.8.002
changelog:
- updated __all__ correctly to contain changes made in v 1.0.8.000 and v 1.0.8.001
2018-12-24 16:44:03 -06:00
jlevine18
71337c0fd5 fix other stupid mistakes 2018-12-24 14:50:04 -06:00
jlevine18
4e015180b6 fix syntax error 2018-12-24 14:42:54 -06:00
jlevine18
70591bc581 started ML module 2018-12-24 09:32:25 -06:00
jlevine18
288f97a3fd visualizer.py is now visualization.py 2018-12-21 11:10:18 -06:00
jlevine18
1126373bf2 Update tbarequest.py 2018-12-21 11:07:21 -06:00
jlevine18
fd0d43d29c added TBA requests module 2018-12-21 11:04:46 -06:00
jlevine18
cc6a7697cf Update visualization.py 2018-12-20 22:01:28 -06:00
jlevine18
2140ea8f77 started visualization module 2018-12-20 21:45:05 -06:00
ltcptgeneral
9dd5cc76f6 analysis.py - v 1.0.8.001
changelog:
- refactors
- bugfixes
2018-12-20 20:49:09 -06:00
ltcptgeneral
7b1e54eed8 refactor analysis.py 2018-12-20 15:05:43 -06:00
ltcptgeneral
188a7bbf1f Update data.csv 2018-12-20 12:21:26 -06:00
ltcptgeneral
b7a0c5286a analysis.py - v 1.0.8.000
changelog:
- depreciated histo_analysis_old
- depreciated debug
- altered basic_analysis to take array data instead of filepath
- refactor
- optimization
2018-12-20 12:21:22 -06:00
ltcptgeneral
32a2d6321c no change 2018-12-13 08:57:19 -06:00
ltcptgeneral
d2f6961693 Update analysis.cpython-37.pyc 2018-12-07 16:56:09 -06:00
ltcptgeneral
107076ac35 added visualizer.py, reorganized folders 2018-12-05 11:31:38 -06:00
ltcptgeneral
0b73460446 Update analysis.cpython-37.pyc 2018-12-04 19:05:13 -06:00
ltcptgeneral
39d5522650 Update analysis_docs.txt 2018-12-01 22:34:30 -06:00
ltcptgeneral
68d6c87589 Update analysis_docs.txt 2018-12-01 22:13:19 -06:00
ltcptgeneral
222c536631 created docs 2018-12-01 21:02:53 -06:00
ltcptgeneral
bd3f695938 a 2018-12-01 14:51:50 -06:00
ltcptgeneral
1b1a7c45bf Update analysis.cpython-37.pyc 2018-12-01 14:51:38 -06:00
ltcptgeneral
8a58fe28fa analysis.py - v 1.0.7.002
changelog:
	- bug fixes
2018-11-29 12:58:53 -06:00
ltcptgeneral
9c67e6f927 analysis.py - v 1.0.7.001
changelog:
	- bug fixes
2018-11-29 12:36:25 -06:00
ltcptgeneral
8d2dedc5a2 update analysis.py 2018-11-29 09:33:18 -06:00
ltcptgeneral
944cb31883 Update analysis.py
a quick update
2018-11-29 09:32:27 -06:00
ltcptgeneral
b38ffe1f08 Update requirements.txt 2018-11-29 09:31:55 -06:00
ltcptgeneral
19f89d3f35 updated stuff 2018-11-29 09:27:08 -06:00
ltcptgeneral
504fc92feb Create analysis.cpython-37.pyc 2018-11-29 09:04:17 -06:00
ltcptgeneral
5eb5e5ed8e removes stuff 2018-11-29 09:00:47 -06:00
ltcptgeneral
88be42de45 removed generate_data.py 2018-11-29 08:53:41 -06:00
ltcptgeneral
704a2d5808 analysis.py - v 1.0.7.000
changelog:
        - added tanh_regression (logistical regression)
	- bug fixes
2018-11-28 16:35:47 -06:00
ltcptgeneral
e915fe538e analysis.py - v 1.0.6.005
changelog:
        - added z_normalize function to normalize dataset
	- bug fixes
2018-11-28 14:29:32 -06:00
ltcptgeneral
5295bef18b Update analysis.cpython-37.pyc 2018-11-28 11:35:21 -06:00
ltcptgeneral
ae69eb7a40 Merge branch 'master' of https://github.com/ltcptgeneral/tr2022-strategy 2018-11-28 11:12:53 -06:00
jlevine18
46f434b815 started website 2018-11-28 11:10:38 -06:00
jlevine18
cce111bd6a Create index.html 2018-11-28 11:06:04 -06:00
43 changed files with 4209 additions and 1083 deletions

.devcontainer/Dockerfile Normal file

@@ -0,0 +1,2 @@
FROM python
WORKDIR ~/


@@ -0,0 +1,27 @@
{
	"name": "TRA Analysis Development Environment",
	"build": {
		"dockerfile": "Dockerfile",
	},
	"settings": {
		"terminal.integrated.shell.linux": "/bin/bash",
		"python.pythonPath": "/usr/local/bin/python",
		"python.linting.enabled": true,
		"python.linting.pylintEnabled": true,
		"python.formatting.autopep8Path": "/usr/local/py-utils/bin/autopep8",
		"python.formatting.blackPath": "/usr/local/py-utils/bin/black",
		"python.formatting.yapfPath": "/usr/local/py-utils/bin/yapf",
		"python.linting.banditPath": "/usr/local/py-utils/bin/bandit",
		"python.linting.flake8Path": "/usr/local/py-utils/bin/flake8",
		"python.linting.mypyPath": "/usr/local/py-utils/bin/mypy",
		"python.linting.pycodestylePath": "/usr/local/py-utils/bin/pycodestyle",
		"python.linting.pydocstylePath": "/usr/local/py-utils/bin/pydocstyle",
		"python.linting.pylintPath": "/usr/local/py-utils/bin/pylint",
		"python.testing.pytestPath": "/usr/local/py-utils/bin/pytest"
	},
	"extensions": [
		"mhutchie.git-graph",
		"donjayamanne.jupyter",
	],
	"postCreateCommand": "pip install -r analysis-master/requirements.txt"
}

.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,38 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.


@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

.github/workflows/publish-analysis.yml vendored Normal file

@@ -0,0 +1,36 @@
# This workflow will upload a Python Package using Twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries

name: Upload Analysis Package

on:
  release:
    types: [published, edited]

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      working-directory: ./analysis-master/
    steps:
    - uses: actions/checkout@v2
    - name: Set up Python
      uses: actions/setup-python@v2
      with:
        python-version: '3.x'
    - name: Install dependencies
      working-directory: ${{env.working-directory}}
      run: |
        python -m pip install --upgrade pip
        pip install setuptools wheel twine
    - name: Build package
      working-directory: ${{env.working-directory}}
      run: |
        python setup.py sdist bdist_wheel
    - name: Publish package to PyPI
      uses: pypa/gh-action-pypi-publish@master
      with:
        user: __token__
        password: ${{ secrets.PYPI_TOKEN }}
        packages_dir: analysis-master/dist/

.github/workflows/ut-analysis.yml vendored Normal file

@@ -0,0 +1,38 @@
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: Analysis Unit Tests

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.7, 3.8]
    env:
      working-directory: ./analysis-master/
    steps:
    - uses: actions/checkout@v2
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v2
      with:
        python-version: ${{ matrix.python-version }}
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install pytest
        if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      working-directory: ${{ env.working-directory }}
    - name: Test with pytest
      run: |
        pytest
      working-directory: ${{ env.working-directory }}

.github/workflows/ut-superscript.yml vendored Normal file

@@ -0,0 +1,38 @@
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: Superscript Unit Tests

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.7, 3.8]
    env:
      working-directory: ./data-analysis/
    steps:
    - uses: actions/checkout@v2
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v2
      with:
        python-version: ${{ matrix.python-version }}
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install pytest
        if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      working-directory: ${{ env.working-directory }}
    - name: Test with pytest
      run: |
        pytest
      working-directory: ${{ env.working-directory }}

.gitignore vendored

@@ -1,2 +1,39 @@
benchmark_data.csv
data-analysis/keys/keytemp.json
data-analysis/__pycache__/analysis.cpython-37.pyc
apps/android/source/app/src/main/res/drawable-v24/uuh.png
apps/android/source/app/src/main/java/com/example/titanscouting/tits.java
data-analysis/analysis.cp37-win_amd64.pyd
data-analysis/analysis/analysis.c
data-analysis/analysis/analysis.cp37-win_amd64.pyd
data-analysis/analysis/build/temp.win-amd64-3.7/Release/analysis.cp37-win_amd64.exp
data-analysis/analysis/build/temp.win-amd64-3.7/Release/analysis.cp37-win_amd64.lib
data-analysis/analysis/build/temp.win-amd64-3.7/Release/analysis.obj
data-analysis/test.ipynb
data-analysis/.ipynb_checkpoints/test-checkpoint.ipynb
.vscode/settings.json
.vscode
data-analysis/arthur_pull.ipynb
data-analysis/keys.txt
data-analysis/check_for_new_matches.ipynb
data-analysis/test.ipynb
data-analysis/visualize_pit.ipynb
data-analysis/config/keys.config
analysis-master/analysis/__pycache__/
analysis-master/analysis/metrics/__pycache__/
data-analysis/__pycache__/
analysis-master/analysis.egg-info/
analysis-master/build/
analysis-master/metrics/
data-analysis/config-pop.json
data-analysis/__pycache__/
analysis-master/__pycache__/
analysis-master/.pytest_cache/
data-analysis/.pytest_cache/
analysis-master/tra_analysis.egg-info
analysis-master/tra_analysis/__pycache__
analysis-master/tra_analysis/.ipynb_checkpoints
.pytest_cache
analysis-master/tra_analysis/metrics/__pycache__
analysis-master/dist

CONTRIBUTING.md Normal file

@@ -0,0 +1,66 @@
# Contributing Guidelines
This project accepts contributions via GitHub pull requests.
This document outlines some of the
conventions on development workflow, commit message formatting, contact points,
and other resources to make it easier to get your contribution accepted.
## Certificate of Origin
By contributing to this project, you agree to the [Developer Certificate of
Origin (DCO)](https://developercertificate.org/). This document was created by the Linux Kernel community and is a
simple statement that you, as a contributor, have the legal right to make the
contribution.
In order to show your agreement with the DCO, you should include the following line at the end of the commit message: `Signed-off-by: John Doe <john.doe@example.com>`, using your real name.
This can be done easily using the [`-s`](https://github.com/git/git/blob/b2c150d3aa82f6583b9aadfecc5f8fa1c74aca09/Documentation/git-commit.txt#L154-L161) flag on `git commit`.
Visual Studio Code also has a setting to enable sign-off on commits.
If you find you have pushed a few commits without `Signed-off-by`, you can still add it afterwards. Read this for help: [fix-DCO.md](https://github.com/src-d/guide/blob/master/developer-community/fix-DCO.md).
## Support Channels
The official support channel, for both users and contributors, is:
- GitHub issues: each repository has its own list of issues.
*Before opening a new issue or submitting a new pull request, it's helpful to
search the project - it's likely that another user has already reported the
issue you're facing, or it's a known issue that we're already aware of.*
## How to Contribute
In general, please use conventional approaches to development and contribution such as:
* Create branches for additions, deletions, and/or side projects
* Do not commit to master!
* Use Pull Requests (PRs) to indicate that an addition is ready to merge.
PRs are the main and exclusive way to contribute code to this project.
In order for a PR to be accepted it needs to pass this list of requirements:
- The contribution must be correctly explained in natural language and must provide a minimal working example that reproduces it.
- All PRs must be written idiomatically:
- for Node: formatted according to [AirBnB standards](https://github.com/airbnb/javascript), and no warnings from `eslint` using the AirBnB style guide
- for other languages, similar constraints apply.
- They should in general include tests, and those shall pass.
- In any case, all the PRs have to pass the personal evaluation of at least one of the [maintainers](MAINTAINERS) of the project.
### Format of the commit message
Every commit message should describe what was changed, in what context, and, if applicable, the issue it relates to (mentioning a GitHub issue number when applicable).
For small changes, or changes to a testing or personal branch, the commit message should be a short changelog entry.
For larger changes, or for changes on branches that are more widely used, the commit message should reference an entry in some other changelog system. It is encouraged to use some sort of versioning system to log changes. Example commit message:
```
superscript.py v 2.0.5.006
```
The format can be described more formally as follows:
```
<package> v <version number>
```
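A commit message following this convention can be checked mechanically. The regular expression below is a hypothetical sketch of the `<package> v <version number>` format described above, not part of the project's tooling:

```python
import re

# Hypothetical checker for the "<package> v <version number>" convention;
# the exact pattern is an assumption, not something the project ships.
COMMIT_FORMAT = re.compile(r"^\S+ v \d+(\.\d+)*$")

def is_conventional(message):
    """Return True if the first line of a commit message follows the convention."""
    return COMMIT_FORMAT.match(message.splitlines()[0]) is not None

print(is_conventional("superscript.py v 2.0.5.006"))  # True
print(is_conventional("fixed some stuff"))            # False
```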

LICENSE Normal file

@@ -0,0 +1,29 @@
BSD 3-Clause License
Copyright (c) 2020, Titan Robotics FRC 2022
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

MAINTAINERS Normal file

@@ -0,0 +1,3 @@
Arthur Lu <learthurgo@gmail.com>
Jacob Levine <jacoblevine18@gmail.com>
Dev Singh <dev@devksingh.com>

README.md Normal file

@@ -0,0 +1,5 @@
# red-alliance-analysis
Titan Robotics 2022 Strategy Team repository for data analysis tools. Included are the backend data analysis engine packaged as a Python package, associated binaries for the analysis package, and premade scripts that can be pulled directly from this repository and integrated with other Red Alliance applications to quickly deploy FRC scouting tools.
# Installing
`pip install tra_analysis`

analysis-master/build.sh Normal file

@@ -0,0 +1 @@
python setup.py sdist bdist_wheel || python3 setup.py sdist bdist_wheel


@@ -0,0 +1,5 @@
FROM python
WORKDIR ~/
COPY ./ ./
RUN pip install -r requirements.txt
CMD ["bash"]


@@ -0,0 +1,3 @@
cd ..
docker build -t tra-analysis-amd64-dev -f docker/Dockerfile .
docker run -it tra-analysis-amd64-dev


@@ -0,0 +1,6 @@
numba
numpy
scipy
scikit-learn
six
matplotlib

analysis-master/setup.py Normal file

@@ -0,0 +1,26 @@
import setuptools

requirements = []
with open("requirements.txt", 'r') as file:
	for line in file:
		# strip the trailing newline so install_requires entries are clean
		requirements.append(line.strip())

setuptools.setup(
	name="tra_analysis",
	version="2.0.2",
	author="The Titan Scouting Team",
	author_email="titanscout2022@gmail.com",
	description="Analysis package developed by Titan Scouting for The Red Alliance",
	long_description="",
	long_description_content_type="text/markdown",
	url="https://github.com/titanscout2022/tr2022-strategy",
	packages=setuptools.find_packages(),
	install_requires=requirements,
	license = "GNU General Public License v3.0",
	classifiers=[
		"Programming Language :: Python :: 3",
		"Operating System :: OS Independent",
	],
	python_requires='>=3.6',
)


@@ -0,0 +1,31 @@
from tra_analysis import analysis as an
from tra_analysis import metrics

def test_():
	test_data_linear = [1, 3, 6, 7, 9]
	y_data_ccu = [1, 3, 7, 14, 21]
	y_data_ccd = [1, 5, 7, 8.5, 8.66]
	test_data_scrambled = [-32, 34, 19, 72, -65, -11, -43, 6, 85, -17, -98, -26, 12, 20, 9, -92, -40, 98, -78, 17, -20, 49, 93, -27, -24, -66, 40, 84, 1, -64, -68, -25, -42, -46, -76, 43, -3, 30, -14, -34, -55, -13, 41, -30, 0, -61, 48, 23, 60, 87, 80, 77, 53, 73, 79, 24, -52, 82, 8, -44, 65, 47, -77, 94, 7, 37, -79, 36, -94, 91, 59, 10, 97, -38, -67, 83, 54, 31, -95, -63, 16, -45, 21, -12, 66, -48, -18, -96, -90, -21, -83, -74, 39, 64, 69, -97, 13, 55, 27, -39]
	test_data_sorted = [-98, -97, -96, -95, -94, -92, -90, -83, -79, -78, -77, -76, -74, -68, -67, -66, -65, -64, -63, -61, -55, -52, -48, -46, -45, -44, -43, -42, -40, -39, -38, -34, -32, -30, -27, -26, -25, -24, -21, -20, -18, -17, -14, -13, -12, -11, -3, 0, 1, 6, 7, 8, 9, 10, 12, 13, 16, 17, 19, 20, 21, 23, 24, 27, 30, 31, 34, 36, 37, 39, 40, 41, 43, 47, 48, 49, 53, 54, 55, 59, 60, 64, 65, 66, 69, 72, 73, 77, 79, 80, 82, 83, 84, 85, 87, 91, 93, 94, 97, 98]
	assert an.basic_stats(test_data_linear) == {"mean": 5.2, "median": 6.0, "standard-deviation": 2.85657137141714, "variance": 8.16, "minimum": 1.0, "maximum": 9.0}
	assert an.z_score(3.2, 6, 1.5) == -1.8666666666666665
	assert an.z_normalize([test_data_linear], 1).tolist() == [[0.07537783614444091, 0.22613350843332272, 0.45226701686664544, 0.5276448530110863, 0.6784005252999682]]
	assert all(isinstance(item, str) for item in an.regression(test_data_linear, y_data_ccu, ["lin"])) == True
	#assert all(isinstance(item, str) for item in an.regression(test_data_linear, y_data_ccd, ["log"])) == True
	#assert all(isinstance(item, str) for item in an.regression(test_data_linear, y_data_ccu, ["exp"])) == True
	#assert all(isinstance(item, str) for item in an.regression(test_data_linear, y_data_ccu, ["ply"])) == True
	#assert all(isinstance(item, str) for item in an.regression(test_data_linear, y_data_ccd, ["sig"])) == True
	assert an.Metric().elo(1500, 1500, [1, 0], 400, 24) == 1512.0
	assert an.Metric().glicko2(1500, 250, 0.06, [1500, 1400], [250, 240], [1, 0]) == (1478.864307445517, 195.99122679202452, 0.05999602937563585)
	#assert an.Metric().trueskill([[(25, 8.33), (24, 8.25), (32, 7.5)], [(25, 8.33), (25, 8.33), (21, 6.5)]], [1, 0]) == [(metrics.trueskill.Rating(mu=21.346, sigma=7.875), metrics.trueskill.Rating(mu=20.415, sigma=7.808), metrics.trueskill.Rating(mu=29.037, sigma=7.170)), (metrics.trueskill.Rating(mu=28.654, sigma=7.875), metrics.trueskill.Rating(mu=28.654, sigma=7.875), metrics.trueskill.Rating(mu=23.225, sigma=6.287))]
	assert all(a == b for a, b in zip(an.Sort().quicksort(test_data_scrambled), test_data_sorted))
	assert all(a == b for a, b in zip(an.Sort().mergesort(test_data_scrambled), test_data_sorted))
	assert all(a == b for a, b in zip(an.Sort().introsort(test_data_scrambled), test_data_sorted))
	assert all(a == b for a, b in zip(an.Sort().heapsort(test_data_scrambled), test_data_sorted))
	assert all(a == b for a, b in zip(an.Sort().insertionsort(test_data_scrambled), test_data_sorted))
	assert all(a == b for a, b in zip(an.Sort().timsort(test_data_scrambled), test_data_sorted))
	assert all(a == b for a, b in zip(an.Sort().selectionsort(test_data_scrambled), test_data_sorted))
	assert all(a == b for a, b in zip(an.Sort().shellsort(test_data_scrambled), test_data_sorted))
	assert all(a == b for a, b in zip(an.Sort().bubblesort(test_data_scrambled), test_data_sorted))
	assert all(a == b for a, b in zip(an.Sort().cyclesort(test_data_scrambled), test_data_sorted))
	assert all(a == b for a, b in zip(an.Sort().cocktailsort(test_data_scrambled), test_data_sorted))

File diff suppressed because it is too large


@@ -0,0 +1,162 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"from decimal import Decimal\n",
"from functools import reduce"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"def add(string):\n",
" while(len(re.findall(\"[+]{1}[-]?\", string)) != 0):\n",
" string = re.sub(\"[-]?\\d+[.]?\\d*[+]{1}[-]?\\d+[.]?\\d*\", str(\"%f\" % reduce((lambda x, y: x + y), [Decimal(i) for i in re.split(\"[+]{1}\", re.search(\"[-]?\\d+[.]?\\d*[+]{1}[-]?\\d+[.]?\\d*\", string).group())])), string, 1)\n",
" return string"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"def sub(string):\n",
" while(len(re.findall(\"\\d+[.]?\\d*[-]{1,2}\\d+[.]?\\d*\", string)) != 0):\n",
" g = re.search(\"\\d+[.]?\\d*[-]{1,2}\\d+[.]?\\d*\", string).group()\n",
" if(re.search(\"[-]{1,2}\", g).group() == \"-\"):\n",
" r = re.sub(\"[-]{1}\", \"+-\", g, 1)\n",
" string = re.sub(g, r, string, 1)\n",
" elif(re.search(\"[-]{1,2}\", g).group() == \"--\"):\n",
" r = re.sub(\"[-]{2}\", \"+\", g, 1)\n",
" string = re.sub(g, r, string, 1)\n",
" else:\n",
" pass\n",
" return string"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"def mul(string):\n",
" while(len(re.findall(\"[*]{1}[-]?\", string)) != 0):\n",
" string = re.sub(\"[-]?\\d+[.]?\\d*[*]{1}[-]?\\d+[.]?\\d*\", str(\"%f\" % reduce((lambda x, y: x * y), [Decimal(i) for i in re.split(\"[*]{1}\", re.search(\"[-]?\\d+[.]?\\d*[*]{1}[-]?\\d+[.]?\\d*\", string).group())])), string, 1)\n",
" return string"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"def div(string):\n",
" while(len(re.findall(\"[/]{1}[-]?\", string)) != 0):\n",
" string = re.sub(\"[-]?\\d+[.]?\\d*[/]{1}[-]?\\d+[.]?\\d*\", str(\"%f\" % reduce((lambda x, y: x / y), [Decimal(i) for i in re.split(\"[/]{1}\", re.search(\"[-]?\\d+[.]?\\d*[/]{1}[-]?\\d+[.]?\\d*\", string).group())])), string, 1)\n",
" return string"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"def exp(string):\n",
" while(len(re.findall(\"[\\^]{1}[-]?\", string)) != 0):\n",
" string = re.sub(\"[-]?\\d+[.]?\\d*[\\^]{1}[-]?\\d+[.]?\\d*\", str(\"%f\" % reduce((lambda x, y: x ** y), [Decimal(i) for i in re.split(\"[\\^]{1}\", re.search(\"[-]?\\d+[.]?\\d*[\\^]{1}[-]?\\d+[.]?\\d*\", string).group())])), string, 1)\n",
" return string"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"def evaluate(string):\n",
" string = exp(string)\n",
" string = div(string)\n",
" string = mul(string)\n",
" string = sub(string)\n",
" print(string)\n",
" string = add(string)\n",
" return string"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"output_type": "error",
"ename": "SyntaxError",
"evalue": "unexpected EOF while parsing (<ipython-input-13-f9fb4aededd9>, line 1)",
"traceback": [
"\u001b[1;36m File \u001b[1;32m\"<ipython-input-13-f9fb4aededd9>\"\u001b[1;36m, line \u001b[1;32m1\u001b[0m\n\u001b[1;33m def parentheses(string):\u001b[0m\n\u001b[1;37m ^\u001b[0m\n\u001b[1;31mSyntaxError\u001b[0m\u001b[1;31m:\u001b[0m unexpected EOF while parsing\n"
]
}
],
"source": [
"def parentheses(string):"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "-158456325028528675187087900672.000000+0.8\n"
},
{
"output_type": "execute_result",
"data": {
"text/plain": "'-158456325028528675187087900672.000000'"
},
"metadata": {},
"execution_count": 22
}
],
"source": [
"string = \"8^32*4/-2+0.8\"\n",
"evaluate(string)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6-final"
}
},
"nbformat": 4,
"nbformat_minor": 4
}


@@ -0,0 +1,7 @@
import numpy as np

def calculate(starting_score, opposing_score, observed, N, K):
	expected = 1/(1+10**((np.array(opposing_score) - starting_score)/N))
	return starting_score + K*(np.sum(observed) - np.sum(expected))
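As a standalone sanity check, the Elo update can be exercised with the same parameters the unit tests elsewhere in this diff assert against (the function is restated here so the snippet runs on its own):

```python
import numpy as np

# Restatement of the calculate() function above, for a self-contained check
def calculate(starting_score, opposing_score, observed, N, K):
    # Expected score against each opponent, per the logistic Elo curve
    expected = 1 / (1 + 10 ** ((np.array(opposing_score) - starting_score) / N))
    return starting_score + K * (np.sum(observed) - np.sum(expected))

# A 1500-rated player wins one game and loses one against a 1500 opponent;
# expected score is 0.5 per game, so the net gain is K * (1 - 0.5) = 12
new_rating = calculate(1500, 1500, [1, 0], 400, 24)
print(new_rating)  # 1512.0
```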


@@ -0,0 +1,99 @@
import math

class Glicko2:
	_tau = 0.5

	def getRating(self):
		return (self.__rating * 173.7178) + 1500

	def setRating(self, rating):
		self.__rating = (rating - 1500) / 173.7178

	rating = property(getRating, setRating)

	def getRd(self):
		return self.__rd * 173.7178

	def setRd(self, rd):
		self.__rd = rd / 173.7178

	rd = property(getRd, setRd)

	def __init__(self, rating = 1500, rd = 350, vol = 0.06):
		self.setRating(rating)
		self.setRd(rd)
		self.vol = vol

	def _preRatingRD(self):
		self.__rd = math.sqrt(math.pow(self.__rd, 2) + math.pow(self.vol, 2))

	def update_player(self, rating_list, RD_list, outcome_list):
		rating_list = [(x - 1500) / 173.7178 for x in rating_list]
		RD_list = [x / 173.7178 for x in RD_list]
		v = self._v(rating_list, RD_list)
		self.vol = self._newVol(rating_list, RD_list, outcome_list, v)
		self._preRatingRD()
		self.__rd = 1 / math.sqrt((1 / math.pow(self.__rd, 2)) + (1 / v))
		tempSum = 0
		for i in range(len(rating_list)):
			tempSum += self._g(RD_list[i]) * \
				(outcome_list[i] - self._E(rating_list[i], RD_list[i]))
		self.__rating += math.pow(self.__rd, 2) * tempSum

	def _newVol(self, rating_list, RD_list, outcome_list, v):
		i = 0
		delta = self._delta(rating_list, RD_list, outcome_list, v)
		a = math.log(math.pow(self.vol, 2))
		tau = self._tau
		x0 = a
		x1 = 0
		while x0 != x1:
			# New iteration, so x(i) becomes x(i-1)
			x0 = x1
			d = math.pow(self.__rating, 2) + v + math.exp(x0)
			h1 = -(x0 - a) / math.pow(tau, 2) - 0.5 * math.exp(x0) \
				/ d + 0.5 * math.exp(x0) * math.pow(delta / d, 2)
			h2 = -1 / math.pow(tau, 2) - 0.5 * math.exp(x0) * \
				(math.pow(self.__rating, 2) + v) \
				/ math.pow(d, 2) + 0.5 * math.pow(delta, 2) * math.exp(x0) \
				* (math.pow(self.__rating, 2) + v - math.exp(x0)) / math.pow(d, 3)
			x1 = x0 - (h1 / h2)
		return math.exp(x1 / 2)

	def _delta(self, rating_list, RD_list, outcome_list, v):
		tempSum = 0
		for i in range(len(rating_list)):
			tempSum += self._g(RD_list[i]) * (outcome_list[i] - self._E(rating_list[i], RD_list[i]))
		return v * tempSum

	def _v(self, rating_list, RD_list):
		tempSum = 0
		for i in range(len(rating_list)):
			tempE = self._E(rating_list[i], RD_list[i])
			tempSum += math.pow(self._g(RD_list[i]), 2) * tempE * (1 - tempE)
		return 1 / tempSum

	def _E(self, p2rating, p2RD):
		return 1 / (1 + math.exp(-1 * self._g(p2RD) * \
			(self.__rating - p2rating)))

	def _g(self, RD):
		return 1 / math.sqrt(1 + 3 * math.pow(RD, 2) / math.pow(math.pi, 2))

	def did_not_compete(self):
		self._preRatingRD()
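The class works on Glicko-2's internal scale, converting ratings via `(rating - 1500) / 173.7178`. Its two helper curves, `g` (which shrinks the influence of opponents with large rating deviation) and `E` (the expected score), can be restated standalone as a sanity check; this snippet is illustrative and not part of the package:

```python
import math

# The helper curves from the Glicko2 class above, restated standalone
def g(rd):
    # Dampens the impact of an opponent whose rating deviation is large
    return 1 / math.sqrt(1 + 3 * rd ** 2 / math.pi ** 2)

def E(mu, mu_j, rd_j):
    # Expected score against an opponent, on the internal Glicko-2 scale
    return 1 / (1 + math.exp(-g(rd_j) * (mu - mu_j)))

# Two 1500-rated players both sit at mu = 0 on the internal scale,
# so each expects a score of exactly 0.5 regardless of the opponent's RD:
print(E(0.0, 0.0, 250 / 173.7178))  # 0.5
```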


@@ -0,0 +1,907 @@
from __future__ import absolute_import
from itertools import chain
import math

from six import iteritems
from six.moves import map, range, zip
from six import iterkeys
import copy

try:
	from numbers import Number
except ImportError:
	Number = (int, long, float, complex)

inf = float('inf')

class Gaussian(object):
	#: Precision, the inverse of the variance.
	pi = 0
	#: Precision adjusted mean, the precision multiplied by the mean.
	tau = 0

	def __init__(self, mu=None, sigma=None, pi=0, tau=0):
		if mu is not None:
			if sigma is None:
				raise TypeError('sigma argument is needed')
			elif sigma == 0:
				raise ValueError('sigma**2 should be greater than 0')
			pi = sigma ** -2
			tau = pi * mu
		self.pi = pi
		self.tau = tau

	@property
	def mu(self):
		return self.pi and self.tau / self.pi

	@property
	def sigma(self):
		return math.sqrt(1 / self.pi) if self.pi else inf

	def __mul__(self, other):
		pi, tau = self.pi + other.pi, self.tau + other.tau
		return Gaussian(pi=pi, tau=tau)

	def __truediv__(self, other):
		pi, tau = self.pi - other.pi, self.tau - other.tau
		return Gaussian(pi=pi, tau=tau)

	__div__ = __truediv__  # for Python 2

	def __eq__(self, other):
		return self.pi == other.pi and self.tau == other.tau

	def __lt__(self, other):
		return self.mu < other.mu

	def __le__(self, other):
		return self.mu <= other.mu

	def __gt__(self, other):
		return self.mu > other.mu

	def __ge__(self, other):
		return self.mu >= other.mu

	def __repr__(self):
		return 'N(mu={:.3f}, sigma={:.3f})'.format(self.mu, self.sigma)

	def _repr_latex_(self):
		latex = r'\mathcal{{ N }}( {:.3f}, {:.3f}^2 )'.format(self.mu, self.sigma)
		return '$%s$' % latex
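Storing Gaussians in precision form (`pi = 1/sigma**2`, `tau = pi * mu`) makes Bayesian message passing cheap: multiplying two Gaussians just adds their `pi` and `tau` component-wise, and dividing subtracts them. A minimal restatement of the class above illustrates this; the snippet is an illustrative sketch, not part of the package:

```python
import math

# Minimal precision-form Gaussian, restating the class above
class Gaussian:
    def __init__(self, mu=None, sigma=None, pi=0, tau=0):
        if mu is not None:
            pi = sigma ** -2   # precision = inverse variance
            tau = pi * mu      # precision-adjusted mean
        self.pi, self.tau = pi, tau

    @property
    def mu(self):
        return self.pi and self.tau / self.pi

    @property
    def sigma(self):
        return math.sqrt(1 / self.pi) if self.pi else float('inf')

    def __mul__(self, other):
        # Product of Gaussian densities: precisions and taus simply add
        return Gaussian(pi=self.pi + other.pi, tau=self.tau + other.tau)

# Multiplying two identical beliefs keeps the mean but halves the variance:
prior = Gaussian(mu=25.0, sigma=8.0)
posterior = prior * prior
print(posterior.mu)     # 25.0
print(posterior.sigma)  # ~5.657, i.e. 8 / sqrt(2)
```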
class Matrix(list):
	def __init__(self, src, height=None, width=None):
		if callable(src):
			f, src = src, {}
			size = [height, width]
			if not height:
				def set_height(height):
					size[0] = height
				size[0] = set_height
			if not width:
				def set_width(width):
					size[1] = width
				size[1] = set_width
			try:
				for (r, c), val in f(*size):
					src[r, c] = val
			except TypeError:
				raise TypeError('A callable src must return an iterable '
								'which generates a tuple containing '
								'coordinate and value')
			height, width = tuple(size)
			if height is None or width is None:
				raise TypeError('A callable src must call set_height and '
								'set_width if the size is non-deterministic')
		if isinstance(src, list):
			is_number = lambda x: isinstance(x, Number)
			unique_col_sizes = set(map(len, src))
			everything_are_number = filter(is_number, sum(src, []))
			if len(unique_col_sizes) != 1 or not everything_are_number:
				raise ValueError('src must be a rectangular array of numbers')
			two_dimensional_array = src
		elif isinstance(src, dict):
			if not height or not width:
				w = h = 0
				for r, c in iterkeys(src):
					if not height:
						h = max(h, r + 1)
					if not width:
						w = max(w, c + 1)
				if not height:
					height = h
				if not width:
					width = w
			two_dimensional_array = []
			for r in range(height):
				row = []
				two_dimensional_array.append(row)
				for c in range(width):
					row.append(src.get((r, c), 0))
		else:
			raise TypeError('src must be a list or dict or callable')
		super(Matrix, self).__init__(two_dimensional_array)

	@property
	def height(self):
		return len(self)

	@property
	def width(self):
		return len(self[0])

	def transpose(self):
		height, width = self.height, self.width
		src = {}
		for c in range(width):
			for r in range(height):
				src[c, r] = self[r][c]
		return type(self)(src, height=width, width=height)

	def minor(self, row_n, col_n):
		height, width = self.height, self.width
		if not (0 <= row_n < height):
			raise ValueError('row_n should be between 0 and %d' % height)
		elif not (0 <= col_n < width):
			raise ValueError('col_n should be between 0 and %d' % width)
		two_dimensional_array = []
		for r in range(height):
			if r == row_n:
				continue
			row = []
			two_dimensional_array.append(row)
			for c in range(width):
				if c == col_n:
					continue
				row.append(self[r][c])
		return type(self)(two_dimensional_array)

	def determinant(self):
		height, width = self.height, self.width
		if height != width:
			raise ValueError('Only square matrix can calculate a determinant')
		tmp, rv = copy.deepcopy(self), 1.
		for c in range(width - 1, 0, -1):
			pivot, r = max((abs(tmp[r][c]), r) for r in range(c + 1))
			pivot = tmp[r][c]
			if not pivot:
				return 0.
			tmp[r], tmp[c] = tmp[c], tmp[r]
			if r != c:
				rv = -rv
			rv *= pivot
			fact = -1. / pivot
			for r in range(c):
				f = fact * tmp[r][c]
				for x in range(c):
					tmp[r][x] += f * tmp[c][x]
		return rv * tmp[0][0]

	def adjugate(self):
		height, width = self.height, self.width
		if height != width:
			raise ValueError('Only square matrix can be adjugated')
		if height == 2:
			a, b = self[0][0], self[0][1]
			c, d = self[1][0], self[1][1]
			return type(self)([[d, -b], [-c, a]])
		src = {}
		for r in range(height):
			for c in range(width):
				sign = -1 if (r + c) % 2 else 1
				src[r, c] = self.minor(r, c).determinant() * sign
		return type(self)(src, height, width)

	def inverse(self):
		if self.height == self.width == 1:
			return type(self)([[1. / self[0][0]]])
		return (1. / self.determinant()) * self.adjugate()

	def __add__(self, other):
		height, width = self.height, self.width
		if (height, width) != (other.height, other.width):
			raise ValueError('Must be same size')
		src = {}
		for r in range(height):
			for c in range(width):
				src[r, c] = self[r][c] + other[r][c]
		return type(self)(src, height, width)

	def __mul__(self, other):
		if self.width != other.height:
			raise ValueError('Bad size')
		height, width = self.height, other.width
		src = {}
		for r in range(height):
			for c in range(width):
				src[r, c] = sum(self[r][x] * other[x][c]
								for x in range(self.width))
		return type(self)(src, height, width)

	def __rmul__(self, other):
		if not isinstance(other, Number):
			raise TypeError('The operand should be a number')
		height, width = self.height, self.width
		src = {}
		for r in range(height):
			for c in range(width):
				src[r, c] = other * self[r][c]
		return type(self)(src, height, width)

	def __repr__(self):
		return '{}({})'.format(type(self).__name__, super(Matrix, self).__repr__())

	def _repr_latex_(self):
		rows = [' && '.join(['%.3f' % cell for cell in row]) for row in self]
		latex = r'\begin{matrix} %s \end{matrix}' % r'\\'.join(rows)
		return '$%s$' % latex
def _gen_erfcinv(erfc, math=math):
def erfcinv(y):
"""The inverse function of erfc."""
if y >= 2:
return -100.
elif y <= 0:
return 100.
zero_point = y < 1
if not zero_point:
y = 2 - y
t = math.sqrt(-2 * math.log(y / 2.))
x = -0.70711 * \
((2.30753 + t * 0.27061) / (1. + t * (0.99229 + t * 0.04481)) - t)
for i in range(2):
err = erfc(x) - y
x += err / (1.12837916709551257 * math.exp(-(x ** 2)) - x * err)
return x if zero_point else -x
return erfcinv
def _gen_ppf(erfc, math=math):
erfcinv = _gen_erfcinv(erfc, math)
def ppf(x, mu=0, sigma=1):
return mu - sigma * math.sqrt(2) * erfcinv(2 * x)
return ppf
def erfc(x):
z = abs(x)
t = 1. / (1. + z / 2.)
r = t * math.exp(-z * z - 1.26551223 + t * (1.00002368 + t * (
0.37409196 + t * (0.09678418 + t * (-0.18628806 + t * (
0.27886807 + t * (-1.13520398 + t * (1.48851587 + t * (
-0.82215223 + t * 0.17087277
)))
)))
)))
return 2. - r if x < 0 else r
def cdf(x, mu=0, sigma=1):
return 0.5 * erfc(-(x - mu) / (sigma * math.sqrt(2)))
def pdf(x, mu=0, sigma=1):
# note: abs(sigma) belongs in the denominator of the normalization constant
return (1 / (math.sqrt(2 * math.pi) * abs(sigma)) *
math.exp(-(((x - mu) / abs(sigma)) ** 2 / 2)))
ppf = _gen_ppf(erfc)
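The `erfc` above is the classic Numerical Recipes rational approximation. Since Python's `math` module ships an exact `math.erfc`, the approximation can be sanity-checked against it (a standalone sketch, independent of the surrounding module):

```python
import math

def erfc_approx(x):
    # Numerical Recipes complementary error function approximation,
    # accurate to roughly 1.2e-7 everywhere on the real line
    z = abs(x)
    t = 1. / (1. + z / 2.)
    r = t * math.exp(-z * z - 1.26551223 + t * (1.00002368 + t * (
        0.37409196 + t * (0.09678418 + t * (-0.18628806 + t * (
            0.27886807 + t * (-1.13520398 + t * (1.48851587 + t * (
                -0.82215223 + t * 0.17087277)))))))))
    return 2. - r if x < 0 else r

# compare against the stdlib's exact implementation
for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    assert abs(erfc_approx(x) - math.erfc(x)) < 1e-6
```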
def choose_backend(backend):
if backend is None: # fallback
return cdf, pdf, ppf
elif backend == 'mpmath':
try:
import mpmath
except ImportError:
raise ImportError('Install "mpmath" to use this backend')
return mpmath.ncdf, mpmath.npdf, _gen_ppf(mpmath.erfc, math=mpmath)
elif backend == 'scipy':
try:
from scipy.stats import norm
except ImportError:
raise ImportError('Install "scipy" to use this backend')
return norm.cdf, norm.pdf, norm.ppf
raise ValueError('%r backend is not defined' % backend)
def available_backends():
backends = [None]
for backend in ['mpmath', 'scipy']:
try:
__import__(backend)
except ImportError:
continue
backends.append(backend)
return backends
class Node(object):
pass
class Variable(Node, Gaussian):
def __init__(self):
self.messages = {}
super(Variable, self).__init__()
def set(self, val):
delta = self.delta(val)
self.pi, self.tau = val.pi, val.tau
return delta
def delta(self, other):
pi_delta = abs(self.pi - other.pi)
if pi_delta == inf:
return 0.
return max(abs(self.tau - other.tau), math.sqrt(pi_delta))
def update_message(self, factor, pi=0, tau=0, message=None):
message = message or Gaussian(pi=pi, tau=tau)
old_message, self[factor] = self[factor], message
return self.set(self / old_message * message)
def update_value(self, factor, pi=0, tau=0, value=None):
value = value or Gaussian(pi=pi, tau=tau)
old_message = self[factor]
self[factor] = value * old_message / self
return self.set(value)
def __getitem__(self, factor):
return self.messages[factor]
def __setitem__(self, factor, message):
self.messages[factor] = message
def __repr__(self):
args = (type(self).__name__, super(Variable, self).__repr__(),
len(self.messages), '' if len(self.messages) == 1 else 's')
return '<%s %s with %d connection%s>' % args
class Factor(Node):
def __init__(self, variables):
self.vars = variables
for var in variables:
var[self] = Gaussian()
def down(self):
return 0
def up(self):
return 0
@property
def var(self):
assert len(self.vars) == 1
return self.vars[0]
def __repr__(self):
args = (type(self).__name__, len(self.vars),
'' if len(self.vars) == 1 else 's')
return '<%s with %d connection%s>' % args
class PriorFactor(Factor):
def __init__(self, var, val, dynamic=0):
super(PriorFactor, self).__init__([var])
self.val = val
self.dynamic = dynamic
def down(self):
sigma = math.sqrt(self.val.sigma ** 2 + self.dynamic ** 2)
value = Gaussian(self.val.mu, sigma)
return self.var.update_value(self, value=value)
class LikelihoodFactor(Factor):
def __init__(self, mean_var, value_var, variance):
super(LikelihoodFactor, self).__init__([mean_var, value_var])
self.mean = mean_var
self.value = value_var
self.variance = variance
def calc_a(self, var):
return 1. / (1. + self.variance * var.pi)
def down(self):
# update value.
msg = self.mean / self.mean[self]
a = self.calc_a(msg)
return self.value.update_message(self, a * msg.pi, a * msg.tau)
def up(self):
# update mean.
msg = self.value / self.value[self]
a = self.calc_a(msg)
return self.mean.update_message(self, a * msg.pi, a * msg.tau)
class SumFactor(Factor):
def __init__(self, sum_var, term_vars, coeffs):
super(SumFactor, self).__init__([sum_var] + term_vars)
self.sum = sum_var
self.terms = term_vars
self.coeffs = coeffs
def down(self):
vals = self.terms
msgs = [var[self] for var in vals]
return self.update(self.sum, vals, msgs, self.coeffs)
def up(self, index=0):
coeff = self.coeffs[index]
coeffs = []
for x, c in enumerate(self.coeffs):
try:
if x == index:
coeffs.append(1. / coeff)
else:
coeffs.append(-c / coeff)
except ZeroDivisionError:
coeffs.append(0.)
vals = self.terms[:]
vals[index] = self.sum
msgs = [var[self] for var in vals]
return self.update(self.terms[index], vals, msgs, coeffs)
def update(self, var, vals, msgs, coeffs):
pi_inv = 0
mu = 0
for val, msg, coeff in zip(vals, msgs, coeffs):
div = val / msg
mu += coeff * div.mu
if pi_inv == inf:
continue
try:
# numpy.float64 handles floating-point error by different way.
# For example, it can just warn RuntimeWarning on n/0 problem
# instead of throwing ZeroDivisionError. So div.pi, the
# denominator has to be a built-in float.
pi_inv += coeff ** 2 / float(div.pi)
except ZeroDivisionError:
pi_inv = inf
pi = 1. / pi_inv
tau = pi * mu
return var.update_message(self, pi, tau)
class TruncateFactor(Factor):
def __init__(self, var, v_func, w_func, draw_margin):
super(TruncateFactor, self).__init__([var])
self.v_func = v_func
self.w_func = w_func
self.draw_margin = draw_margin
def up(self):
val = self.var
msg = self.var[self]
div = val / msg
sqrt_pi = math.sqrt(div.pi)
args = (div.tau / sqrt_pi, self.draw_margin * sqrt_pi)
v = self.v_func(*args)
w = self.w_func(*args)
denom = (1. - w)
pi, tau = div.pi / denom, (div.tau + sqrt_pi * v) / denom
return val.update_value(self, pi, tau)
#: Default initial mean of ratings.
MU = 25.
#: Default initial standard deviation of ratings.
SIGMA = MU / 3
#: Default distance that guarantees about 76% chance of winning.
BETA = SIGMA / 2
#: Default dynamic factor.
TAU = SIGMA / 100
#: Default draw probability of the game.
DRAW_PROBABILITY = .10
#: A basis to check reliability of the result.
DELTA = 0.0001
def calc_draw_probability(draw_margin, size, env=None):
if env is None:
env = global_env()
return 2 * env.cdf(draw_margin / (math.sqrt(size) * env.beta)) - 1
def calc_draw_margin(draw_probability, size, env=None):
if env is None:
env = global_env()
return env.ppf((draw_probability + 1) / 2.) * math.sqrt(size) * env.beta
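`calc_draw_probability` and `calc_draw_margin` are inverses of each other. With the stdlib's `statistics.NormalDist` standing in for the environment's `cdf`/`ppf` (an assumption — the real code takes them from the chosen backend), the round trip can be checked:

```python
import math
from statistics import NormalDist

BETA = 25. / 6  # default skill-class width (SIGMA / 2 with MU = 25)
_norm = NormalDist()

def calc_draw_margin(draw_probability, size, beta=BETA):
    # margin such that a performance difference inside it counts as a draw
    return _norm.inv_cdf((draw_probability + 1) / 2.) * math.sqrt(size) * beta

def calc_draw_probability(draw_margin, size, beta=BETA):
    return 2 * _norm.cdf(draw_margin / (math.sqrt(size) * beta)) - 1

margin = calc_draw_margin(0.10, 2)  # 10% draws, two players
assert abs(calc_draw_probability(margin, 2) - 0.10) < 1e-9
```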
def _team_sizes(rating_groups):
team_sizes = [0]
for group in rating_groups:
team_sizes.append(len(group) + team_sizes[-1])
del team_sizes[0]
return team_sizes
def _floating_point_error(env):
if env.backend == 'mpmath':
msg = 'Set "mpmath.mp.dps" to higher'
else:
msg = 'Cannot calculate correctly, set backend to "mpmath"'
return FloatingPointError(msg)
class Rating(Gaussian):
def __init__(self, mu=None, sigma=None):
if isinstance(mu, tuple):
mu, sigma = mu
elif isinstance(mu, Gaussian):
mu, sigma = mu.mu, mu.sigma
if mu is None:
mu = global_env().mu
if sigma is None:
sigma = global_env().sigma
super(Rating, self).__init__(mu, sigma)
def __int__(self):
return int(self.mu)
def __long__(self):
return long(self.mu)
def __float__(self):
return float(self.mu)
def __iter__(self):
return iter((self.mu, self.sigma))
def __repr__(self):
c = type(self)
args = ('.'.join([c.__module__, c.__name__]), self.mu, self.sigma)
return '%s(mu=%.3f, sigma=%.3f)' % args
class TrueSkill(object):
def __init__(self, mu=MU, sigma=SIGMA, beta=BETA, tau=TAU,
draw_probability=DRAW_PROBABILITY, backend=None):
self.mu = mu
self.sigma = sigma
self.beta = beta
self.tau = tau
self.draw_probability = draw_probability
self.backend = backend
if isinstance(backend, tuple):
self.cdf, self.pdf, self.ppf = backend
else:
self.cdf, self.pdf, self.ppf = choose_backend(backend)
def create_rating(self, mu=None, sigma=None):
if mu is None:
mu = self.mu
if sigma is None:
sigma = self.sigma
return Rating(mu, sigma)
def v_win(self, diff, draw_margin):
x = diff - draw_margin
denom = self.cdf(x)
return (self.pdf(x) / denom) if denom else -x
def v_draw(self, diff, draw_margin):
abs_diff = abs(diff)
a, b = draw_margin - abs_diff, -draw_margin - abs_diff
denom = self.cdf(a) - self.cdf(b)
numer = self.pdf(b) - self.pdf(a)
return ((numer / denom) if denom else a) * (-1 if diff < 0 else +1)
def w_win(self, diff, draw_margin):
x = diff - draw_margin
v = self.v_win(diff, draw_margin)
w = v * (v + x)
if 0 < w < 1:
return w
raise _floating_point_error(self)
def w_draw(self, diff, draw_margin):
abs_diff = abs(diff)
a, b = draw_margin - abs_diff, -draw_margin - abs_diff
denom = self.cdf(a) - self.cdf(b)
if not denom:
raise _floating_point_error(self)
v = self.v_draw(abs_diff, draw_margin)
return (v ** 2) + (a * self.pdf(a) - b * self.pdf(b)) / denom
def validate_rating_groups(self, rating_groups):
# check group sizes
if len(rating_groups) < 2:
raise ValueError('Need multiple rating groups')
elif not all(rating_groups):
raise ValueError('Each group must contain multiple ratings')
# check group types
group_types = set(map(type, rating_groups))
if len(group_types) != 1:
raise TypeError('All groups should be same type')
elif group_types.pop() is Rating:
raise TypeError('Rating cannot be a rating group')
# normalize rating_groups
if isinstance(rating_groups[0], dict):
dict_rating_groups = rating_groups
rating_groups = []
keys = []
for dict_rating_group in dict_rating_groups:
rating_group, key_group = [], []
for key, rating in iteritems(dict_rating_group):
rating_group.append(rating)
key_group.append(key)
rating_groups.append(tuple(rating_group))
keys.append(tuple(key_group))
else:
rating_groups = list(rating_groups)
keys = None
return rating_groups, keys
def validate_weights(self, weights, rating_groups, keys=None):
if weights is None:
weights = [(1,) * len(g) for g in rating_groups]
elif isinstance(weights, dict):
weights_dict, weights = weights, []
for x, group in enumerate(rating_groups):
w = []
weights.append(w)
for y, rating in enumerate(group):
if keys is not None:
y = keys[x][y]
w.append(weights_dict.get((x, y), 1))
return weights
def factor_graph_builders(self, rating_groups, ranks, weights):
flatten_ratings = sum(map(tuple, rating_groups), ())
flatten_weights = sum(map(tuple, weights), ())
size = len(flatten_ratings)
group_size = len(rating_groups)
# create variables
rating_vars = [Variable() for x in range(size)]
perf_vars = [Variable() for x in range(size)]
team_perf_vars = [Variable() for x in range(group_size)]
team_diff_vars = [Variable() for x in range(group_size - 1)]
team_sizes = _team_sizes(rating_groups)
# layer builders
def build_rating_layer():
for rating_var, rating in zip(rating_vars, flatten_ratings):
yield PriorFactor(rating_var, rating, self.tau)
def build_perf_layer():
for rating_var, perf_var in zip(rating_vars, perf_vars):
yield LikelihoodFactor(rating_var, perf_var, self.beta ** 2)
def build_team_perf_layer():
for team, team_perf_var in enumerate(team_perf_vars):
if team > 0:
start = team_sizes[team - 1]
else:
start = 0
end = team_sizes[team]
child_perf_vars = perf_vars[start:end]
coeffs = flatten_weights[start:end]
yield SumFactor(team_perf_var, child_perf_vars, coeffs)
def build_team_diff_layer():
for team, team_diff_var in enumerate(team_diff_vars):
yield SumFactor(team_diff_var,
team_perf_vars[team:team + 2], [+1, -1])
def build_trunc_layer():
for x, team_diff_var in enumerate(team_diff_vars):
if callable(self.draw_probability):
# dynamic draw probability
team_perf1, team_perf2 = team_perf_vars[x:x + 2]
args = (Rating(team_perf1), Rating(team_perf2), self)
draw_probability = self.draw_probability(*args)
else:
# static draw probability
draw_probability = self.draw_probability
size = sum(map(len, rating_groups[x:x + 2]))
draw_margin = calc_draw_margin(draw_probability, size, self)
if ranks[x] == ranks[x + 1]: # is a tie?
v_func, w_func = self.v_draw, self.w_draw
else:
v_func, w_func = self.v_win, self.w_win
yield TruncateFactor(team_diff_var,
v_func, w_func, draw_margin)
# build layers
return (build_rating_layer, build_perf_layer, build_team_perf_layer,
build_team_diff_layer, build_trunc_layer)
def run_schedule(self, build_rating_layer, build_perf_layer,
build_team_perf_layer, build_team_diff_layer,
build_trunc_layer, min_delta=DELTA):
if min_delta <= 0:
raise ValueError('min_delta must be greater than 0')
layers = []
def build(builders):
layers_built = [list(build()) for build in builders]
layers.extend(layers_built)
return layers_built
# gray arrows
layers_built = build([build_rating_layer,
build_perf_layer,
build_team_perf_layer])
rating_layer, perf_layer, team_perf_layer = layers_built
for f in chain(*layers_built):
f.down()
# arrow #1, #2, #3
team_diff_layer, trunc_layer = build([build_team_diff_layer,
build_trunc_layer])
team_diff_len = len(team_diff_layer)
for x in range(10):
if team_diff_len == 1:
# only two teams
team_diff_layer[0].down()
delta = trunc_layer[0].up()
else:
# multiple teams
delta = 0
for x in range(team_diff_len - 1):
team_diff_layer[x].down()
delta = max(delta, trunc_layer[x].up())
team_diff_layer[x].up(1) # up to right variable
for x in range(team_diff_len - 1, 0, -1):
team_diff_layer[x].down()
delta = max(delta, trunc_layer[x].up())
team_diff_layer[x].up(0) # up to left variable
# repeat until the update is small enough
if delta <= min_delta:
break
# up both ends
team_diff_layer[0].up(0)
team_diff_layer[team_diff_len - 1].up(1)
# up the remainder of the black arrows
for f in team_perf_layer:
for x in range(len(f.vars) - 1):
f.up(x)
for f in perf_layer:
f.up()
return layers
def rate(self, rating_groups, ranks=None, weights=None, min_delta=DELTA):
rating_groups, keys = self.validate_rating_groups(rating_groups)
weights = self.validate_weights(weights, rating_groups, keys)
group_size = len(rating_groups)
if ranks is None:
ranks = range(group_size)
elif len(ranks) != group_size:
raise ValueError('Wrong ranks')
# sort rating groups by rank
by_rank = lambda x: x[1][1]
sorting = sorted(enumerate(zip(rating_groups, ranks, weights)),
key=by_rank)
sorted_rating_groups, sorted_ranks, sorted_weights = [], [], []
for x, (g, r, w) in sorting:
sorted_rating_groups.append(g)
sorted_ranks.append(r)
# make weights to be greater than 0
sorted_weights.append(max(min_delta, w_) for w_ in w)
# build factor graph
args = (sorted_rating_groups, sorted_ranks, sorted_weights)
builders = self.factor_graph_builders(*args)
args = builders + (min_delta,)
layers = self.run_schedule(*args)
# make result
rating_layer, team_sizes = layers[0], _team_sizes(sorted_rating_groups)
transformed_groups = []
for start, end in zip([0] + team_sizes[:-1], team_sizes):
group = []
for f in rating_layer[start:end]:
group.append(Rating(float(f.var.mu), float(f.var.sigma)))
transformed_groups.append(tuple(group))
by_hint = lambda x: x[0]
unsorting = sorted(zip((x for x, __ in sorting), transformed_groups),
key=by_hint)
if keys is None:
return [g for x, g in unsorting]
# restore the structure with input dictionary keys
return [dict(zip(keys[x], g)) for x, g in unsorting]
def quality(self, rating_groups, weights=None):
rating_groups, keys = self.validate_rating_groups(rating_groups)
weights = self.validate_weights(weights, rating_groups, keys)
flatten_ratings = sum(map(tuple, rating_groups), ())
flatten_weights = sum(map(tuple, weights), ())
length = len(flatten_ratings)
# a vector of all of the skill means
mean_matrix = Matrix([[r.mu] for r in flatten_ratings])
# a matrix whose diagonal values are the variances (sigma ** 2) of each
# of the players.
def variance_matrix(height, width):
variances = (r.sigma ** 2 for r in flatten_ratings)
for x, variance in enumerate(variances):
yield (x, x), variance
variance_matrix = Matrix(variance_matrix, length, length)
# the player-team assignment and comparison matrix
def rotated_a_matrix(set_height, set_width):
t = 0
for r, (cur, _next) in enumerate(zip(rating_groups[:-1],
rating_groups[1:])):
for x in range(t, t + len(cur)):
yield (r, x), flatten_weights[x]
t += 1
x += 1
for x in range(x, x + len(_next)):
yield (r, x), -flatten_weights[x]
set_height(r + 1)
set_width(x + 1)
rotated_a_matrix = Matrix(rotated_a_matrix)
a_matrix = rotated_a_matrix.transpose()
# match quality further derivation
_ata = (self.beta ** 2) * rotated_a_matrix * a_matrix
_atsa = rotated_a_matrix * variance_matrix * a_matrix
start = mean_matrix.transpose() * a_matrix
middle = _ata + _atsa
end = rotated_a_matrix * mean_matrix
# make result
e_arg = (-0.5 * start * middle.inverse() * end).determinant()
s_arg = _ata.determinant() / middle.determinant()
return math.exp(e_arg) * math.sqrt(s_arg)
def expose(self, rating):
k = self.mu / self.sigma
return rating.mu - k * rating.sigma
def make_as_global(self):
return setup(env=self)
def __repr__(self):
c = type(self)
if callable(self.draw_probability):
f = self.draw_probability
draw_probability = '.'.join([f.__module__, f.__name__])
else:
draw_probability = '%.1f%%' % (self.draw_probability * 100)
if self.backend is None:
backend = ''
elif isinstance(self.backend, tuple):
backend = ', backend=...'
else:
backend = ', backend=%r' % self.backend
args = ('.'.join([c.__module__, c.__name__]), self.mu, self.sigma,
self.beta, self.tau, draw_probability, backend)
return ('%s(mu=%.3f, sigma=%.3f, beta=%.3f, tau=%.3f, '
'draw_probability=%s%s)' % args)
def rate_1vs1(rating1, rating2, drawn=False, min_delta=DELTA, env=None):
if env is None:
env = global_env()
ranks = [0, 0 if drawn else 1]
teams = env.rate([(rating1,), (rating2,)], ranks, min_delta=min_delta)
return teams[0][0], teams[1][0]
def quality_1vs1(rating1, rating2, env=None):
if env is None:
env = global_env()
return env.quality([(rating1,), (rating2,)])
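For intuition on the 1-vs-1 helpers: the performance difference between two players is Gaussian with mean `mu1 - mu2` and variance `2 * beta**2 + sigma1**2 + sigma2**2`, so the win probability is one `cdf` evaluation away. A stdlib-only sketch (`win_probability` is a hypothetical helper, not part of the module above):

```python
import math
from statistics import NormalDist

BETA = 25. / 6  # default performance variability

def win_probability(mu1, sigma1, mu2, sigma2, beta=BETA):
    # P(player 1's performance exceeds player 2's)
    denom = math.sqrt(2 * beta ** 2 + sigma1 ** 2 + sigma2 ** 2)
    return NormalDist().cdf((mu1 - mu2) / denom)

# evenly matched players are a coin flip
assert abs(win_probability(25., 25. / 3, 25., 25. / 3) - 0.5) < 1e-12
# a stronger, more certain player is favored
assert win_probability(30., 1., 25., 25. / 3) > 0.5
```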
def global_env():
try:
global_env.__trueskill__
except AttributeError:
# setup the default environment
setup()
return global_env.__trueskill__
def setup(mu=MU, sigma=SIGMA, beta=BETA, tau=TAU,
draw_probability=DRAW_PROBABILITY, backend=None, env=None):
if env is None:
env = TrueSkill(mu, sigma, beta, tau, draw_probability, backend)
global_env.__trueskill__ = env
return env
def rate(rating_groups, ranks=None, weights=None, min_delta=DELTA):
return global_env().rate(rating_groups, ranks, weights, min_delta)
def quality(rating_groups, weights=None):
return global_env().quality(rating_groups, weights)
def expose(rating):
return global_env().expose(rating)


@@ -0,0 +1,220 @@
# Titan Robotics Team 2022: CUDA-based Regressions Module
# Written by Arthur Lu & Jacob Levine
# Notes:
# this module has been automatically integrated into analysis.py, and should be callable as a class from the package
# this module is cuda-optimized and vectorized (except for one small part)
# setup:
__version__ = "1.0.0.004"
# changelog should be viewed using print(analysis.regression.__changelog__)
__changelog__ = """
1.0.0.004:
- bug fixes
- fixed changelog
1.0.0.003:
- bug fixes
1.0.0.002:
-Added more parameters to log, exponential, polynomial
-Added SigmoidalRegKernelArthur, because Arthur apparently needs
to train the scaling and shifting of sigmoids
1.0.0.001:
-initial release, with linear, log, exponential, polynomial, and sigmoid kernels
-already vectorized (except for polynomial generation) and CUDA-optimized
"""
__author__ = (
"Jacob Levine <jlevine@imsa.edu>",
"Arthur Lu <learthurgo@gmail.com>"
)
__all__ = [
'factorial',
'take_all_pwrs',
'num_poly_terms',
'set_device',
'LinearRegKernel',
'SigmoidalRegKernel',
'LogRegKernel',
'PolyRegKernel',
'ExpRegKernel',
'SigmoidalRegKernelArthur',
'SGDTrain',
'CustomTrain'
]
import torch
global device
device = "cuda:0" if torch.cuda.is_available() else "cpu"
#todo: document completely
def set_device(new_device):
global device
device = new_device
class LinearRegKernel():
parameters= []
weights=None
bias=None
def __init__(self, num_vars):
self.weights=torch.rand(num_vars, requires_grad=True, device=device)
self.bias=torch.rand(1, requires_grad=True, device=device)
self.parameters=[self.weights,self.bias]
def forward(self,mtx):
long_bias=self.bias.repeat([1,mtx.size()[1]])
return torch.matmul(self.weights,mtx)+long_bias
class SigmoidalRegKernel():
parameters= []
weights=None
bias=None
sigmoid=torch.nn.Sigmoid()
def __init__(self, num_vars):
self.weights=torch.rand(num_vars, requires_grad=True, device=device)
self.bias=torch.rand(1, requires_grad=True, device=device)
self.parameters=[self.weights,self.bias]
def forward(self,mtx):
long_bias=self.bias.repeat([1,mtx.size()[1]])
return self.sigmoid(torch.matmul(self.weights,mtx)+long_bias)
class SigmoidalRegKernelArthur():
parameters= []
weights=None
in_bias=None
scal_mult=None
out_bias=None
sigmoid=torch.nn.Sigmoid()
def __init__(self, num_vars):
self.weights=torch.rand(num_vars, requires_grad=True, device=device)
self.in_bias=torch.rand(1, requires_grad=True, device=device)
self.scal_mult=torch.rand(1, requires_grad=True, device=device)
self.out_bias=torch.rand(1, requires_grad=True, device=device)
self.parameters=[self.weights,self.in_bias, self.scal_mult, self.out_bias]
def forward(self,mtx):
long_in_bias=self.in_bias.repeat([1,mtx.size()[1]])
long_out_bias=self.out_bias.repeat([1,mtx.size()[1]])
return (self.scal_mult*self.sigmoid(torch.matmul(self.weights,mtx)+long_in_bias))+long_out_bias
class LogRegKernel():
parameters= []
weights=None
in_bias=None
scal_mult=None
out_bias=None
def __init__(self, num_vars):
self.weights=torch.rand(num_vars, requires_grad=True, device=device)
self.in_bias=torch.rand(1, requires_grad=True, device=device)
self.scal_mult=torch.rand(1, requires_grad=True, device=device)
self.out_bias=torch.rand(1, requires_grad=True, device=device)
self.parameters=[self.weights,self.in_bias, self.scal_mult, self.out_bias]
def forward(self,mtx):
long_in_bias=self.in_bias.repeat([1,mtx.size()[1]])
long_out_bias=self.out_bias.repeat([1,mtx.size()[1]])
return (self.scal_mult*torch.log(torch.matmul(self.weights,mtx)+long_in_bias))+long_out_bias
class ExpRegKernel():
parameters= []
weights=None
in_bias=None
scal_mult=None
out_bias=None
def __init__(self, num_vars):
self.weights=torch.rand(num_vars, requires_grad=True, device=device)
self.in_bias=torch.rand(1, requires_grad=True, device=device)
self.scal_mult=torch.rand(1, requires_grad=True, device=device)
self.out_bias=torch.rand(1, requires_grad=True, device=device)
self.parameters=[self.weights,self.in_bias, self.scal_mult, self.out_bias]
def forward(self,mtx):
long_in_bias=self.in_bias.repeat([1,mtx.size()[1]])
long_out_bias=self.out_bias.repeat([1,mtx.size()[1]])
return (self.scal_mult*torch.exp(torch.matmul(self.weights,mtx)+long_in_bias))+long_out_bias
class PolyRegKernel():
parameters= []
weights=None
bias=None
power=None
def __init__(self, num_vars, power):
self.power=power
num_terms=self.num_poly_terms(num_vars, power)
self.weights=torch.rand(num_terms, requires_grad=True, device=device)
self.bias=torch.rand(1, requires_grad=True, device=device)
self.parameters=[self.weights,self.bias]
def num_poly_terms(self,num_vars, power):
if power == 0:
return 0
return int(self.factorial(num_vars+power-1) / self.factorial(power) / self.factorial(num_vars-1)) + self.num_poly_terms(num_vars, power-1)
def factorial(self,n):
if n==0:
return 1
else:
return n*self.factorial(n-1)
def take_all_pwrs(self, vec, pwr):
#todo: vectorize (kinda)
combins=torch.combinations(vec, r=pwr, with_replacement=True)
out=torch.ones(combins.size()[0]).to(device).to(torch.float)
for i in torch.t(combins).to(device).to(torch.float):
out *= i
if pwr == 1:
return out
else:
return torch.cat((out,self.take_all_pwrs(vec, pwr-1)))
def forward(self,mtx):
#TODO: Vectorize the last part
cols=[]
for i in torch.t(mtx):
cols.append(self.take_all_pwrs(i,self.power))
new_mtx=torch.t(torch.stack(cols))
long_bias=self.bias.repeat([1,mtx.size()[1]])
return torch.matmul(self.weights,new_mtx)+long_bias
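`num_poly_terms` above counts, via recursion, the monomials of degree 1 through `power` in `num_vars` variables; each degree-`d` layer is the stars-and-bars count C(num_vars + d - 1, d). A closed-form cross-check using the stdlib's `math.comb` (standalone sketch, not the class method):

```python
import math

def num_poly_terms(num_vars, power):
    # number of monomials of degree 1..power in num_vars variables
    return sum(math.comb(num_vars + d - 1, d) for d in range(1, power + 1))

assert num_poly_terms(2, 1) == 2  # x, y
assert num_poly_terms(2, 2) == 5  # x, y, x^2, xy, y^2
assert num_poly_terms(3, 2) == 9  # 3 linear + 6 quadratic terms
```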
def SGDTrain(kernel, data, ground, loss=torch.nn.MSELoss(), iterations=1000, learning_rate=.1, return_losses=False):
optim=torch.optim.SGD(kernel.parameters, lr=learning_rate)
data_cuda=data.to(device)
ground_cuda=ground.to(device)
if (return_losses):
losses=[]
for i in range(iterations):
with torch.set_grad_enabled(True):
optim.zero_grad()
pred=kernel.forward(data_cuda)
ls=loss(pred,ground_cuda)
losses.append(ls.item())
ls.backward()
optim.step()
return [kernel,losses]
else:
for i in range(iterations):
with torch.set_grad_enabled(True):
optim.zero_grad()
pred=kernel.forward(data_cuda)
ls=loss(pred,ground_cuda)
ls.backward()
optim.step()
return kernel
def CustomTrain(kernel, optim, data, ground, loss=torch.nn.MSELoss(), iterations=1000, return_losses=False):
data_cuda=data.to(device)
ground_cuda=ground.to(device)
if (return_losses):
losses=[]
for i in range(iterations):
with torch.set_grad_enabled(True):
optim.zero_grad()
pred=kernel.forward(data_cuda)
ls=loss(pred,ground_cuda)
losses.append(ls.item())
ls.backward()
optim.step()
return [kernel,losses]
else:
for i in range(iterations):
with torch.set_grad_enabled(True):
optim.zero_grad()
pred=kernel.forward(data_cuda)
ls=loss(pred,ground_cuda)
ls.backward()
optim.step()
return kernel
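`SGDTrain`'s loop is zero-grad, forward, loss, backward, step. The same loop can be sketched without torch by hand-deriving the gradient for 1-D least squares (a stdlib-only illustration of the training loop, not the module's API):

```python
# minimize mean squared error of y = w * x by plain gradient descent
data = [(1., 2.), (2., 4.), (3., 6.)]  # samples of y = 2x
w, lr = 0.0, 0.05
for _ in range(200):
    # d/dw mean((w*x - y)^2) = mean(2 * (w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad
assert abs(w - 2.0) < 1e-3
```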


@@ -0,0 +1,122 @@
# Titan Robotics Team 2022: ML Module
# Written by Arthur Lu & Jacob Levine
# Notes:
# this should be imported as a python module using 'import titanlearn'
# this should be included in the local directory or environment variable
# this module is optimized for multithreaded computing
# this module learns from its mistakes far faster than 2022's captains
# setup:
__version__ = "2.0.1.001"
#changelog should be viewed using print(analysis.__changelog__)
__changelog__ = """changelog:
2.0.1.001:
- removed matplotlib import
- removed graphloss()
2.0.1.000:
- added net, dataset, dataloader, and stdtrain template definitions
- added graphloss function
2.0.0.001:
- added clear functions
2.0.0.000:
- complete rewrite planned
- deprecated 1.0.0.xxx versions
- added simple training loop
1.0.0.xxx:
-added generation of ANNs, basic SGD training
"""
__author__ = (
"Arthur Lu <arthurlu@ttic.edu>,"
"Jacob Levine <jlevine@ttic.edu>,"
)
__all__ = [
'clear',
'net',
'dataset',
'dataloader',
'train',
'stdtrainer',
]
import torch
from os import system, name
import numpy as np
def clear():
if name == 'nt':
_ = system('cls')
else:
_ = system('clear')
class net(torch.nn.Module): #template for standard neural net
def __init__(self):
super(net, self).__init__()
def forward(self, input):
pass
class dataset(torch.utils.data.Dataset): #template for standard dataset
def __init__(self):
super(dataset, self).__init__()
def __getitem__(self, index):
pass
def __len__(self):
pass
def dataloader(dataset, batch_size, num_workers, shuffle = True):
return torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=shuffle, num_workers=num_workers)
def train(device, net, epochs, trainloader, optimizer, criterion): #expects standard dataloader, which returns (inputs, labels)
dataset_len = trainloader.dataset.__len__()
iter_count = 0
running_loss = 0
running_loss_list = []
for epoch in range(epochs): # loop over the dataset multiple times
for i, data in enumerate(trainloader, 0):
inputs = data[0].to(device)
labels = data[1].to(device)
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, labels.to(torch.float))
loss.backward()
optimizer.step()
# monitoring steps below
iter_count += 1
running_loss += loss.item()
running_loss_list.append(running_loss)
clear()
print("training on: " + str(device))
print("iteration: " + str(i) + "/" + str(int(dataset_len / trainloader.batch_size)) + " | " + "epoch: " + str(epoch) + "/" + str(epochs))
print("current batch loss: " + str(loss.item()))
print("running loss: " + str(running_loss / iter_count))
print("finished training")
return net, running_loss_list
def stdtrainer(net, criterion, optimizer, dataloader, epochs, batch_size):
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net = net.to(device)
criterion = criterion.to(device)
# note: optimizers have no .to(); they already reference the model's parameters on the device
trainloader = dataloader
return train(device, net, epochs, trainloader, optimizer, criterion)


@@ -0,0 +1,58 @@
# Titan Robotics Team 2022: Visualization Module
# Written by Arthur Lu & Jacob Levine
# Notes:
# this should be imported as a python module using 'import visualization'
# this should be included in the local directory or environment variable
# fancy
# setup:
__version__ = "1.0.0.001"
#changelog should be viewed using print(analysis.__changelog__)
__changelog__ = """changelog:
1.0.0.001:
- added graphhistogram function as a fragment of visualize_pit.py
1.0.0.000:
- created visualization.py
- added graphloss()
- added imports
"""
__author__ = (
"Arthur Lu <arthurlu@ttic.edu>,"
"Jacob Levine <jlevine@ttic.edu>,"
)
__all__ = [
'graphloss',
]
import matplotlib.pyplot as plt
import numpy as np
def graphloss(losses):
x = range(0, len(losses))
plt.plot(x, losses)
plt.show()
def graphhistogram(data, figsize, sharey = True): # expects a dictionary mapping each variable to its occurrences
fig, ax = plt.subplots(1, len(data), sharey=sharey, figsize=figsize)
i = 0
for variable in data:
ax[i].hist(data[variable])
ax[i].invert_xaxis()
ax[i].set_xlabel('Variable')
ax[i].set_ylabel('Frequency')
ax[i].set_title(variable)
plt.yticks(np.arange(len(data[variable])))
i+=1
plt.show()

File diff suppressed because it is too large

Binary file not shown.


@@ -1,16 +0,0 @@
import random
def generate(filename, x, y, low, high):
file = open(filename, "w")
for i in range (0, y, 1):
temp = ""
for j in range (0, x - 1, 1):
temp = str(random.uniform(low, high)) + "," + temp
temp = temp + str(random.uniform(low, high))
file.write(temp + "\n")


@@ -1,28 +0,0 @@
import os
import json
import ordereddict
import collections
import unicodecsv
content = open("realtimeDatabaseExport2018.json").read()
dict_content = json.loads(content)
list_of_new_data = []
for datak, datav in dict_content.iteritems():
for teamk, teamv in datav["teams"].iteritems():
for matchk, matchv in teamv.iteritems():
for detailk, detailv in matchv.iteritems():
new_data = collections.OrderedDict(detailv)
new_data["uuid"] = detailk
new_data["match"] = matchk
new_data["team"] = teamk
list_of_new_data.append(new_data)
allkey = reduce(lambda x, y: x.union(y.keys()), list_of_new_data, set())
output_file = open('realtimeDatabaseExport2018.csv', 'wb')
dict_writer = unicodecsv.DictWriter(csvfile=output_file, fieldnames=allkey)
dict_writer.writerow(dict((fn,fn) for fn in dict_writer.fieldnames))
dict_writer.writerows(list_of_new_data)
output_file.close()

data-analysis/config.json Normal file

@@ -0,0 +1,45 @@
{
"team": "",
"competition": "",
"key":{
"database":"",
"tba":""
},
"statistics":{
"match":{
"balls-blocked":["basic_stats","historical_analysis","regression_linear","regression_logarithmic","regression_exponential","regression_polynomial","regression_sigmoidal"],
"balls-collected":["basic_stats","historical_analysis","regression_linear","regression_logarithmic","regression_exponential","regression_polynomial","regression_sigmoidal"],
"balls-lower-teleop":["basic_stats","historical_analysis","regression_linear","regression_logarithmic","regression_exponential","regression_polynomial","regression_sigmoidal"],
"balls-lower-auto":["basic_stats","historical_analysis","regression_linear","regression_logarithmic","regression_exponential","regression_polynomial","regression_sigmoidal"],
"balls-started":["basic_stats","historical_analysis","regression_linear","regression_logarithmic","regression_exponential","regression_polynomial","regression_sigmoidal"],
"balls-upper-teleop":["basic_stats","historical_analysis","regression_linear","regression_logarithmic","regression_exponential","regression_polynomial","regression_sigmoidal"],
"balls-upper-auto":["basic_stats","historical_analysis","regression_linear","regression_logarithmic","regression_exponential","regression_polynomial","regression_sigmoidal"]
},
"metric":{
"elo":{
"score":1500,
"N":400,
"K":24
},
"gl2":{
"score":1500,
"rd":250,
"vol":0.06
},
"ts":{
"mu":25,
"sigma":8.33
}
},
"pit":{
"wheel-mechanism":true,
"low-balls":true,
"high-balls":true,
"wheel-success":true,
"strategic-focus":true,
"climb-mechanism":true,
"attitude":true
}
}
}
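The `statistics.match` section of the config maps each scouted variable to the list of tests to run on it, so a consumer can iterate it directly. A sketch using a trimmed-down copy of the config (the real file is loaded from disk):

```python
import json

config = json.loads("""
{
    "statistics": {
        "match": {
            "balls-blocked": ["basic_stats", "regression_linear"],
            "balls-collected": ["basic_stats"]
        }
    }
}
""")

# each variable carries its own list of statistical tests
for variable, tests in config["statistics"]["match"].items():
    assert isinstance(tests, list)
assert "regression_linear" in config["statistics"]["match"]["balls-blocked"]
```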

data-analysis/data.py Normal file

@@ -0,0 +1,129 @@
import requests
import pymongo
import pandas as pd
import time
def pull_new_tba_matches(apikey, competition, cutoff):
api_key = apikey
x = requests.get("https://www.thebluealliance.com/api/v3/event/" + competition + "/matches/simple", headers={"X-TBA-Auth-Key": api_key})
out = []
for i in x.json():
if i["actual_time"] is not None and i["actual_time"]-cutoff >= 0 and i["comp_level"] == "qm":
out.append({"match" : i['match_number'], "blue" : list(map(lambda x: int(x[3:]), i['alliances']['blue']['team_keys'])), "red" : list(map(lambda x: int(x[3:]), i['alliances']['red']['team_keys'])), "winner": i["winning_alliance"]})
return out
def get_team_match_data(apikey, competition, team_num):
client = pymongo.MongoClient(apikey)
db = client.data_scouting
mdata = db.matchdata
out = {}
for i in mdata.find({"competition" : competition, "team_scouted": team_num}):
out[i['match']] = i['data']
return pd.DataFrame(out)
def get_team_pit_data(apikey, competition, team_num):
	client = pymongo.MongoClient(apikey)
	db = client.data_scouting
	mdata = db.pitdata
	return mdata.find_one({"competition" : competition, "team_scouted": team_num})["data"]
def get_team_metrics_data(apikey, competition, team_num):
client = pymongo.MongoClient(apikey)
db = client.data_processing
mdata = db.team_metrics
return mdata.find_one({"competition" : competition, "team": team_num})
def get_match_data_formatted(apikey, competition):
client = pymongo.MongoClient(apikey)
db = client.data_scouting
mdata = db.teamlist
x=mdata.find_one({"competition":competition})
out = {}
for i in x:
try:
out[int(i)] = unkeyify_2l(get_team_match_data(apikey, competition, int(i)).transpose().to_dict())
except:
pass
return out
def get_metrics_data_formatted(apikey, competition):
client = pymongo.MongoClient(apikey)
db = client.data_scouting
mdata = db.teamlist
x=mdata.find_one({"competition":competition})
out = {}
for i in x:
try:
			out[int(i)] = get_team_metrics_data(apikey, competition, int(i))
except:
pass
return out
def get_pit_data_formatted(apikey, competition):
client = pymongo.MongoClient(apikey)
db = client.data_scouting
mdata = db.teamlist
x=mdata.find_one({"competition":competition})
out = {}
for i in x:
try:
out[int(i)] = get_team_pit_data(apikey, competition, int(i))
except:
pass
return out
def get_pit_variable_data(apikey, competition):
	client = pymongo.MongoClient(apikey)
	db = client.data_processing
	mdata = db.team_pit
	return mdata.find()
def get_pit_variable_formatted(apikey, competition):
temp = get_pit_variable_data(apikey, competition)
out = {}
for i in temp:
out[i["variable"]] = i["data"]
return out
def push_team_tests_data(apikey, competition, team_num, data, dbname = "data_processing", colname = "team_tests"):
client = pymongo.MongoClient(apikey)
db = client[dbname]
mdata = db[colname]
mdata.replace_one({"competition" : competition, "team": team_num}, {"_id": competition+str(team_num)+"am", "competition" : competition, "team" : team_num, "data" : data}, True)
def push_team_metrics_data(apikey, competition, team_num, data, dbname = "data_processing", colname = "team_metrics"):
client = pymongo.MongoClient(apikey)
db = client[dbname]
mdata = db[colname]
mdata.replace_one({"competition" : competition, "team": team_num}, {"_id": competition+str(team_num)+"am", "competition" : competition, "team" : team_num, "metrics" : data}, True)
def push_team_pit_data(apikey, competition, variable, data, dbname = "data_processing", colname = "team_pit"):
client = pymongo.MongoClient(apikey)
db = client[dbname]
mdata = db[colname]
mdata.replace_one({"competition" : competition, "variable": variable}, {"competition" : competition, "variable" : variable, "data" : data}, True)
def get_analysis_flags(apikey, flag):
client = pymongo.MongoClient(apikey)
db = client.data_processing
mdata = db.flags
return mdata.find_one({flag:{"$exists":True}})
def set_analysis_flags(apikey, flag, data):
client = pymongo.MongoClient(apikey)
db = client.data_processing
mdata = db.flags
return mdata.replace_one({flag:{"$exists":True}}, data, True)
def unkeyify_2l(layered_dict):
out = {}
for i in layered_dict.keys():
add = []
sortkey = []
for j in layered_dict[i].keys():
add.append([j,layered_dict[i][j]])
add.sort(key = lambda x: x[0])
out[i] = list(map(lambda x: x[1], add))
return out
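`unkeyify_2l` flattens the nested dict produced by `DataFrame.transpose().to_dict()` into per-team value lists, ordered by inner key. A small self-contained illustration of the same logic:

```python
def unkeyify_2l(layered_dict):
	# for each outer key, sort the inner dict by key and keep only the values
	out = {}
	for i in layered_dict:
		pairs = sorted(layered_dict[i].items(), key=lambda kv: kv[0])
		out[i] = [value for _, value in pairs]
	return out

print(unkeyify_2l({"team": {"match2": 5, "match1": 3}}))  # {'team': [3, 5]}
```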

@@ -0,0 +1,4 @@
requests
pymongo
pandas
dnspython

@@ -0,0 +1,407 @@
# Titan Robotics Team 2022: Superscript Script
# Written by Arthur Lu, Jacob Levine, and Dev Singh
# Notes:
# setup:
__version__ = "0.0.6.002"
# changelog should be viewed using print(analysis.__changelog__)
__changelog__ = """changelog:
0.0.6.003:
- rename analysis imports to tra_analysis for PyPI publishing
0.0.6.002:
- integrated get_team_rankings.py as get_team_metrics() function
- integrated visualize_pit.py as graph_pit_histogram() function
0.0.6.001:
- bug fixes with analysis.Metric() calls
- modified metric functions to use config.json defined default values
0.0.6.000:
- removed main function
- changed load_config function
- added save_config function
- added load_match function
- renamed simpleloop to matchloop
- moved simplestats function inside matchloop
- renamed load_metrics to load_metric
- renamed metricsloop to metricloop
	- split push to database functions among push_match, push_metric, push_pit
- moved
0.0.5.002:
- made changes due to refactoring of analysis
0.0.5.001:
- text fixes
- removed matplotlib requirement
0.0.5.000:
- improved user interface
0.0.4.002:
	- removed unnecessary code
0.0.4.001:
- fixed bug where X range for regression was determined before sanitization
- better sanitized data
0.0.4.000:
- fixed spelling issue in __changelog__
- addressed nan bug in regression
- fixed errors on line 335 with metrics calling incorrect key "glicko2"
- fixed errors in metrics computing
0.0.3.000:
- added analysis to pit data
0.0.2.001:
- minor stability patches
- implemented db syncing for timestamps
- fixed bugs
0.0.2.000:
- finalized testing and small fixes
0.0.1.004:
- finished metrics implement, trueskill is bugged
0.0.1.003:
- working
0.0.1.002:
- started implement of metrics
0.0.1.001:
- cleaned up imports
0.0.1.000:
- tested working, can push to database
0.0.0.009:
- tested working
- prints out stats for the time being, will push to database later
0.0.0.008:
- added data import
- removed tba import
- finished main method
0.0.0.007:
- added load_config
	- optimized simpleloop for readability
- added __all__ entries
- added simplestats engine
- pending testing
0.0.0.006:
- fixes
0.0.0.005:
- imported pickle
- created custom database object
0.0.0.004:
- fixed simpleloop to actually return a vector
0.0.0.003:
- added metricsloop which is unfinished
0.0.0.002:
- added simpleloop which is untested until data is provided
0.0.0.001:
- created script
- added analysis, numba, numpy imports
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
"Jacob Levine <jlevine@imsa.edu>",
)
__all__ = [
"load_config",
"save_config",
"get_previous_time",
"load_match",
"matchloop",
"load_metric",
"metricloop",
"load_pit",
"pitloop",
"push_match",
"push_metric",
"push_pit",
]
# imports:
from tra_analysis import analysis as an
import data as d
import json
import numpy as np
from os import system, name
from pathlib import Path
import matplotlib.pyplot as plt
import time
import warnings
def load_config(file):
config_vector = {}
with open(file) as f:
config_vector = json.load(f)
return config_vector
def save_config(file, config_vector):
	with open(file, "w") as f:
		json.dump(config_vector, f)
def get_previous_time(apikey):
previous_time = d.get_analysis_flags(apikey, "latest_update")
	if previous_time is None:
d.set_analysis_flags(apikey, "latest_update", 0)
previous_time = 0
else:
previous_time = previous_time["latest_update"]
return previous_time
def load_match(apikey, competition):
return d.get_match_data_formatted(apikey, competition)
def matchloop(apikey, competition, data, tests): # expects 3D array with [Team][Variable][Match]
def simplestats(data, test):
data = np.array(data)
data = data[np.isfinite(data)]
ranges = list(range(len(data)))
if test == "basic_stats":
return an.basic_stats(data)
if test == "historical_analysis":
return an.histo_analysis([ranges, data])
if test == "regression_linear":
return an.regression(ranges, data, ['lin'])
if test == "regression_logarithmic":
return an.regression(ranges, data, ['log'])
if test == "regression_exponential":
return an.regression(ranges, data, ['exp'])
if test == "regression_polynomial":
return an.regression(ranges, data, ['ply'])
if test == "regression_sigmoidal":
return an.regression(ranges, data, ['sig'])
return_vector = {}
for team in data:
variable_vector = {}
for variable in data[team]:
test_vector = {}
variable_data = data[team][variable]
if variable in tests:
for test in tests[variable]:
test_vector[test] = simplestats(variable_data, test)
else:
pass
variable_vector[variable] = test_vector
return_vector[team] = variable_vector
push_match(apikey, competition, return_vector)
def load_metric(apikey, competition, match, group_name, metrics):
group = {}
for team in match[group_name]:
db_data = d.get_team_metrics_data(apikey, competition, team)
		if db_data is None:
elo = {"score": metrics["elo"]["score"]}
gl2 = {"score": metrics["gl2"]["score"], "rd": metrics["gl2"]["rd"], "vol": metrics["gl2"]["vol"]}
ts = {"mu": metrics["ts"]["mu"], "sigma": metrics["ts"]["sigma"]}
group[team] = {"elo": elo, "gl2": gl2, "ts": ts}
		else:
			stored = db_data["metrics"]  # avoid shadowing the metrics defaults parameter
			elo = stored["elo"]
			gl2 = stored["gl2"]
			ts = stored["ts"]
			group[team] = {"elo": elo, "gl2": gl2, "ts": ts}
return group
def metricloop(tbakey, apikey, competition, timestamp, metrics): # listener based metrics update
elo_N = metrics["elo"]["N"]
elo_K = metrics["elo"]["K"]
matches = d.pull_new_tba_matches(tbakey, competition, timestamp)
red = {}
blu = {}
for match in matches:
red = load_metric(apikey, competition, match, "red", metrics)
blu = load_metric(apikey, competition, match, "blue", metrics)
elo_red_total = 0
elo_blu_total = 0
gl2_red_score_total = 0
gl2_blu_score_total = 0
gl2_red_rd_total = 0
gl2_blu_rd_total = 0
gl2_red_vol_total = 0
gl2_blu_vol_total = 0
for team in red:
elo_red_total += red[team]["elo"]["score"]
gl2_red_score_total += red[team]["gl2"]["score"]
gl2_red_rd_total += red[team]["gl2"]["rd"]
gl2_red_vol_total += red[team]["gl2"]["vol"]
for team in blu:
elo_blu_total += blu[team]["elo"]["score"]
gl2_blu_score_total += blu[team]["gl2"]["score"]
gl2_blu_rd_total += blu[team]["gl2"]["rd"]
gl2_blu_vol_total += blu[team]["gl2"]["vol"]
red_elo = {"score": elo_red_total / len(red)}
blu_elo = {"score": elo_blu_total / len(blu)}
red_gl2 = {"score": gl2_red_score_total / len(red), "rd": gl2_red_rd_total / len(red), "vol": gl2_red_vol_total / len(red)}
blu_gl2 = {"score": gl2_blu_score_total / len(blu), "rd": gl2_blu_rd_total / len(blu), "vol": gl2_blu_vol_total / len(blu)}
if match["winner"] == "red":
observations = {"red": 1, "blu": 0}
elif match["winner"] == "blue":
observations = {"red": 0, "blu": 1}
else:
observations = {"red": 0.5, "blu": 0.5}
red_elo_delta = an.Metric().elo(red_elo["score"], blu_elo["score"], observations["red"], elo_N, elo_K) - red_elo["score"]
blu_elo_delta = an.Metric().elo(blu_elo["score"], red_elo["score"], observations["blu"], elo_N, elo_K) - blu_elo["score"]
new_red_gl2_score, new_red_gl2_rd, new_red_gl2_vol = an.Metric().glicko2(red_gl2["score"], red_gl2["rd"], red_gl2["vol"], [blu_gl2["score"]], [blu_gl2["rd"]], [observations["red"], observations["blu"]])
new_blu_gl2_score, new_blu_gl2_rd, new_blu_gl2_vol = an.Metric().glicko2(blu_gl2["score"], blu_gl2["rd"], blu_gl2["vol"], [red_gl2["score"]], [red_gl2["rd"]], [observations["blu"], observations["red"]])
red_gl2_delta = {"score": new_red_gl2_score - red_gl2["score"], "rd": new_red_gl2_rd - red_gl2["rd"], "vol": new_red_gl2_vol - red_gl2["vol"]}
blu_gl2_delta = {"score": new_blu_gl2_score - blu_gl2["score"], "rd": new_blu_gl2_rd - blu_gl2["rd"], "vol": new_blu_gl2_vol - blu_gl2["vol"]}
for team in red:
red[team]["elo"]["score"] = red[team]["elo"]["score"] + red_elo_delta
red[team]["gl2"]["score"] = red[team]["gl2"]["score"] + red_gl2_delta["score"]
red[team]["gl2"]["rd"] = red[team]["gl2"]["rd"] + red_gl2_delta["rd"]
red[team]["gl2"]["vol"] = red[team]["gl2"]["vol"] + red_gl2_delta["vol"]
for team in blu:
blu[team]["elo"]["score"] = blu[team]["elo"]["score"] + blu_elo_delta
blu[team]["gl2"]["score"] = blu[team]["gl2"]["score"] + blu_gl2_delta["score"]
blu[team]["gl2"]["rd"] = blu[team]["gl2"]["rd"] + blu_gl2_delta["rd"]
blu[team]["gl2"]["vol"] = blu[team]["gl2"]["vol"] + blu_gl2_delta["vol"]
temp_vector = {}
temp_vector.update(red)
temp_vector.update(blu)
push_metric(apikey, competition, temp_vector)
def load_pit(apikey, competition):
return d.get_pit_data_formatted(apikey, competition)
def pitloop(apikey, competition, pit, tests):
return_vector = {}
for team in pit:
for variable in pit[team]:
if variable in tests:
			if variable not in return_vector:
return_vector[variable] = []
return_vector[variable].append(pit[team][variable])
push_pit(apikey, competition, return_vector)
def push_match(apikey, competition, results):
for team in results:
d.push_team_tests_data(apikey, competition, team, results[team])
def push_metric(apikey, competition, metric):
for team in metric:
d.push_team_metrics_data(apikey, competition, team, metric[team])
def push_pit(apikey, competition, pit):
for variable in pit:
d.push_team_pit_data(apikey, competition, variable, pit[variable])
def get_team_metrics(apikey, tbakey, competition):
metrics = d.get_metrics_data_formatted(apikey, competition)
elo = {}
gl2 = {}
for team in metrics:
elo[team] = metrics[team]["metrics"]["elo"]["score"]
gl2[team] = metrics[team]["metrics"]["gl2"]["score"]
elo = {k: v for k, v in sorted(elo.items(), key=lambda item: item[1])}
gl2 = {k: v for k, v in sorted(gl2.items(), key=lambda item: item[1])}
elo_ranked = []
for team in elo:
elo_ranked.append({"team": str(team), "elo": str(elo[team])})
gl2_ranked = []
for team in gl2:
gl2_ranked.append({"team": str(team), "gl2": str(gl2[team])})
return {"elo-ranks": elo_ranked, "glicko2-ranks": gl2_ranked}
def graph_pit_histogram(apikey, competition, figsize=(80,15)):
pit = d.get_pit_variable_formatted(apikey, competition)
fig, ax = plt.subplots(1, len(pit), sharey=True, figsize=figsize)
i = 0
for variable in pit:
ax[i].hist(pit[variable])
ax[i].invert_xaxis()
ax[i].set_xlabel('')
ax[i].set_ylabel('Frequency')
ax[i].set_title(variable)
plt.yticks(np.arange(len(pit[variable])))
i+=1
plt.show()
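`matchloop`'s inner `simplestats` sanitizes each variable's series before dispatching a test: non-finite entries are dropped, and an index range is paired with the surviving values for the regressions. A standalone sketch of just that cleanup step:

```python
import numpy as np

def sanitize(series):
	# mirror simplestats: coerce to float, drop NaN/inf, pair with an index range
	data = np.array(series, dtype=float)
	data = data[np.isfinite(data)]
	return list(range(len(data))), data

ranges, data = sanitize([3, float("nan"), 5])
print(ranges, data.tolist())  # [0, 1] [3.0, 5.0]
```

Note that because the range is built after sanitization, dropped matches compress the x-axis rather than leaving gaps, which matches the fix described in changelog entry 0.0.4.001.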

data-analysis/test.py

@@ -0,0 +1,55 @@
import threading
from multiprocessing import Process, Queue
import time
from os import system
class testcls():
i = 0
j = 0
t1_en = True
t2_en = True
def main(self):
t1 = Process(name = "task1", target = self.task1)
t2 = Process(name = "task2", target = self.task2)
t1.start()
t2.start()
#print(self.i)
#print(self.j)
def task1(self):
self.i += 1
time.sleep(1)
if(self.i < 10):
t1 = Process(name = "task1", target = self.task1)
t1.start()
def task2(self):
self.j -= 1
time.sleep(1)
if(self.j > -10):
			t2 = Process(name = "task2", target = self.task2)
t2.start()
"""
if __name__ == "__main__":
tmain = threading.Thread(name = "main", target = main)
tmain.start()
t = 0
while(True):
system("clear")
for thread in threading.enumerate():
if thread.getName() != "MainThread":
print(thread.getName())
print(str(len(threading.enumerate())))
print(i)
print(j)
time.sleep(0.1)
t += 1
if(t == 100):
t1_en = False
t2_en = False
"""

@@ -0,0 +1,2 @@
def test_():
assert 1 == 1

data-analysis/tra.py

@@ -0,0 +1,91 @@
import json
import superscript as su
import threading
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
)
match = False
metric = False
pit = False
match_enable = True
metric_enable = True
pit_enable = True
config = {}
def main():
global match
global metric
global pit
global match_enable
global metric_enable
global pit_enable
global config
config = su.load_config("config.json")
while(True):
		if match_enable == True and match == False:
			def target():
				global match
				apikey = config["key"]["database"]
				competition = config["competition"]
				tests = config["statistics"]["match"]
				data = su.load_match(apikey, competition)
				su.matchloop(apikey, competition, data, tests)
				match = False
				return
			match = True
			task = threading.Thread(name = "match", target = target)
			task.start()
		if metric_enable == True and metric == False:
			def target():
				global metric
				apikey = config["key"]["database"]
				tbakey = config["key"]["tba"]
				competition = config["competition"]
				metrics = config["statistics"]["metric"]
				timestamp = su.get_previous_time(apikey)
				su.metricloop(tbakey, apikey, competition, timestamp, metrics)
				metric = False
				return
			metric = True
			task = threading.Thread(name = "metric", target = target)
			task.start()
		if pit_enable == True and pit == False:
			def target():
				global pit
				apikey = config["key"]["database"]
				competition = config["competition"]
				tests = config["statistics"]["pit"]
				data = su.load_pit(apikey, competition)
				su.pitloop(apikey, competition, data, tests)
				pit = False
				return
			pit = True
			task = threading.Thread(name = "pit", target = target)
			task.start()
task = threading.Thread(name = "main", target=main)
task.start()
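The busy/enable flag pattern in `tra.py` keeps each loop from launching a second run while the previous one is still in flight. The same idea can be expressed with a `threading.Event`, which avoids reassigning module globals from inside a worker; this is a sketch of the pattern, not part of the original code:

```python
import threading

busy = threading.Event()

def run_once(work):
	# refuse to start if the previous run has not finished yet
	if busy.is_set():
		return False
	busy.set()
	try:
		work()
	finally:
		busy.clear()
	return True

print(run_once(lambda: None))  # True: the slot was free, so the work ran
```

Using `try`/`finally` guarantees the flag clears even if `work()` raises, whereas the plain boolean version leaves the loop permanently blocked after an uncaught exception in a target thread.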