Merge pull request #38 from titanscouting/master

pull master into master-staged
Arthur Lu 2020-08-10 20:33:28 -05:00 committed by GitHub
commit 64cf52d749
36 changed files with 980 additions and 214 deletions


@@ -23,5 +23,5 @@
"mhutchie.git-graph",
"donjayamanne.jupyter",
],
"postCreateCommand": "pip install -r analysis-master/requirements.txt"
"postCreateCommand": "pip install tra-analysis"
}

.github/workflows/publish-analysis.yml (vendored, new file, 36 lines)

@@ -0,0 +1,36 @@
# This workflow will upload a Python Package using Twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries
name: Upload Analysis Package
on:
release:
types: [published, edited]
jobs:
deploy:
runs-on: ubuntu-latest
env:
working-directory: ./analysis-master/
steps:
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.x'
- name: Install dependencies
working-directory: ${{env.working-directory}}
run: |
python -m pip install --upgrade pip
pip install setuptools wheel twine
- name: Build package
working-directory: ${{env.working-directory}}
run: |
python setup.py sdist bdist_wheel
- name: Publish package to PyPI
uses: pypa/gh-action-pypi-publish@master
with:
user: __token__
password: ${{ secrets.PYPI_TOKEN }}
packages_dir: analysis-master/dist/

.gitignore (vendored, 7 lines changed)

@@ -31,3 +31,10 @@ data-analysis/__pycache__/
analysis-master/__pycache__/
analysis-master/.pytest_cache/
data-analysis/.pytest_cache/
data-analysis/test.py
analysis-master/tra_analysis.egg-info
analysis-master/tra_analysis/__pycache__
analysis-master/tra_analysis/.ipynb_checkpoints
.pytest_cache
analysis-master/tra_analysis/metrics/__pycache__
analysis-master/dist


@@ -1,3 +1,3 @@
Arthur Lu <learthurgo@gmail.com>
Jacob Levine <jacoblevine18@gmail.com>
Dev Singh <dev@singhk.dev>
Dev Singh <dev@devksingh.com>


@@ -1,2 +1,34 @@
# red-alliance-analysis
# Red Alliance Analysis &middot; ![GitHub release (latest by date)](https://img.shields.io/github/v/release/titanscout2022/red-alliance-analysis)
Titan Robotics 2022 Strategy Team Repository for Data Analysis Tools. These tools include the backend data analysis engine, packaged as a Python package; associated binaries for the analysis package; and premade scripts that can be pulled directly from this repository and integrated with other Red Alliance applications to quickly deploy FRC scouting tools.
# Getting Started
## Prerequisites
* Python >= 3.6
* Pip, which can be installed by running `python -m pip install -U pip` after installing Python
## Installing
### Standard Platforms
For the latest version of tra-analysis, run `pip install tra-analysis` or `pip install tra_analysis`. The requirements for tra-analysis should be automatically installed.
### Exotic Platforms (Android)
[Termux](https://termux.com/) is recommended for a Linux environment on Android. Consult the [documentation]() for advice on installing the prerequisites. After installing the prerequisites, the package should be installed normally with `pip install tra-analysis` or `pip install tra_analysis`.
## Use
tra-analysis operates like any other python package. Consult the [documentation]() for more information.
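As a quick smoke test of the install, the changelog describes `basic_stats` as returning a dictionary of summary statistics. The sketch below only mirrors that output shape using the standard library; `basic_stats_sketch` is a hypothetical stand-in and not part of tra-analysis:

```python
import statistics

def basic_stats_sketch(data):
    # stand-in mirroring the dictionary-of-summary-statistics shape
    # that tra_analysis.analysis.basic_stats is documented to return
    return {
        "mean": statistics.mean(data),
        "median": statistics.median(data),
        "standard-deviation": statistics.pstdev(data),
        "variance": statistics.pvariance(data),
        "minimum": min(data),
        "maximum": max(data),
    }

stats = basic_stats_sketch([1, 3, 6, 7, 9])
```

With the real package installed, the equivalent call is `from tra_analysis import analysis as an` followed by `an.basic_stats([1, 3, 6, 7, 9])`, as in the unit tests.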
# Supported Platforms
Although any modern 64-bit platform should be supported, the following platforms have been tested and confirmed working:
* AMD64 (Tested on Zen, Zen+, and Zen 2)
* Intel 64/x86_64/x64 (Tested on Kaby Lake)
* ARM64 (Tested on Broadcom BCM2836 SoC, Broadcom BCM2711 SoC)
###
The following OSes have been tested and confirmed working:
* Linux Kernel 3.16, 4.4, 4.15, 4.19, 5.4
* Ubuntu 16.04, 18.04, 20.04
* Debian (and Debian derivatives) Jessie, Buster
* Windows 7, 10
###
The following python versions are supported:
* python 3.6 (not tested)
* python 3.7
* python 3.8
# Contributing
Read the included contributing guidelines (`CONTRIBUTING.md`) for more information, and feel free to reach out to any current maintainer with questions.
# Build Statuses
![Analysis Unit Tests](https://github.com/titanscout2022/red-alliance-analysis/workflows/Analysis%20Unit%20Tests/badge.svg)
![Superscript Unit Tests](https://github.com/titanscout2022/red-alliance-analysis/workflows/Superscript%20Unit%20Tests/badge.svg?branch=master)

Binary file not shown.


@@ -7,17 +7,17 @@ with open("requirements.txt", 'r') as file:
requirements.append(line)
setuptools.setup(
name="analysis",
version="1.0.0.012",
name="tra_analysis",
version="2.0.3",
author="The Titan Scouting Team",
author_email="titanscout2022@gmail.com",
description="analysis package developed by Titan Scouting for The Red Alliance",
description="Analysis package developed by Titan Scouting for The Red Alliance",
long_description="",
long_description_content_type="text/markdown",
url="https://github.com/titanscout2022/tr2022-strategy",
packages=setuptools.find_packages(),
install_requires=requirements,
license = "GNU General Public License v3.0",
license = "BSD 3-Clause License",
classifiers=[
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",


@@ -1,5 +1,5 @@
from analysis import analysis as an
from analysis import metrics
from tra_analysis import analysis as an
from tra_analysis import metrics
def test_():
test_data_linear = [1, 3, 6, 7, 9]


@@ -1,17 +1,19 @@
# Titan Robotics Team 2022: Data Analysis Module
# Written by Arthur Lu & Jacob Levine
# Written by Arthur Lu, Jacob Levine, and Dev Singh
# Notes:
# this should be imported as a python module using 'from analysis import analysis'
# this should be imported as a python module using 'from tra_analysis import analysis'
# this should be included in the local directory or environment variable
# this module has been optimized for multithreaded computing
# current benchmark of optimization: 1.33 times faster
# setup:
__version__ = "1.2.2.000"
__version__ = "2.2.1"
# changelog should be viewed using print(analysis.__changelog__)
__changelog__ = """changelog:
1.2.2.000:
2.2.1:
- changed all references to parent package analysis to tra_analysis
2.2.0:
- added Sort class
- added several array sorting functions to Sort class including:
- quick sort
@@ -27,25 +29,25 @@ __changelog__ = """changelog:
- tested all sorting algorithms with both lists and numpy arrays
- deprecated sort function from Array class
- added warnings as an import
1.2.1.004:
2.1.4:
- added sort and search functions to Array class
1.2.1.003:
2.1.3:
- changed output of basic_stats and histo_analysis to libraries
- fixed __all__
1.2.1.002:
2.1.2:
- renamed ArrayTest class to Array
1.2.1.001:
2.1.1:
- added add, mul, neg, and inv functions to ArrayTest class
- added normalize function to ArrayTest class
- added dot and cross functions to ArrayTest class
1.2.1.000:
2.1.0:
- added ArrayTest class
- added elementwise mean, median, standard deviation, variance, min, max functions to ArrayTest class
- added elementwise_stats to ArrayTest which encapsulates elementwise statistics
- appended to __all__ to reflect changes
1.2.0.006:
2.0.6:
- renamed func functions in regression to lin, log, exp, and sig
1.2.0.005:
2.0.5:
- moved random_forrest_regressor and random_forrest_classifier to RandomForrest class
- renamed Metrics to Metric
- renamed RegressionMetrics to RegressionMetric
@@ -53,166 +55,166 @@ __changelog__ = """changelog:
- renamed CorrelationTests to CorrelationTest
- renamed StatisticalTests to StatisticalTest
- reflected refactoring in all mentions of the above classes/functions
1.2.0.004:
2.0.4:
- fixed __all__ to reflect the correct functions and classes
- fixed CorrelationTests and StatisticalTests class functions to require self invocation
- added missing math import
- fixed KNN class functions to require self invocation
- fixed Metrics class functions to require self invocation
- various spelling fixes in CorrelationTests and StatisticalTests
1.2.0.003:
2.0.3:
- bug fixes with CorrelationTests and StatisticalTests
- moved glicko2 and trueskill to the metrics subpackage
- moved elo to a new metrics subpackage
1.2.0.002:
2.0.2:
- fixed docs
1.2.0.001:
2.0.1:
- fixed docs
1.2.0.000:
2.0.0:
- cleaned up wild card imports with scipy and sklearn
- added CorrelationTests class
- added StatisticalTests class
- added several correlation tests to CorrelationTests
- added several statistical tests to StatisticalTests
1.1.13.009:
1.13.9:
- moved elo, glicko2, trueskill functions under class Metrics
1.1.13.008:
1.13.8:
- moved Glicko2 to a separate package
1.1.13.007:
1.13.7:
- fixed bug with trueskill
1.1.13.006:
1.13.6:
- cleaned up imports
1.1.13.005:
1.13.5:
- cleaned up package
1.1.13.004:
1.13.4:
- small fixes to regression to improve performance
1.1.13.003:
1.13.3:
- filtered nans from regression
1.1.13.002:
1.13.2:
- removed torch requirement, and moved Regression back to regression.py
1.1.13.001:
1.13.1:
- bug fix with linear regression not returning a proper value
- cleaned up regression
- fixed bug with polynomial regressions
1.1.13.000:
1.13.0:
- fixed all regressions to now properly work
1.1.12.006:
1.12.6:
- fixed bug with a division by zero in histo_analysis
1.1.12.005:
1.12.5:
- fixed numba issues by removing numba from elo, glicko2 and trueskill
1.1.12.004:
1.12.4:
- renamed gliko to glicko
1.1.12.003:
1.12.3:
- removed depreciated code
1.1.12.002:
1.12.2:
- removed team first time trueskill instantiation in favor of integration in superscript.py
1.1.12.001:
1.12.1:
- improved readability of regression outputs by stripping tensor data
- used map with lambda to achieve the improved readability
- lost numba jit support with regression, and generated_jit hangs at execution
- TODO: reimplement correct numba integration in regression
1.1.12.000:
1.12.0:
- temporarily fixed polynomial regressions by using sklearn's PolynomialFeatures
1.1.11.010:
1.11.10:
- alphabetically ordered import lists
1.1.11.009:
1.11.9:
- bug fixes
1.1.11.008:
1.11.8:
- bug fixes
1.1.11.007:
1.11.7:
- bug fixes
1.1.11.006:
1.11.6:
- tested min and max
- bug fixes
1.1.11.005:
1.11.5:
- added min and max in basic_stats
1.1.11.004:
1.11.4:
- bug fixes
1.1.11.003:
1.11.3:
- bug fixes
1.1.11.002:
1.11.2:
- consolidated metrics
- fixed __all__
1.1.11.001:
1.11.1:
- added test/train split to RandomForestClassifier and RandomForestRegressor
1.1.11.000:
1.11.0:
- added RandomForestClassifier and RandomForestRegressor
- note: untested
1.1.10.000:
1.10.0:
- added numba.jit to remaining functions
1.1.9.002:
1.9.2:
- kernelized PCA and KNN
1.1.9.001:
1.9.1:
- fixed bugs with SVM and NaiveBayes
1.1.9.000:
1.9.0:
- added SVM class, subclasses, and functions
- note: untested
1.1.8.000:
1.8.0:
- added NaiveBayes classification engine
- note: untested
1.1.7.000:
1.7.0:
- added knn()
- added confusion matrix to decisiontree()
1.1.6.002:
1.6.2:
- changed layout of __changelog to be vscode friendly
1.1.6.001:
1.6.1:
- added additional hyperparameters to decisiontree()
1.1.6.000:
1.6.0:
- fixed __version__
- fixed __all__ order
- added decisiontree()
1.1.5.003:
1.5.3:
- added pca
1.1.5.002:
1.5.2:
- reduced import list
- added kmeans clustering engine
1.1.5.001:
1.5.1:
- simplified regression by using .to(device)
1.1.5.000:
1.5.0:
- added polynomial regression to regression(); untested
1.1.4.000:
1.4.0:
- added trueskill()
1.1.3.002:
1.3.2:
- renamed regression class to Regression, regression_engine() to regression, gliko2_engine class to Gliko2
1.1.3.001:
1.3.1:
- changed glicko2() to return tuple instead of array
1.1.3.000:
1.3.0:
- added glicko2_engine class and glicko()
- verified glicko2() accuracy
1.1.2.003:
1.2.3:
- fixed elo()
1.1.2.002:
1.2.2:
- added elo()
- elo() has bugs to be fixed
1.1.2.001:
1.2.1:
- re-added regression import
1.1.2.000:
1.2.0:
- integrated regression.py as regression class
- removed regression import
- fixed metadata for regression class
- fixed metadata for analysis class
1.1.1.001:
1.1.1:
- regression_engine() bug fixes, now actually regresses
1.1.1.000:
1.1.0:
- added regression_engine()
- added all regressions except polynomial
1.1.0.007:
1.0.7:
- updated _init_device()
1.1.0.006:
1.0.6:
- removed useless try statements
1.1.0.005:
1.0.5:
- removed impossible outcomes
1.1.0.004:
1.0.4:
- added performance metrics (r^2, mse, rms)
1.1.0.003:
1.0.3:
- resolved nopython mode for mean, median, stdev, variance
1.1.0.002:
1.0.2:
- snapped (removed) majority of unneeded imports
- forced object mode (bad) on all jit
- TODO: stop numba complaining about not being able to compile in nopython mode
1.1.0.001:
1.0.1:
- removed from sklearn import * to resolve unneeded wildcard imports
1.1.0.000:
1.0.0:
- removed c_entities,nc_entities,obstacles,objectives from __all__
- applied numba.jit to all functions
- deprecated and removed stdev_z_split
@@ -221,93 +223,93 @@ __changelog__ = """changelog:
- deprecated and removed all nonessential functions (basic_analysis, benchmark, strip_data)
- optimized z_normalize using sklearn.preprocessing.normalize
- TODO: implement kernel/function based pytorch regression optimizer
1.0.9.000:
0.9.0:
- refactored
- numpyed everything
- removed stats in favor of numpy functions
1.0.8.005:
0.8.5:
- minor fixes
1.0.8.004:
0.8.4:
- removed a few unused dependencies
1.0.8.003:
0.8.3:
- added p_value function
1.0.8.002:
- updated __all__ correctly to contain changes made in v 1.0.8.000 and v 1.0.8.001
1.0.8.001:
0.8.2:
- updated __all__ correctly to contain changes made in v 0.8.0 and v 0.8.1
0.8.1:
- refactors
- bugfixes
1.0.8.000:
0.8.0:
- deprecated histo_analysis_old
- deprecated debug
- altered basic_analysis to take array data instead of filepath
- refactor
- optimization
1.0.7.002:
0.7.2:
- bug fixes
1.0.7.001:
0.7.1:
- bug fixes
1.0.7.000:
0.7.0:
- added tanh_regression (logistical regression)
- bug fixes
1.0.6.005:
0.6.5:
- added z_normalize function to normalize dataset
- bug fixes
1.0.6.004:
0.6.4:
- bug fixes
1.0.6.003:
0.6.3:
- bug fixes
1.0.6.002:
0.6.2:
- bug fixes
1.0.6.001:
0.6.1:
- corrected __all__ to contain all of the functions
1.0.6.000:
0.6.0:
- added calc_overfit, which calculates two measures of overfit, error and performance
- added calculating overfit to optimize_regression
1.0.5.000:
0.5.0:
- added optimize_regression function, which is a sample function to find the optimal regressions
- optimize_regression function filters out some overfit functions (functions with r^2 = 1)
- planned addition: overfit detection in the optimize_regression function
1.0.4.002:
0.4.2:
- added __changelog__
- updated debug function with log and exponential regressions
1.0.4.001:
0.4.1:
- added log regressions
- added exponential regressions
- added log_regression and exp_regression to __all__
1.0.3.008:
0.3.8:
- added debug function to further consolidate functions
1.0.3.007:
0.3.7:
- added builtin benchmark function
- added builtin random (linear) data generation function
- added device initialization (_init_device)
1.0.3.006:
0.3.6:
- reorganized the imports list to be in alphabetical order
- added search and regurgitate functions to c_entities, nc_entities, obstacles, objectives
1.0.3.005:
0.3.5:
- major bug fixes
- updated historical analysis
- deprecated old historical analysis
1.0.3.004:
0.3.4:
- added __version__, __author__, __all__
- added polynomial regression
- added root mean squared function
- added r squared function
1.0.3.003:
0.3.3:
- bug fixes
- added c_entities
1.0.3.002:
0.3.2:
- bug fixes
- added nc_entities, obstacles, objectives
- consolidated statistics.py to analysis.py
1.0.3.001:
0.3.1:
- compiled 1d, column, and row basic stats into basic stats function
1.0.3.000:
0.3.0:
- added historical analysis function
1.0.2.xxx:
0.2.x:
- added z score test
1.0.1.xxx:
0.1.x:
- major bug fixes
1.0.0.xxx:
0.0.x:
- added loading csv
- added 1d, column, row basic stats
"""
@@ -342,11 +344,11 @@ __all__ = [
# now back to your regularly scheduled programming:
# imports (now in alphabetical order! v 1.0.3.006):
# imports (now in alphabetical order! v 0.3.6):
import csv
from analysis.metrics import elo as Elo
from analysis.metrics import glicko2 as Glicko2
from tra_analysis.metrics import elo as Elo
from tra_analysis.metrics import glicko2 as Glicko2
import math
import numba
from numba import jit
@@ -355,7 +357,7 @@ import scipy
from scipy import optimize, stats
import sklearn
from sklearn import preprocessing, pipeline, linear_model, metrics, cluster, decomposition, tree, neighbors, naive_bayes, svm, model_selection, ensemble
from analysis.metrics import trueskill as Trueskill
from tra_analysis.metrics import trueskill as Trueskill
import warnings
class error(ValueError):


@@ -5,20 +5,20 @@
# this module is cuda-optimized and vectorized (except for one small part)
# setup:
__version__ = "1.0.0.004"
__version__ = "0.0.4"
# changelog should be viewed using print(analysis.regression.__changelog__)
__changelog__ = """
1.0.0.004:
0.0.4:
- bug fixes
- fixed changelog
1.0.0.003:
0.0.3:
- bug fixes
1.0.0.002:
0.0.2:
-Added more parameters to log, exponential, polynomial
-Added SigmoidalRegKernelArthur, because Arthur apparently needs
to train the scaling and shifting of sigmoids
1.0.0.001:
0.0.1:
-initial release, with linear, log, exponential, polynomial, and sigmoid kernels
-already vectorized (except for polynomial generation) and CUDA-optimized
"""


@@ -7,23 +7,23 @@
# this module learns from its mistakes far faster than 2022's captains
# setup:
__version__ = "2.0.1.001"
__version__ = "1.1.1"
#changelog should be viewed using print(analysis.__changelog__)
__changelog__ = """changelog:
2.0.1.001:
1.1.1:
- removed matplotlib import
- removed graphloss()
2.0.1.000:
1.1.0:
- added net, dataset, dataloader, and stdtrain template definitions
- added graphloss function
2.0.0.001:
1.0.1:
- added clear functions
2.0.0.000:
1.0.0:
- complete rewrite planned
- deprecated 1.0.0.xxx versions
- added simple training loop
1.0.0.xxx:
0.0.x:
-added generation of ANNS, basic SGD training
"""


@@ -6,13 +6,13 @@
# fancy
# setup:
__version__ = "1.0.0.001"
__version__ = "0.0.1"
#changelog should be viewed using print(analysis.__changelog__)
__changelog__ = """changelog:
1.0.0.001:
0.0.1:
- added graphhistogram function as a fragment of visualize_pit.py
1.0.0.000:
0.0.0:
- created visualization.py
- added graphloss()
- added imports


@@ -0,0 +1 @@
2020ilch



@@ -0,0 +1,14 @@
balls-blocked,basic_stats,historical_analysis,regression_linear,regression_logarithmic,regression_exponential,regression_polynomial,regression_sigmoidal
balls-collected,basic_stats,historical_analysis,regression_linear,regression_logarithmic,regression_exponential,regression_polynomial,regression_sigmoidal
balls-lower-teleop,basic_stats,historical_analysis,regression_linear,regression_logarithmic,regression_exponential,regression_polynomial,regression_sigmoidal
balls-lower-auto,basic_stats,historical_analysis,regression_linear,regression_logarithmic,regression_exponential,regression_polynomial,regression_sigmoidal
balls-started,basic_stats,historical_analysis,regression_linear,regression_logarithmic,regression_exponential,regression_polynomial,regression_sigmoidal
balls-upper-teleop,basic_stats,historical_analysis,regression_linear,regression_logarithmic,regression_exponential,regression_polynomial,regression_sigmoidal
balls-upper-auto,basic_stats,historical_analysis,regression_linear,regression_logarithmic,regression_exponential,regression_polynomial,regression_sigmoidal
wheel-mechanism
low-balls
high-balls
wheel-success
strategic-focus
climb-mechanism
attitude
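Each line above maps one scouted variable to the tests run on it; variables such as `wheel-mechanism` with no tests listed are simply collected. `superscript.py` builds this mapping in `load_config` via its CSV loader; a minimal standalone sketch of the same parse, assuming plain comma-separated lines as above (`parse_config_line` is a hypothetical helper name):

```python
def parse_config_line(line):
    # first field is the variable name, the remaining fields are its tests
    fields = line.strip().split(",")
    return fields[0], fields[1:]

tests = dict(parse_config_line(l) for l in [
    "wheel-mechanism",
    "balls-blocked,basic_stats,historical_analysis",
])
# variables with no tests listed map to an empty list
```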


@@ -1,19 +1,19 @@
# Titan Robotics Team 2022: Superscript Script
# Written by Arthur Lu & Jacob Levine
# Written by Arthur Lu, Jacob Levine, and Dev Singh
# Notes:
# setup:
__version__ = "0.0.6.002"
__version__ = "0.6.2"
# changelog should be viewed using print(analysis.__changelog__)
__changelog__ = """changelog:
0.0.6.002:
0.6.2:
- integrated get_team_rankings.py as get_team_metrics() function
- integrated visualize_pit.py as graph_pit_histogram() function
0.0.6.001:
0.6.1:
- bug fixes with analysis.Metric() calls
- modified metric functions to use config.json defined default values
0.0.6.000:
0.6.0:
- removed main function
- changed load_config function
- added save_config function
@@ -24,66 +24,66 @@ __changelog__ = """changelog:
- renamed metricsloop to metricloop
- split push to database functions among push_match, push_metric, push_pit
- moved
0.0.5.002:
0.5.2:
- made changes due to refactoring of analysis
0.0.5.001:
0.5.1:
- text fixes
- removed matplotlib requirement
0.0.5.000:
0.5.0:
- improved user interface
0.0.4.002:
0.4.2:
- removed unnecessary code
0.0.4.001:
0.4.1:
- fixed bug where X range for regression was determined before sanitization
- better sanitized data
0.0.4.000:
0.4.0:
- fixed spelling issue in __changelog__
- addressed nan bug in regression
- fixed errors on line 335 with metrics calling incorrect key "glicko2"
- fixed errors in metrics computing
0.0.3.000:
0.3.0:
- added analysis to pit data
0.0.2.001:
0.2.1:
- minor stability patches
- implemented db syncing for timestamps
- fixed bugs
0.0.2.000:
0.2.0:
- finalized testing and small fixes
0.0.1.004:
0.1.4:
- finished metrics implementation, trueskill is bugged
0.0.1.003:
0.1.3:
- working
0.0.1.002:
0.1.2:
- started implementation of metrics
0.0.1.001:
0.1.1:
- cleaned up imports
0.0.1.000:
0.1.0:
- tested working, can push to database
0.0.0.009:
0.0.9:
- tested working
- prints out stats for the time being, will push to database later
0.0.0.008:
0.0.8:
- added data import
- removed tba import
- finished main method
0.0.0.007:
0.0.7:
- added load_config
- optimized simpleloop for readability
- added __all__ entries
- added simplestats engine
- pending testing
0.0.0.006:
0.0.6:
- fixes
0.0.0.005:
0.0.5:
- imported pickle
- created custom database object
0.0.0.004:
0.0.4:
- fixed simpleloop to actually return a vector
0.0.0.003:
0.0.3:
- added metricsloop which is unfinished
0.0.0.002:
0.0.2:
- added simpleloop which is untested until data is provided
0.0.0.001:
0.0.1:
- created script
- added analysis, numba, numpy imports
"""
@@ -110,7 +110,7 @@ __all__ = [
# imports:
from analysis import analysis as an
from tra_analysis import analysis as an
import data as d
import json
import numpy as np


@@ -0,0 +1,378 @@
# Titan Robotics Team 2022: Superscript Script
# Written by Arthur Lu & Jacob Levine
# Notes:
# setup:
__version__ = "0.0.5.002"
# changelog should be viewed using print(analysis.__changelog__)
__changelog__ = """changelog:
0.0.5.002:
- made changes due to refactoring of analysis
0.0.5.001:
- text fixes
- removed matplotlib requirement
0.0.5.000:
- improved user interface
0.0.4.002:
- removed unnecessary code
0.0.4.001:
- fixed bug where X range for regression was determined before sanitization
- better sanitized data
0.0.4.000:
- fixed spelling issue in __changelog__
- addressed nan bug in regression
- fixed errors on line 335 with metrics calling incorrect key "glicko2"
- fixed errors in metrics computing
0.0.3.000:
- added analysis to pit data
0.0.2.001:
- minor stability patches
- implemented db syncing for timestamps
- fixed bugs
0.0.2.000:
- finalized testing and small fixes
0.0.1.004:
- finished metrics implementation, trueskill is bugged
0.0.1.003:
- working
0.0.1.002:
- started implementation of metrics
0.0.1.001:
- cleaned up imports
0.0.1.000:
- tested working, can push to database
0.0.0.009:
- tested working
- prints out stats for the time being, will push to database later
0.0.0.008:
- added data import
- removed tba import
- finished main method
0.0.0.007:
- added load_config
- optimized simpleloop for readability
- added __all__ entries
- added simplestats engine
- pending testing
0.0.0.006:
- fixes
0.0.0.005:
- imported pickle
- created custom database object
0.0.0.004:
- fixed simpleloop to actually return a vector
0.0.0.003:
- added metricsloop which is unfinished
0.0.0.002:
- added simpleloop which is untested until data is provided
0.0.0.001:
- created script
- added analysis, numba, numpy imports
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
"Jacob Levine <jlevine@imsa.edu>",
)
__all__ = [
"main",
"load_config",
"simpleloop",
"simplestats",
"metricsloop"
]
# imports:
from tra_analysis import analysis as an
import data as d
import numpy as np
from os import system, name
from pathlib import Path
import time
import warnings
def main():
warnings.filterwarnings("ignore")
while(True):
current_time = time.time()
print("[OK] time: " + str(current_time))
start = time.time()
config = load_config(Path("config/stats.config"))
competition = an.load_csv(Path("config/competition.config"))[0][0]
print("[OK] configs loaded")
apikey = an.load_csv(Path("config/keys.config"))[0][0]
tbakey = an.load_csv(Path("config/keys.config"))[1][0]
print("[OK] loaded keys")
previous_time = d.get_analysis_flags(apikey, "latest_update")
if(previous_time is None):
d.set_analysis_flags(apikey, "latest_update", 0)
previous_time = 0
else:
previous_time = previous_time["latest_update"]
print("[OK] analysis backtimed to: " + str(previous_time))
print("[OK] loading data")
start = time.time()
data = d.get_match_data_formatted(apikey, competition)
pit_data = d.get_pit_data_formatted(apikey, competition)
print("[OK] loaded data in " + str(time.time() - start) + " seconds")
print("[OK] running tests")
start = time.time()
results = simpleloop(data, config)
print("[OK] finished tests in " + str(time.time() - start) + " seconds")
print("[OK] running metrics")
start = time.time()
metricsloop(tbakey, apikey, competition, previous_time)
print("[OK] finished metrics in " + str(time.time() - start) + " seconds")
print("[OK] running pit analysis")
start = time.time()
pit = pitloop(pit_data, config)
print("[OK] finished pit analysis in " + str(time.time() - start) + " seconds")
d.set_analysis_flags(apikey, "latest_update", {"latest_update":current_time})
print("[OK] pushing to database")
start = time.time()
push_to_database(apikey, competition, results, pit)
print("[OK] pushed to database in " + str(time.time() - start) + " seconds")
clear()
def clear():
# for windows
if name == 'nt':
_ = system('cls')
# for mac and linux(here, os.name is 'posix')
else:
_ = system('clear')
def load_config(file):
config_vector = {}
file = an.load_csv(file)
for line in file:
config_vector[line[0]] = line[1:]
return config_vector
def simpleloop(data, tests): # expects 3D array with [Team][Variable][Match]
return_vector = {}
for team in data:
variable_vector = {}
for variable in data[team]:
test_vector = {}
variable_data = data[team][variable]
if(variable in tests):
for test in tests[variable]:
test_vector[test] = simplestats(variable_data, test)
else:
pass
variable_vector[variable] = test_vector
return_vector[team] = variable_vector
return return_vector
def simplestats(data, test):
data = np.array(data)
data = data[np.isfinite(data)]
ranges = list(range(len(data)))
if(test == "basic_stats"):
return an.basic_stats(data)
if(test == "historical_analysis"):
return an.histo_analysis([ranges, data])
if(test == "regression_linear"):
return an.regression(ranges, data, ['lin'])
if(test == "regression_logarithmic"):
return an.regression(ranges, data, ['log'])
if(test == "regression_exponential"):
return an.regression(ranges, data, ['exp'])
if(test == "regression_polynomial"):
return an.regression(ranges, data, ['ply'])
if(test == "regression_sigmoidal"):
return an.regression(ranges, data, ['sig'])
def push_to_database(apikey, competition, results, pit):
for team in results:
d.push_team_tests_data(apikey, competition, team, results[team])
for variable in pit:
d.push_team_pit_data(apikey, competition, variable, pit[variable])
def metricsloop(tbakey, apikey, competition, timestamp): # listener based metrics update
elo_N = 400
elo_K = 24
matches = d.pull_new_tba_matches(tbakey, competition, timestamp)
red = {}
blu = {}
for match in matches:
red = load_metrics(apikey, competition, match, "red")
blu = load_metrics(apikey, competition, match, "blue")
elo_red_total = 0
elo_blu_total = 0
gl2_red_score_total = 0
gl2_blu_score_total = 0
gl2_red_rd_total = 0
gl2_blu_rd_total = 0
gl2_red_vol_total = 0
gl2_blu_vol_total = 0
for team in red:
elo_red_total += red[team]["elo"]["score"]
gl2_red_score_total += red[team]["gl2"]["score"]
gl2_red_rd_total += red[team]["gl2"]["rd"]
gl2_red_vol_total += red[team]["gl2"]["vol"]
for team in blu:
elo_blu_total += blu[team]["elo"]["score"]
gl2_blu_score_total += blu[team]["gl2"]["score"]
gl2_blu_rd_total += blu[team]["gl2"]["rd"]
gl2_blu_vol_total += blu[team]["gl2"]["vol"]
red_elo = {"score": elo_red_total / len(red)}
blu_elo = {"score": elo_blu_total / len(blu)}
red_gl2 = {"score": gl2_red_score_total / len(red), "rd": gl2_red_rd_total / len(red), "vol": gl2_red_vol_total / len(red)}
blu_gl2 = {"score": gl2_blu_score_total / len(blu), "rd": gl2_blu_rd_total / len(blu), "vol": gl2_blu_vol_total / len(blu)}
if(match["winner"] == "red"):
observations = {"red": 1, "blu": 0}
elif(match["winner"] == "blue"):
observations = {"red": 0, "blu": 1}
else:
observations = {"red": 0.5, "blu": 0.5}
red_elo_delta = an.Metrics.elo(red_elo["score"], blu_elo["score"], observations["red"], elo_N, elo_K) - red_elo["score"]
blu_elo_delta = an.Metrics.elo(blu_elo["score"], red_elo["score"], observations["blu"], elo_N, elo_K) - blu_elo["score"]
new_red_gl2_score, new_red_gl2_rd, new_red_gl2_vol = an.Metrics.glicko2(red_gl2["score"], red_gl2["rd"], red_gl2["vol"], [blu_gl2["score"]], [blu_gl2["rd"]], [observations["red"], observations["blu"]])
new_blu_gl2_score, new_blu_gl2_rd, new_blu_gl2_vol = an.Metrics.glicko2(blu_gl2["score"], blu_gl2["rd"], blu_gl2["vol"], [red_gl2["score"]], [red_gl2["rd"]], [observations["blu"], observations["red"]])
red_gl2_delta = {"score": new_red_gl2_score - red_gl2["score"], "rd": new_red_gl2_rd - red_gl2["rd"], "vol": new_red_gl2_vol - red_gl2["vol"]}
blu_gl2_delta = {"score": new_blu_gl2_score - blu_gl2["score"], "rd": new_blu_gl2_rd - blu_gl2["rd"], "vol": new_blu_gl2_vol - blu_gl2["vol"]}
for team in red:
red[team]["elo"]["score"] = red[team]["elo"]["score"] + red_elo_delta
red[team]["gl2"]["score"] = red[team]["gl2"]["score"] + red_gl2_delta["score"]
red[team]["gl2"]["rd"] = red[team]["gl2"]["rd"] + red_gl2_delta["rd"]
red[team]["gl2"]["vol"] = red[team]["gl2"]["vol"] + red_gl2_delta["vol"]
for team in blu:
blu[team]["elo"]["score"] = blu[team]["elo"]["score"] + blu_elo_delta
blu[team]["gl2"]["score"] = blu[team]["gl2"]["score"] + blu_gl2_delta["score"]
blu[team]["gl2"]["rd"] = blu[team]["gl2"]["rd"] + blu_gl2_delta["rd"]
blu[team]["gl2"]["vol"] = blu[team]["gl2"]["vol"] + blu_gl2_delta["vol"]
temp_vector = {}
temp_vector.update(red)
temp_vector.update(blu)
for team in temp_vector:
d.push_team_metrics_data(apikey, competition, team, temp_vector[team])
def load_metrics(apikey, competition, match, group_name):
group = {}
for team in match[group_name]:
db_data = d.get_team_metrics_data(apikey, competition, team)
if db_data is None:
elo = {"score": 1500}
gl2 = {"score": 1500, "rd": 250, "vol": 0.06}
ts = {"mu": 25, "sigma": 25/3}
#d.push_team_metrics_data(apikey, competition, team, {"elo":elo, "gl2":gl2,"trueskill":ts})
group[team] = {"elo": elo, "gl2": gl2, "ts": ts}
else:
metrics = db_data["metrics"]
elo = metrics["elo"]
gl2 = metrics["gl2"]
ts = metrics["ts"]
group[team] = {"elo": elo, "gl2": gl2, "ts": ts}
return group
def pitloop(pit, tests):
return_vector = {}
for team in pit:
for variable in pit[team]:
if(variable in tests):
if(not variable in return_vector):
return_vector[variable] = []
return_vector[variable].append(pit[team][variable])
return return_vector
main()
"""
Metrics Defaults:
elo starting score = 1500
elo N = 400
elo K = 24
gl2 starting score = 1500
gl2 starting rd = 350
gl2 starting vol = 0.06
"""
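The Elo defaults above (starting score 1500, N = 400, K = 24) correspond to the standard logistic update rule. A minimal two-rating sketch for illustration — `elo_update` is a hypothetical helper, not the exact alliance-level computation `matchloop`/`metricloop` perform:

```python
# Elo update using the defaults listed above (start = 1500, N = 400, K = 24).
# Generic two-rating sketch; superscript applies a shared delta per alliance.
def elo_update(own, opp, outcome, N = 400, K = 24):
	# expected score from the logistic curve with rating spread N
	expected = 1 / (1 + 10 ** ((opp - own) / N))
	# outcome is 1 for a win, 0.5 for a tie, 0 for a loss
	return own + K * (outcome - expected)

print(elo_update(1500, 1500, 1))  # equal ratings, a win: 1500 + 24 * 0.5 = 1512.0
```

With both sides at the starting score the expected result is 0.5, so a win moves the rating by exactly K/2.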

data-analysis/tasks.py Normal file

@@ -0,0 +1,188 @@
import json
import superscript as su
import threading
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
)
class Tasker():

	match_ = False
	metric_ = False
	pit_ = False

	match_enable = True
	metric_enable = True
	pit_enable = True

	config = {}

	def __init__(self):
		self.config = su.load_config("config.json")

	def match(self):
		self.match_ = True

		apikey = self.config["key"]["database"]
		competition = self.config["competition"]
		tests = self.config["statistics"]["match"]

		data = su.load_match(apikey, competition)
		su.matchloop(apikey, competition, data, tests)

		self.match_ = False

	def metric(self):
		self.metric_ = True

		apikey = self.config["key"]["database"]
		tbakey = self.config["key"]["tba"]
		competition = self.config["competition"]
		metric = self.config["statistics"]["metric"]

		timestamp = su.get_previous_time(apikey)
		su.metricloop(tbakey, apikey, competition, timestamp, metric)

		self.metric_ = False

	def pit(self):
		self.pit_ = True

		apikey = self.config["key"]["database"]
		competition = self.config["competition"]
		tests = self.config["statistics"]["pit"]

		data = su.load_pit(apikey, competition)
		su.pitloop(apikey, competition, data, tests)

		self.pit_ = False

	def start_match(self):
		# only spawn a worker when the task is enabled and not already running
		if self.match_enable and not self.match_:
			task = threading.Thread(name = "match", target = self.match)
			task.start()

	def start_metric(self):
		if self.metric_enable and not self.metric_:
			task = threading.Thread(name = "metric", target = self.metric)
			task.start()

	def start_pit(self):
		if self.pit_enable and not self.pit_:
			task = threading.Thread(name = "pit", target = self.pit)
			task.start()

	def stop_match(self):
		self.match_enable = False

	def stop_metric(self):
		self.metric_enable = False

	def stop_pit(self):
		self.pit_enable = False

	def get_match(self):
		return self.match_

	def get_metric(self):
		return self.metric_

	def get_pit(self):
		return self.pit_

	def get_match_enable(self):
		return self.match_enable

	def get_metric_enable(self):
		return self.metric_enable

	def get_pit_enable(self):
		return self.pit_enable
"""
def main():

	init()
	start_match()
	start_metric()
	start_pit()

	exit = False

	while(not exit):

		i = input("> ")

		cmds = i.split(" ")
		cmds = [x for x in cmds if x != ""]
		l = len(cmds)

		if(l == 0):
			pass
		else:
			if(cmds[0] == "exit"):
				if(l == 1):
					exit = True
				else:
					print("exit command expected no arguments but encountered " + str(l - 1))
			if(cmds[0] == "status"):
				if(l == 1):
					print("status command expected 1 argument but encountered none\ntype status help for usage")
				elif(l > 2):
					print("status command expected 1 argument but encountered " + str(l - 1))
				elif(cmds[1] == "threads"):
					threads = threading.enumerate()
					threads = [x.getName() for x in threads]
					print("running threads:")
					for thread in threads:
						print("    " + thread)
				elif(cmds[1] == "flags"):
					print("current flags:")
					print("    match running: " + str(match_))
					print("    metric running: " + str(metric_))
					print("    pit running: " + str(pit_))
					print("    match enable: " + str(match_enable))
					print("    metric enable: " + str(metric_enable))
					print("    pit enable: " + str(pit_enable))
				elif(cmds[1] == "config"):
					print("current config:")
					print(json.dumps(config))
				elif(cmds[1] == "all"):
					threads = threading.enumerate()
					threads = [x.getName() for x in threads]
					print("running threads:")
					for thread in threads:
						print("    " + thread)
					print("current flags:")
					print("    match running: " + str(match_))
					print("    metric running: " + str(metric_))
					print("    pit running: " + str(pit_))
					print("    match enable: " + str(match_enable))
					print("    metric enable: " + str(metric_enable))
					print("    pit enable: " + str(pit_enable))
				elif(cmds[1] == "help"):
					print("usage: status [arg]\nDisplays the status of the tra data analysis threads.\nArguments:\n    threads - prints the status of currently running threads\n    flags - prints the status of control and indicator flags\n    config - prints the current configuration information\n    all - prints all statuses\n    <name_of_thread> - prints the status of a specific thread")
				else:
					threads = threading.enumerate()
					threads = [x.getName() for x in threads]
					if(cmds[1] in threads):
						print(cmds[1] + " is running")

if(__name__ == "__main__"):
	main()
"""

data-analysis/tra-cli.py Normal file

@@ -0,0 +1,33 @@
import argparse
from tasks import Tasker
import test
import threading
from multiprocessing import Process, Queue
t = Tasker()
task_map = {"match":None, "metric":None, "pit":None, "test":None}
status_map = {"match":None, "metric":None, "pit":None}
status_map.update(task_map)
parser = argparse.ArgumentParser(prog = "TRA")
subparsers = parser.add_subparsers(title = "command", metavar = "C", help = "//commandhelp//")
parser_start = subparsers.add_parser("start", help = "//starthelp//")
parser_start.add_argument("targets", metavar = "T", nargs = "*", choices = task_map.keys())
parser_start.set_defaults(which = "start")
parser_stop = subparsers.add_parser("stop", help = "//stophelp//")
parser_stop.add_argument("targets", metavar = "T", nargs = "*", choices = task_map.keys())
parser_stop.set_defaults(which = "stop")
parser_status = subparsers.add_parser("status", help = "//statushelp//")
parser_status.add_argument("targets", metavar = "T", nargs = "*", choices = status_map.keys())
parser_status.set_defaults(which = "status")
args = parser.parse_args()
if(args.which == "start" and "test" in args.targets):
a = test.testcls()
tmain = Process(name = "main", target = a.main)
tmain.start()
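The subcommand layout above can be exercised without spawning any worker processes. A minimal sketch mirroring `tra-cli.py`'s parser, trimmed to the `start` subcommand (the `//…//` help strings in the real file are placeholders and are omitted here):

```python
import argparse

# Mirror of tra-cli.py's argparse layout, "start" subcommand only.
task_map = {"match": None, "metric": None, "pit": None, "test": None}

parser = argparse.ArgumentParser(prog = "TRA")
subparsers = parser.add_subparsers(title = "command", metavar = "C")
parser_start = subparsers.add_parser("start")
# each positional target must be one of the known task names
parser_start.add_argument("targets", metavar = "T", nargs = "*", choices = task_map.keys())
parser_start.set_defaults(which = "start")

# equivalent to: python tra-cli.py start match metric
args = parser.parse_args(["start", "match", "metric"])
print(args.which, args.targets)
```

`set_defaults(which = ...)` is what lets the dispatch code at the bottom of the file branch on which subcommand was invoked.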


@@ -6,9 +6,9 @@ __author__ = (
"Arthur Lu <learthurgo@gmail.com>",
)
match = False
metric = False
pit = False
match_ = False
metric_ = False
pit_ = False
match_enable = True
metric_enable = True
@@ -16,76 +16,151 @@ pit_enable = True
config = {}
def main():
def __init__(self):
global match
global metric
global pit
global match_
global metric_
global pit_
global match_enable
global metric_enable
global pit_enable
global config
config = su.load_config("config.json")
while(True):
def match(self):
if match_enable == True and match == False:
match_ = True
def target():
apikey = config["key"]["database"]
competition = config["competition"]
tests = config["statistics"]["match"]
apikey = config["key"]["database"]
competition = config["competition"]
tests = config["statistics"]["match"]
data = su.load_match(apikey, competition)
su.matchloop(apikey, competition, data, tests)
data = su.load_match(apikey, competition)
su.matchloop(apikey, competition, data, tests)
match_ = False
match = False
return
if match_enable == True and match_ == False:
match = True
task = threading.Thread(name = "match", target=target)
task.start()
task = threading.Thread(name = "match", target = match)
task.start()
if metric_enable == True and metric == False:
def metric():
def target():
metric_ = True
apikey = config["key"]["database"]
tbakey = config["key"]["tba"]
competition = config["competition"]
metric = config["statistics"]["metric"]
apikey = config["key"]["database"]
tbakey = config["key"]["tba"]
competition = config["competition"]
metric = config["statistics"]["metric"]
timestamp = su.get_previous_time(apikey)
timestamp = su.get_previous_time(apikey)
su.metricloop(tbakey, apikey, competition, timestamp, metric)
su.metricloop(tbakey, apikey, competition, timestamp, metric)
metric = False
return
metric_ = False
match = True
task = threading.Thread(name = "metric", target=target)
task.start()
if metric_enable == True and metric_ == False:
if pit_enable == True and pit == False:
task = threading.Thread(name = "match", target = metric)
task.start()
def target():
def pit():
apikey = config["key"]["database"]
competition = config["competition"]
tests = config["statistics"]["pit"]
pit_ = True
data = su.load_pit(apikey, competition)
su.pitloop(apikey, competition, data, tests)
apikey = config["key"]["database"]
competition = config["competition"]
tests = config["statistics"]["pit"]
pit = False
return
data = su.load_pit(apikey, competition)
su.pitloop(apikey, competition, data, tests)
pit = True
task = threading.Thread(name = "pit", target=target)
task.start()
pit_ = False
task = threading.Thread(name = "main", target=main)
task.start()
if pit_enable == True and pit_ == False:
task = threading.Thread(name = "pit", target = pit)
task.start()
def start_match():
task = threading.Thread(name = "match", target = match)
task.start()
def start_metric():
task = threading.Thread(name = "match", target = metric)
task.start()
def start_pit():
task = threading.Thread(name = "pit", target = pit)
task.start()
def main():
init()
start_match()
start_metric()
start_pit()
exit = False
while(not exit):
i = input("> ")
cmds = i.split(" ")
cmds = [x for x in cmds if x != ""]
l = len(cmds)
if(l == 0):
pass
else:
if(cmds[0] == "exit"):
if(l == 1):
exit = True
else:
print("exit command expected no arguments but encountered " + str(l - 1))
if(cmds[0] == "status"):
if(l == 1):
print("status command expected 1 argument but encountered none\ntype status help for usage")
elif(l > 2):
print("status command expected 1 argument but encountered " + str(l - 1))
elif(cmds[1] == "threads"):
threads = threading.enumerate()
threads = [x.getName() for x in threads]
print("running threads:")
for thread in threads:
print(" " + thread)
elif(cmds[1] == "flags"):
print("current flags:")
print("    match running: " + str(match_))
print("    metric running: " + str(metric_))
print("    pit running: " + str(pit_))
print("    match enable: " + str(match_enable))
print("    metric enable: " + str(metric_enable))
print("    pit enable: " + str(pit_enable))
elif(cmds[1] == "config"):
print("current config:")
print(json.dumps(config))
elif(cmds[1] == "all"):
threads = threading.enumerate()
threads = [x.getName() for x in threads]
print("running threads:")
for thread in threads:
print(" " + thread)
print("current flags:")
print("    match running: " + str(match_))
print("    metric running: " + str(metric_))
print("    pit running: " + str(pit_))
print("    match enable: " + str(match_enable))
print("    metric enable: " + str(metric_enable))
print("    pit enable: " + str(pit_enable))
elif(cmds[1] == "help"):
print("usage: status [arg]\nDisplays the status of the tra data analysis threads.\nArguments:\n    threads - prints the status of currently running threads\n    flags - prints the status of control and indicator flags\n    config - prints the current configuration information\n    all - prints all statuses\n    <name_of_thread> - prints the status of a specific thread")
else:
threads = threading.enumerate()
threads = [x.getName() for x in threads]
if(cmds[1] in threads):
print(cmds[1] + " is running")
if(__name__ == "__main__"):
main()