53 Commits

Author SHA1 Message Date
snyk-bot
7833cf26ec fix: src/requirements.txt to reduce vulnerabilities
The following vulnerabilities are fixed by pinning transitive dependencies:
- https://snyk.io/vuln/SNYK-PYTHON-SETUPTOOLS-9964606
2025-05-06 19:45:33 +00:00
Dev Singh
df37947b21 Merge branch 'competition' 2023-03-17 10:11:56 -05:00
Dev Singh
8e6c44db65 updates for 2023ilpe 2023-03-17 10:09:48 -05:00
Dev
11398290eb fix 2022-04-08 15:40:33 -05:00
Dev
d847f6d6a7 hi 2022-04-08 15:20:41 -05:00
Dev Singh
b5c8a91fad update data.py for new metrics structure 2022-03-30 01:30:55 -05:00
Dev Singh
8e5fa7eace reintroduce cutoff usage 2022-03-29 18:43:28 -05:00
Arthur Lu
69c6059ff8 added result logging for metric module 2022-03-29 21:17:58 +00:00
Dev Singh
fdcdadb8b2 only use gl2 2022-03-28 23:17:25 -05:00
Arthur Lu
cdd81295fc commented metrics in module.py 2022-03-28 22:02:39 +00:00
Dev Singh
82ec2d85cc clear metrics only on script start 2022-03-28 15:47:46 -05:00
Dev Singh
ac8002aaf8 delete on start 2022-03-28 14:43:24 -05:00
Dev Singh
25e4babd71 Revert "experimental trueskill support"
This reverts commit 3fe2922e97.
2022-03-28 14:21:23 -05:00
Dev Singh
3fe2922e97 experimental trueskill support 2022-03-28 10:13:42 -05:00
Dev
9752fd323b remove time check 2022-03-25 13:39:24 -05:00
Dev
ef63c1de7e add ability to load manual JSON for metrics 2022-03-24 16:19:47 -05:00
Dev
8908f05cbe fix prod issues 2022-03-16 18:51:10 -05:00
Dev
143218dda3 split sbatch commands 2022-03-16 18:39:30 -05:00
Dev
def2fc9b73 change gitignore, add up submit
Former-commit-id: 927f0a1a4c3dd0aff6fb4fca5f99ea62bc61584f
2022-03-14 20:55:00 -05:00
Dev
e8a5bb75f8 add sbatch script
Former-commit-id: f521f7b3f69df71171cd046a40bcbcb6637967a6
2022-03-13 21:56:34 -05:00
Arthur Lu
c9dd09f5e9 fixed file name
Former-commit-id: 7d113b378854316c3af5e5c58f589c5c062040b4
2022-03-13 18:54:50 -07:00
Arthur Lu
3c6e3ac58e added pandas to requirements,
readded dev docker files


Former-commit-id: 3fa4ef57e5e9542ce245ab0ef9b8320e21c9507c
2022-03-14 01:53:50 +00:00
Arthur Lu
8c28c24d60 removed all unessasary files,
moved important files to folder "competition"


Former-commit-id: 59becb22abc3305a36e2876351e6c7306e3f551e
2022-03-14 01:33:24 +00:00
Dev Singh
de4d3d4967 Update README.md 2021-10-21 14:21:20 -05:00
Arthur Lu
d56411253c fixed badge url 2021-08-26 18:20:11 -07:00
Arthur Lu
c415225afe Update release badge 2021-08-26 18:11:25 -07:00
Arthur Lu
d684813ee0 Merge pull request #10 from titanscouting/automate-build
Automate build
2021-06-09 14:58:21 -07:00
Arthur Lu
26079f3180 fixed pathing for build-CLI.*
added temp directory to gitignore
2021-04-27 07:26:14 +00:00
Arthur Lu
99e722c400 removed ThreadPoolExecutor import 2021-04-25 06:05:33 +00:00
Arthur Lu
f5a0e0fe8c added sample build-cli workflow 2021-04-25 03:51:01 +00:00
Arthur Lu
28e423942f added .gitattributes 2021-04-15 19:41:10 +00:00
Arthur Lu
8977f8c277 added compiled binaries with no file endings
to gitignore
2021-04-13 04:05:46 +00:00
Arthur Lu
2b0f718aa5 removed compiled binaries
added compiled binaries in /dist/ to gitignore
2021-04-13 04:03:07 +00:00
Arthur Lu
30469a3211 removed matplotlib import
removed plotting pit analysis
fixed warning supression for win exe
superscript v 0.8.6
2021-04-12 15:13:54 -07:00
Arthur Lu
391d4e1996 created batch script for windows compilation 2021-04-12 14:39:00 -07:00
Arthur Lu
224f64e8b7 better fix for devcontainer.json 2021-04-12 06:30:21 +00:00
Arthur Lu
aa7d7ca927 quick patch for devcontainer.json 2021-04-12 06:27:50 +00:00
Arthur Lu
d10c16d483 superscript v 0.8.5 2021-04-10 06:08:18 +00:00
Arthur Lu
f211d00f2d superscript v 0.8.4 2021-04-09 23:45:16 +00:00
Arthur Lu
69c707689b superscript v 0.8.3 2021-04-03 20:47:45 +00:00
Arthur Lu
d2f9c802b3 built and verified threading fixes 2021-04-02 22:04:06 +00:00
Arthur Lu
99e28f5e83 fixed .gitignore
added build-CLI script
fixed threading in superscript
2021-04-02 21:58:35 +00:00
Arthur Lu
18dbc174bd deleted config.json
changed superscript config lookup to relative path
added additional requirements to requirements.txt
added build spec file for superscript
2021-04-02 21:35:05 +00:00
Arthur Lu
79689d69c8 fixed spelling in default config,
added config to git ignore
2021-04-02 01:28:25 +00:00
Dev Singh
80c3f1224b Merge pull request #3 from titanscouting/superscript-main
Merge initial changes
2021-04-01 13:40:29 -05:00
Dev Singh
960a1b3165 fix ut and file structure 2021-04-01 13:38:53 -05:00
Arthur Lu
89fcd366d3 Merge branch 'master' into superscript-main 2021-04-01 11:34:44 -07:00
Dev Singh
79cde44108 Create SECURITY.md 2021-04-01 13:11:38 -05:00
Dev Singh
2b896db9a9 Create MAINTAINERS 2021-04-01 13:11:22 -05:00
Dev Singh
483897c011 Merge pull request #1 from titanscouting/add-license-1
Create LICENSE
2021-04-01 13:11:03 -05:00
Dev Singh
9287d98fe2 Create LICENSE 2021-04-01 13:10:50 -05:00
Dev Singh
991751a340 Create CONTRIBUTING.md 2021-04-01 13:10:14 -05:00
Dev Singh
9d2476b5eb Create README.md 2021-04-01 13:09:18 -05:00
27 changed files with 1057 additions and 468 deletions

View File

@@ -1,6 +1,7 @@
-FROM python:slim
+FROM ubuntu:20.04
 WORKDIR /
-RUN apt-get -y update; apt-get -y upgrade
-RUN apt-get -y install git binutils
-COPY requirements.txt .
-RUN pip install -r requirements.txt
+RUN apt-get -y update
+RUN DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends tzdata
+RUN apt-get install -y python3 python3-dev git python3-pip python3-kivy python-is-python3 libgl1-mesa-dev build-essential
+RUN ln -s $(which pip3) /usr/bin/pip
+RUN pip install pymongo pandas numpy scipy scikit-learn matplotlib pylint kivy

View File

@@ -0,0 +1,2 @@
FROM titanscout2022/tra-analysis-base:latest
WORKDIR /

View File

@@ -1,7 +1,7 @@
 {
 	"name": "TRA Analysis Development Environment",
 	"build": {
-		"dockerfile": "Dockerfile",
+		"dockerfile": "dev-dockerfile",
 	},
 	"settings": {
 		"terminal.integrated.shell.linux": "/bin/bash",
@@ -18,5 +18,5 @@
 		"ms-python.python",
 		"waderyan.gitblame"
 	],
-	"postCreateCommand": ""
+	"postCreateCommand": "/usr/bin/pip3 install -r ${containerWorkspaceFolder}/src/requirements.txt && /usr/bin/pip3 install --no-cache-dir pylint && /usr/bin/pip3 install pytest"
 }

View File

@@ -1,6 +1,7 @@
 cerberus
 dnspython
 numpy
+pandas
 pyinstaller
 pylint
 pymongo

View File

@@ -1,38 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.

View File

@@ -1,20 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

View File

@@ -1,7 +1,7 @@
 # This workflow will install Python dependencies, run tests and lint with a variety of Python versions
 # For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
-name: Build Superscript Linux
+name: Superscript Unit Tests
 on:
   release:
@@ -11,25 +11,7 @@ jobs:
   generate:
     name: Build Linux
     runs-on: ubuntu-latest
     steps:
     - name: Checkout master
       uses: actions/checkout@master
-    - name: Install Dependencies
-      run: pip install -r requirements.txt
-      working-directory: src/
-    - name: Give Execute Permission
-      run: chmod +x build-CLI.sh
-      working-directory: build/
-    - name: Build Binary
-      run: ./build-CLI.sh
-      working-directory: build/
-    - name: Copy Binary to Root Dir
-      run: cp superscript ..
-      working-directory: dist/
-    - name: Upload Release Asset
-      uses: svenstaro/upload-release-action@v2
-      with:
-        repo_token: ${{ secrets.GITHUB_TOKEN }}
-        file: superscript
-        asset_name: superscript
-        tag: ${{ github.ref }}

6
.gitignore vendored
View File

@@ -9,6 +9,9 @@
 **/tra_analysis/
 **/temp/*
+**/errorlog.txt
+/dist/superscript.*
+/dist/superscript
 **/*.pid
 **/profile.*
@@ -16,3 +19,6 @@
 **/*.log
 **/errorlog.txt
 /dist/*
+slurm-tra-superscript.out
+config*.json

View File

@@ -1,4 +1,4 @@
-# Red Alliance Analysis · ![GitHub release (latest by date)](https://img.shields.io/github/v/release/titanscout2022/red-alliance-analysis)
+# Red Alliance Analysis · ![GitHub release (latest by date)](https://img.shields.io/github/v/release/titanscouting/tra-superscript)
 Titan Robotics 2022 Strategy Team Repository for Data Analysis Tools. Included with these tools are the backend data analysis engine formatted as a python package, associated binaries for the analysis package, and premade scripts that can be pulled directly from this repository and will integrate with other Red Alliance applications to quickly deploy FRC scouting tools.

View File

@@ -2,4 +2,4 @@ set pathtospec="../src/superscript.spec"
 set pathtodist="../dist/"
 set pathtowork="temp/"
-pyinstaller --clean --distpath %pathtodist% --workpath %pathtowork% %pathtospec%
+pyinstaller --onefile --clean --distpath %pathtodist% --workpath %pathtowork% %pathtospec%

View File

@@ -1,5 +1,5 @@
-pathtospec="superscript.spec"
+pathtospec="../src/superscript.spec"
 pathtodist="../dist/"
 pathtowork="temp/"
-pyinstaller --clean --distpath ${pathtodist} --workpath ${pathtowork} ${pathtospec}
+pyinstaller --onefile --clean --distpath ${pathtodist} --workpath ${pathtowork} ${pathtospec}

View File

@@ -1,50 +0,0 @@
# -*- mode: python ; coding: utf-8 -*-
block_cipher = None
a = Analysis(
['../src/superscript.py'],
pathex=[],
binaries=[],
datas=[],
hiddenimports=['dnspython', 'sklearn.utils._weight_vector', 'sklearn.utils._typedefs', 'sklearn.neighbors._partition_nodes', 'requests'],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=['matplotlib'],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False
)
pyz = PYZ(
a.pure,
a.zipped_data,
cipher=block_cipher
)
exe = EXE(
pyz,
a.scripts,
[],
exclude_binaries=True,
name='superscript',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
console=True,
disable_windowed_traceback=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None
)
coll = COLLECT(
exe,
a.binaries,
a.zipfiles,
a.datas,
strip=False,
upx=True,
upx_exclude=[],
name='superscript'
)

View File

@@ -1,16 +1,26 @@
+from calendar import c
 import requests
 import pull
 import pandas as pd
+import json

-def pull_new_tba_matches(apikey, competition, cutoff):
+def pull_new_tba_matches(apikey, competition, last_match):
 	api_key= apikey
 	x=requests.get("https://www.thebluealliance.com/api/v3/event/"+competition+"/matches/simple", headers={"X-TBA-Auth-Key":api_key})
+	json = x.json()
 	out = []
-	for i in x.json():
-		if i["actual_time"] != None and i["actual_time"]-cutoff >= 0 and i["comp_level"] == "qm":
+	for i in json:
+		if i["actual_time"] != None and i["comp_level"] == "qm" and i["match_number"] > last_match :
 			out.append({"match" : i['match_number'], "blue" : list(map(lambda x: int(x[3:]), i['alliances']['blue']['team_keys'])), "red" : list(map(lambda x: int(x[3:]), i['alliances']['red']['team_keys'])), "winner": i["winning_alliance"]})
+	out.sort(key=lambda x: x['match'])
 	return out

+def pull_new_tba_matches_manual(apikey, competition, cutoff):
+	filename = competition+"-wins.json"
+	with open(filename, 'r') as f:
+		data = json.load(f)
+	return data

 def get_team_match_data(client, competition, team_num):
 	db = client.data_scouting
 	mdata = db.matchdata
@@ -19,6 +29,12 @@ def get_team_match_data(client, competition, team_num):
 		out[i['match']] = i['data']
 	return pd.DataFrame(out)

+def clear_metrics(client, competition):
+	db = client.data_processing
+	data = db.team_metrics
+	data.delete_many({competition: competition})
+	return True

 def get_team_pit_data(client, competition, team_num):
 	db = client.data_scouting
 	mdata = db.pitdata
@@ -28,7 +44,15 @@ def get_team_pit_data(client, competition, team_num):
 def get_team_metrics_data(client, competition, team_num):
 	db = client.data_processing
 	mdata = db.team_metrics
-	return mdata.find_one({"competition" : competition, "team": team_num})
+	temp = mdata.find_one({"team": team_num})
+	if temp != None:
+		if competition in temp['metrics'].keys():
+			temp = temp['metrics'][competition]
+		else :
+			temp = None
+	else:
+		temp = None
+	return temp

 def get_match_data_formatted(client, competition):
 	teams_at_comp = pull.get_teams_at_competition(competition)
@@ -51,7 +75,7 @@ def get_metrics_data_formatted(client, competition):
 	return out

 def get_pit_data_formatted(client, competition):
-	x=requests.get("https://titanscouting.epochml.org/api/fetchAllTeamNicknamesAtCompetition?competition="+competition)
+	x=requests.get("https://scouting.titanrobotics2022.com/api/fetchAllTeamNicknamesAtCompetition?competition="+competition)
 	x = x.json()
 	x = x['data']
 	x = x.keys()
@@ -84,7 +108,7 @@ def push_team_tests_data(client, competition, team_num, data, dbname = "data_pro
 def push_team_metrics_data(client, competition, team_num, data, dbname = "data_processing", colname = "team_metrics"):
 	db = client[dbname]
 	mdata = db[colname]
-	mdata.replace_one({"competition" : competition, "team": team_num}, {"_id": competition+str(team_num)+"am", "competition" : competition, "team" : team_num, "metrics" : data}, True)
+	mdata.update_one({"team": team_num}, {"$set": {"metrics.{}".format(competition): data}}, upsert=True)

 def push_team_pit_data(client, competition, variable, data, dbname = "data_processing", colname = "team_pit"):
 	db = client[dbname]
@@ -94,12 +118,12 @@ def push_team_pit_data(client, competition, variable, data, dbname = "data_proce
 def get_analysis_flags(client, flag):
 	db = client.data_processing
 	mdata = db.flags
-	return mdata.find_one({flag:{"$exists":True}})
+	return mdata.find_one({"_id": "2022"})

 def set_analysis_flags(client, flag, data):
 	db = client.data_processing
 	mdata = db.flags
-	return mdata.replace_one({flag:{"$exists":True}}, data, True)
+	return mdata.update_one({"_id": "2022"}, {"$set": data})

 def unkeyify_2l(layered_dict):
 	out = {}
@@ -153,22 +177,17 @@ def load_metric(client, competition, match, group_name, metrics):
 		db_data = get_team_metrics_data(client, competition, team)

 		if db_data == None:
-			elo = {"score": metrics["elo"]["score"]}
 			gl2 = {"score": metrics["gl2"]["score"], "rd": metrics["gl2"]["rd"], "vol": metrics["gl2"]["vol"]}
-			ts = {"mu": metrics["ts"]["mu"], "sigma": metrics["ts"]["sigma"]}

-			group[team] = {"elo": elo, "gl2": gl2, "ts": ts}
+			group[team] = {"gl2": gl2}

 		else:
-			metrics = db_data["metrics"]
+			metrics = db_data

-			elo = metrics["elo"]
 			gl2 = metrics["gl2"]
-			ts = metrics["ts"]

-			group[team] = {"elo": elo, "gl2": gl2, "ts": ts}
+			group[team] = {"gl2": gl2}

 	return group
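
A hedged sketch (not part of the diff above) of the reworked metrics storage that get_team_metrics_data and push_team_metrics_data now imply: one MongoDB document per team, with each event's ratings nested under metrics.<competition> and written via update_one/$set. The database and collection names come from data.py; the connection string, team number, and rating values are placeholder examples.

# Hedged sketch of the per-competition metrics layout used above.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
col = client.data_processing.team_metrics

def push_metrics(competition, team_num, data):
	# one document per team; each event's ratings live under metrics.<competition>
	col.update_one({"team": team_num}, {"$set": {"metrics.{}".format(competition): data}}, upsert=True)

def get_metrics(competition, team_num):
	doc = col.find_one({"team": team_num})
	if doc is None:
		return None
	return doc.get("metrics", {}).get(competition)  # None until this event has ratings

push_metrics("2023ilpe", 2022, {"gl2": {"score": 1500, "rd": 250, "vol": 0.06}})
print(get_metrics("2023ilpe", 2022))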

View File

@@ -96,20 +96,11 @@ sample_json = """
 		},
 		"metric":{
 			"tests":{
-				"elo":{
-					"score":1500,
-					"N":400,
-					"K":24
-				},
 				"gl2":{
 					"score":1500,
 					"rd":250,
 					"vol":0.06
 				},
-				"ts":{
-					"mu":25,
-					"sigma":8.33
-				}
 			}
 		},
 		"pit":{

View File

@@ -3,6 +3,7 @@ import data as d
 import signal
 import numpy as np
 from tra_analysis import Analysis as an
+from tqdm import tqdm

 class Module(metaclass = abc.ABCMeta):
@@ -169,25 +170,21 @@ class Metric (Module):
 		self._push_results()

 	def _load_data(self):
-		self.data = d.pull_new_tba_matches(self.tbakey, self.competition, self.timestamp)
+		self.last_match = d.get_analysis_flags(self.apikey, 'metrics_last_match')['metrics_last_match']
+		print("Previous last match", self.last_match)
+		self.data = d.pull_new_tba_matches(self.tbakey, self.competition, self.last_match)

 	def _process_data(self):
-		elo_N = self.config["tests"]["elo"]["N"]
-		elo_K = self.config["tests"]["elo"]["K"]
+		self.results = {}
+		self.match = self.last_match
 		matches = self.data
 		red = {}
 		blu = {}
-		for match in matches:
-			red = d.load_metric(self.apikey, self.competition, match, "red", self.config["tests"])
-			blu = d.load_metric(self.apikey, self.competition, match, "blue", self.config["tests"])
-			elo_red_total = 0
-			elo_blu_total = 0
+		for match in tqdm(matches, desc="Metrics"): # grab matches and loop through each one
+			self.match = max(self.match, int(match['match']))
+			red = d.load_metric(self.apikey, self.competition, match, "red", self.config["tests"]) # get the current ratings for red
+			blu = d.load_metric(self.apikey, self.competition, match, "blue", self.config["tests"]) # get the current ratings for blue
 			gl2_red_score_total = 0
 			gl2_blu_score_total = 0
@@ -198,72 +195,63 @@ class Metric (Module):
 			gl2_red_vol_total = 0
 			gl2_blu_vol_total = 0

-			for team in red:
-				elo_red_total += red[team]["elo"]["score"]
+			for team in red: # for each team in red, add up gl2 score components
 				gl2_red_score_total += red[team]["gl2"]["score"]
 				gl2_red_rd_total += red[team]["gl2"]["rd"]
 				gl2_red_vol_total += red[team]["gl2"]["vol"]

-			for team in blu:
-				elo_blu_total += blu[team]["elo"]["score"]
+			for team in blu: # for each team in blue, add up gl2 score components
 				gl2_blu_score_total += blu[team]["gl2"]["score"]
 				gl2_blu_rd_total += blu[team]["gl2"]["rd"]
 				gl2_blu_vol_total += blu[team]["gl2"]["vol"]

-			red_elo = {"score": elo_red_total / len(red)}
-			blu_elo = {"score": elo_blu_total / len(blu)}

-			red_gl2 = {"score": gl2_red_score_total / len(red), "rd": gl2_red_rd_total / len(red), "vol": gl2_red_vol_total / len(red)}
-			blu_gl2 = {"score": gl2_blu_score_total / len(blu), "rd": gl2_blu_rd_total / len(blu), "vol": gl2_blu_vol_total / len(blu)}
+			red_gl2 = {"score": gl2_red_score_total / len(red), "rd": gl2_red_rd_total / len(red), "vol": gl2_red_vol_total / len(red)} # average the scores by dividing by 3
+			blu_gl2 = {"score": gl2_blu_score_total / len(blu), "rd": gl2_blu_rd_total / len(blu), "vol": gl2_blu_vol_total / len(blu)} # average the scores by dividing by 3

-			if match["winner"] == "red":
+			if match["winner"] == "red": # if red won, set observations to {"red": 1, "blu": 0}
 				observations = {"red": 1, "blu": 0}
-			elif match["winner"] == "blue":
+			elif match["winner"] == "blue": # if blue won, set observations to {"red": 0, "blu": 1}
 				observations = {"red": 0, "blu": 1}
-			else:
+			else: # otherwise it was a tie and observations is {"red": 0.5, "blu": 0.5}
 				observations = {"red": 0.5, "blu": 0.5}

-			red_elo_delta = an.Metric().elo(red_elo["score"], blu_elo["score"], observations["red"], elo_N, elo_K) - red_elo["score"]
-			blu_elo_delta = an.Metric().elo(blu_elo["score"], red_elo["score"], observations["blu"], elo_N, elo_K) - blu_elo["score"]

-			new_red_gl2_score, new_red_gl2_rd, new_red_gl2_vol = an.Metric().glicko2(red_gl2["score"], red_gl2["rd"], red_gl2["vol"], [blu_gl2["score"]], [blu_gl2["rd"]], [observations["red"], observations["blu"]])
-			new_blu_gl2_score, new_blu_gl2_rd, new_blu_gl2_vol = an.Metric().glicko2(blu_gl2["score"], blu_gl2["rd"], blu_gl2["vol"], [red_gl2["score"]], [red_gl2["rd"]], [observations["blu"], observations["red"]])
+			new_red_gl2_score, new_red_gl2_rd, new_red_gl2_vol = an.Metric().glicko2(red_gl2["score"], red_gl2["rd"], red_gl2["vol"], [blu_gl2["score"]], [blu_gl2["rd"]], [observations["red"], observations["blu"]]) # calculate new scores for gl2 for red
+			new_blu_gl2_score, new_blu_gl2_rd, new_blu_gl2_vol = an.Metric().glicko2(blu_gl2["score"], blu_gl2["rd"], blu_gl2["vol"], [red_gl2["score"]], [red_gl2["rd"]], [observations["blu"], observations["red"]]) # calculate new scores for gl2 for blue

-			red_gl2_delta = {"score": new_red_gl2_score - red_gl2["score"], "rd": new_red_gl2_rd - red_gl2["rd"], "vol": new_red_gl2_vol - red_gl2["vol"]}
-			blu_gl2_delta = {"score": new_blu_gl2_score - blu_gl2["score"], "rd": new_blu_gl2_rd - blu_gl2["rd"], "vol": new_blu_gl2_vol - blu_gl2["vol"]}
+			red_gl2_delta = {"score": new_red_gl2_score - red_gl2["score"], "rd": new_red_gl2_rd - red_gl2["rd"], "vol": new_red_gl2_vol - red_gl2["vol"]} # calculate gl2 deltas for red
+			blu_gl2_delta = {"score": new_blu_gl2_score - blu_gl2["score"], "rd": new_blu_gl2_rd - blu_gl2["rd"], "vol": new_blu_gl2_vol - blu_gl2["vol"]} # calculate gl2 deltas for blue

-			for team in red:
-				red[team]["elo"]["score"] = red[team]["elo"]["score"] + red_elo_delta
+			for team in red: # for each team on red, add the previous score with the delta to find the new score
 				red[team]["gl2"]["score"] = red[team]["gl2"]["score"] + red_gl2_delta["score"]
 				red[team]["gl2"]["rd"] = red[team]["gl2"]["rd"] + red_gl2_delta["rd"]
 				red[team]["gl2"]["vol"] = red[team]["gl2"]["vol"] + red_gl2_delta["vol"]

-			for team in blu:
-				blu[team]["elo"]["score"] = blu[team]["elo"]["score"] + blu_elo_delta
+			for team in blu: # for each team on blue, add the previous score with the delta to find the new score
 				blu[team]["gl2"]["score"] = blu[team]["gl2"]["score"] + blu_gl2_delta["score"]
 				blu[team]["gl2"]["rd"] = blu[team]["gl2"]["rd"] + blu_gl2_delta["rd"]
 				blu[team]["gl2"]["vol"] = blu[team]["gl2"]["vol"] + blu_gl2_delta["vol"]

 			temp_vector = {}
-			temp_vector.update(red)
+			temp_vector.update(red) # update the team's score with the temporay vector
 			temp_vector.update(blu)

-			d.push_metric(self.apikey, self.competition, temp_vector)
+			self.results[match['match']] = temp_vector
+			d.push_metric(self.apikey, self.competition, temp_vector) # push new scores to db
+
+		print("New last match", self.match)
+		d.set_analysis_flags(self.apikey, 'metrics_last_match', {'metrics_last_match': self.match})

 	def _push_results(self):
 		pass
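
A hedged sketch of the per-match flow in Metric._process_data above: average each alliance's Glicko-2 components, rate the two alliance aggregates against each other once, then spread the alliance delta back onto every team. glicko2_update stands in for an.Metric().glicko2 from tra_analysis; the team numbers and ratings are example values.

# Hedged sketch of the alliance-level Glicko-2 flow used by Metric._process_data above.
# `glicko2_update` is a stand-in for an.Metric().glicko2; teams/ratings are examples.

def average_alliance(teams):
	# teams: {team: {"gl2": {"score", "rd", "vol"}}} -> one alliance-level rating
	n = len(teams)
	return {k: sum(t["gl2"][k] for t in teams.values()) / n for k in ("score", "rd", "vol")}

def apply_delta(teams, old, new):
	# add the alliance-level delta onto every member's stored rating
	for t in teams.values():
		for k in ("score", "rd", "vol"):
			t["gl2"][k] += new[k] - old[k]

def rate_match(red, blu, winner, glicko2_update):
	red_gl2, blu_gl2 = average_alliance(red), average_alliance(blu)
	obs = {"red": 1, "blu": 0} if winner == "red" else {"red": 0, "blu": 1} if winner == "blue" else {"red": 0.5, "blu": 0.5}
	new_red = glicko2_update(red_gl2, blu_gl2, [obs["red"], obs["blu"]])  # new alliance rating for red
	new_blu = glicko2_update(blu_gl2, red_gl2, [obs["blu"], obs["red"]])  # new alliance rating for blue
	apply_delta(red, red_gl2, new_red)
	apply_delta(blu, blu_gl2, new_blu)

# example with a no-op updater, just to exercise the flow
red = {2022: {"gl2": {"score": 1500, "rd": 250, "vol": 0.06}}}
blu = {118: {"gl2": {"score": 1500, "rd": 250, "vol": 0.06}}}
rate_match(red, blu, "red", lambda own, opp, observations: dict(own))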

View File

@@ -2,7 +2,7 @@ import requests
 from exceptions import APIError
 from dep import load_config

-url = "https://titanscouting.epochml.org"
+url = "https://scouting.titanrobotics2022.com"

 config_tra = {}
 load_config("config.json", config_tra)
 trakey = config_tra['persistent']['key']['tra']

View File

@@ -0,0 +1,15 @@
cerberus
dnspython
numpy
pandas
pyinstaller
pylint
pymongo
pyparsing
python-daemon
pyzmq
requests
scikit-learn
scipy
six
tra-analysis

402
competition/superscript.py Normal file
View File

@@ -0,0 +1,402 @@
# Titan Robotics Team 2022: Superscript Script
# Written by Arthur Lu, Jacob Levine, and Dev Singh
# Notes:
# setup:
__version__ = "1.0.0"
# changelog should be viewed using print(analysis.__changelog__)
__changelog__ = """changelog:
1.0.0:
- superscript now runs in PEP 3143 compliant well behaved daemon on Linux systems
- linux superscript daemon has integrated websocket output to monitor progress/status remotely
- linux daemon now sends stderr to errorlog.log
- added verbose option to linux superscript to allow for interactive output
- moved pymongo import to superscript.py
- added profile option to linux superscript to profile runtime of script
- reduced memory usage slightly by consolidating the unwrapped input data
- added debug option, which performs one loop of analysis and dumps results to local files
- added event and time delay options to config
- event delay pauses loop until even listener recieves an update
- time delay pauses loop until the time specified has elapsed since the BEGINNING of previous loop
- added options to pull config information from database (reatins option to use local config file)
- config-preference option selects between prioritizing local config and prioritizing database config
- synchronize-config option selects whether to update the non prioritized config with the prioritized one
- divided config options between persistent ones (keys), and variable ones (everything else)
- generalized behavior of various core components by collecting loose functions in several dependencies into classes
- module.py contains classes, each one represents a single data analysis routine
- config.py contains the Configuration class, which stores the configuration information and abstracts the getter methods
0.9.3:
- improved data loading performance by removing redundant PyMongo client creation (120s to 14s)
- passed singular instance of PyMongo client as standin for apikey parameter in all data.py functions
0.9.2:
- removed unessasary imports from data
- minor changes to interface
0.9.1:
- fixed bugs in configuration item loading exception handling
0.9.0:
- moved printing and logging related functions to interface.py (changelog will stay in this file)
- changed function return files for load_config and save_config to standard C values (0 for success, 1 for error)
- added local variables for config location
- moved dataset getting and setting functions to dataset.py (changelog will stay in this file)
- moved matchloop, metricloop, pitloop and helper functions (simplestats) to processing.py
0.8.6:
- added proper main function
0.8.5:
- added more gradeful KeyboardInterrupt exiting
- redirected stderr to errorlog.txt
0.8.4:
- added better error message for missing config.json
- added automatic config.json creation
- added splash text with version and system info
0.8.3:
- updated matchloop with new regression format (requires tra_analysis 3.x)
0.8.2:
- readded while true to main function
- added more thread config options
0.8.1:
- optimized matchloop further by bypassing GIL
0.8.0:
- added multithreading to matchloop
- tweaked user log
0.7.0:
- finished implementing main function
0.6.2:
- integrated get_team_rankings.py as get_team_metrics() function
- integrated visualize_pit.py as graph_pit_histogram() function
0.6.1:
- bug fixes with analysis.Metric() calls
- modified metric functions to use config.json defined default values
0.6.0:
- removed main function
- changed load_config function
- added save_config function
- added load_match function
- renamed simpleloop to matchloop
- moved simplestats function inside matchloop
- renamed load_metrics to load_metric
- renamed metricsloop to metricloop
- split push to database functions amon push_match, push_metric, push_pit
- moved
0.5.2:
- made changes due to refactoring of analysis
0.5.1:
- text fixes
- removed matplotlib requirement
0.5.0:
- improved user interface
0.4.2:
- removed unessasary code
0.4.1:
- fixed bug where X range for regression was determined before sanitization
- better sanitized data
0.4.0:
- fixed spelling issue in __changelog__
- addressed nan bug in regression
- fixed errors on line 335 with metrics calling incorrect key "glicko2"
- fixed errors in metrics computing
0.3.0:
- added analysis to pit data
0.2.1:
- minor stability patches
- implemented db syncing for timestamps
- fixed bugs
0.2.0:
- finalized testing and small fixes
0.1.4:
- finished metrics implement, trueskill is bugged
0.1.3:
- working
0.1.2:
- started implement of metrics
0.1.1:
- cleaned up imports
0.1.0:
- tested working, can push to database
0.0.9:
- tested working
- prints out stats for the time being, will push to database later
0.0.8:
- added data import
- removed tba import
- finished main method
0.0.7:
- added load_config
- optimized simpleloop for readibility
- added __all__ entries
- added simplestats engine
- pending testing
0.0.6:
- fixes
0.0.5:
- imported pickle
- created custom database object
0.0.4:
- fixed simpleloop to actually return a vector
0.0.3:
- added metricsloop which is unfinished
0.0.2:
- added simpleloop which is untested until data is provided
0.0.1:
- created script
- added analysis, numba, numpy imports
"""
__author__ = (
"Arthur Lu <learthurgo@gmail.com>",
"Jacob Levine <jlevine@imsa.edu>",
)
# imports:
import os, sys, time
import pymongo # soon to be deprecated
import traceback
import warnings
from config import Configuration, ConfigurationError
from data import get_previous_time, set_current_time, check_new_database_matches, clear_metrics
from interface import Logger
from module import Match, Metric, Pit
import zmq
config_path = "config.json"
def main(logger, verbose, profile, debug, socket_send = None):
def close_all():
if "client" in locals():
client.close()
warnings.filterwarnings("ignore")
logger.splash(__version__)
modules = {"match": Match, "metric": Metric, "pit": Pit}
while True:
try:
loop_start = time.time()
logger.info("current time: " + str(loop_start))
socket_send("current time: " + str(loop_start))
config = Configuration(config_path)
logger.info("found and loaded config at <" + config_path + ">")
socket_send("found and loaded config at <" + config_path + ">")
apikey, tbakey = config.database, config.tba
logger.info("found and loaded database and tba keys")
socket_send("found and loaded database and tba keys")
client = pymongo.MongoClient(apikey)
logger.info("established connection to database")
socket_send("established connection to database")
previous_time = get_previous_time(client)
logger.info("analysis backtimed to: " + str(previous_time))
socket_send("analysis backtimed to: " + str(previous_time))
config.resolve_config_conflicts(logger, client)
config_modules, competition = config.modules, config.competition
for m in config_modules:
if m in modules:
start = time.time()
current_module = modules[m](config_modules[m], client, tbakey, previous_time, competition)
valid = current_module.validate_config()
if not valid:
continue
current_module.run()
logger.info(m + " module finished in " + str(time.time() - start) + " seconds")
socket_send(m + " module finished in " + str(time.time() - start) + " seconds")
if debug:
logger.save_module_to_file(m, current_module.data, current_module.results) # logging flag check done in logger
set_current_time(client, loop_start)
close_all()
logger.info("closed threads and database client")
logger.info("finished all tasks in " + str(time.time() - loop_start) + " seconds, looping")
socket_send("closed threads and database client")
socket_send("finished all tasks in " + str(time.time() - loop_start) + " seconds, looping")
if profile:
return 0
if debug:
return 0
event_delay = config["variable"]["event-delay"]
if event_delay:
logger.info("loop delayed until database returns new matches")
socket_send("loop delayed until database returns new matches")
new_match = False
while not new_match:
time.sleep(1)
new_match = check_new_database_matches(client, competition)
logger.info("database returned new matches")
socket_send("database returned new matches")
else:
loop_delay = float(config["variable"]["loop-delay"])
remaining_time = loop_delay - (time.time() - loop_start)
if remaining_time > 0:
logger.info("loop delayed by " + str(remaining_time) + " seconds")
socket_send("loop delayed by " + str(remaining_time) + " seconds")
time.sleep(remaining_time)
except KeyboardInterrupt:
close_all()
logger.info("detected KeyboardInterrupt, exiting")
socket_send("detected KeyboardInterrupt, exiting")
return 0
except ConfigurationError as e:
str_e = "".join(traceback.format_exception(e))
logger.error("encountered a configuration error: " + str(e))
logger.error(str_e)
socket_send("encountered a configuration error: " + str(e))
socket_send(str_e)
close_all()
return 1
except Exception as e:
str_e = "".join(traceback.format_exception(e))
logger.error("encountered an exception while running")
logger.error(str_e)
socket_send("encountered an exception while running")
socket_send(str_e)
close_all()
return 1
def start(pid_path, verbose, profile, debug):
if profile:
def send(msg):
pass
logger = Logger(verbose, profile, debug)
import cProfile, pstats, io
profile = cProfile.Profile()
profile.enable()
exit_code = main(logger, verbose, profile, debug, socket_send = send)
profile.disable()
f = open("profile.txt", 'w+')
ps = pstats.Stats(profile, stream = f).sort_stats('cumtime')
ps.print_stats()
sys.exit(exit_code)
elif verbose:
def send(msg):
pass
logger = Logger(verbose, profile, debug)
exit_code = main(logger, verbose, profile, debug, socket_send = send)
sys.exit(exit_code)
elif debug:
def send(msg):
pass
logger = Logger(verbose, profile, debug)
exit_code = main(logger, verbose, profile, debug, socket_send = send)
sys.exit(exit_code)
else:
logfile = "logfile.log"
f = open(logfile, 'w+')
f.close()
e = open('errorlog.log', 'w+')
with daemon.DaemonContext(
working_directory = os.getcwd(),
pidfile = pidfile.TimeoutPIDLockFile(pid_path),
stderr = e
):
context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:5678")
socket.send(b'status')
def send(msg):
socket.send(bytes("status: " + msg, "utf-8"))
logger = Logger(verbose, profile, debug, file = logfile)
exit_code = main(logger, verbose, profile, debug, socket_send = send)
socket.close()
f.close()
sys.exit(exit_code)
def stop(pid_path):
try:
pf = open(pid_path, 'r')
pid = int(pf.read().strip())
pf.close()
except IOError:
sys.stderr.write("pidfile at <" + pid_path + "> does not exist. Daemon not running?\n")
return
try:
while True:
os.kill(pid, SIGTERM)
time.sleep(0.01)
except OSError as err:
err = str(err)
if err.find("No such process") > 0:
if os.path.exists(pid_path):
os.remove(pid_path)
else:
traceback.print_exc(file = sys.stderr)
sys.exit(1)
def restart(pid_path):
stop(pid_path)
start(pid_path, False, False, False)
if __name__ == "__main__":
if sys.platform.startswith("win"):
start(None, verbose = True)
else:
import daemon
from daemon import pidfile
from signal import SIGTERM
pid_path = "tra-daemon.pid"
if len(sys.argv) == 2:
if 'start' == sys.argv[1]:
start(pid_path, False, False, False)
elif 'stop' == sys.argv[1]:
stop(pid_path)
elif 'restart' == sys.argv[1]:
restart(pid_path)
elif 'verbose' == sys.argv[1]:
start(None, True, False, False)
elif 'profile' == sys.argv[1]:
start(None, False, True, False)
elif 'debug' == sys.argv[1]:
start(None, False, False, True)
else:
print("usage: %s start|stop|restart|verbose|profile|debug" % sys.argv[0])
sys.exit(2)
sys.exit(0)
else:
print("usage: %s start|stop|restart|verbose|profile|debug" % sys.argv[0])
sys.exit(2)
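
A minimal monitor sketch (not part of this diff) for the status feed the daemon above publishes: start() binds a ZMQ PUB socket on tcp://*:5678 and sends messages prefixed with "status". Assumes pyzmq is installed and the daemon is running on the same host.

# Hedged sketch: subscribe to the daemon's status publisher shown above.
import zmq

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://localhost:5678")       # daemon binds tcp://*:5678
socket.setsockopt(zmq.SUBSCRIBE, b"status")  # messages are sent as "status: ..."

while True:
	print(socket.recv().decode("utf-8"))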

19
src/requirements.txt Normal file
View File

@@ -0,0 +1,19 @@
requests
pymongo
pandas
tra-analysis
dnspython
pyinstaller
requests
pymongo
numpy
scipy
scikit-learn
six
pyparsing
pandas
kivy==2.0.0rc2
setuptools>=78.1.1 # not directly required, pinned by Snyk to avoid a vulnerability

View File

@@ -3,43 +3,10 @@
# Notes: # Notes:
# setup: # setup:
__version__ = "1.0.0" __version__ = "0.8.6"
# changelog should be viewed using print(analysis.__changelog__) # changelog should be viewed using print(analysis.__changelog__)
__changelog__ = """changelog: __changelog__ = """changelog:
1.0.0:
- superscript now runs in PEP 3143 compliant well behaved daemon on Linux systems
- linux superscript daemon has integrated websocket output to monitor progress/status remotely
- linux daemon now sends stderr to errorlog.log
- added verbose option to linux superscript to allow for interactive output
- moved pymongo import to superscript.py
- added profile option to linux superscript to profile runtime of script
- reduced memory usage slightly by consolidating the unwrapped input data
- added debug option, which performs one loop of analysis and dumps results to local files
- added event and time delay options to config
- event delay pauses loop until even listener recieves an update
- time delay pauses loop until the time specified has elapsed since the BEGINNING of previous loop
- added options to pull config information from database (reatins option to use local config file)
- config-preference option selects between prioritizing local config and prioritizing database config
- synchronize-config option selects whether to update the non prioritized config with the prioritized one
- divided config options between persistent ones (keys), and variable ones (everything else)
- generalized behavior of various core components by collecting loose functions in several dependencies into classes
- module.py contains classes, each one represents a single data analysis routine
- config.py contains the Configuration class, which stores the configuration information and abstracts the getter methods
0.9.3:
- improved data loading performance by removing redundant PyMongo client creation (120s to 14s)
- passed singular instance of PyMongo client as standin for apikey parameter in all data.py functions
0.9.2:
- removed unessasary imports from data
- minor changes to interface
0.9.1:
- fixed bugs in configuration item loading exception handling
0.9.0:
- moved printing and logging related functions to interface.py (changelog will stay in this file)
- changed function return files for load_config and save_config to standard C values (0 for success, 1 for error)
- added local variables for config location
- moved dataset getting and setting functions to dataset.py (changelog will stay in this file)
- moved matchloop, metricloop, pitloop and helper functions (simplestats) to processing.py
0.8.6: 0.8.6:
- added proper main function - added proper main function
0.8.5: 0.8.5:
@@ -147,257 +114,514 @@ __author__ = (
"Jacob Levine <jlevine@imsa.edu>", "Jacob Levine <jlevine@imsa.edu>",
) )
__all__ = [
"load_config",
"save_config",
"get_previous_time",
"load_match",
"matchloop",
"load_metric",
"metricloop",
"load_pit",
"pitloop",
"push_match",
"push_metric",
"push_pit",
]
# imports: # imports:
import os, sys, time from tra_analysis import analysis as an
import pymongo # soon to be deprecated import data as d
import traceback from collections import defaultdict
import json
import math
import numpy as np
import os
from os import system, name
from pathlib import Path
from multiprocessing import Pool
import platform
import sys
import time
import warnings import warnings
from config import Configuration, ConfigurationError
from data import get_previous_time, set_current_time, check_new_database_matches
from interface import Logger
from module import Match, Metric, Pit
import zmq
config_path = "config.json" global exec_threads
def main(logger, verbose, profile, debug, socket_send = None): def main():
def close_all(): global exec_threads
if "client" in locals():
client.close() sys.stderr = open("errorlog.txt", "w")
warnings.filterwarnings("ignore") warnings.filterwarnings("ignore")
logger.splash(__version__) splash()
modules = {"match": Match, "metric": Metric, "pit": Pit} while (True):
while True:
try: try:
loop_start = time.time() current_time = time.time()
print("[OK] time: " + str(current_time))
logger.info("current time: " + str(loop_start)) config = load_config("config.json")
socket_send("current time: " + str(loop_start)) competition = config["competition"]
match_tests = config["statistics"]["match"]
pit_tests = config["statistics"]["pit"]
metrics_tests = config["statistics"]["metric"]
print("[OK] configs loaded")
config = Configuration(config_path) print("[OK] starting threads")
cfg_max_threads = config["max-threads"]
logger.info("found and loaded config at <" + config_path + ">") sys_max_threads = os.cpu_count()
socket_send("found and loaded config at <" + config_path + ">") if cfg_max_threads > -sys_max_threads and cfg_max_threads < 0 :
alloc_processes = sys_max_threads + cfg_max_threads
apikey, tbakey = config.database, config.tba elif cfg_max_threads > 0 and cfg_max_threads < 1:
alloc_processes = math.floor(cfg_max_threads * sys_max_threads)
logger.info("found and loaded database and tba keys") elif cfg_max_threads > 1 and cfg_max_threads <= sys_max_threads:
socket_send("found and loaded database and tba keys") alloc_processes = cfg_max_threads
elif cfg_max_threads == 0:
client = pymongo.MongoClient(apikey) alloc_processes = sys_max_threads
logger.info("established connection to database")
socket_send("established connection to database")
previous_time = get_previous_time(client)
logger.info("analysis backtimed to: " + str(previous_time))
socket_send("analysis backtimed to: " + str(previous_time))
config.resolve_config_conflicts(logger, client)
config_modules, competition = config.modules, config.competition
for m in config_modules:
if m in modules:
start = time.time()
current_module = modules[m](config_modules[m], client, tbakey, previous_time, competition)
valid = current_module.validate_config()
if not valid:
continue
current_module.run()
logger.info(m + " module finished in " + str(time.time() - start) + " seconds")
socket_send(m + " module finished in " + str(time.time() - start) + " seconds")
if debug:
logger.save_module_to_file(m, current_module.data, current_module.results) # logging flag check done in logger
set_current_time(client, loop_start)
close_all()
logger.info("closed threads and database client")
logger.info("finished all tasks in " + str(time.time() - loop_start) + " seconds, looping")
socket_send("closed threads and database client")
socket_send("finished all tasks in " + str(time.time() - loop_start) + " seconds, looping")
if profile:
return 0
if debug:
return 0
event_delay = config["variable"]["event-delay"]
if event_delay:
logger.info("loop delayed until database returns new matches")
socket_send("loop delayed until database returns new matches")
new_match = False
while not new_match:
time.sleep(1)
new_match = check_new_database_matches(client, competition)
logger.info("database returned new matches")
socket_send("database returned new matches")
else: else:
loop_delay = float(config["variable"]["loop-delay"]) print("[ERROR] Invalid number of processes, must be between -" + str(sys_max_threads) + " and " + str(sys_max_threads))
remaining_time = loop_delay - (time.time() - loop_start) exit()
if remaining_time > 0: exec_threads = Pool(processes = alloc_processes)
logger.info("loop delayed by " + str(remaining_time) + " seconds") print("[OK] " + str(alloc_processes) + " threads started")
socket_send("loop delayed by " + str(remaining_time) + " seconds")
time.sleep(remaining_time) apikey = config["key"]["database"]
tbakey = config["key"]["tba"]
print("[OK] loaded keys")
previous_time = get_previous_time(apikey)
print("[OK] analysis backtimed to: " + str(previous_time))
print("[OK] loading data")
start = time.time()
match_data = load_match(apikey, competition)
pit_data = load_pit(apikey, competition)
print("[OK] loaded data in " + str(time.time() - start) + " seconds")
print("[OK] running match stats")
start = time.time()
matchloop(apikey, competition, match_data, match_tests)
print("[OK] finished match stats in " + str(time.time() - start) + " seconds")
print("[OK] running team metrics")
start = time.time()
metricloop(tbakey, apikey, competition, previous_time, metrics_tests)
print("[OK] finished team metrics in " + str(time.time() - start) + " seconds")
print("[OK] running pit analysis")
start = time.time()
pitloop(apikey, competition, pit_data, pit_tests)
print("[OK] finished pit analysis in " + str(time.time() - start) + " seconds")
set_current_time(apikey, current_time)
print("[OK] finished all tests, looping")
print_hrule()
except KeyboardInterrupt: except KeyboardInterrupt:
close_all() print("\n[OK] caught KeyboardInterrupt, killing processes")
logger.info("detected KeyboardInterrupt, exiting") exec_threads.terminate()
socket_send("detected KeyboardInterrupt, exiting") print("[OK] processes killed, exiting")
return 0 exit()
except ConfigurationError as e: else:
str_e = "".join(traceback.format_exception(e))
logger.error("encountered a configuration error: " + str(e))
logger.error(str_e)
socket_send("encountered a configuration error: " + str(e))
socket_send(str_e)
close_all()
return 1
except Exception as e:
str_e = "".join(traceback.format_exception(e))
logger.error("encountered an exception while running")
logger.error(str_e)
socket_send("encountered an exception while running")
socket_send(str_e)
close_all()
return 1
def start(pid_path, verbose, profile, debug):
if profile:
def send(msg):
pass pass
logger = Logger(verbose, profile, debug) #clear()
import cProfile, pstats, io def clear():
profile = cProfile.Profile()
profile.enable()
exit_code = main(logger, verbose, profile, debug, socket_send = send)
profile.disable()
f = open("profile.txt", 'w+')
ps = pstats.Stats(profile, stream = f).sort_stats('cumtime')
ps.print_stats()
sys.exit(exit_code)
elif verbose: # for windows
if name == 'nt':
_ = system('cls')
def send(msg): # for mac and linux(here, os.name is 'posix')
pass else:
_ = system('clear')
logger = Logger(verbose, profile, debug) def print_hrule():
exit_code = main(logger, verbose, profile, debug, socket_send = send) print("#"+38*"-"+"#")
sys.exit(exit_code)
elif debug: def print_box(s):
def send(msg): temp = "|"
pass temp += s
temp += (40-len(s)-2)*" "
temp += "|"
print(temp)
logger = Logger(verbose, profile, debug) def splash():
exit_code = main(logger, verbose, profile, debug, socket_send = send) print_hrule()
sys.exit(exit_code) print_box(" superscript version: " + __version__)
print_box(" os: " + platform.system())
print_box(" python: " + platform.python_version())
print_hrule()
def load_config(file):
config_vector = {}
try:
f = open(file)
except:
print("[ERROR] could not locate config.json, generating blank config.json and exiting")
f = open(file, "w")
f.write(sample_json)
exit()
config_vector = json.load(f)
return config_vector
def save_config(file, config_vector):
with open(file) as f:
json.dump(config_vector, f)
def get_previous_time(apikey):
previous_time = d.get_analysis_flags(apikey, "latest_update")
if previous_time == None:
d.set_analysis_flags(apikey, "latest_update", 0)
previous_time = 0
else: else:
logfile = "logfile.log" previous_time = previous_time["latest_update"]
f = open(logfile, 'w+') return previous_time
f.close()
e = open('errorlog.log', 'w+') def set_current_time(apikey, current_time):
with daemon.DaemonContext(
working_directory = os.getcwd(),
pidfile = pidfile.TimeoutPIDLockFile(pid_path),
stderr = e
):
context = zmq.Context() d.set_analysis_flags(apikey, "latest_update", {"latest_update":current_time})
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:5678")
socket.send(b'status')
def send(msg): def load_match(apikey, competition):
socket.send(bytes("status: " + msg, "utf-8"))
logger = Logger(verbose, profile, debug, file = logfile) return d.get_match_data_formatted(apikey, competition)
exit_code = main(logger, verbose, profile, debug, socket_send = send) def simplestats(data_test):
socket.close() data = np.array(data_test[0])
f.close() data = data[np.isfinite(data)]
ranges = list(range(len(data)))
sys.exit(exit_code) test = data_test[1]
def stop(pid_path): if test == "basic_stats":
return an.basic_stats(data)
if test == "historical_analysis":
return an.histo_analysis([ranges, data])
if test == "regression_linear":
return an.regression(ranges, data, ['lin'])
if test == "regression_logarithmic":
return an.regression(ranges, data, ['log'])
if test == "regression_exponential":
return an.regression(ranges, data, ['exp'])
if test == "regression_polynomial":
return an.regression(ranges, data, ['ply'])
if test == "regression_sigmoidal":
return an.regression(ranges, data, ['sig'])
def matchloop(apikey, competition, data, tests): # expects 3D array with [Team][Variable][Match]
global exec_threads
short_mapping = {"regression_linear": "lin", "regression_logarithmic": "log", "regression_exponential": "exp", "regression_polynomial": "ply", "regression_sigmoidal": "sig"}
class AutoVivification(dict):
def __getitem__(self, item):
try: try:
pf = open(pid_path, 'r') return dict.__getitem__(self, item)
pid = int(pf.read().strip()) except KeyError:
pf.close() value = self[item] = type(self)()
except IOError: return value
sys.stderr.write("pidfile at <" + pid_path + "> does not exist. Daemon not running?\n")
return return_vector = {}
team_filtered = []
variable_filtered = []
variable_data = []
test_filtered = []
result_filtered = []
return_vector = AutoVivification()
for team in data:
for variable in data[team]:
if variable in tests:
for test in tests[variable]:
team_filtered.append(team)
variable_filtered.append(variable)
variable_data.append((data[team][variable], test))
test_filtered.append(test)
result_filtered = exec_threads.map(simplestats, variable_data)
i = 0
result_filtered = list(result_filtered)
for result in result_filtered:
filtered = test_filtered[i]
try: try:
while True: short = short_mapping[filtered]
os.kill(pid, SIGTERM) return_vector[team_filtered[i]][variable_filtered[i]][test_filtered[i]] = result[short]
time.sleep(0.01) except KeyError: # not in mapping
except OSError as err: return_vector[team_filtered[i]][variable_filtered[i]][test_filtered[i]] = result
err = str(err) i += 1
if err.find("No such process") > 0:
if os.path.exists(pid_path): push_match(apikey, competition, return_vector)
os.remove(pid_path)
def load_metric(apikey, competition, match, group_name, metrics):
group = {}
for team in match[group_name]:
db_data = d.get_team_metrics_data(apikey, competition, team)
if d.get_team_metrics_data(apikey, competition, team) == None:
elo = {"score": metrics["elo"]["score"]}
gl2 = {"score": metrics["gl2"]["score"], "rd": metrics["gl2"]["rd"], "vol": metrics["gl2"]["vol"]}
ts = {"mu": metrics["ts"]["mu"], "sigma": metrics["ts"]["sigma"]}
group[team] = {"elo": elo, "gl2": gl2, "ts": ts}
else: else:
traceback.print_exc(file = sys.stderr)
sys.exit(1)
def restart(pid_path): metrics = db_data["metrics"]
stop(pid_path)
start(pid_path, False, False, False) elo = metrics["elo"]
gl2 = metrics["gl2"]
ts = metrics["ts"]
group[team] = {"elo": elo, "gl2": gl2, "ts": ts}
return group
def metricloop(tbakey, apikey, competition, timestamp, metrics): # listener based metrics update
elo_N = metrics["elo"]["N"]
elo_K = metrics["elo"]["K"]
matches = d.pull_new_tba_matches(tbakey, competition, timestamp)
red = {}
blu = {}
for match in matches:
red = load_metric(apikey, competition, match, "red", metrics)
blu = load_metric(apikey, competition, match, "blue", metrics)
elo_red_total = 0
elo_blu_total = 0
gl2_red_score_total = 0
gl2_blu_score_total = 0
gl2_red_rd_total = 0
gl2_blu_rd_total = 0
gl2_red_vol_total = 0
gl2_blu_vol_total = 0
for team in red:
elo_red_total += red[team]["elo"]["score"]
gl2_red_score_total += red[team]["gl2"]["score"]
gl2_red_rd_total += red[team]["gl2"]["rd"]
gl2_red_vol_total += red[team]["gl2"]["vol"]
for team in blu:
elo_blu_total += blu[team]["elo"]["score"]
gl2_blu_score_total += blu[team]["gl2"]["score"]
gl2_blu_rd_total += blu[team]["gl2"]["rd"]
gl2_blu_vol_total += blu[team]["gl2"]["vol"]
red_elo = {"score": elo_red_total / len(red)}
blu_elo = {"score": elo_blu_total / len(blu)}
red_gl2 = {"score": gl2_red_score_total / len(red), "rd": gl2_red_rd_total / len(red), "vol": gl2_red_vol_total / len(red)}
blu_gl2 = {"score": gl2_blu_score_total / len(blu), "rd": gl2_blu_rd_total / len(blu), "vol": gl2_blu_vol_total / len(blu)}
if match["winner"] == "red":
observations = {"red": 1, "blu": 0}
elif match["winner"] == "blue":
observations = {"red": 0, "blu": 1}
else:
observations = {"red": 0.5, "blu": 0.5}
red_elo_delta = an.Metric().elo(red_elo["score"], blu_elo["score"], observations["red"], elo_N, elo_K) - red_elo["score"]
blu_elo_delta = an.Metric().elo(blu_elo["score"], red_elo["score"], observations["blu"], elo_N, elo_K) - blu_elo["score"]
new_red_gl2_score, new_red_gl2_rd, new_red_gl2_vol = an.Metric().glicko2(red_gl2["score"], red_gl2["rd"], red_gl2["vol"], [blu_gl2["score"]], [blu_gl2["rd"]], [observations["red"], observations["blu"]])
new_blu_gl2_score, new_blu_gl2_rd, new_blu_gl2_vol = an.Metric().glicko2(blu_gl2["score"], blu_gl2["rd"], blu_gl2["vol"], [red_gl2["score"]], [red_gl2["rd"]], [observations["blu"], observations["red"]])
red_gl2_delta = {"score": new_red_gl2_score - red_gl2["score"], "rd": new_red_gl2_rd - red_gl2["rd"], "vol": new_red_gl2_vol - red_gl2["vol"]}
blu_gl2_delta = {"score": new_blu_gl2_score - blu_gl2["score"], "rd": new_blu_gl2_rd - blu_gl2["rd"], "vol": new_blu_gl2_vol - blu_gl2["vol"]}
for team in red:
red[team]["elo"]["score"] = red[team]["elo"]["score"] + red_elo_delta
red[team]["gl2"]["score"] = red[team]["gl2"]["score"] + red_gl2_delta["score"]
red[team]["gl2"]["rd"] = red[team]["gl2"]["rd"] + red_gl2_delta["rd"]
red[team]["gl2"]["vol"] = red[team]["gl2"]["vol"] + red_gl2_delta["vol"]
for team in blu:
blu[team]["elo"]["score"] = blu[team]["elo"]["score"] + blu_elo_delta
blu[team]["gl2"]["score"] = blu[team]["gl2"]["score"] + blu_gl2_delta["score"]
blu[team]["gl2"]["rd"] = blu[team]["gl2"]["rd"] + blu_gl2_delta["rd"]
blu[team]["gl2"]["vol"] = blu[team]["gl2"]["vol"] + blu_gl2_delta["vol"]
temp_vector = {}
temp_vector.update(red)
temp_vector.update(blu)
push_metric(apikey, competition, temp_vector)
def load_pit(apikey, competition):
return d.get_pit_data_formatted(apikey, competition)
def pitloop(apikey, competition, pit, tests):
return_vector = {}
for team in pit:
for variable in pit[team]:
if variable in tests:
if not variable in return_vector:
return_vector[variable] = []
return_vector[variable].append(pit[team][variable])
push_pit(apikey, competition, return_vector)
def push_match(apikey, competition, results):
for team in results:
d.push_team_tests_data(apikey, competition, team, results[team])
def push_metric(apikey, competition, metric):
for team in metric:
d.push_team_metrics_data(apikey, competition, team, metric[team])
def push_pit(apikey, competition, pit):
for variable in pit:
d.push_team_pit_data(apikey, competition, variable, pit[variable])
def get_team_metrics(apikey, tbakey, competition):
metrics = d.get_metrics_data_formatted(apikey, competition)
elo = {}
gl2 = {}
for team in metrics:
elo[team] = metrics[team]["metrics"]["elo"]["score"]
gl2[team] = metrics[team]["metrics"]["gl2"]["score"]
elo = {k: v for k, v in sorted(elo.items(), key=lambda item: item[1])}
gl2 = {k: v for k, v in sorted(gl2.items(), key=lambda item: item[1])}
elo_ranked = []
for team in elo:
elo_ranked.append({"team": str(team), "elo": str(elo[team])})
gl2_ranked = []
for team in gl2:
gl2_ranked.append({"team": str(team), "gl2": str(gl2[team])})
return {"elo-ranks": elo_ranked, "glicko2-ranks": gl2_ranked}
sample_json = """{
"max-threads": 0.5,
"team": "",
"competition": "2020ilch",
"key":{
"database":"",
"tba":""
},
"statistics":{
"match":{
"balls-blocked":["basic_stats","historical_analysis","regression_linear","regression_logarithmic","regression_exponential","regression_polynomial","regression_sigmoidal"],
"balls-collected":["basic_stats","historical_analysis","regression_linear","regression_logarithmic","regression_exponential","regression_polynomial","regression_sigmoidal"],
"balls-lower-teleop":["basic_stats","historical_analysis","regression_linear","regression_logarithmic","regression_exponential","regression_polynomial","regression_sigmoidal"],
"balls-lower-auto":["basic_stats","historical_analysis","regression_linear","regression_logarithmic","regression_exponential","regression_polynomial","regression_sigmoidal"],
"balls-started":["basic_stats","historical_analyss","regression_linear","regression_logarithmic","regression_exponential","regression_polynomial","regression_sigmoidal"],
"balls-upper-teleop":["basic_stats","historical_analysis","regression_linear","regression_logarithmic","regression_exponential","regression_polynomial","regression_sigmoidal"],
"balls-upper-auto":["basic_stats","historical_analysis","regression_linear","regression_logarithmic","regression_exponential","regression_polynomial","regression_sigmoidal"]
},
"metric":{
"elo":{
"score":1500,
"N":400,
"K":24
},
"gl2":{
"score":1500,
"rd":250,
"vol":0.06
},
"ts":{
"mu":25,
"sigma":8.33
}
},
"pit":{
"wheel-mechanism":true,
"low-balls":true,
"high-balls":true,
"wheel-success":true,
"strategic-focus":true,
"climb-mechanism":true,
"attitude":true
}
}
}"""
if __name__ == "__main__": if __name__ == "__main__":
if sys.platform.startswith('win'):
if sys.platform.startswith("win"): multiprocessing.freeze_support()
start(None, verbose = True) main()
else:
import daemon
from daemon import pidfile
from signal import SIGTERM
pid_path = "tra-daemon.pid"
if len(sys.argv) == 2:
if 'start' == sys.argv[1]:
start(pid_path, False, False, False)
elif 'stop' == sys.argv[1]:
stop(pid_path)
elif 'restart' == sys.argv[1]:
restart(pid_path)
elif 'verbose' == sys.argv[1]:
start(None, True, False, False)
elif 'profile' == sys.argv[1]:
start(None, False, True, False)
elif 'debug' == sys.argv[1]:
start(None, False, False, True)
else:
print("usage: %s start|stop|restart|verbose|profile|debug" % sys.argv[0])
sys.exit(2)
sys.exit(0)
else:
print("usage: %s start|stop|restart|verbose|profile|debug" % sys.argv[0])
sys.exit(2)

37
src/superscript.spec Normal file
View File

@@ -0,0 +1,37 @@
# -*- mode: python ; coding: utf-8 -*-
block_cipher = None
a = Analysis(['superscript.py'],
pathex=['/workspaces/tra-data-analysis/src'],
binaries=[],
datas=[],
hiddenimports=[
"dnspython",
"sklearn.utils._weight_vector",
"requests",
],
hookspath=[],
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False)
pyz = PYZ(a.pure, a.zipped_data,
cipher=block_cipher)
exe = EXE(pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
[('W ignore', None, 'OPTION')],
name='superscript',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
upx_exclude=[],
runtime_tmpdir=None,
console=True )

12
submit-debug.sh Normal file
View File

@@ -0,0 +1,12 @@
#!/bin/bash
#
#SBATCH --job-name=tra-superscript
#SBATCH --output=slurm-tra-superscript.out
#SBATCH --ntasks=8
#SBATCH --time=24:00:00
#SBATCH --mem-per-cpu=256
#SBATCH --mail-user=dsingh@imsa.edu
#SBATCH -p cpu-long
cd competition
python superscript.py debug

12
submit-prod.sh Normal file
View File

@@ -0,0 +1,12 @@
#!/bin/bash
#
#SBATCH --job-name=tra-superscript
#SBATCH --output=PROD_slurm-tra-superscript.out
#SBATCH --ntasks=8
#SBATCH --time=24:00:00
#SBATCH --mem-per-cpu=256
#SBATCH --mail-user=dsingh@imsa.edu
#SBATCH -p cpu-long
cd competition
python superscript.py verbose

View File

@@ -1,14 +0,0 @@
import signal
import zmq
signal.signal(signal.SIGINT, signal.SIG_DFL)
context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect('tcp://localhost:5678')
socket.setsockopt(zmq.SUBSCRIBE, b'status')
while True:
message = socket.recv_multipart()
print(f'Received: {message}')