21 Commits

Author SHA1 Message Date
Dev Singh
8e6c44db65 updates for 2023ilpe 2023-03-17 10:09:48 -05:00
Dev
11398290eb fix 2022-04-08 15:40:33 -05:00
Dev
d847f6d6a7 hi 2022-04-08 15:20:41 -05:00
Dev Singh
b5c8a91fad update data.py for new metrics structure 2022-03-30 01:30:55 -05:00
Dev Singh
8e5fa7eace reintroduce cutoff usage 2022-03-29 18:43:28 -05:00
Arthur Lu
69c6059ff8 added result logging for metric module 2022-03-29 21:17:58 +00:00
Dev Singh
fdcdadb8b2 only use gl2 2022-03-28 23:17:25 -05:00
Arthur Lu
cdd81295fc commented metrics in module.py 2022-03-28 22:02:39 +00:00
Dev Singh
82ec2d85cc clear metrics only on script start 2022-03-28 15:47:46 -05:00
Dev Singh
ac8002aaf8 delete on start 2022-03-28 14:43:24 -05:00
Dev Singh
25e4babd71 Revert "experimental trueskill support"
This reverts commit 3fe2922e97.
2022-03-28 14:21:23 -05:00
Dev Singh
3fe2922e97 experimental trueskill support 2022-03-28 10:13:42 -05:00
Dev
9752fd323b remove time check 2022-03-25 13:39:24 -05:00
Dev
ef63c1de7e add ability to load manual JSON for metrics 2022-03-24 16:19:47 -05:00
Dev
8908f05cbe fix prod issues 2022-03-16 18:51:10 -05:00
Dev
143218dda3 split sbatch commands 2022-03-16 18:39:30 -05:00
Dev
def2fc9b73 change gitignore, add up submit
Former-commit-id: 927f0a1a4c3dd0aff6fb4fca5f99ea62bc61584f
2022-03-14 20:55:00 -05:00
Dev
e8a5bb75f8 add sbatch script
Former-commit-id: f521f7b3f69df71171cd046a40bcbcb6637967a6
2022-03-13 21:56:34 -05:00
Arthur Lu
c9dd09f5e9 fixed file name
Former-commit-id: 7d113b378854316c3af5e5c58f589c5c062040b4
2022-03-13 18:54:50 -07:00
Arthur Lu
3c6e3ac58e added pandas to requirements,
readded dev docker files


Former-commit-id: 3fa4ef57e5e9542ce245ab0ef9b8320e21c9507c
2022-03-14 01:53:50 +00:00
Arthur Lu
8c28c24d60 removed all unessasary files,
moved important files to folder "competition"


Former-commit-id: 59becb22abc3305a36e2876351e6c7306e3f551e
2022-03-14 01:33:24 +00:00
23 changed files with 114 additions and 277 deletions

View File

@@ -1,7 +1,7 @@
 {
 	"name": "TRA Analysis Development Environment",
 	"build": {
-		"dockerfile": "Dockerfile",
+		"dockerfile": "Dockerfile"
 	},
 	"settings": {
 		"terminal.integrated.shell.linux": "/bin/bash",
@@ -19,4 +19,4 @@
 		"waderyan.gitblame"
 	],
 	"postCreateCommand": ""
 }
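
The only change in this devcontainer-style config is dropping a trailing comma. Strict JSON parsers reject trailing commas (VS Code reads devcontainer.json as lenient JSONC, but other tooling may not); a quick Python check of the difference:

	import json

	json.loads('{"dockerfile": "Dockerfile"}')      # parses fine
	try:
		json.loads('{"dockerfile": "Dockerfile",}')  # trailing comma
	except json.JSONDecodeError as e:
		print("rejected:", e)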

View File

@@ -1,6 +1,7 @@
 cerberus
 dnspython
 numpy
+pandas
 pyinstaller
 pylint
 pymongo

View File

@@ -1,38 +0,0 @@
----
-name: Bug report
-about: Create a report to help us improve
-title: ''
-labels: ''
-assignees: ''
-
----
-
-**Describe the bug**
-A clear and concise description of what the bug is.
-
-**To Reproduce**
-Steps to reproduce the behavior:
-1. Go to '...'
-2. Click on '....'
-3. Scroll down to '....'
-4. See error
-
-**Expected behavior**
-A clear and concise description of what you expected to happen.
-
-**Screenshots**
-If applicable, add screenshots to help explain your problem.
-
-**Desktop (please complete the following information):**
- - OS: [e.g. iOS]
- - Browser [e.g. chrome, safari]
- - Version [e.g. 22]
-
-**Smartphone (please complete the following information):**
- - Device: [e.g. iPhone6]
- - OS: [e.g. iOS8.1]
- - Browser [e.g. stock browser, safari]
- - Version [e.g. 22]
-
-**Additional context**
-Add any other context about the problem here.

View File

@@ -1,20 +0,0 @@
----
-name: Feature request
-about: Suggest an idea for this project
-title: ''
-labels: ''
-assignees: ''
-
----
-
-**Is your feature request related to a problem? Please describe.**
-A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
-
-**Describe the solution you'd like**
-A clear and concise description of what you want to happen.
-
-**Describe alternatives you've considered**
-A clear and concise description of any alternative solutions or features you've considered.
-
-**Additional context**
-Add any other context or screenshots about the feature request here.

View File

@@ -1,35 +0,0 @@
-# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
-# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
-
-name: Build Superscript Linux
-
-on:
-  release:
-    types: [published, edited]
-
-jobs:
-  generate:
-    name: Build Linux
-    runs-on: ubuntu-latest
-    steps:
-      - name: Checkout master
-        uses: actions/checkout@master
-      - name: Install Dependencies
-        run: pip install -r requirements.txt
-        working-directory: src/
-      - name: Give Execute Permission
-        run: chmod +x build-CLI.sh
-        working-directory: build/
-      - name: Build Binary
-        run: ./build-CLI.sh
-        working-directory: build/
-      - name: Copy Binary to Root Dir
-        run: cp superscript ..
-        working-directory: dist/
-      - name: Upload Release Asset
-        uses: svenstaro/upload-release-action@v2
-        with:
-          repo_token: ${{ secrets.GITHUB_TOKEN }}
-          file: superscript
-          asset_name: superscript
-          tag: ${{ github.ref }}

View File

@@ -1,34 +0,0 @@
-# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
-# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
-
-name: Superscript Unit Tests
-
-on:
-  push:
-    branches: [ master ]
-  pull_request:
-    branches: [ master ]
-
-jobs:
-  build:
-
-    runs-on: ubuntu-latest
-    strategy:
-      matrix:
-        python-version: [3.7, 3.8]
-
-    steps:
-    - uses: actions/checkout@v2
-    - name: Set up Python ${{ matrix.python-version }}
-      uses: actions/setup-python@v2
-      with:
-        python-version: ${{ matrix.python-version }}
-    - name: Install dependencies
-      run: |
-        python -m pip install --upgrade pip
-        pip install pytest
-        if [ -f src/requirements.txt ]; then pip install -r src/requirements.txt; fi
-    - name: Test with pytest
-      run: |
-        pytest test/

3
.gitignore vendored
View File

@@ -16,3 +16,6 @@
 **/*.log
 **/errorlog.txt
 /dist/*
+slurm-tra-superscript.out
+config*.json

View File

@@ -1,5 +0,0 @@
-set pathtospec="../src/superscript.spec"
-set pathtodist="../dist/"
-set pathtowork="temp/"
-
-pyinstaller --clean --distpath %pathtodist% --workpath %pathtowork% %pathtospec%

View File

@@ -1,5 +0,0 @@
-pathtospec="superscript.spec"
-pathtodist="../dist/"
-pathtowork="temp/"
-
-pyinstaller --clean --distpath ${pathtodist} --workpath ${pathtowork} ${pathtospec}

View File

@@ -1,50 +0,0 @@
-# -*- mode: python ; coding: utf-8 -*-
-
-block_cipher = None
-
-a = Analysis(
-	['../src/superscript.py'],
-	pathex=[],
-	binaries=[],
-	datas=[],
-	hiddenimports=['dnspython', 'sklearn.utils._weight_vector', 'sklearn.utils._typedefs', 'sklearn.neighbors._partition_nodes', 'requests'],
-	hookspath=[],
-	hooksconfig={},
-	runtime_hooks=[],
-	excludes=['matplotlib'],
-	win_no_prefer_redirects=False,
-	win_private_assemblies=False,
-	cipher=block_cipher,
-	noarchive=False
-)
-
-pyz = PYZ(
-	a.pure,
-	a.zipped_data,
-	cipher=block_cipher
-)
-
-exe = EXE(
-	pyz,
-	a.scripts,
-	[],
-	exclude_binaries=True,
-	name='superscript',
-	debug=False,
-	bootloader_ignore_signals=False,
-	strip=False,
-	upx=True,
-	console=True,
-	disable_windowed_traceback=False,
-	target_arch=None,
-	codesign_identity=None,
-	entitlements_file=None
-)
-
-coll = COLLECT(
-	exe,
-	a.binaries,
-	a.zipfiles,
-	a.datas,
-	strip=False,
-	upx=True,
-	upx_exclude=[],
-	name='superscript'
-)

View File

@@ -1,16 +1,26 @@
+from calendar import c
 import requests
 import pull
 import pandas as pd
+import json
 
-def pull_new_tba_matches(apikey, competition, cutoff):
+def pull_new_tba_matches(apikey, competition, last_match):
 	api_key= apikey
 	x=requests.get("https://www.thebluealliance.com/api/v3/event/"+competition+"/matches/simple", headers={"X-TBA-Auth-Key":api_key})
+	json = x.json()
 	out = []
-	for i in x.json():
-		if i["actual_time"] != None and i["actual_time"]-cutoff >= 0 and i["comp_level"] == "qm":
+	for i in json:
+		if i["actual_time"] != None and i["comp_level"] == "qm" and i["match_number"] > last_match :
 			out.append({"match" : i['match_number'], "blue" : list(map(lambda x: int(x[3:]), i['alliances']['blue']['team_keys'])), "red" : list(map(lambda x: int(x[3:]), i['alliances']['red']['team_keys'])), "winner": i["winning_alliance"]})
+	out.sort(key=lambda x: x['match'])
 	return out
 
+def pull_new_tba_matches_manual(apikey, competition, cutoff):
+	filename = competition+"-wins.json"
+	with open(filename, 'r') as f:
+		data = json.load(f)
+	return data
+
 def get_team_match_data(client, competition, team_num):
 	db = client.data_scouting
 	mdata = db.matchdata
@@ -19,6 +29,12 @@ def get_team_match_data(client, competition, team_num):
 		out[i['match']] = i['data']
 	return pd.DataFrame(out)
 
+def clear_metrics(client, competition):
+	db = client.data_processing
+	data = db.team_metrics
+	data.delete_many({competition: competition})
+	return True
+
 def get_team_pit_data(client, competition, team_num):
 	db = client.data_scouting
 	mdata = db.pitdata
@@ -28,7 +44,15 @@ def get_team_pit_data(client, competition, team_num):
 def get_team_metrics_data(client, competition, team_num):
 	db = client.data_processing
 	mdata = db.team_metrics
-	return mdata.find_one({"competition" : competition, "team": team_num})
+	temp = mdata.find_one({"team": team_num})
+	if temp != None:
+		if competition in temp['metrics'].keys():
+			temp = temp['metrics'][competition]
+		else :
+			temp = None
+	else:
+		temp = None
+	return temp
 
 def get_match_data_formatted(client, competition):
 	teams_at_comp = pull.get_teams_at_competition(competition)
@@ -51,7 +75,7 @@ def get_metrics_data_formatted(client, competition):
 	return out
 
 def get_pit_data_formatted(client, competition):
-	x=requests.get("https://titanscouting.epochml.org/api/fetchAllTeamNicknamesAtCompetition?competition="+competition)
+	x=requests.get("https://scouting.titanrobotics2022.com/api/fetchAllTeamNicknamesAtCompetition?competition="+competition)
 	x = x.json()
 	x = x['data']
 	x = x.keys()
@@ -84,7 +108,7 @@ def push_team_tests_data(client, competition, team_num, data, dbname = "data_pro
 def push_team_metrics_data(client, competition, team_num, data, dbname = "data_processing", colname = "team_metrics"):
 	db = client[dbname]
 	mdata = db[colname]
-	mdata.replace_one({"competition" : competition, "team": team_num}, {"_id": competition+str(team_num)+"am", "competition" : competition, "team" : team_num, "metrics" : data}, True)
+	mdata.update_one({"team": team_num}, {"$set": {"metrics.{}".format(competition): data}}, upsert=True)
 
 def push_team_pit_data(client, competition, variable, data, dbname = "data_processing", colname = "team_pit"):
 	db = client[dbname]
@@ -94,12 +118,12 @@ def push_team_pit_data(client, competition, variable, data, dbname = "data_proce
 def get_analysis_flags(client, flag):
 	db = client.data_processing
 	mdata = db.flags
-	return mdata.find_one({flag:{"$exists":True}})
+	return mdata.find_one({"_id": "2022"})
 
 def set_analysis_flags(client, flag, data):
 	db = client.data_processing
 	mdata = db.flags
-	return mdata.replace_one({flag:{"$exists":True}}, data, True)
+	return mdata.update_one({"_id": "2022"}, {"$set": data})
 
 def unkeyify_2l(layered_dict):
 	out = {}
@@ -153,22 +177,17 @@ def load_metric(client, competition, match, group_name, metrics):
 		db_data = get_team_metrics_data(client, competition, team)
 
 		if db_data == None:
 
-			elo = {"score": metrics["elo"]["score"]}
 			gl2 = {"score": metrics["gl2"]["score"], "rd": metrics["gl2"]["rd"], "vol": metrics["gl2"]["vol"]}
-			ts = {"mu": metrics["ts"]["mu"], "sigma": metrics["ts"]["sigma"]}
 
-			group[team] = {"elo": elo, "gl2": gl2, "ts": ts}
+			group[team] = {"gl2": gl2}
 
 		else:
 
-			metrics = db_data["metrics"]
+			metrics = db_data
 
-			elo = metrics["elo"]
 			gl2 = metrics["gl2"]
-			ts = metrics["ts"]
 
-			group[team] = {"elo": elo, "gl2": gl2, "ts": ts}
+			group[team] = {"gl2": gl2}
 
 	return group
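
The structural change above (commit b5c8a91fad, "update data.py for new metrics structure") replaces one document per (competition, team) pair with one document per team, with competitions nested under a "metrics" key. A minimal sketch of the round trip, assuming a local MongoDB instance and using a hypothetical team number and the 2023ilpe competition code from the commit log:

	from pymongo import MongoClient

	client = MongoClient("mongodb://localhost:27017")  # assumed local instance
	mdata = client.data_processing.team_metrics

	# push_team_metrics_data now upserts into a dotted path keyed by competition
	mdata.update_one(
		{"team": 2022},  # example team number
		{"$set": {"metrics.{}".format("2023ilpe"): {"gl2": {"score": 1500, "rd": 250, "vol": 0.06}}}},
		upsert=True,
	)

	# get_team_metrics_data mirrors this: fetch by team, then index by competition,
	# returning None when either level is missing
	temp = mdata.find_one({"team": 2022})
	gl2 = temp["metrics"].get("2023ilpe") if temp else None
	print(gl2)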

View File

@@ -96,20 +96,11 @@ sample_json = """
 	},
 	"metric":{
 		"tests":{
-			"elo":{
-				"score":1500,
-				"N":400,
-				"K":24
-			},
 			"gl2":{
 				"score":1500,
 				"rd":250,
 				"vol":0.06
-			},
-			"ts":{
-				"mu":25,
-				"sigma":8.33
-			}
+			}
 		}
 	},
 	"pit":{

View File

@@ -3,6 +3,7 @@ import data as d
 import signal
 import numpy as np
 from tra_analysis import Analysis as an
+from tqdm import tqdm
 
 class Module(metaclass = abc.ABCMeta):
@@ -169,25 +170,21 @@ class Metric (Module):
 		self._push_results()
 
 	def _load_data(self):
-		self.data = d.pull_new_tba_matches(self.tbakey, self.competition, self.timestamp)
+		self.last_match = d.get_analysis_flags(self.apikey, 'metrics_last_match')['metrics_last_match']
+		print("Previous last match", self.last_match)
+		self.data = d.pull_new_tba_matches(self.tbakey, self.competition, self.last_match)
 
 	def _process_data(self):
-		elo_N = self.config["tests"]["elo"]["N"]
-		elo_K = self.config["tests"]["elo"]["K"]
+		self.results = {}
+		self.match = self.last_match
 
 		matches = self.data
 
 		red = {}
 		blu = {}
 
-		for match in matches:
-
-			red = d.load_metric(self.apikey, self.competition, match, "red", self.config["tests"])
-			blu = d.load_metric(self.apikey, self.competition, match, "blue", self.config["tests"])
-
-			elo_red_total = 0
-			elo_blu_total = 0
+		for match in tqdm(matches, desc="Metrics"): # grab matches and loop through each one
+			self.match = max(self.match, int(match['match']))
+			red = d.load_metric(self.apikey, self.competition, match, "red", self.config["tests"]) # get the current ratings for red
+			blu = d.load_metric(self.apikey, self.competition, match, "blue", self.config["tests"]) # get the current ratings for blue
 
 			gl2_red_score_total = 0
 			gl2_blu_score_total = 0
@@ -198,72 +195,63 @@ class Metric (Module):
 			gl2_red_vol_total = 0
 			gl2_blu_vol_total = 0
 
-			for team in red:
-				elo_red_total += red[team]["elo"]["score"]
+			for team in red: # for each team in red, add up gl2 score components
 				gl2_red_score_total += red[team]["gl2"]["score"]
 				gl2_red_rd_total += red[team]["gl2"]["rd"]
 				gl2_red_vol_total += red[team]["gl2"]["vol"]
 
-			for team in blu:
-				elo_blu_total += blu[team]["elo"]["score"]
+			for team in blu: # for each team in blue, add up gl2 score components
 				gl2_blu_score_total += blu[team]["gl2"]["score"]
 				gl2_blu_rd_total += blu[team]["gl2"]["rd"]
 				gl2_blu_vol_total += blu[team]["gl2"]["vol"]
 
-			red_elo = {"score": elo_red_total / len(red)}
-			blu_elo = {"score": elo_blu_total / len(blu)}
-
-			red_gl2 = {"score": gl2_red_score_total / len(red), "rd": gl2_red_rd_total / len(red), "vol": gl2_red_vol_total / len(red)}
-			blu_gl2 = {"score": gl2_blu_score_total / len(blu), "rd": gl2_blu_rd_total / len(blu), "vol": gl2_blu_vol_total / len(blu)}
+			red_gl2 = {"score": gl2_red_score_total / len(red), "rd": gl2_red_rd_total / len(red), "vol": gl2_red_vol_total / len(red)} # average the scores by dividing by 3
+			blu_gl2 = {"score": gl2_blu_score_total / len(blu), "rd": gl2_blu_rd_total / len(blu), "vol": gl2_blu_vol_total / len(blu)} # average the scores by dividing by 3
 
-			if match["winner"] == "red":
+			if match["winner"] == "red": # if red won, set observations to {"red": 1, "blu": 0}
 				observations = {"red": 1, "blu": 0}
-			elif match["winner"] == "blue":
+			elif match["winner"] == "blue": # if blue won, set observations to {"red": 0, "blu": 1}
 				observations = {"red": 0, "blu": 1}
-			else:
+			else: # otherwise it was a tie and observations is {"red": 0.5, "blu": 0.5}
 				observations = {"red": 0.5, "blu": 0.5}
 
-			red_elo_delta = an.Metric().elo(red_elo["score"], blu_elo["score"], observations["red"], elo_N, elo_K) - red_elo["score"]
-			blu_elo_delta = an.Metric().elo(blu_elo["score"], red_elo["score"], observations["blu"], elo_N, elo_K) - blu_elo["score"]
-
-			new_red_gl2_score, new_red_gl2_rd, new_red_gl2_vol = an.Metric().glicko2(red_gl2["score"], red_gl2["rd"], red_gl2["vol"], [blu_gl2["score"]], [blu_gl2["rd"]], [observations["red"], observations["blu"]])
-			new_blu_gl2_score, new_blu_gl2_rd, new_blu_gl2_vol = an.Metric().glicko2(blu_gl2["score"], blu_gl2["rd"], blu_gl2["vol"], [red_gl2["score"]], [red_gl2["rd"]], [observations["blu"], observations["red"]])
+			new_red_gl2_score, new_red_gl2_rd, new_red_gl2_vol = an.Metric().glicko2(red_gl2["score"], red_gl2["rd"], red_gl2["vol"], [blu_gl2["score"]], [blu_gl2["rd"]], [observations["red"], observations["blu"]]) # calculate new scores for gl2 for red
+			new_blu_gl2_score, new_blu_gl2_rd, new_blu_gl2_vol = an.Metric().glicko2(blu_gl2["score"], blu_gl2["rd"], blu_gl2["vol"], [red_gl2["score"]], [red_gl2["rd"]], [observations["blu"], observations["red"]]) # calculate new scores for gl2 for blue
 
-			red_gl2_delta = {"score": new_red_gl2_score - red_gl2["score"], "rd": new_red_gl2_rd - red_gl2["rd"], "vol": new_red_gl2_vol - red_gl2["vol"]}
-			blu_gl2_delta = {"score": new_blu_gl2_score - blu_gl2["score"], "rd": new_blu_gl2_rd - blu_gl2["rd"], "vol": new_blu_gl2_vol - blu_gl2["vol"]}
+			red_gl2_delta = {"score": new_red_gl2_score - red_gl2["score"], "rd": new_red_gl2_rd - red_gl2["rd"], "vol": new_red_gl2_vol - red_gl2["vol"]} # calculate gl2 deltas for red
+			blu_gl2_delta = {"score": new_blu_gl2_score - blu_gl2["score"], "rd": new_blu_gl2_rd - blu_gl2["rd"], "vol": new_blu_gl2_vol - blu_gl2["vol"]} # calculate gl2 deltas for blue
 
-			for team in red:
-				red[team]["elo"]["score"] = red[team]["elo"]["score"] + red_elo_delta
+			for team in red: # for each team on red, add the previous score with the delta to find the new score
 				red[team]["gl2"]["score"] = red[team]["gl2"]["score"] + red_gl2_delta["score"]
 				red[team]["gl2"]["rd"] = red[team]["gl2"]["rd"] + red_gl2_delta["rd"]
 				red[team]["gl2"]["vol"] = red[team]["gl2"]["vol"] + red_gl2_delta["vol"]
 
-			for team in blu:
-				blu[team]["elo"]["score"] = blu[team]["elo"]["score"] + blu_elo_delta
+			for team in blu: # for each team on blue, add the previous score with the delta to find the new score
 				blu[team]["gl2"]["score"] = blu[team]["gl2"]["score"] + blu_gl2_delta["score"]
 				blu[team]["gl2"]["rd"] = blu[team]["gl2"]["rd"] + blu_gl2_delta["rd"]
 				blu[team]["gl2"]["vol"] = blu[team]["gl2"]["vol"] + blu_gl2_delta["vol"]
 
 			temp_vector = {}
-			temp_vector.update(red)
+			temp_vector.update(red) # update the team's score with the temporay vector
 			temp_vector.update(blu)
 
-			d.push_metric(self.apikey, self.competition, temp_vector)
+			self.results[match['match']] = temp_vector
+			d.push_metric(self.apikey, self.competition, temp_vector) # push new scores to db
+
+		print("New last match", self.match)
+		d.set_analysis_flags(self.apikey, 'metrics_last_match', {'metrics_last_match': self.match})
 
 	def _push_results(self):
 		pass
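
For reference, the rating step in _process_data above averages the member teams' gl2 scores into one alliance rating, runs Glicko-2 once per alliance, and applies the same delta back to every member. A standalone toy sketch of that delta-distribution scheme with made-up numbers (new_score stands in for the value an.Metric().glicko2(...) would return; it is not the real call):

	red = {
		254: {"gl2": {"score": 1500.0, "rd": 250.0, "vol": 0.06}},
		118: {"gl2": {"score": 1600.0, "rd": 200.0, "vol": 0.06}},
		148: {"gl2": {"score": 1400.0, "rd": 300.0, "vol": 0.06}},
	}

	# alliance rating is the mean of member ratings
	red_score = sum(t["gl2"]["score"] for t in red.values()) / len(red)  # 1500.0
	new_score = 1525.0  # placeholder alliance-level Glicko-2 result
	delta = new_score - red_score  # +25.0

	for team in red:
		red[team]["gl2"]["score"] += delta  # every member moves by the same delta

	print({team: red[team]["gl2"]["score"] for team in red})
	# {254: 1525.0, 118: 1625.0, 148: 1425.0}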

View File

@@ -2,7 +2,7 @@ import requests
 from exceptions import APIError
 from dep import load_config
 
-url = "https://titanscouting.epochml.org"
+url = "https://scouting.titanrobotics2022.com"
 config_tra = {}
 load_config("config.json", config_tra)
 trakey = config_tra['persistent']['key']['tra']

View File

@@ -0,0 +1,15 @@
+cerberus
+dnspython
+numpy
+pandas
+pyinstaller
+pylint
+pymongo
+pyparsing
+python-daemon
+pyzmq
+requests
+scikit-learn
+scipy
+six
+tra-analysis

View File

@@ -154,7 +154,7 @@ import pymongo # soon to be deprecated
 import traceback
 import warnings
 from config import Configuration, ConfigurationError
-from data import get_previous_time, set_current_time, check_new_database_matches
+from data import get_previous_time, set_current_time, check_new_database_matches, clear_metrics
 from interface import Logger
 from module import Match, Metric, Pit
 import zmq
@@ -205,7 +205,6 @@ def main(logger, verbose, profile, debug, socket_send = None):
 	config.resolve_config_conflicts(logger, client)
 	config_modules, competition = config.modules, config.competition
 	for m in config_modules:
 		if m in modules:
 			start = time.time()
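
The new import pairs with the "clear metrics only on script start" commit: clear_metrics (defined in data.py above) wipes stored ratings once before the module loop runs. The exact call site is not rendered in this hunk, so the following is only a plausible sketch of the start-up hook, not the repository's verbatim code:

	from data import clear_metrics  # as imported above

	def startup(client, competition):
		# drop cached gl2 ratings so the Metric module recomputes from the
		# first match; placement is a guess based on the commit message
		clear_metrics(client, competition)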

12
submit-debug.sh Normal file
View File

@@ -0,0 +1,12 @@
+#!/bin/bash
+#
+#SBATCH --job-name=tra-superscript
+#SBATCH --output=slurm-tra-superscript.out
+#SBATCH --ntasks=8
+#SBATCH --time=24:00:00
+#SBATCH --mem-per-cpu=256
+#SBATCH --mail-user=dsingh@imsa.edu
+#SBATCH -p cpu-long
+
+cd competition
+python superscript.py debug

12
submit-prod.sh Normal file
View File

@@ -0,0 +1,12 @@
+#!/bin/bash
+#
+#SBATCH --job-name=tra-superscript
+#SBATCH --output=PROD_slurm-tra-superscript.out
+#SBATCH --ntasks=8
+#SBATCH --time=24:00:00
+#SBATCH --mem-per-cpu=256
+#SBATCH --mail-user=dsingh@imsa.edu
+#SBATCH -p cpu-long
+
+cd competition
+python superscript.py verbose
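
The two scripts differ only in the output file and the run mode (debug vs. verbose). They are submitted with plain `sbatch submit-debug.sh` / `sbatch submit-prod.sh`; as a convenience, the same thing can be driven from Python, assuming sbatch is on PATH and the scripts sit in the working directory:

	import subprocess

	def submit(script):
		# sbatch prints "Submitted batch job <jobid>" on success
		result = subprocess.run(["sbatch", script], capture_output=True, text=True, check=True)
		return result.stdout.strip()

	print(submit("submit-debug.sh"))  # output lands in slurm-tra-superscript.out
	print(submit("submit-prod.sh"))   # output lands in PROD_slurm-tra-superscript.out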

View File

@@ -1,2 +0,0 @@
-def test_():
-	assert 1 == 1

View File

@@ -1,14 +0,0 @@
-import signal
-import zmq
-
-signal.signal(signal.SIGINT, signal.SIG_DFL)
-
-context = zmq.Context()
-socket = context.socket(zmq.SUB)
-socket.connect('tcp://localhost:5678')
-socket.setsockopt(zmq.SUBSCRIBE, b'status')
-
-while True:
-	message = socket.recv_multipart()
-	print(f'Received: {message}')