HLL Stats Tools – Modular Log Analysis Toolkit for Hell Let Loose
HLL Stats Tools is a modular Python toolkit for collecting, processing, and analyzing server logs from Hell Let Loose. It supports both legacy JSON workflows and a modern SQL-backed pipeline using SQLAlchemy and SQLite.
The project was initially developed to support performance tracking and match analysis within the Esprit De Corps (ESPT) competitive clan and has since grown into a general-purpose tool for any community that wants to run more detailed statistics than RCON alone provides.
Overview
HLL Stats Tools provides structured ingestion of server logs and stores them in normalized formats suitable for long-term analysis. It offers visualizations of player metrics such as kills per minute (KPM) and deaths per minute (DPM), and supports automated log updates, custom group tracking, and optional Discord integration.
The project can run interactively or as a scheduled pipeline, with configuration controlled through external YAML and environment files.
Key Features
- Dual-mode processing: legacy JSON and SQL database pipelines
- Automated log ingestion via the HLL API
- Structured database schema for games, players, and events (see the sketch after this list)
- Performance metric calculations and time-series visualizations
- Support for group-based player tracking (e.g., clans or teams)
- Configurable plots with rolling averages and overlays
- CI pipeline integration with GitHub Actions
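The sketch below illustrates how a games/players/events schema could be modeled with SQLAlchemy. The table, column, and relationship names are illustrative assumptions, not the toolkit's actual models.

# Illustrative-only schema sketch: names are assumptions, not the real models.
from sqlalchemy import Column, DateTime, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Game(Base):
    __tablename__ = "games"
    id = Column(Integer, primary_key=True)
    map_name = Column(String)
    start_time = Column(DateTime)
    events = relationship("Event", back_populates="game")

class Player(Base):
    __tablename__ = "players"
    id = Column(Integer, primary_key=True)
    steam_id = Column(String, unique=True)
    name = Column(String)

class Event(Base):
    __tablename__ = "events"
    id = Column(Integer, primary_key=True)
    game_id = Column(Integer, ForeignKey("games.id"))
    player_id = Column(Integer, ForeignKey("players.id"))
    event_type = Column(String)  # e.g. KILL, DEATH, CONNECT
    timestamp = Column(DateTime)
    game = relationship("Game", back_populates="events")

# Create the tables in a local SQLite file (the data/ directory must exist).
engine = create_engine("sqlite:///data/hll_stats.db")
Base.metadata.create_all(engine)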
Project Structure
hll_stats_tools/
├── data_acquisition/ # API data collection logic
├── legacy_json/ # JSON-based processing (legacy)
├── sql_pipeline/ # SQL-based ingestion and analysis
├── plotting/ # Visualization tools
├── utils/ # Shared utilities (e.g., logging, time handling)
.github/workflows/ # Continuous Integration
run_pipeline.py # Unified pipeline entrypoint
config.yaml # Main configuration file
.env # API keys, runtime settings
Configuration
The toolkit is configured via two main files: config.yaml and .env.
Example: config.yaml
data_acquisition:
  output_folder: "data/logs"
  update_to_last_minute: true
sql_pipeline:
  db_path: "data/hll_stats.db"
  force_reset: false
json_pipeline:
  json_folder: "data/json_logs"
  output_folder: "data/json_analyses"
Example: .env
API_KEY="apikey000#"
log_file="path/to/server.log"
out_folder_historical_logs="path/to/historical"
out_folder_game_logs="path/to/json_game_logs"
out_folder_analysis="path/to/json_analysis_output"
out_folder_plots="path/to/player_plots"
out_folder_player_plots="path/to/player_plot_images"
group_name="ESPT"
group_members_json="path/to/group_players.json"
sql_database="sqlite:///data/hll_stats.db"
FORCE_RESET="False"
group_png_folder="path/to/output/group_plots"
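HLL Stats Tools reads these files on its own, but if you want to inspect the same settings from your own scripts, a minimal sketch could look like the following. It assumes PyYAML and python-dotenv are available, which is not confirmed by this README.

import os

import yaml                      # PyYAML (assumed available)
from dotenv import load_dotenv   # python-dotenv (assumed available)

# Read the YAML pipeline settings.
with open("config.yaml", "r", encoding="utf-8") as fh:
    config = yaml.safe_load(fh)

# Load API keys and paths from .env into the process environment.
load_dotenv()

db_path = config["sql_pipeline"]["db_path"]
api_key = os.getenv("API_KEY")
print(f"Database path: {db_path}, API key set: {api_key is not None}")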
Usage
The unified pipeline can be launched using:
python run_pipeline.py
Behavior is driven by config.yaml and .env. The script will:
- Download and update logs
- Ingest and store data in SQLite or JSON format
- Apply metric corrections if required
- Generate performance plots if configured
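Because the pipeline can also run on a schedule, one lightweight option besides cron or a systemd timer is a small wrapper that re-invokes the entrypoint periodically. The snippet below is only an illustration, not part of the toolkit.

# Illustrative scheduler: re-run the pipeline entrypoint once per hour.
import subprocess
import time

INTERVAL_SECONDS = 60 * 60  # hourly, as an example

while True:
    subprocess.run(["python", "run_pipeline.py"], check=False)
    time.sleep(INTERVAL_SECONDS)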
Example Plotting Call
from hll_stats_tools.plotting.make_plot import plot_multiple_metrics

plot_multiple_metrics(
    metrics_by_date={
        "KPM": kpm_data,
        "DPM": dpm_data,
    },
    group_by="W",  # Options: 'D', 'W', 'M'
    rolling_average=3,
    display_rolling_average_overlay=True,
    title="Weekly Performance",
    namefile=None,
)
Development & Testing¶
- Centralized logging via logger_utils.setup_logger(__name__) (see the snippet after this list)
- Logs written to both the console and logs/hll_stats.log
- Code style enforcement via flake8
- CI testing pipeline enabled through .github/workflows/ci.yaml
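For example, a new module would typically obtain the shared logger as shown below. The import path is an assumption based on the utils/ package shown in the project structure; adjust it if logger_utils lives elsewhere.

# Assumed import path; setup_logger is expected to return a standard
# logging.Logger configured for console and file output.
from hll_stats_tools.utils.logger_utils import setup_logger

logger = setup_logger(__name__)
logger.info("Starting log ingestion for %s", "server-1")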
License
MIT License. Attribution is appreciated but not required.
Credits
Developed by Andrea Siotto. Special thanks to the Esprit De Corps (ESPT) community.