Overview
An interactive dashboard for game metrics built with InfluxDB, Telegraf, and Grafana.
Built to help a small game development studio analyze metrics from their game and draw data-backed conclusions for game updates.
Features
- Real-time monitoring of in-game servers
- Dynamic sorting and filtering by server ID
- Interactive graphs of wins, losses, tools, and maps
A live demo can be seen here.
Background
A couple of friends were developing a Humans vs. Zombies game for fun on the Roblox engine, and they had a hard time determining which weapons were too strong or too weak. Many updates were tunnel-visioned around feedback from playtesters, with too little input from the wider player population.
I offered to build a solution that logs data from live servers to accurately capture the trends actually happening in-game, and worked with the team to implement the data logging.
Technology
- An API was set up in the game itself using standard HTTP webhooks to send data as JSON at specified intervals.
- Telegraf ingests the incoming data. An in-house script interprets it and transforms it into usable metrics.
- InfluxDB, a time-series database, stores all received data. Since data is primarily sent at the end of each round, having time as the primary key made the most sense for analyzing trends in game perks and weapons, as well as shifts in those trends after an update is released.
- Grafana is linked to InfluxDB and queries the database to display visualizations. The dashboard uses custom widgets to surface the relevant statistics.
A sample of the sent data could look like the following; it is then further processed and written into the database.
{
    "stats": {
        "Weapons": {
            "Autorifle": 7,
            "Burstrifle": 8
        },
        "Mutations": {
            "Speed": 6
        },
        "JobId": "1cb0cf48-d4d5-4008-9a8a-b88c768b8958",
        "Version": 1,
        "Perks": {
            "Tough": 3
        }
    }
}
… which is then processed by a Telegraf Starlark processor script (a Python-like language)…
load("json.star", "json")
load("logging.star", "log")

def apply(metric):
    x = json.decode(metric.fields.get("value"))
    log.info(str(x))
    metrics = []
    if "stats" in x and len(x["stats"]) > 0:
        j = x["stats"]
        if "Weapons" in j and len(j["Weapons"]) > 0:
            for name, count in j["Weapons"].items():  # one metric per weapon
                new_metric = Metric("HumanLoadoutCounts")
                new_metric.fields[str(name)] = int(count)
                new_metric.tags["TypeOfPurchase"] = "Weapon"
                new_metric.tags["Version"] = str(j["Version"])
                new_metric.tags["ServerId"] = str(j["JobId"])
                metrics.append(new_metric)
        if "Perks" in j and len(j["Perks"]) > 0:
            for name, count in j["Perks"].items():  # one metric per perk
                new_metric = Metric("HumanLoadoutCounts")
                new_metric.fields[str(name)] = int(count)
                new_metric.tags["TypeOfPurchase"] = "Perk"
                new_metric.tags["Version"] = str(j["Version"])
                new_metric.tags["ServerId"] = str(j["JobId"])
                metrics.append(new_metric)
        if "Mutations" in j and len(j["Mutations"]) > 0:
            for name, count in j["Mutations"].items():  # one metric per mutation
                new_metric = Metric("ZombiePicks")
                new_metric.fields[str(name)] = int(count)
                new_metric.tags["TypeOfPurchase"] = "Mutations"
                new_metric.tags["Version"] = str(j["Version"])
                new_metric.tags["ServerId"] = str(j["JobId"])
                metrics.append(new_metric)
        if "WR" in j and len(j["WR"]) > 0:
            for map_name, map_stats in j["WR"]["MapStats"].items():  # one metric per map
                new_metric = Metric("MapStats")
                for field_name, value in map_stats.items():
                    new_metric.fields[str(field_name)] = int(value)
                new_metric.tags["MapName"] = str(map_name)
                new_metric.tags["Version"] = str(j["Version"])
                new_metric.tags["ServerId"] = str(j["JobId"])
                metrics.append(new_metric)
    return metrics
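To make the transformation concrete without a Telegraf install, here is a minimal plain-Python sketch of the same flattening step. The `flatten` helper and the dict-based metric shape are illustrations, not part of the actual pipeline:

```python
import json

# Sample end-of-round payload, matching the example above.
SAMPLE = """
{
  "stats": {
    "Weapons": {"Autorifle": 7, "Burstrifle": 8},
    "Mutations": {"Speed": 6},
    "JobId": "1cb0cf48-d4d5-4008-9a8a-b88c768b8958",
    "Version": 1,
    "Perks": {"Tough": 3}
  }
}
"""

# (category key, measurement name, TypeOfPurchase tag) -- mirrors the Starlark script.
CATEGORIES = [
    ("Weapons", "HumanLoadoutCounts", "Weapon"),
    ("Perks", "HumanLoadoutCounts", "Perk"),
    ("Mutations", "ZombiePicks", "Mutations"),
]

def flatten(payload):
    """Turn one nested stats payload into a flat list of metric dicts."""
    stats = payload.get("stats", {})
    metrics = []
    for key, measurement, purchase in CATEGORIES:
        for name, count in stats.get(key, {}).items():
            metrics.append({
                "measurement": measurement,
                "fields": {name: int(count)},
                "tags": {
                    "TypeOfPurchase": purchase,
                    "Version": str(stats.get("Version")),
                    "ServerId": str(stats.get("JobId")),
                },
            })
    return metrics

metrics = flatten(json.loads(SAMPLE))
print(len(metrics))  # one metric per weapon/perk/mutation entry
```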
… and finally written into InfluxDB, where it can be queried.
The full database schema is too large to display here, but an example query for a prominent statistic (map pick counts) is shown below.
from(bucket: "mutationstats")
    |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
    |> filter(
        fn: (r) => if "${ServerID}" != "" then
            r.ServerId == "${ServerID}"
        else
            r.ServerId != ""
    )
    |> filter(fn: (r) => r["_measurement"] == "MapStats")
    |> filter(fn: (r) => r["_field"] == "Total")
    |> group(columns: ["MapName"])
    |> reduce(fn: (r, accumulator) => ({sum: r._value + accumulator.sum}), identity: {sum: 0})
    |> group()
    |> rename(columns: {sum: "_value"})
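As a sanity check on what that query computes, the same aggregation can be sketched in plain Python over a few hypothetical rows. The rows and the `map_pick_totals` helper are made-up illustrations, not real query output:

```python
from collections import defaultdict

# Hypothetical rows, shaped like the records the Flux query iterates over.
rows = [
    {"ServerId": "srv-a", "_measurement": "MapStats", "_field": "Total", "MapName": "Harbor", "_value": 12},
    {"ServerId": "srv-b", "_measurement": "MapStats", "_field": "Total", "MapName": "Harbor", "_value": 5},
    {"ServerId": "srv-a", "_measurement": "MapStats", "_field": "Total", "MapName": "Depot", "_value": 9},
]

def map_pick_totals(rows, server_id=""):
    """Mirror the Flux pipeline: optional ServerId filter, keep only
    MapStats/Total records, then sum _value grouped by MapName."""
    totals = defaultdict(int)
    for r in rows:
        if server_id and r["ServerId"] != server_id:
            continue  # the '"${ServerID}" != ""' branch of the Flux filter
        if r["_measurement"] == "MapStats" and r["_field"] == "Total":
            totals[r["MapName"]] += r["_value"]
    return dict(totals)

print(map_pick_totals(rows))           # totals across all servers
print(map_pick_totals(rows, "srv-a"))  # totals for one server only
```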