
11/08/2019


Analysing champion matchups
Who does better in lane, and what metric do we use to find out?

How can we be robust in our matchup analysis?
An important part of climbing in League of Legends (or drafting for games in competitive leagues) is knowing how well champions counter one another. In general, you'd like each lane pick to be advantageous to you in some way, usually determined by the meta (or people's perception of the meta). I stumbled on a webpage from https://loldatascience.com which displays this information as a heatmap for patch 9.9, and it inspired me to take on this challenge. I can't claim the idea is original, but since that page hasn't been updated for a couple of months, and its source code is hidden behind the webpage, I thought I'd give a loose walkthrough of how a problem like this can be solved and the limitations of what it finds.

The goal
The goal, simply, is to construct these heatmaps for a selection of champions within each lane. This means, for each game we find data for, we need to record the lane matchups and some criteria to determine success within the lane (in this case, we'll stick to average gold lead at 10 minutes, and victory status). After grabbing the data, we should have a table with the following columns: Blue Side, Red Side, Gold Lead (for blue), Win, Role. This should give us everything we need to construct the heatmaps, but how do we get this data?

Our approach
Ideally, we'd like this table to have around 100,000 entries for each role, which means retrieving around 100,000 games. The Riot API allows us to grab references to a player's last 100 games (though not the data for those games themselves). I'm going to use another endpoint (LEAGUE-V4) to find the summoner names of the highest ranked players in EUW, retrieve references to their last 100 games, and continue down the ladder until we reach 100,000 games or drop below Diamond I elo (for this patch), whichever comes first. There are some things we need to consider. Two different players may have the same games within their match histories, so when storing the games we'll make the unique gameId a primary key so duplicates can't be stored. Also, the Riot API doesn't do a great job of telling us which role was played by which champion within a game, so I'll employ 3rd party modules to help me determine what the matchups for each game were. I'll also disregard any game that occurred before July 31st, so we're only grabbing games from patch 9.15. Before deciding to retrieve 100,000 pieces of game data (around 2.58 GB in size), we should first construct test scripts which process a few game instances to ensure all known error cases are handled, and that this endeavour is possible at all.
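As a loose illustration of the duplicate handling (a minimal sketch only; the table and column names here are my own, not from the real pipeline), making gameId a primary key means a repeat insert is simply rejected:

import json
import sqlite3

# A minimal sketch: with gameId as PRIMARY KEY, inserting a game we've
# already seen via another player's match history raises an IntegrityError
conn = sqlite3.connect("games.db")
conn.execute("CREATE TABLE IF NOT EXISTS games (gameId INTEGER PRIMARY KEY, raw TEXT)")

def store_game(game_id, game):
  try:
    conn.execute("INSERT INTO games VALUES (?, ?)", (game_id, json.dumps(game)))
    conn.commit()
    return True
  except sqlite3.IntegrityError:
    # Duplicate gameId - skip it
    return False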

Preliminaries
Assuming we have a game instance stored as data.json in the local directory, how can we extract the aforementioned columns from this data? Once we've established how to do this for one game, we can iterate through the entire dataset to arrive at the final table. From the data we need only a few things: the gameId, the champions that were played, the role of each champion, the respective gold of each champion at 10 minutes, and which team won. The Riot API splits "match data" and "match timeline data". We need the timeline data to find information about the gold, and also to determine roles using this 3rd party role identifier https://github.com/Canisback/roleML.

import json

# Load in the game data
with open("data.json", "r") as f:
  game_data = json.load(f)

# Match data is within game_data['match_data']
# Timeline data is within game_data['timeline_data']


The champions that were played
Within the match data, a key participants gives us information about all players within a game, including a mapping between participantId and championId. We'll hold this information for later use. It should be noted that participant identities are per game only, and are used consistently to reference a player throughout the match data.

# Dictionaries for our information to be stored in
part_champs = {}
part_gold = {}

# Find champions each participant (or "part") played
for part in game_data['match_data']['participants']:
  part_champs.update({part['participantId'] : part['championId']})


The gold of each champion at 10 minutes
There are "frames" within the timeline match data, which are equivalent to minutes within the game. In our case, we simply want to index the 10th minute of the game, and add the gold information to the above participant and champion information.

# Find the gold for each participant at 10 minutes
for part_id in range(1, 11):
  part_id = str(part_id)
  # Grab this participant's frame at the 10 minute mark, which we can index for their gold information
  info = game_data['timeline_data']['frames'][9]['participantFrames'][part_id]
  part_gold.update({info['participantId'] : info['totalGold']})


The role of champions
As previously mentioned, throwing the match and timeline data into roleML gives us a mapping between participantId and their role. These, again, can simply be appended to the above.

import roleml

# RoleML gives us a mapping for each participant, and the role they played
part_roles = roleml.predict(game_data['match_data'], game_data['timeline_data'])


Which team won

# Check if blue won (the first team entry is blue side; 'win' is 'Win' or 'Fail')
win = game_data['match_data']['teams'][0]['win'] == 'Win'


Combining the above
Since we now have all the information we need, a series of transforms is needed to construct rows of the table. We need to pair matchups (given to us by the role), calculate the gold difference (a simple subtraction of red gold from blue gold), and put all this information into a row. So, we have 3 mappings altogether: from a participant to the champion they played, to the gold they had at 10 minutes, and to their role. An example of each is shown below.

part_champ = {
  '1' : 54,
  '2' : 23,
  '3' : 98,
     ...
}

part_gold = {
  '1' : 4534,
  '2' : 5423,
  '3' : 8293,
     ...
}

part_role = {
  '1' : 'supp',
  '2' : 'jung',
  '3' : 'mid',
     ...
}
So, to find matchups we simply need to find the two participants whose roles are 'top', two participants whose roles are 'mid', and so on. Then, we assign the champs they played, and the difference between their gold values to end up with a formatted input row like so.

# item is a "matchup item" and holds information about which two participants faced off against each other
# get_champ_by_id is a helper which maps a championId to the champion's name
input_row = {'role' : item[0],
             'blue' : get_champ_by_id(part_champs[item[1]]),
             'red' : get_champ_by_id(part_champs[item[2]]),
             'gold_diff' : part_gold[item[1]]-part_gold[item[2]],
             'blue_win' : win}
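For completeness, here's a rough sketch (not necessarily exactly how I did it) of how those matchup items can be built from the role mapping, relying on Riot's convention that participantIds 1-5 are blue side and 6-10 are red side:

# Pair the two participants who share each role into (role, blue, red) items
matchup_items = []
for role in set(part_roles.values()):
  parts = [p for p, r in part_roles.items() if r == role]
  # Lower participantId is blue side (1-5), higher is red side (6-10)
  blue, red = sorted(parts, key=int)
  matchup_items.append((role, blue, red))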
This will form the basis for the production of any graphics made using this data, since we now have clear information on champion matchups, the role they were placed in, and which champion had the gold advantage (and by how much) at 10 minutes.

Getting the data
As previously mentioned, to get all the game data we need, we're going to iterate over the highest elo players within EUW, adding the games in their match histories to our dataset subject to a few conditions: we don't already have the game in the dataset, the game lasted longer than 15 minutes, and the game occurred after patch 9.15 landed. On each iteration we'll check how many games we already have, and what elo of player we've reached, and stop if necessary. Once we have all the game data objects, we simply need to pass them through the above parsing method (with some added error checking) and wait for the results. All in all, we retrieve about 75,000 games from the API, which takes about 6 hours to process.
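As a sketch, the per-game filter uses the gameDuration (seconds) and gameCreation (epoch milliseconds) fields from the match data; the exact patch cutoff timestamp here is approximate:

from datetime import datetime, timezone

# Approximate start of patch 9.15 (July 31st 2019), in epoch milliseconds
PATCH_915_MS = int(datetime(2019, 7, 31, tzinfo=timezone.utc).timestamp() * 1000)

def keep_game(match):
  # Keep games longer than 15 minutes which started after patch 9.15 landed
  return (match['gameDuration'] > 15 * 60
          and match['gameCreation'] > PATCH_915_MS)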

Initial Analysis
Now we have the data, we can put some thought into how it should be handled. Ideally, when representing matchups, we want to cut out any matchup which has fewer than around 50 data points, since we can deem those non-representative. We'll aggregate matchup stats by simply taking the mean gold difference between the two champions. As an example, here is what happens when we pull the data into R (which I find much easier to graph in), perform our aggregates and produce a heatmap of the results. The counts within the tiles indicate the size of the dataset the aggregate was formed from (and hence should be treated as a "trust factor" in how accurate that measurement is).
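While the graphing happened in R, the aggregation step itself is easy to sketch in pandas, assuming input_rows is the list of input_row dicts built earlier (the name is mine):

import pandas as pd

# Collect the input rows from every parsed game into a dataframe
df = pd.DataFrame(input_rows)

# Mean gold lead and sample size for every (role, blue, red) matchup
agg = (df.groupby(['role', 'blue', 'red'])['gold_diff']
         .agg(mean_gold_diff='mean', n='count')
         .reset_index())

# Drop matchups with fewer than ~50 games, which we deem non-representative
agg = agg[agg['n'] >= 50]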
The graphic should be interpreted like so: choose a champion on the blue side, and follow across through the various matchups. The more blue the tile, the more dominant the blue side champion is in that matchup, and vice versa. We can see a couple of things: matchup information is "mirrored" across the bottom-left to top-right diagonal (as expected), and champions such as Riven and Renekton have a much higher prevalence in SoloQ than any other champion in top lane. For future graphs, I'll include all data for matchups, even if the datapoint count is below the required threshold. Although gold lead at 10 minutes is a relatively good indicator of which champion won lane, there are a couple of other factors, such as CS at 10 minutes and EXP at 10 minutes, which also give some indication of how the matchup is going. An appropriate combination of all three would be much more indicative of the matchup than any one alone, although it can cause some problems (which I'll describe later).

Percent Ahead Rating (PAR)
Reliable methods of performance ranking using in-game metrics are hard to substantiate, and hard to find data for. Here, I'll outline my own combination of these measures to best represent which champion a matchup favours. In principle, it is described as:
PAR = Gp × Wg + Xp × Wx + Cp × Wc
where Gp represents the percentage more gold that the blue side champion has over the red side champion (Xp for EXP, and Cp for CS), and Wg, Wx and Wc are the associated weightings, allowing us to control the importance that a gold, EXP or CS lead has on a matchup. The three weightings must lie in the range 0-1 and sum to 1, so the PAR is a weighted average of the three percentage leads.
Here's an example of a matchup, some sample weightings, and the resultant PAR for the matchup.
Blue  Red    Blue Gold  Red Gold  Blue XP  Red XP  Blue CS  Red CS
Jax   Vayne  2815       3434      3823     3637    57       67

Gp = (2815/3434) x 100 - 100 = -18.0%
Xp = (3823/3637) x 100 - 100 = 5.1%
Cp = (57/67) x 100 - 100 = -14.9%

Wg = 0.6
Wx = 0.1
Wc = 0.3

PAR ≈ -14.8%
The above implies that in a laning matchup between Jax and Vayne, Vayne "performs around 15% better" than Jax before 10 minutes. Obviously, the above may not be completely representative of the matchup, but luckily we have around 300,000 individual matchups to average over.
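A minimal sketch of the PAR calculation, reproducing the example above:

def pct_ahead(blue, red):
  # Percentage more of a stat the blue champion has over the red champion
  return (blue / red) * 100 - 100

def par(blue, red, w_gold=0.6, w_xp=0.1, w_cs=0.3):
  # Weighted sum of the gold, EXP and CS percentage leads (weights sum to 1)
  return (pct_ahead(blue['gold'], red['gold']) * w_gold
          + pct_ahead(blue['xp'], red['xp']) * w_xp
          + pct_ahead(blue['cs'], red['cs']) * w_cs)

# Jax (blue) vs Vayne (red) at 10 minutes
print(par({'gold': 2815, 'xp': 3823, 'cs': 57},
          {'gold': 3434, 'xp': 3637, 'cs': 67}))  # ~ -14.8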

PAR Heatmaps
Below are selected top lane and mid lane matchups, and their resultant PAR heatmaps with the above weightings.


Drawbacks
The drawbacks in this model mostly stem from the combination of the variables (EXP, CS and gold) within our rating. Through modelling our data, we can find interesting trends in the contributions of these variables, especially when we filter by role. For example, we don't expect CS to have an overbearing significance on gold lead or lane dominance in support matchups, so the weightings used within the PAR calculation should be changed to reflect that. Generally, both EXP and CS are highly statistically significant in their prediction of a player's gold at 10 minutes. In midlane, however, I found that (over ~30,000 datapoints) a higher CS implies a smaller gold lead for equal EXP, perhaps a result of early kills leading to loss of farm from recalls.

I hope you enjoyed this (quite long and technical) article!
Let me know if you have any feedback, suggestions or hate @samhine_.

23/07/2019


Who has the biggest brain in the UKLC?
Apart from me, of course...


At the start of the summer split, a few people took to Twitter to express their rankings of teams before the games had started. Here, we'll take a look at these rankings, their differences from the current score table, and the metrics we can use to measure their "correctness".

The Problem
We have tables of predictions (specifically of the 9 UKLC teams) whose order represents a predicted performance ranking. We'd like to generate a robust metric which can measure the "wrongness" of a table of predictions compared to the true table. In other areas of statistics, performance metrics are usually based on their "distance" from the true values (where distance can be defined in a number of ways), and are useful for the training/analysis of models, such as neural networks and other areas of machine learning. In our case, we're not so much worried about using a metric to train a model, just finding out who's been the "most correct" so far in their initial rankings.

The Data
After searching Twitter, I managed to wrangle 5 tier lists which users had put out, those users being Aux (@AuxCasts), Crane (@CraneLoL), Viggo (@ViggoEdits), Snuggli (@Snuggli_) and Royal (@royalwinsmid). These are the tier lists I'll be using for my analysis. All 5 predictions can be found below, in the order listed above:

Position  Aux  Crane  Viggo  Snuggli  Royal
1         FNC  FNC    FNC    FNC      ENC
2         MNM  XL     XL     XL       BRG
3         XL   MNM    MNM    MNM      XL
4         BRG  BRG    ENC    BRG      PHE
5         PHE  PHE    DMS    ENC      DMS
6         DBL  ENC    PHE    PHE      NVE
7         DMS  DBL    BRG    DBL      FNC
8         ENC  DMS    DBL    DMS      DBL
9         NVE  NVE    NVE    NVE      MNM


The Method
There are a few ideas I have for how this could be approached. The first (and most obvious) approach would be to calculate the sum of differences for each of the prediction tables compared to the current true table. So, if someone was completely correct in their predictions but 1st and 2nd were switched, their score would be 2, since their 1st place prediction was 1 slot away, as well as their 2nd place prediction. One could simply divide this by 2 (which I'd imagine seems more sensible in most people's heads), so I'll apply this transformation at the end.
The second method that comes to mind is calculating the number of "switches" required for a prediction table to match the current true table. So, for example, if the rankings `1, 2, 3, 4` were correct, and someone had predicted `2, 4, 3, 1`, their score would be 2, since two switches (1 and 2, then 2 and 4) would render the table correct.
Each of these metrics has its downfalls. The second metric is great at telling us the complexity of "wrongness" of the predictions, but treats predictions such as `1, 3, 2, 4` and `4, 2, 3, 1` as equal, when in most cases the second should be punished more. In this example, I'll mainly focus on the first metric (which we'll now refer to as the "sum of differences") rather than the second, though I'll still include the second where possible; both are sketched below.
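Here's a hedged sketch of both metrics in Python (the switch count uses the standard fact that the minimum number of switches needed to sort a permutation is its length minus its number of cycles):

def sum_of_differences(predicted, actual):
  # Half the total positional error between a predicted and actual ranking
  actual_pos = {team: i for i, team in enumerate(actual)}
  return sum(abs(i - actual_pos[t]) for i, t in enumerate(predicted)) / 2

def min_switches(predicted, actual):
  # Minimum number of switches to turn predicted into actual:
  # n minus the number of cycles in the underlying permutation
  actual_pos = {team: i for i, team in enumerate(actual)}
  perm = [actual_pos[t] for t in predicted]
  seen, cycles = set(), 0
  for i in range(len(perm)):
    if i not in seen:
      cycles += 1
      j = i
      while j not in seen:
        seen.add(j)
        j = perm[j]
  return len(perm) - cycles

actual = ['XL', 'FNC', 'MNM', 'DBL', 'DMS', 'PHE', 'BRG', 'ENC', 'NVE']
aux    = ['FNC', 'MNM', 'XL', 'BRG', 'PHE', 'DBL', 'DMS', 'ENC', 'NVE']
print(sum_of_differences(aux, actual))                    # 6.0, as in the walkthrough below
print(min_switches(['2', '4', '3', '1'], ['1', '2', '3', '4']))  # 2, as in the example above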

The Results
For the results, I'll choose Aux's predictions and walk through the calculation of his score. Here are his predictions and the current true table side by side, with differences marked.
Team  Actual  Predicted  Difference
XL    1       3          2
FNC   2       1          1
MNM   3       2          1
DBL   4       6          2
DMS   5       7          2
PHE   6       5          1
BRG   7       4          3
ENC   8       8          0
NVE   9       9          0

Starting with the first prediction, we find the distance to the matching team in the true table, in this case 2.
For the second prediction, this distance is 1.
For the third, 1 and so on...

In the end Aux's differences total 12, which means his final score is 6.
We can apply the same metric to the other 4 prediction tables, which gives the resultant scores:
User Score
Viggo 5
Aux 6
Crane 7
Snuggli 7
Royal 17.5


Limitations and Extensions
An obvious limitation is that in actuality, the bottom 3 teams all sit at 0 points. This means that the difference calculation for these three teams is a little skewed in favour of those who guessed the bottom teams correctly. This should simply be noted; I'll probably come back to this at the end of the season when the point table is more distributed, and more concrete assumptions about brain sizes can be made.
There are a few extensions I can think of, mainly involving weightings. For example, wrong guesses towards the top of the table could have more of an impact on score than wrong guesses towards the bottom. Subjectively, switching 1st and 2nd is a lot more meaningful than 7th and 8th. We could also weight by how close two teams are together (in terms of points). If Team A trails Team B by 1 point, and Team C trails Team B by 20 points, mistaking their positions should carry different levels of meaning.

I hope you enjoyed this (somewhat short) article.
Let me know if you have any feedback/hate @samhine_, or want your predictions to be included.

19/07/2019


UKLC Data Analysis - Does warding win games?
Coauthors: Kasey, Jack


Over the past few weeks, I've had the fantastic opportunity to work with player and game data as part of my role with Diabolus. As fun as grinding out player statistics and tendencies is, I thought I'd take a step back and see what trends I could identify within the UKLC game data itself, and how well they match up with what is commonly believed. Although there are many statistics to choose from, as a support main, warding has more of an impact on my playstyle than most. Before getting into the nitty gritty of warding-based stats, there are a few things of interest to note for the UKLC games so far.

Blue/Red win rates
For the 24 games that have been played in the UKLC (up to Week 4), only 9 have featured a blue side victory. This means teams placed on red side have won 62.50% of their games so far. This is quite different from general SoloQ statistics, where blue side has a slightly advantageous ~52% win rate.

But where does this difference come from in SoloQ? I've recruited the help of a couple of big brains from Diabolus to help me explain some of the more in-depth game knowledge. Here's my long-time esports partner Jack (@DBL_Coach) explaining this difference in win rate, and why it occurs.

Generally speaking, blue is the historically stronger side. It's been notoriously strong in pro play because of first pick value, but also because of the map-state, which is what affects the solo queue winrate. What I'm alluding to here is specifically the way botlane is shaped. The tri brush gives very easy access to both dragon and the river, but also gives the jungler two gank paths to bot. This results in an easier enforcement of what most players understand as 'better bot wins'. There are several other factors, however the ease of ganks and movement around blue side bottom lane is what stands out when you really conceptualise the map.
Although interesting, since the SoloQ statistics are so balanced, I will choose to discard this as a consideration within the analysis.

Other considerations
When analysing whether higher ward placement rates are conducive to winning games, we must also consider the converse case: "do teams ward more when they are winning?". Unfortunately, timeline data is not available for UKLC games, so further investigation into the rate of ward placements after some winning indicator (such as a gold lead) is not possible. This possibility is just a factor to keep in mind throughout. It is also very strange to consider a univariate analysis (that is, only focusing on one variable) for a game as complicated as League of Legends, since there are obviously many co-dependent variables within a game instance which contribute to the victory or defeat of a team. Without trying to be too political, this is also a common problem in many attention-grabbing news headlines, where journalists tend to focus purely on one statistic and treat it as the governing factor without considering a multivariate analysis.
In this case, I'll simply acknowledge that the consideration of only wards in a game may be ignorant, but for exploration's sake it's still a worthwhile endeavor.


What is warding?
For the few readers who aren't up to scratch on their League of Legends knowledge, I've got Kasey (@_WEEXIAO) to tell us more about warding, and why it's important.
In its most fundamental form, warding is the control of opportunity in League of Legends. Placing a ward lessens the fog of war on the map for your team, and thus increases your potential knowledge over map circumstances as it relates to the enemy team and neutral objectives. For competitive teamplay, there are two main benefits that warding provides for your team:
1. Positional advantages
2. Informational advantages
The two forms of advantage that warding provides are not mutually exclusive - on the contrary, they tie into each other. An easy way to think of it would be micro information (positioning etc.) and macro information (knowledge that influences decision making), with the two tying into one another. As a whole, the reason warding is important is simple: warding provides details about the game state, which increases a team's sense of agency.
Approach
First of all, the warding data must be found for each game "instance". We'll record, for each player in a game, how many wards they placed and how many they destroyed, and label whether they were victorious. We can then simply plot wards placed against wards destroyed, and colour each data point by win status.
[Scatter plot: wards placed vs wards destroyed per player, coloured by win status]
From the scatter plot there are no obvious trends, mainly because we're mixing support warding statistics with those of other roles. Since we have no indication of which lane each point comes from (and it would be messy to show), this is obviously less than helpful for our investigation. A sensible idea might be to use summary statistics within each game instance to find total warding statistics for an entire team, rather than for individuals.
[Scatter plot: total wards placed vs wards destroyed per team per game, coloured by win status]
Although a lot easier to digest, this is also unhelpful. It's not easy to discern whether higher rates of warding (points towards the upper right) suggest a higher win rate. There is one more factor to consider while doing this kind of analysis: game length. The flaring of the scatter plots suggests fewer games have high warding totals, likely due to the small number of games that go on for a longer period. To combat this problem, we'll simply find the wards placed/destroyed per minute by each team for each game, as sketched below.
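As a sketch (assuming a dataframe teams of per-team totals with a game_minutes column; the column names here are mine), the normalisation and plot look something like:

import matplotlib.pyplot as plt

# teams is assumed to hold one row per team per game:
# wards_placed, wards_destroyed, game_minutes, win
teams['placed_pm'] = teams['wards_placed'] / teams['game_minutes']
teams['destroyed_pm'] = teams['wards_destroyed'] / teams['game_minutes']

# Colour each point by win status, as in the graphs above
ax = teams.plot.scatter('placed_pm', 'destroyed_pm',
                        c=teams['win'].map({True: 'blue', False: 'red'}))
ax.set_xlabel('Wards placed per minute')
ax.set_ylabel('Wards destroyed per minute')
plt.show()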
[Scatter plot: wards placed vs wards destroyed per minute per team, coloured by win status]
This is still a little bit messy, but we're no longer suffering from the flaring (or "right facing trumpet") as seen in the previous "time ignorant" graphs. Overall, we can see that the graph is slightly more red on the lower left, and slightly more blue on the upper right. It may help to have more game data, but from this simple analysis we can sort of see that higher warding rates do indicate a higher win rate.

Summary
There's a reason for my cautious language throughout this article: making claims in the statistical world usually requires modelling, and performance analysis thereof. Since we only have 48 data points (24 games), any modelling on a scatter plot as varied as the above is unlikely to render a useful result. But informed speculation is still fun.

Some interesting notes:
  • The highest warding rate of any game was achieved by Excel against Fnatic, the 24th match of the split (4.46 wards placed per minute against 3.94 from FNC).
  • The lowest warding rate of any game was "achieved" by Barrage against Phelan, the 18th match of the split (2.11 wards placed per minute against 3.04 from PHL).
  • Out of the 24 games, only 8 were won by the team with the lower warding rate.
  • The highest warding rate of any player was FNC support Prosfair against XL (game 24) with 2.01 wards per minute.
  • The lowest warding rate of any player was NVE jungler Infinity against MNM (game 20) with 0.15 wards per minute.

Below are the full tables for match, team and individual breakdowns;
  • Match Breakdown
    Match Team Wards Placed per Minute Wards Destroyed per Minute Win?
    1 NVE 2.59 0.83 FALSE
    DMS 2.75 1.00 TRUE
    2 PHL 3.76 1.49 FALSE
    DBL 3.74 1.72 TRUE
    3 ENC 2.94 1.35 FALSE
    MNM 3.24 1.53 TRUE
    4 XL 3.16 1.36 TRUE
    BRG 2.87 1.03 FALSE
    5 DMS 3.35 1.32 FALSE
    DBL 3.18 1.35 TRUE
    6 MNM 2.82 0.96 FALSE
    XL 2.82 1.36 TRUE
    7 DBL 2.63 1.03 FALSE
    XL 2.51 1.11 TRUE
    8 FNC 3.23 1.09 FALSE
    XL 3.40 1.31 TRUE
    9 PHL 3.16 0.75 FALSE
    DMS 2.97 0.78 TRUE
    10 NVE 2.37 0.61 FALSE
    FNC 2.85 1.14 TRUE
    11 DBL 3.85 1.63 TRUE
    ENC 2.97 1.99 FALSE
    12 BRG 3.32 1.37 FALSE
    MNM 3.34 1.72 TRUE
    13 DMS 3.39 0.82 FALSE
    FNC 3.62 1.10 TRUE
    14 DBL 3.00 0.89 FALSE
    MNM 2.59 1.44 TRUE
    15 FNC 3.32 1.20 TRUE
    MNM 2.36 1.10 FALSE
    16 XL 3.26 1.83 FALSE
    FNC 3.38 1.69 TRUE
    17 ENC 2.59 1.28 FALSE
    XL 2.86 1.64 TRUE
    18 PHL 3.04 0.82 TRUE
    BRG 2.11 1.11 FALSE
    19 DBL 2.88 1.41 TRUE
    DMS 2.79 0.88 FALSE
    20 MNM 2.49 1.20 TRUE
    NVE 2.83 0.58 FALSE
    21 XL 3.73 1.50 TRUE
    PHL 2.97 1.29 FALSE
    22 DBL 2.87 0.83 FALSE
    MNM 2.32 1.17 TRUE
    23 XL 3.44 1.44 TRUE
    MNM 2.42 1.02 FALSE
    24 FNC 3.94 2.30 TRUE
    XL 4.46 1.84 FALSE
  • Team Breakdown
    Team Average Wards Placed per minute
    FNC 3.39
    XL 3.29
    PHL 3.23
    DBL 3.17
    DMS 3.05
    ENC 2.84
    BRG 2.77
    MNM 2.70
    NVE 2.59
  • Player Breakdown
    Team Player Role Average Wards Placed per Minute
    FNC Prosfair Supp 1.42
    DMS Viggo Supp 1.36
    XL KaSing Supp 1.35
    DBL Heathen Supp 1.17
    PHL Visdom Supp 1.13
    NVE Propameal Supp 1.03
    ENC Raizins Supp 0.91
    BRG Fastlegged Supp 0.84
    BRG Munckizz Jung 0.81
    MNM Shogun Supp 0.77
    PHL Sof Jung 0.63
    FNC Nji Jung 0.62
    XL Taxer Jung 0.61
    DBL PFI Jung 0.60
    ENC Kehvo ADC 0.60
    MNM Noltey Jung 0.58
    XL Send0o Top 0.57
    DBL Furuy Mid 0.56
    FNC Shikari Top 0.55
    DMS Batu Jung 0.54
    MNM Only Angel Top 0.54
    PHL Chemera Mid 0.54
    DBL Kakan Top 0.51
    BRG Diva Mid 0.50
    ENC Renghis Top 0.50
    ENC Skude Jung 0.49
    NVE Infinity Jung 0.49
    PHL Xizz3l Top 0.49
    DMS DenVoksne ADC 0.48
    MNM Chibs Mid 0.47
    FNC xMatty ADC 0.44
    PHL Achuu ADC 0.44
    DMS Hidon Jung 0.43
    NVE 3z3 Mid 0.41
    XL Special Mid 0.41
    DMS Zeiko Top 0.40
    FNC Ronaldo Mid 0.38
    NVE Brelia Top 0.38
    FNC MagiFelix Mid 0.36
    XL Hjarnan ADC 0.36
    DMS Vixen Mid 0.35
    BRG Jakamaka ADC 0.34
    ENC Beeley Mid 0.34
    MNM Yusa ADC 0.34
    DBL Dragdar ADC 0.33
    NVE Spark ADC 0.28
    BRG Zhergoth Top 0.27

So, I hope you learnt something about warding in the UKLC and perhaps something about the methods of data analytics.
If you have any suggestions, comments or hate, feel free to tweet me @samhine_.

Until next time...

Sources: LVP UK

27/09/2018


Franchised Leagues in Esports, Yay or Nay?
Speakers: Michal "CARMAC" Blicharz, Tomi Kovanen
Moderator: Ian Smith

Although franchising is a world away from UK university esports, I found this talk particularly interesting. Hearing what industry professionals have to say was a lot more insightful than shouting down the mic with my friends on Discord.

Esports franchising (as generally described by the panellists) means that an organisation can permanently feature in an esports league. The beginning of the panel mainly discussed how franchising in the US is already becoming established (e.g. NALCS), and how the esports organisations there now place a larger focus on business, since their participation in the tournaments is certain.

To me, whether one decides they are “on-board” with the franchising of leagues comes down to this point – does the security of an organisation in a tournament further encourage new talent and is it more entertaining for the viewers?

A point which certainly resonates with me is that matches between two franchised orgs seem far less important. Both have secured their spot, and there's little to no risk of either team being removed from it anytime soon. The sheer number of matches in something like the NALCS is pretty overwhelming, to the point where it is only really reasonable to follow the teams you care about. This leads on nicely to how viewers align themselves with the professional scene. In my experience, it is more common for viewers to be fans of individual players than of organisations; of course, this is purely anecdotal, but it's something organisations must think about. Ultimately, do you make the decision to keep fans by continuing the roster, or include new talent to win?

The difference in acquiring talent in franchised leagues also has both its advantages and disadvantages. From the perspective of new players, you now have a slightly more structured career path – get hired by one of the top organisations. The sad thing is that there is a ceiling to how far a player can progress with unfranchised brands; and with decisions being made to continue rosters to keep fans happy, it's a hard ceiling to break through.

The argument can be made that franchising encourages organisations to become riskier with new roster picks, but in my amateur view I have a feeling most will choose to stick with what’s safe, and what they know the fans would love.

My personal opinion falls to this: franchising doesn't sit right with me. I'm sure it makes more business sense in most scenarios, but it seems to unintentionally take away from what to me is at the core of competitive gaming - meritocracy. The thing I love about open leagues is the ability of a team of 5 friends to get together, decide they want to be the best, and do it. However unlikely that may be, it should be possible.

27/09/2018


UK Esports - It's coming home
Speakers: Kieran Holmes-Darby, Scott Gillingham, Dom Sacco, Ben Greenstone, Martin Wyatt
Moderator: Ollie Ring

Whenever the topic of UK esports comes up, I always get excited. Not only because it's my home country, but because university esports is far more likely to come up.

It’s commonly known that esports has mostly grown in the complete wrong direction, the big leagues with the best players were established far before the university and high school ones. It’s great to see so many established individuals on the panel all looking towards the university scene and giving their encouragement.

In my view, it’s all about being patient. There are very few brands who have fully immersed themselves with the collegiate scene (OCUK, noblechairs, ASUS ROG), as it’s understandable – it’s hard to know what the fuck is going on. Personally, I’m doing my best (as best a university organisation can) to grow organisations’ interest in the scene and incentivise them to get involved. Of course, there are limits to not only what I’m able to do, but what I’m capable of (they’re different, trust me). As well as obvious time constraints of being a university student, I’m going to be far less experienced that other competing and more secure (semi) pro league organisations.

Something which is often mentioned alongside the university topic is the passion of those willing to get involved. Many leaders/committee members of these university societies are dedicating tens of hours a week to try and grow/market/improve their brand with little to no reward or recognition. There are whispers in the air of organisations taking more note of student effort, but even then, the collegiate scene is so saturated with passion that it's becoming harder and harder to stand out. Keep in mind, however, that this is a great thing for companies, organisations, and the collegiate scene in general.

The speakers were asked to give their 5-year predictions for the UK scene, so I'll hop in and give my own. My guess is that since esports has just been supported by BUCS, universities will start to take things a little more seriously. There will be esports arms of the universities rather than of the students' unions, and a lot more money will be given to student organisers to help them brand themselves alongside the university and throw more impressive events. This will bring in sponsors (endemic and otherwise), since being involved with a university and esports is an absolutely huge marketing opportunity.