

@Riot - changes that really are needed about this game


Lamcho

Junior Member

02-09-2015

Hey, Adrenalotr,
I was not really referring to you as a "negative critic"; I appreciate your comments and your point of view.



Adrenalotr

Senior Member

02-10-2015

Well there's just us three in this thread.

I'm looking forward to seeing the code and messing with it (if I grasp the language).



Lamcho

Junior Member

02-10-2015

I'm looking forward to it too; however, I have a big deadline coming at the end of next week on a real project, so I don't imagine having a lot of time and energy for my fun little side projects.

Meanwhile, some food for thought: https://www.facebook.com/DotaMEA/posts/227285034110019
TrueSkill is indeed a rather complex matchmaking system that should make sure that players are given fair teams over a number of games. I think this is what my system is missing to reduce its error. However, according to the TrueSkill statistics, even under perfect conditions a player should play between 50 and 150 games to be evaluated "correctly". My personal experience is that even after 150 or 250 games the quality of most matches sucks. I can see why a system that can be plugged in anywhere and work fine is preferred in most multiplayer games. It is perfect in the sense that you can change everything about the game and still use the same matchmaking system. However, I don't think this is in any way optimal; I think it's convenient. If measuring a player by his own performance can be exploited, just put team objectives into the player measurements.
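To illustrate why so many games are needed, here's a rough Python sketch of the general idea — not the actual TrueSkill algorithm (which is a Bayesian model), but a Glicko-style update where each player carries a rating plus an uncertainty that shrinks as games accumulate. All the constants here are made up for illustration:

```python
import random

def win_prob(r_a, r_b):
    # Elo-style logistic win expectation on a 400-point scale.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(rating, sigma, opp_rating, won):
    # Step size grows with uncertainty: early games move the rating a lot,
    # later games barely move it. The constants are arbitrary.
    k = 10 + sigma / 10
    rating += k * ((1.0 if won else 0.0) - win_prob(rating, opp_rating))
    sigma = max(5.0, sigma * 0.97)  # uncertainty decays toward a floor
    return rating, sigma

random.seed(1)
true_skill, rating, sigma = 1800.0, 1500.0, 200.0
for game in range(150):
    # Matchmaker pairs the player near their *current* rating.
    opp = rating + random.gauss(0, 100)
    won = random.random() < win_prob(true_skill, opp)
    rating, sigma = update(rating, sigma, opp, won)
print(round(rating), round(sigma, 1))
```

Even in this toy version the rating needs dozens of games to crawl from the default 1500 toward the player's true 1800, which matches the 50-150 game figure in rough shape.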



Adrenalotr

Senior Member

02-10-2015

Very interesting link. I haven't actually considered uncertainty at all. It makes sense that they include it.

I wonder how necessary it is, though. If we define skill as the ability to consistently win games, then high-uncertainty players aren't necessarily as skilled as they might feel when considering only their good games. The players they consistently outplay can vary much less in their performance, do better against the majority of opponents they're matched with, and therefore consistently climb or hold their rank, whereas the high-uncertainty player jumps up and down the ranked ladder.

Because the game has a number of things affecting this (positions, matchups, team cooperation, off-meta adaptability, griefers, smurfs), there's an inherent uncertainty in everything. I've considered it statistical noise that becomes irrelevant after enough games are played. When you have the time, can you run a few more simulations to find the number of games, in your current implementation, where the number of misplaced players is less than 10% and less than 1% of the total population of players?
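For what it's worth, a simulation along those lines could be sketched like this. This is purely hypothetical — the population size, K-factor, pairing scheme, and the rank tolerance that counts a player as "correctly placed" are all invented here, not taken from the actual implementation discussed in the thread:

```python
import random

def win_prob(a, b):
    # Elo-style logistic win expectation on a 400-point scale.
    return 1.0 / (1.0 + 10 ** ((b - a) / 400.0))

random.seed(2)
N, TOL = 200, 20  # population size; rank tolerance that counts as "placed"

true = [random.gauss(1500, 300) for _ in range(N)]   # hidden true skills
rating = [1500.0] * N                                # everyone starts equal

def misplaced_fraction():
    # A player is "misplaced" if their rating rank is more than TOL
    # positions away from their true-skill rank.
    true_rank = {i: r for r, i in enumerate(sorted(range(N), key=lambda i: true[i]))}
    rate_rank = {i: r for r, i in enumerate(sorted(range(N), key=lambda i: rating[i]))}
    return sum(abs(true_rank[i] - rate_rank[i]) > TOL for i in range(N)) / N

history = {}
for g in range(1, 501):
    # One round: pair players with adjacent current ratings.
    order = sorted(range(N), key=lambda i: rating[i])
    for a, b in zip(order[::2], order[1::2]):
        won = random.random() < win_prob(true[a], true[b])
        delta = 24 * ((1.0 if won else 0.0) - win_prob(rating[a], rating[b]))
        rating[a] += delta
        rating[b] -= delta
    if g in (10, 50, 150, 500):
        history[g] = misplaced_fraction()
print(history)
```

Plotting `history` for different population sizes and tolerances would give the "games until fewer than 10% (or 1%) misplaced" numbers, though with a fixed per-game noise the misplaced fraction may level off above 1% rather than keep falling.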