Developers, please fix a randomizer for Carnival!

  • Thread starter DeletedUser34480

DeletedUser31440

I'm going to try to channel @True592 on this one:

What are your expectations for the event?
To win everything and never get any bad prizes, just like I DESERVE!!!!!

What averages are you measuring?
I don't have time to answer ridiculous questions. Come back with questions that are worthy of my time, pleb.

What results are you expecting?
You already asked this. What, you can't keep track of questions, but I'm supposed to track event results? Ridiculous!

Do you expect to always win the RNG prize you want?
Obviously, why else would I play, to not win the prizes that I want? Go look at a bus schedule.

How much evidence have you collected to prove the problem lies with the randomizer?
How much evidence do you have that it doesn't? None, just as I thought.

If you are trying to argue this point, why are you not gathering data to support it, especially from the people who do not share your viewpoint?
Ain't nobody got time for that, plus you all are so obviously wrong that it would be a massive waste of my valuable time.

What sort of data would you need to see to declare the randomizer was fixed?
6 fully upgraded bridges!!!

What do you hope to accomplish with your arguments?
EVERYTHING!!!!!

Would you like me to post my data here, or would you like to check the Carnival thread to gather my and other players' data that has been deposited there to support your argument?
Yet again, pleb, my time is too valuable to be spent proving my arguments; just realize that they are correct. Also, have fun waiting for the bus.

Regardless, RNG, like many things in life, doesn't care about our hopes and dreams. We just need to remain determined and make informed decisions to maximize the little control we have.
BORING!!!


Edit: Sorry Lancer, I just couldn't resist.
 

DeletedUser13838

why people don't understand that even if you get heads 99 times in a row, the chance of getting tails on the 100th coin flip is still 50%
I tried to keep silent, but this kind of thinking drives me crazy. It's pretty clear that, by any statistical measure, if a coin lands on heads 99 times in a row it's almost certainly not a fair coin, and the next flip is not 50/50 to be tails (do you know how small 2^-99 is?). Most people in this thread who claim to know something about statistics and probability fall into the basic trap of accepting their assumptions as gospel truth with no evidence, and laugh at people who criticize the unstated randomness assumption (armed with just anecdotal evidence).

The reality is that it is very easy to devise a random number generator that produces (over time) the listed percentages without this notion of independence of individual trials that seems to be accepted as some holy mandate. Now, I have no idea what mechanism the devs have used, but if I were designing a game, I'd make sure to correlate results so people don't get too unlucky or too lucky, to avoid these threads. My own personal anecdote is that if the devs were actually trying to do this, then they failed miserably. ;)
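To make the 2^-99 point concrete, here is a quick sanity check in Python (plain arithmetic only; the 60% "biased coin" figure is just an arbitrary comparison point, not anything from the game):

```python
# Probability that a fair coin lands heads 99 times in a row.
p_streak = 0.5 ** 99
print(f"P(99 heads | fair coin) = {p_streak:.3e}")  # ~1.6e-30

# Likelihood-ratio view: after 99 straight heads, even a modestly
# biased coin (60% heads) is favored over a fair one by a huge factor.
ratio = (0.6 ** 99) / (0.5 ** 99)
print(f"likelihood ratio (biased/fair) = {ratio:.3e}")  # ~7e7
```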
 

DeletedUser31308

I tried to keep silent, but this kind of thinking drives me crazy. It's pretty clear that, by any statistical measure, if a coin lands on heads 99 times in a row it's almost certainly not a fair coin, and the next flip is not 50/50 to be tails (do you know how small 2^-99 is?). Most people in this thread who claim to know something about statistics and probability fall into the basic trap of accepting their assumptions as gospel truth with no evidence, and laugh at people who criticize the unstated randomness assumption (armed with just anecdotal evidence).

The reality is that it is very easy to devise a random number generator that produces (over time) the listed percentages without this notion of independence of individual trials that seems to be accepted as some holy mandate. Now, I have no idea what mechanism the devs have used, but if I were designing a game, I'd make sure to correlate results so people don't get too unlucky or too lucky, to avoid these threads. My own personal anecdote is that if the devs were actually trying to do this, then they failed miserably. ;)

Can you name a method of reducing the variance, as you and True have suggested, without changing the absolute probability of individual trials?
 

Snarko

Active Member
Can you name a method of reducing the variance, as you and True have suggested, without changing the absolute probability of individual trials?
I'm not saying this is how it's done (I'm sure it's not), nor am I suggesting it should be done this way. I'm just mentioning a way to make a 5% chance that is less random than totally random.

Generate 20 booleans. Make sure 1 is true and 19 are false. From the code's perspective, when you use your ticket the outcome is already decided: 5% of the time you are 100% guaranteed to win, and 95% of the time you are 100% guaranteed to lose. You, the player, however, have no idea which one will come up next without substantial data collection to figure out that that is how it works.

It shouldn't be 20 booleans (or numbers, since there are other % chances to factor in too); that would be too easy to figure out. Perhaps 100 would be a good amount. Losing 100 5% rolls in a row happens, on average, to 1 in 169 people; with the system above it would happen to no one. You would win exactly 5 times in 100 attempts (assuming no doubling), and you would not have a clue when it would happen, nor would you know how to game the system without gathering a substantial amount of data. People could possibly figure out that it's not truly random, but figuring out exactly how the randomness works is another matter.

If the devs suspect someone has gathered enough data, they could change it to alternate between 75 (4 successes) and 125 (6 successes) predetermined rolls. Suddenly your collected data doesn't help you much anymore, beyond indicating that collecting data can help you figure it out. They could even change it slightly with each event that has % chances, making it nigh on impossible to collect enough data in time.
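A minimal sketch of the scheme described above, sometimes called a "shuffle bag" (the 100/5 numbers are just the ones from this post, not anything the game is known to use):

```python
import random

class ShuffleBag:
    """Predetermined outcomes: each bag of `size` draws contains exactly
    `successes` wins, in a random order, so the marginal win rate is
    successes/size with far less streakiness than independent rolls."""

    def __init__(self, size=100, successes=5):
        self.size = size
        self.successes = successes
        self.bag = []

    def _refill(self):
        self.bag = [True] * self.successes + [False] * (self.size - self.successes)
        random.shuffle(self.bag)

    def draw(self):
        if not self.bag:
            self._refill()
        return self.bag.pop()

bag = ShuffleBag()
print(sum(bag.draw() for _ in range(100)))  # exactly 5 wins per bag of 100

# For comparison, with independent 5% rolls the chance of 100 straight
# losses is 0.95**100, roughly 1 in 169, as noted above.
print(0.95 ** 100)  # ~0.0059
```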
 

DeletedUser31308

I'm not saying this is how it's done (I'm sure it's not), nor am I suggesting it should be done this way. I'm just mentioning a way to make a 5% chance that is less random than totally random.

Generate 20 booleans. Make sure 1 is true and 19 are false. From the code's perspective, when you use your ticket the outcome is already decided: 5% of the time you are 100% guaranteed to win, and 95% of the time you are 100% guaranteed to lose. You, the player, however, have no idea which one will come up next without substantial data collection to figure out that that is how it works.

It shouldn't be 20 booleans (or numbers, since there are other % chances to factor in too); that would be too easy to figure out. Perhaps 100 would be a good amount. Losing 100 5% rolls in a row happens, on average, to 1 in 169 people; with the system above it would happen to no one. You would win exactly 5 times in 100 attempts (assuming no doubling), and you would not have a clue when it would happen, nor would you know how to game the system without gathering a substantial amount of data. People could possibly figure out that it's not truly random, but figuring out exactly how the randomness works is another matter.

If the devs suspect someone has gathered enough data, they could change it to alternate between 75 (4 successes) and 125 (6 successes) predetermined rolls. Suddenly your collected data doesn't help you much anymore, beyond indicating that collecting data can help you figure it out. They could even change it slightly with each event that has % chances, making it nigh on impossible to collect enough data in time.
This would be highly abusable if people figured out it was working this way, which would happen very quickly because of how uniform the behavior would be. (Imagine 3 or 4 people reporting that they got perfect distributions; suddenly everyone suspiciously collects their data to confirm, and there are no outliers.)

In order to guarantee you will win the 5% chance exactly once in every set of 20 tries, you give users a lot of extra information. Say you win the 5% on your first try: you would know for certain that at least the next 19 tries would fail to give you what you're after, so you would stop there. As you say, the remedy would be to make it require a large sample size to notice the pattern. This can only be done by manually increasing the variance (instead of one 5% win guaranteed every 20 tries, five are guaranteed every 100 tries).

It would take a ton of dev effort (and some serious algorithm-writing chops) to design and implement an algorithm like this that is flexible enough to allow for somewhat regular changes whenever its pattern gets exposed. If there happened to be a functional version already in an open-source library, maybe I could see the devs trying it out. It would basically end up meaning there's variance very similar to what we have now, though, since the variance would need to be large enough that the community would not notice a pattern. People like me who keep track of their data would spot a suspiciously low amount of variance pretty quickly, since, as we've said, true randomness is very rarely close to the exact expected distribution, and such an algorithm would force it to be close.
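One way a data-keeping player could spot that kind of forced regularity is to compare the spread of wins per block of tries against what independent 5% trials predict. A rough sketch (the block and sample sizes are arbitrary):

```python
import random
from statistics import pvariance

def wins_per_block(draw, blocks=200, block_size=100):
    """Count wins in each block of `block_size` draws."""
    return [sum(draw() for _ in range(block_size)) for _ in range(blocks)]

# Independent 5% trials: wins per 100 are Binomial(100, 0.05), so the
# variance across blocks should sit near 100 * 0.05 * 0.95 = 4.75.
iid_counts = wins_per_block(lambda: random.random() < 0.05)
print(pvariance(iid_counts))  # ~4.75, give or take sampling noise

# A scheme that forces exactly 5 wins per 100 draws would show a
# variance near zero across aligned blocks -- a dead giveaway.
```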
 

DeletedUser34480

If only. You have to take the good with the bad. I tend to stay neutral there. Lol.
While you're here, let me ask: do you have any feedback channel to the developers?
If not, who does?
 

DeletedUser34480

Welp, that proves to me that trolling appears to be your objective, rather than actually fixing/analyzing the randomizer.
You know, when someone pops up, provides a basic probability example (without reading the bunch of similar primitive ones provided before), and throws dirt at you -- would you still call me a troll?

Data, data, data. We cannot create bricks without clay.
There was data.

The issue with all these analogies is the amount of assumptions that go into them.
Absolutely.

What are your expectations for the event? What averages are you measuring?
So, the expectations are not mine. And the "assumed averages" are clearly stated.
It is printed right there: 100% chance, 15/30%, and 5/10%.
I stated my point from the beginning: for a 5-10% chance, I could understand some serious deviation.
Now, 15% that turns into 30% is a solid promise.
Yet there were deviations of 100%+, twice, a few attempts apart.
That's wrong; that's broken.

What the math fans here can't comprehend is that this might be fine as a math exercise, but it is not acceptable for this kind of promised return.

All my examples were meant to explain that mindlessly taking the average (or throwing one's empirical dice a million times) is not some universal screwdriver.
Your deviation cannot fly around as it pleases when you declare a high chance -- unless you are a student in a classroom.
 

DeletedUser34480

Can you name a method of reducing the variance, as you and True have suggested, without changing the absolute probability of individual trials?
First, I gave you the option you asked for. You ignored it.
Second, please, let's stop clinging to "absolute". It is not uranium that we need to purify.
 

DeletedUser31308

First, I gave you the option you asked for. You ignored it.
Second, please, let's stop clinging to "absolute". It is not uranium that we need to purify.
If you're advertising the chance of getting each drop on a given click, that had better be the true probability, or else your randomizer really would be broken.
 

DeletedUser34480

If you're advertising the chance of getting each drop on a given click, that had better be the true probability, or else your randomizer really would be broken.
I'm not advertising anything. You insisted that I provide an option.
I never said it was an ideal solution, but it would be much more adequate.
As for a better solution, as I've also said, the developers could either spend some time on it (it won't be 5 lines of code, like you presented) or buy one (and it won't be open source, as you wished).
 

Lancer

Well-Known Member
While you're here, let me ask: do you have any feedback channel to the developers?
If not, who does?

I'm just a volunteer. The feedback threads are meant for the developers; the developers then speak with the appropriate staff. I'm not involved to that extent. You could ask @Snowbelle. She may have more info on how to better answer your question.
 

DeletedUser34480

I'm just a volunteer. The feedback threads are meant for the developers; the developers then speak with the appropriate staff. I'm not involved to that extent. You could ask @Snowbelle. She may have more info on how to better answer your question.
Oh.
It probably won't make sense to pursue this for the current event... by the time I ask her and she forwards it to the devs, the event would pretty much be over.
Thank you for pointing me to the right place, though. So far, I've only gotten a suggestion to create a ticket :(
 

DeletedUser13838

Can you name a method of reducing the variance, as you and True have suggested, without changing the absolute probability of individual trials?
First, I never said anything about variance reduction. It's a completely different concept from what I was referring to.

Second, "probability of individual trials" implies independence, which is an unstated assumption. My thought was more about adding autocorrelation (i.e., the process is no longer independent). Again, I'm not saying what the devs do or don't do, but there is always room for skepticism.

Finally, what I was referring to was the possibility of an autocorrelation term, which is simply the notion that future results are based in part on past results. A simple autocorrelation would be something like this: if the kth trial failed, then the probability of the (k+1)st trial succeeding is 1/4, but if the kth trial succeeded, then the probability of the (k+1)st trial succeeding is 3/4. It's readily apparent that the overall probability is unchanged. While not as obvious, the long-term variance is unchanged as well (since the process is stationary). However, the probability of n consecutive failures or successes increases significantly as n grows. The process could instead be altered to reduce the probability of n consecutive failures or successes.
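A small simulation of that two-state process (the 1/4 and 3/4 transition probabilities are just the numbers from the example above): the long-run success rate stays at 1/2, but streaks run much longer than independent 50/50 flips would produce.

```python
import random

def autocorrelated_trials(n, p_after_fail=0.25, p_after_success=0.75):
    """Markov chain where trial k+1 depends on trial k. The stationary
    success probability works out to 0.5, same as a fair coin."""
    results = []
    prev = random.random() < 0.5  # start from the stationary distribution
    for _ in range(n):
        p = p_after_success if prev else p_after_fail
        prev = random.random() < p
        results.append(prev)
    return results

trials = autocorrelated_trials(1_000_000)
print(sum(trials) / len(trials))  # ~0.5: unconditional probability unchanged

# Longest run of identical outcomes; noticeably longer than for
# independent fair flips, since each state tends to persist.
longest = current = 1
for a, b in zip(trials, trials[1:]):
    current = current + 1 if a == b else 1
    longest = max(longest, current)
print(longest)
```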
 

DeletedUser31308

First, I never said anything about variance reduction. It's a completely different concept from what I was referring to.

Second, "probability of individual trials" implies independence, which is an unstated assumption. My thought was more about adding autocorrelation (i.e., the process is no longer independent). Again, I'm not saying what the devs do or don't do, but there is always room for skepticism.

Finally, what I was referring to was the possibility of an autocorrelation term, which is simply the notion that future results are based in part on past results. A simple autocorrelation would be something like this: if the kth trial failed, then the probability of the (k+1)st trial succeeding is 1/4, but if the kth trial succeeded, then the probability of the (k+1)st trial succeeding is 3/4. It's readily apparent that the overall probability is unchanged. While not as obvious, the long-term variance is unchanged as well (since the process is stationary). However, the probability of n consecutive failures or successes increases significantly as n grows. The process could instead be altered to reduce the probability of n consecutive failures or successes.

My assumption is that as soon as you make successive trials dependent on one another, you open the door to abuse. It is impossible to abuse random, independent trials. In your mentioned-but-not-enumerated variant, where you correlate to prevent consecutive successes/failures, it would become prudent to stop trying for a prize after winning it the first time (as you'd suddenly be paying full cost for a reduced-probability attempt). Having such edge cases is not desirable from an algorithm-design perspective (due to complexity), and it makes player behavior much harder to predict (what happens if they notice the correlation? will they spend less or more? will event participation increase or decrease? will more or fewer people consider the system "rigged" and stop playing as a result?). It's a whole lot of effort between devs and marketing/business for a solution that wouldn't be much better than simple independent random trials for the majority of people.
 

DeletedUser13838

My assumption is that as soon as you make successive trials dependent on one another, you open the door to abuse. It is impossible to abuse random, independent trials.
You can assume anything you want without evidence.

In your mentioned-but-not-enumerated variant, where you correlate to prevent consecutive successes/failures, it would become prudent to stop trying for a prize after winning it the first time (as you'd suddenly be paying full cost for a reduced-probability attempt).
The probabilities aren't reduced. The autocorrelated process I used as a simple example is stationary, which means the moments (expected value, variance, etc.) don't change over time.
 

DeletedUser31308

You can assume anything you want without evidence.


The probabilities aren't reduced. The autocorrelated process I used as a simple example is stationary, which means the moments (expected value, variance, etc.) don't change over time.
The probabilities of individual attempts, as you mentioned, are different (just look at the example you provided). It doesn't matter that the overall probability, mean, variance, etc. are unchanged.
 

DeletedUser13838

The probabilities of individual attempts, as you mentioned, are different (just look at the example you provided). It doesn't matter that the overall probability, mean, variance, etc. are unchanged.
The conditional probabilities in the example are changed by design, but the unconditional probabilities don't change. I don't know what your argument is.
 