Monday, September 21, 2015

Q4-2015 PLEX Prediction


With the price of PLEX reaching new records, the conversation inevitably centers on what the appropriate price of PLEX should be.  As Dr. Ejyo was fond of saying, "The right price of PLEX is the price players are willing to pay."

Though there will be an outcry from players about the play-to-pay equation becoming unbalanced, the price of PLEX will keep rising to meet demand for the item.  But as the price rises, I expect a whole host of related behaviors to feed back on themselves, driving the frenzy and making the peaks and troughs hard to nail down:
  1. Deflation/Liquidity squeeze: players will avoid spending their ISK on consumption, instead either keeping it in reserve or buying more PLEX than they usually would.
  2. Net worth chasing: players concerned with total value will move more liquid ISK into PLEX to preserve their purchasing power against the rising price.
  3. Investment chasing: as the rise becomes more pronounced, players will use PLEX as an investment rather than as a consumable.
In short, I think the current peak is unsustainable right now.  However, a 1.3B-1.5B PLEX by the end of the year is absolutely possible.  To dream of the good ol' days of sub-800M PLEX is as foolish as predicting 2B+ PLEX.  A band of 1.1B-1.3B for the quarter is a reasonable expectation.

A Prediction In Parts

I expect Q4 to split into two halves: now to Thanksgiving (US), and then the remainder of the year.  As in last year's predictions, I expect positive trends through November, traditionally a busy time in gaming.  Then, when December hits, I expect negative trends as play tapers off, paired with the expected holiday sales.  The trouble is picking a starting point for the predictions, so I based it off the 60-day moving average to avoid starting in unrealistic territory.
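For reference, grabbing that starting point is a one-liner in R.  A minimal sketch, assuming a daily prices vector; the zoo package here is my illustration, not necessarily what the production script uses:

  library(zoo)  # rollmean() for simple moving averages

  # prices: numeric vector of daily PLEX prices, most recent last
  start_price <- tail(rollmean(prices, k = 60, align = "right"), 1)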

[Figure: Raw Predictions (Code Source)]

To briefly sum up, I believe we will continue in positive territory to a steady price of 1.3B (+10%), then see a pullback to 1.2B (-10%) to end the year.  Again, it's really hard to predict prices with the recent wild swings, but I based the behavior on the similar period last year.

Also, I'd like to note that 2 of the last 4 weeks triggered price flags in the Prosper toolset, which is a strong indication of instability.  The only times I've seen PLEX trigger these models were before big action on CCP's part (the ISBoxer bans).  I can't know what CCP is doing behind the scenes, but I'd bet on something popping the bubble sooner rather than later.


We are in unstable territory for generating predictions, but the net direction of PLEX over its history is up.  Also, say what you will about the health of the game vs PLEX, the honest truth is that as long as people are willing to buy and [over]valuate PLEX, the price will climb.  We were on a positive trajectory before the ISBoxer changes last year, and CCP's rule change reset the general demand to consume PLEX.  Though we can't know actual PLEX consumption rates, a traditionally busy Q4 paired with AUR consumption may drive us closer to the high line than the low one.

Also, for as much work as this was to generate, I'm going to save the making-of for another blog post.  Stay tuned here and on the EVE-Prosper Market Show for more information as it develops!

Twitch - Live Fridays 0200
Support us on Patreon

Tuesday, July 7, 2015

Aspiring Hari Seldon - Developing Price Predictions (Part 1)

With the Prosper Market Show on semi-hiatus for the summer, I'm trying to put the time toward developing some new utilities for analysis.  Though I have plenty on the to-do list to work through, I wanted to toy around with some future-looking tools to help illustrate some of the intuition I see moving forward.

Specifically, after looking at some examples, I wanted to try out Monte Carlo analysis.  The idea is that if you take a decent prediction methodology and generate thousands of predictions, you can average out the noise to get at the true signal.

Step 1: Generating Predictions

Knowing nothing at the start, I picked up Geometric Brownian Motion as the predictor function.  It's relatively popular and well supported, and is readily available in an R library.  I won't even begin to claim I'm an expert on what's going on under the hood, but the general concept is pretty straightforward:

Next Price = Current Price + Current Price * (expected variance) * (normal random number)

The variance can be tuned as a percentage to express how far any individual roll will move, and how widely the random number generator will vary.  To find the numbers, I had to do a bit of hackery, but if you want to see the full methodology, check out the raw code here.  Also, it's worth noting that GBM does take interest (drift) into account, but I've zeroed out those numbers for the time being.
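For the curious, a single run boils down to a couple of lines with the sde package's GBM() function.  A minimal sketch; the price, sigma, and horizon below are illustrative placeholders, not the tuned values from the linked code:

  library(sde)  # GBM() simulates geometric Brownian motion

  start_price <- 1.05e9  # illustrative starting PLEX price (ISK)
  sigma       <- 0.05    # variance seed; a guess, not the tuned value
  days        <- 90      # forecast horizon in steps

  # one random walk; r = 0 zeroes out the interest/drift term
  path <- GBM(x = start_price, r = 0, sigma = sigma, T = 1, N = days)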

Using CREST data as a basis, we can pick a point in time and generate some predictions looking forward.  But due to the random-variance methodology, some predictions are good, and others are not:

These are 3 independent predictions using low/med/high variance seeds into the GBM function.  The "high" line trending down and the "low" line trending up are simply a matter of which rolls are winning.  If you look more closely, the bounds on the low seed are much tighter, where the high seed is much wilder.  Given another set of rolls, the best predictions will change.

The weakness we should be aware of is rolling error.  Because the GBM function is a recursive, random guessing function, if a prediction starts to stray off into the bushes, there's little reason to expect it will come back.  This model uses a single price to start walking, and the further it walks, the wider the end results will land.

Step 2: Generate a lot of predictions

Happy with the general behavior of the model, I moved on to generating a lot of guesses.  The randomness should wash out given enough trials.

Though I was able to wash out the randomness, the end result is not nearly as useful as I was expecting:

Summarizing the predictions by day gives us a much different picture.  The randomness is gone, but we're left with a less useful "cone of possibility".  Though the end result we're looking for is definitely a "zone of possibility", this picture is not a useful automation of that concept.  Specifically, the median prediction is roughly "the price will stay the same", which is not a useful prediction for most items.
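For reference, the Monte Carlo layer is just a replicate() wrapped around the single-run sketch from Step 1, plus a by-day quantile summary.  A minimal sketch with illustrative values:

  library(sde)
  start_price <- 1.05e9; sigma <- 0.05; days <- 90  # as in the earlier sketch

  # repeat the single-path simulation many times
  n_trials <- 1000
  paths <- replicate(n_trials,
                     as.numeric(GBM(x = start_price, r = 0,
                                    sigma = sigma, T = 1, N = days)))

  # paths is (days+1) rows x n_trials columns; summarize each day across trials
  cone <- apply(paths, 1, quantile, probs = c(0.25, 0.50, 0.75))

The middle row of cone is exactly that flat "price stays the same" median.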

What is going on?  Well, here we can actually see the GBM function for what it is and why it is breaking down for our predictions:
  1. It assumes +/- variation is equally probable.  The distributions are strictly normal, which means the daily movements are centered around 0, +/- the variance, over time.
  2. It takes no history into account by default.  The function takes a single price and a single variance variable, and understands nothing about max/min variance.
This is particularly absurd for PLEX, where variance/volatility is largely positive.

Just looking at the variations for the last year, almost 60% of the movements were positive.  We could further enhance our predictor by looking at a more recent window.
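That check is trivial to reproduce in R (again assuming a daily prices vector):

  # fraction of daily movements over the last year that were positive
  returns <- diff(log(tail(prices, 365)))
  mean(returns > 0)  # roughly 0.6 for PLEX, per the observation above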

Step 3: Back to the Drawing Board

After chewing on this problem for a few days, I do have some things to follow up on.  Though there are other models to try in the sde package, I think we have some options for getting more useful predictions out.

Convolve Predictions?

This is one I've gone back and forth about.  I'd like to be able to "sum" together the predictions to get the general "tone" of what is going on.  Except that it's been 5 years since I took a digital signals class, and my attempts so far have just been guesses.  Though R is ready out-of-the-box to do FFT/convolutions, I need to better understand what I'm asking the program to do.  Currently, all attempts run off to infinity.

Exponential Predictions Using Volatility

After seeing the outcomes of the GBM experiments, I'm instead seeing a different trend pop out.  If I just pick low/med/high variances out of the distribution, I can better generate a window in the short term: simply project the same %variance forward.

Another option is to get witty with the actual high/low bounds, narrowing the prediction using our existing price-flagging system.  I just picked the 25th/75th percentiles, but we could narrow those bands with a smarter lookback to characterize how "wild" the last period has been.
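A minimal sketch of that exponential projector, compounding the 25th/50th/75th percentile daily moves forward from the current price (the lookback window and horizon are illustrative guesses):

  # prices: numeric vector of daily PLEX prices, most recent last
  moves <- diff(log(tail(prices, 60)))  # recent daily % moves
  bands <- quantile(moves, probs = c(0.25, 0.50, 0.75))

  # compound each percentile forward to build a low/median/high channel
  horizon <- 1:30
  proj <- sapply(bands, function(m) tail(prices, 1) * exp(m * horizon))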

Get Smart: Filter/Augment GBM Outputs

The last option is to roll some of my own ideas into the random-guessing function: use the historical variance as a seed for the guessing function, and/or drop predictions that don't at least fit the last 10 days well before walking forward to the next 10.  I'm not yet convinced either is better than the far simpler exponential predictor above, because I would expect the exponential pattern to still wash out in the end, especially if we stick with a Monte Carlo style approach.

The hope would be to start a random function on a linear path (perhaps taking the last 2 weeks into account), then use exponential variance as the high/low bounds to build a channel.
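As a hedged sketch of the drop-the-bad-walkers idea: start each roll 10 days in the past, and keep only the trials that stayed near what actually happened (the 5% tolerance is a placeholder, not a tested value):

  library(sde)
  sigma <- 0.05; days <- 90; n_trials <- 1000  # illustrative values

  # prices: numeric vector of daily PLEX prices, most recent last
  actual_10 <- tail(prices, 10)             # the last 10 real prices
  seed      <- prices[length(prices) - 10]  # price 10 days back

  keep <- list()
  for (i in 1:n_trials) {
    path <- as.numeric(GBM(x = seed, r = 0, sigma = sigma,
                           T = 1, N = days + 10))
    # keep the path only if its first 10 steps track reality
    if (all(abs(path[2:11] / actual_10 - 1) < 0.05)) {
      keep[[length(keep) + 1]] <- path[-(1:11)]  # forward portion only
    }
  }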

EDIT: Using RSI as a prediction filter yielded some interesting results.  There is still some tweaking to do, especially for items that have had a recent spike into very unstable territory, but initial forays seem promising.
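For anyone following along, the RSI itself comes ready-made in the TTR package; the 14-day window and the 70 "overbought" cutoff below are the textbook defaults, not necessarily what the final filter uses:

  library(TTR)  # RSI() lives here

  # prices: numeric vector of daily PLEX prices, most recent last
  rsi <- RSI(prices, n = 14)
  tail(rsi, 1) > 70  # e.g. distrust upward predictions when recently overbought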


I'm really disappointed with the outcomes from this project; a lot of articles online made this sound like a magic pill.  Instead, we see the underlying nature of the model once the randomness is removed.  Though it's not a complete loss, I was hoping for something a little wittier than a linear fit; CCP Quant has pointed out the forecast package inside R for more tinkering.

The nice thing out of this exercise has been how quickly I could experiment in R.  Though there are still some considerable hurdles between these experiments and actual inclusion in the show, I was able to pick up some powerful packages, really dig into the problem, and iterate quickly.  I continue to be impressed with R as an exploration platform, but I still have some hesitance about integrating it in more powerful ways.