Monday, November 7, 2016

Casino Bans and F2P - October Economic Report


October was jam-packed with news: Engineering Complexes, the NPE update, Command Boosts, and EULA changes. The stage is set for Ascension, but where is the economy today?

The Money Supply - IWI Bans

Though EVEbet was allowed to close peacefully and settle its tabs, I Want ISK (IWI) was shuttered with a ban wave that knocked out the leadership and zeroed out all bankers' wallets. I personally feel this kind of enforcement was one of the better scenarios given the circumstances, since it frees IWI from paying back over-leveraged liabilities because #grrccp. Many complain that, without the premier content sponsors, producers and content makers will be in trouble. It's important to remember, though, that the EULA update is in line with a changing litigious atmosphere around virtual gambling; see Valve mired in CS:GO trouble.

Money Supply Graph - Total ISK zoomed in

Market watchers wanted to know the quantifiable impacts. Reddit reports put the damage at 30T, but comparing the Money Supply data against the monthly Sinks & Faucets report, we estimate the total cash impact at 22.4T out of a record 36T reported in the Active ISK Delta for October. Damages could total 30T when counting confiscated PLEX or high-value assets like Alliance Tournament prizes, but that value is not reported in the Economic Report and we have no way of corroborating any claim over 27T.


Lastly, CCP Quant released a breakdown of EVE's player wealth distribution. Though many took to social media to argue about the 1%, the distribution of ISK is much fairer than anticipated. It's a good bet to assume a power curve in cases like this, where roughly 20% of the population controls 80% of the resources. And though this does not include net worth/physical assets, nor corporation wallets or alliance war-chests, it's an interesting sample of wealth in EVE.


Other Indicators

Looking at other monthly indicators, we find a mixed bag. First off, PLEX and ISK velocity remain on their seasonal track without much deviation. PLEX has risen a little too sharply in the last week, but shows some signs of leveling off at a price between 1.25B and 1.3B for Ascension.


Usually, the markets are hottest in the fall, with gaming rising across the board in Q4.  Last year, the markets were hottest in October, with player gatherings like EVE_NT, EVEsterdam, and EVE Vegas stoking the hype train. This year, players seem to be in a holding pattern ahead of Ascension. Though CCP Rise's Alpha Clone PVP presentations should have stoked more hype into pre-Ascension speculation, the statistics still look anemic. Furthermore, the PVP numbers are concerning: the pre-Vegas dip was so unbelievable, we had to verify we were not missing data.


Looking Forward to Ascension

Hitting the big points as fast as possible:
  • PLEX: Expect demand to be a wash, watch out for PLEX sales to pop the current bubble
  • Mining: Brace for big impacts across the board:
    • Tritanium: Should stay stable in the short term due to ISK/m3 value, but expect long-term sag
    • Pyerite/Isogen: Bigger drops as compressed material comes out of nullsec
    • Mexallon/Nocxium: Expecting supply stability, though volatile prices
    • Zydrine/Megacyte: Still have room to fall, and will be clobbered by Ascension mining
    • Morphite: Minimum supply change, but weak demand without big conflicts
    • Nitrogen Isotopes: ECs should offline many industry POS, lowering demand
    • Other Isotopes: Hot on patch day, but supplies should drive prices down by end of year
  • Alphas: Minimal impacts in T1 prices, but should drive up ISK velocity and PVP stats

Forecasting Alpha impacts is incredibly difficult, given that no other game comes close to EVE mechanically. Many savvy players already treat EVE as F2P thanks to PLEX, and since Skill Injectors were introduced, subsidizing ISK generation has never been easier. The content of EVE is its players, and Alphas give us the means to keep more warm bodies in the game. We are excited to see Nov-Jan's economic stats with the expected flood of new content generators.

Thursday, October 27, 2016

Favorite Python Packages 01 - Making a chatbot

I had to write a logging handler for work that pushed errors up to HipChat. It turns out the process was so easy, I could not resist adding a chat handler to ProsperCommon (especially given my hacky email handler). Despite my love for Slack, Discord became the tool of choice because it's easy to stand up/tear down chats with a lot of flexibility. I also skipped Slack for now because the tweetfleet server blows past the 10k message buffer on a daily basis.

So, let's cook up a chatbot! Discord's API offerings are dizzying; this should be easy!

Discord.py - Making Chatbots Easy

Since Discord relies on an OAuth2 connection, and chats are inherently asynchronous, cooking up a bot from scratch would hurt. Discord.py to the rescue! This library has exceptional API coverage and is easy to use.

My one gripe is the documentation. Docs are sparse in places, but I'll forgive that sin given their example code and an active community on the Discord API Guild. I also had some trouble getting off the ground with the Discord API docs, specifically figuring out which tokens were required, but once the bot was authenticated, it was off to the races!
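For reference, getting a bot responding takes very little code. Here's a minimal sketch along the lines of the discord.py examples; the exact call names depend on the library version you install (newer releases use message.channel.send() instead of client.send_message()), and the !quote command is just a placeholder:

```python
import discord

client = discord.Client()


@client.event
async def on_ready():
    # Fires once the bot has authenticated and connected
    print('Logged in as {0}'.format(client.user.name))


@client.event
async def on_message(message):
    # Never respond to our own messages, or the bot will talk to itself
    if message.author == client.user:
        return
    if message.content.startswith('!quote'):
        # Placeholder command: a real handler would call the quote/cache code below
        await client.send_message(message.channel, 'o7 -- quote goes here')


client.run('YOUR_BOT_TOKEN')  # bot token from the Discord developer site
```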


TinyDB - The Easy Object-store

Pinging the internet for data is not free, whether because of rate limits or round-trip times. Tools like SQLite are great for lightweight/portable data storage, but they also require schema design. MongoDB is a powerful noSQL solution, but is heavy to stand up (and I'm not in love with the query language). TinyDB comes to the rescue as a way to get the JSON/noSQL storage of MongoDB with none of the server/auth standup.

This shines when paired with REST endpoints. It's easy to push/pop entries around and keep the same raw JSON in the archive as what's coming from the endpoint. Adding more keys for searching is as easy as editing JSON. I'm still not in love with my cache-timer implementation in ProsperBot, but fetching from cache is 100x faster than an internet call. Lastly, debugging is easy since the output is raw JSON, though this could lead to compression issues down the line.
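To make that cache pattern concrete, here's a rough sketch of the idea rather than ProsperBot's actual code; the endpoint URL, field names, and timeout are all placeholders:

```python
import time

import requests
from tinydb import TinyDB, Query

CACHE_FILE = 'quote_cache.json'  # placeholder path
MAX_AGE = 300                    # seconds before a cached entry is considered stale


def fetch_quote(ticker, endpoint='https://example.com/quote'):  # placeholder endpoint
    """Return a quote dict, only hitting the internet when the cache is stale."""
    db = TinyDB(CACHE_FILE)
    entry = Query()

    cached = db.search(entry.ticker == ticker)
    if cached and time.time() - cached[0]['cache_time'] < MAX_AGE:
        return cached[0]  # fresh enough: skip the round trip

    raw = requests.get(endpoint, params={'symbol': ticker}).json()
    raw.update({'ticker': ticker, 'cache_time': time.time()})

    db.remove(entry.ticker == ticker)  # pop the stale record
    db.insert(raw)                     # archive the raw JSON plus our search keys
    return raw
```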


Quick pro-tip about TinyDB: get ujson. This pure-C implementation of the JSON library is a great drop-in replacement. It can also be baked into libraries like Requests. ujson makes handling JSON lightning fast! Also, TinyDB has a wide array of extensions, and I will be looking into MongoDB hooks at a future date.
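The swap itself is a one-line import wherever you control the (de)serialization; wiring it into third-party libraries takes a bit more plumbing:

```python
import ujson as json  # same loads/dumps interface as the stdlib json module

payload = json.dumps({'ticker': 'CCP', 'price': 42.0})
record = json.loads(payload)
```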

NLTK - Processing Text Made Easy

The number one problem I have with stock quotes: it takes 2-3 extra clicks to figure out WHY the price moved for the day. Google/Yahoo/etc provide great single-stock pages that give news summaries, but when you open a ticker or phone widget, only the raw numbers are reported. If I'm going to make a quote bot, why not include some information and save people a search?

The good news: Google/Yahoo both give a by-ticker API of relevant news articles. The bad news: they yield 10-15 articles per query, and the data isn't particularly ranked/scored at the source. I could have gambled on the first article being the best, or stacked a publisher priority order, but all I wanted was:
Good news when the stock is up.  Bad news when the stock is down
NLTK to the rescue. I have wanted to try my hand at sentiment/language analysis since I saw a local talk on Analyzing P2P Lending Data. Putting headlines through the vader_lexicon tools did exactly what I wanted and was blazing fast.
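For the curious, the whole sentiment step boils down to a few lines. This is a quick sketch with made-up headlines; only the vader_lexicon calls matter:

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download('vader_lexicon')  # one-time download of the lexicon data
analyzer = SentimentIntensityAnalyzer()

headlines = [
    'Company beats earnings estimates, shares surge',
    'Regulators open probe into accounting practices',
]

# compound runs from -1 (very negative) to +1 (very positive):
# grab the top of the list when the stock is up, the bottom when it is down
scored = sorted(
    headlines,
    key=lambda headline: analyzer.polarity_scores(headline)['compound'],
    reverse=True,
)
for headline in scored:
    print(analyzer.polarity_scores(headline)['compound'], headline)
```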

After playing with this quick demo of NLTK, I'm excited to expand this toolset.  If I can find the time, I'd very much like to write up a new Discord bot for grading a community and highlighting troublemakers statistically rather than bluntly using block lists and word blacklists.

Let's See It!


I'm going to save the "how to get [stock] data" question for another blog.  There's a wide world of APIs and support out there, and digging into them is worth a post of its own.  For the impatient, I used these two articles as a springboard to get started:
Though the bot's language may require some creative design for EVE topics, standing up the bot should be easy.  I've been able to add functions at a uniquely fast pace (0.5-1 day per feature), and standing up the whole bot took just a few evenings once I got through the roadblocks.  The libraries above are excellent tools to have in your toolbox, and I'm excited to dig deeper into their functionality beyond the small `hello world` functions written so far!

Tuesday, October 18, 2016

Up And Down - Maintaining an OHLC Endpoint and Deploying Flask Restful


This is the first part of a more technical devblog. I will write up more specifics in part 2, but I wanted to talk about the ups and downs behind the scenes of our EVE Mogul partnership. The issues are mostly my own failings, and Jeronica, Randomboy50, and the rest of the team have been amazing given my shoddy uptime.

Prosper's OHLC Feed

I forgot to blog about this since the plans for Prosper's v2 codebase have only recently solidified, but we have a CREST markethistory -> OHLC feed hosted at eveprosper.com. The purpose was to run Flask/REST through its paces, but Jeronica over at EVE Mogul whipped up a front-end and Roeden at Neocom has been using it in their trading forays.

SSO Login Required To View

This originally served me well as a learning experience, but keeping a REST endpoint up isn't as simple as I originally expected. From Flask's lack of out-of-the-box multithread support to some more Linux FUBARs below, it's been a wild ride. And now that players are legitimately counting on this resource as part of their toolchain, I figured it's time to get my act together.
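For context, the endpoint itself isn't much more than a Flask-RESTful resource; the hard part is everything around it. A stripped-down sketch, with illustrative route and parameter names rather than the production code:

```python
from flask import Flask
from flask_restful import Api, Resource, reqparse

app = Flask(__name__)
api = Api(app)


class OHLC(Resource):
    """Serve open/high/low/close candles built from CREST market history."""

    def get(self):
        parser = reqparse.RequestParser()
        parser.add_argument('typeID', type=int, required=True)
        parser.add_argument('regionID', type=int, default=10000002)  # The Forge
        args = parser.parse_args()

        # Production would pull from the cache/database layer here
        return {'typeID': args['typeID'], 'regionID': args['regionID'], 'candles': []}


api.add_resource(OHLC, '/OHLC')

if __name__ == '__main__':
    app.run()  # the single-threaded dev server: exactly the uptime problem above
```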

The Litany of SNAFUs

What really brought the house of cards down was our move from a traditional hosting service to a full r/homelab solution. Prosper has been living alongside some other nerd projects (Minecraft, Arma, Mumble, etc.), and this move gets Prosper off the other customers' shitlist when Wednesday night rolls around and I hammer the box generating the show's plots. Unfortunately, in exchange for the added performance, we now live under a benevolent tinkerer; restarts and reconfigs are more common than before. It's a huge upgrade, and I can't thank Randomboy50 enough for the support, but nothing is truly free (except the minerals you mine yourself™).

#nofilter #bareisbeautiful


This need for stability runs headlong into a shitty part of Python: package deployment. Though wheeling up and distributing individual Python libraries is easy, deploying Python as a service is not. There will be a second blog on the specifics, but out of the box you're largely stuck hand-rolling magic project-deployment scripts, which can get really hairy if you're not careful about virtualenvs.

Thankfully, work turned me on to dh-virtualenv, and though we're now grossly overengineered with a service .deb installer, we have a properly deployed Linux service that should be far more robust going forward. It does mean there are now "build" and "deploy" steps for updates, but with the endpoint tied into systemctl it should be much less likely to go down.

Even with the last few months of work, I still expect a large amount of reengineering in our quest for a Quandl-like EVE service, but with the installer built we can keep the endpoint up with a lot less effort going forward. We are still behind on the ProsperWarehouse rollout and getting scrapers rewritten, but those modules should be a cakewalk to deploy now that ProsperAPI is properly built up.

Also, I've worked in a Discord logging handler, which will be useful for monitoring, but more notes on that later ;)

Friday, October 7, 2016

Stoking the Hype Train - September Economic Report

Editor's Note: this blog was published about 30 minutes before CCP released updated notes about Alpha Clones.  The ban on Alpha multiboxing means we are walking back our expectations on low-end minerals in HS.

September's economic numbers were released this week.  And though we got a sneak-peek in our o7 Show Market Brief, having the real numbers released to the public gives us a chance to really review the state of the EVE economy.



I'm going to break from the usual formula this month.  Where previously I've tried to explain the technicals, I'd rather use this as a primer for the fall/winter season, with all the juicy features on the horizon.

What's Not In The Report

Sadly, the economic numbers were cut just before some of the biggest news was released.  Though we can see some ripples thanks to Alpha Clones, and some effects in the mineral markets thanks to a barge rebalance, we won't see the waves from the Mining Booster devblog.

Before we pick apart the month-gone-by, I think it's incredibly important to send out some forecasts about minerals:
  • High-end minerals are going to crash thanks to the Rorqual changes
    • 6 exhumer-grade mining drones and an invulnerability button will significantly increase nullsec yields
  • Low-end minerals are going down thanks to Alpha Clones
    • Alpha clones in HS will be very close to "mined minerals are free".  Though total yields should not increase dramatically, costs/risks will fall.
Now, "crash" is a pretty strong term to bandy about, but there will be some very significant moves in all the materials thanks to November's changes.  So much so, that I'd be hard pressed to hold stockpiles in any minerals personally.  Isotopes are a less risky prospect, but the Engineering Arrays could cut POS fuel consumption more than fleet-booster charges will raise it.

Also, if you are the tinfoil-chewing type... those Megacyte volumes pre-devblog sure look suspicious.

What Do You See

I said in last month's report that there were some troubling headwinds.  Values were okay, but month-to-month rates were very weak even for the end of summer.  With September's report out, things are looking much better.

Both Net Trade and ISK velocity plots are looking healthy once again.  Ship trade has crossed back over the 1T mark, and minerals have taken a sharp bump thanks to the barge rebalance.  Also, with PLEX about to cross 1.2B, it's interesting to see the net trade values aren't quite matching the slope, pointing to a speculation bubble.

The specifics of ISK velocity are still a little lower than I'd like to see, but with trends pointing positive (and the 30d skew on the calculation) I'm reasonably happy to see EVE warming up for the winter.

Lastly, looking over the PVP numbers, I think it's interesting to see a bump in value destroyed without the kill counts really moving up.  We have seen past events (Opportunities, Bloody Harvest, The Hunt, etc.) funnel pilots into more combat, and the Purity of the Throne event does not seem to be driving the same activity levels.

Other Signals

The sink/faucet graphs are starting to look better too.  Seeing Active ISK Delta (ISK leaving due to inactive accounts) shrink is heartening.  That's going to be one hell of a statistic to watch in the November report once Alphas release.


And I'm loving this Top 5 plot of the sinks/faucets.  It's interesting to see things rise and fall at the higher resolution.  The bounty levels post-WWB are extremely interesting (I'd like to see a mission vs rat breakdown).  With levels climbing that fast, I wouldn't be surprised to hear of nerfs/rebalances some time in the next 6 months.

Last but not least, the player vs NPC breakdown of fees, now including citadels, is a welcome addition, and we will be looking into tracking it as a trend in the future.

Conclusions

Though my August outlook was cautiously pessimistic, September has the game back on a good track.  October and November's numbers are going to be the real blockbusters to watch, and I hope everyone has put in their EVE Vegas bets.  There's still a lot that can play out between now and November, when Alphas launch.  Nothing was particularly surprising in September's numbers, but the new charts are a welcome addition.

Friday, September 23, 2016

ProsperWarehouse - Building Less-Bad Python Code

EVE Prosper is first and foremost a data science project.  And though hack-and-slash coding has got us this far, we need a proper design/environment if we want to actually expand coverage rather than just chase R/CREST/SQL bugs.


There has been some work moving Prosper to a v2 codebase (follow the new GitHub projects here), but ProsperWarehouse is a big step toward that design.  This interface should open up a whole new field of projects, so it's critical to nail this design on the first pass before moving on.

What The Hell Is This Even
Building a Database Abstraction Layer (DAL).  

Up to now we have used ODBC, but cross-platform deployment and database-specific weirdness have caused headaches, such as painful ARM and macOS support.  Furthermore, relying only on ODBC means we can't integrate non-SQL sources like MongoDB or Influx into our stack without rewriting huge chunks of code.  Lastly, we have relied on raw SQL and string-hacks sprinkled all over the original codebase, making updates a nightmare.

There are two goals of this project:
  1. Reduce complexity for other apps by giving standardized get/put methods.
  2. Allow easier conversion of datastore technologies: change the connection without changing behavior.
By adopting Pandas as the actual data transporter, everything can talk the same talk and move data around with very little effort.  Though some complexity comes from cramming noSQL-style data into traditional dataframes, that complexity can be abstracted under the hood so the same structures always come back when prompted.

How Does It Work?
The Magic of Abstract Methods

I've never been a great object-oriented developer, and I've been especially weak with parent/child relationships.  Recent projects at work have taught me some better tenets of API design and implementation, and I wanted to apply those lessons somewhere personal.



Database Layer

Holds generic information about the connection; essentially the bulk of the API skeleton.  Whatever Database() defines will need to be filled in by its children.  This container doesn't do much work, but acts as the structure for the whole project under the hood.

Technology Layer

Right now that's only SQLTable(), but this layer is designed to hold/init all the technology-specific weirdness: connections, query lingo, test infrastructure, configurations.  It's supposed to be interchangeable, so you could pull out SQLTable and replace it with a MongoDB- or Influx-specific structure.  This isn't 100% foolproof with some of the test hooks the way they're built in right now, but by standardizing input/output, conversion shouldn't be a catastrophe.

Datasource Layer

A connection-per-resource is the goal going forward.  This means we give up JOIN functionality inside SQL, but gain an easier-to-manage resource that can be abstracted.  All of the validation, connection setup/testing, and any special-snowflake modifications go in this layer.  Also, because these have been broken out into their own .py files, debug tests can be built into __main__ as a way for humans to actually fix problems without having to rely on shoddy debug/logging.

This adds a lot of overhead when initializing a new datasource.  In return for that effort, we get the ability to test/use/change those connections as needed rather than going up a layer and fixing everything that connected to that source.  It's not free, but it should pay for itself in faster development down the line.
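Put together, the skeleton looks roughly like the sketch below.  Database() and SQLTable() come straight from the design above; MarketHistory is an illustrative datasource rather than the real module:

```python
import sqlite3
from abc import ABCMeta, abstractmethod

import pandas as pd


class Database(metaclass=ABCMeta):
    """Database layer: the generic API skeleton every connection must fill in."""

    def __init__(self, datasource_name):
        self.datasource_name = datasource_name

    @abstractmethod
    def get_data(self, **kwargs):
        """Fetch a slice of data and return it as a pandas.DataFrame."""

    @abstractmethod
    def put_data(self, payload):
        """Write a pandas.DataFrame back into the datastore."""


class SQLTable(Database):
    """Technology layer: SQL-specific plumbing (swap for a Mongo/Influx class later)."""

    def __init__(self, datasource_name, db_path):
        super().__init__(datasource_name)
        self.connection = sqlite3.connect(db_path)
        # get_data/put_data stay abstract here; the datasource children fill them in


class MarketHistory(SQLTable):  # illustrative datasource: one connection per resource
    """Datasource layer: validation and special-snowflake tweaks for one table."""

    def get_data(self, type_id):
        query = 'SELECT * FROM market_history WHERE type_id = ?'
        return pd.read_sql(query, self.connection, params=(type_id,))

    def put_data(self, payload):
        payload.to_sql('market_history', self.connection, if_exists='append', index=False)
```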

Importlib Magic

The real heavy lifter for the project isn't just the API object design, but a helper that turns an ugly set of imports/inits into a far simpler fetch_data_source() call.  I would really like to dedicate a blog to this, but TL;DR: importlib lets us interact with structures more like function pointers.  This was useful for a work project because we could execute modules by string rather than using a "main.py" structure that would need to import/execute every module in sequence.  It should make it so you just import one module and get all the dependent structure automagically.

Without importlib, every datasource would have to be imported like:
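Something along these lines (module and class names are stand-ins for whatever lives under the table_config path):

```python
# Every datasource imported and instantiated by hand, one line per table
from table_configs import crest_prices
from table_configs import market_history
from table_configs import zkillboard_stats

history_db = market_history.MarketHistory()
pvp_db = zkillboard_stats.ZKillboardStats()
price_db = crest_prices.CrestPrices()
```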



Instead, it can now look like this:
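Roughly speaking, fetch_data_source() resolves the right config module by name with importlib; again the names and the connect() factory here are stand-ins:

```python
import importlib


def fetch_data_source(source_name, package='table_configs'):
    """Load a datasource module by string and hand back a live connection object."""
    module = importlib.import_module('{0}.{1}'.format(package, source_name))
    return module.connect()  # stand-in factory each config module would expose


history_db = fetch_data_source('market_history')
pvp_db = fetch_data_source('zkillboard_stats')
price_db = fetch_data_source('crest_prices')
```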


A small change, but it should clean up overhead and allow for more sources to be loaded more easily.  Also, this does mean you could fork the repo and build your own table_config path without going crazy trying to path everything.

A Lot Of Work For What Exactly?

The point is to simplify access into the databases.  With a unified design there, we can very easily lay the groundwork for a Quandl-like REST API.  Also, with the query logic simplified/unified, apps that fetch/process the data go from 100+ lines of SQL to 2-3 lines of connection code.

By abstracting a painful piece of the puzzle, this should make collaboration easier.  This also buys us the ability to use a local-only dummy sources for testing without production data, so collaborators can run in a "headless mode".  Though I doubt I will get much assistance on updating the Warehouse code, it's a price worth paying to solve some of the more tedious issues like new cron scripts or REST API design with less arduous SQL-injection risk/test.