Dev Stuff – unFocus Projects – Kevin Newman and Ken Newman

Introducing Quint Dice!

I’ve been working on a game in my spare time for the last few months: a social dice game called Quint Dice. What makes it different from so many dice games out there is that it’s based on dice that have color pairs, and you can play it with more than two players.

What I’d like to avoid with Quint Dice is pay-to-win forms of revenue. Currently there are no bonus rolls. I’ll eventually add some, but you won’t be able to buy them or stockpile them to gain an advantage over your opponents. I think I’ll add one bonus roll per game, and if you don’t use it in that game, it goes away. I may also add a second bonus type – an extra die. The idea is to add a level of strategy and flexibility to the play, without allowing a fundamental shift in advantage for one player or another just because they paid for it.

The only revenue source built into the game at launch is a small banner ad at the bottom. I’d also like to add custom dice packs, and maybe some full-on themes. I’m hoping this will be enough to turn this into something that pays for itself. I may also play with interstitial ads, but only as a voluntary way to earn points toward custom dice packs and themes without shelling out cash, for users who prefer that route. I like this better than pestering players with involuntary interstitial ads as a way to get them to pay. Annoying players is not my favorite model, no matter how common it is in mobile gaming. Finally, there will eventually be an option to remove the ads.

I built Quint Dice with Meteor and React, and I’d like to eventually port it to React Native, but I’m using Cordova for the time being on Android and iOS (soon!). Like so many of the projects I play with under the unFocus banner, this has mostly been a learning exercise. But I’m happy with the results, so I thought I should dust off this blog and start sharing some thoughts as I develop these things.

To kick that off, I’ll share a couple of things I learned while getting this out the door, in no particular order. If you’d like to know more about any of these items, please leave a comment, and I’ll see about writing a follow-up post.

  • Facebook integration is easy/hard. Getting notifications to work seems pretty easy, but getting a canvas app to work poses some challenges, particularly where advertising is involved. You can’t use AdSense inside an iframe, which is what FB canvas requires. Instead you’ll need to go with one of Facebook’s approved ad vendors. They all have that institutional feel to them, if their websites can even be reached. Not a fantastic dev experience. The solution I’ll probably go with is to create a Facebook canvas-based landing page, and then flow my users to my domain from there, instead of having them play within the canvas page.
  • Meteor’s accounts system is awesome! With very little effort you can get up and running with a full accounts system, and there are a ton of Meteor packages to expand its functionality. I ended up building custom UI in the end, but to get started I used the off-the-shelf accounts-ui, so I didn’t have to wait (there’s a short accounts-ui sketch after this list). I’ll probably be using a link-accounts package to add the ability to associate Facebook, Google+ and maybe other third-party account services (Amazon perhaps) with existing Quint Dice accounts. I may also use an account-merge package so that users who accidentally sign up with two different auth sources can combine their accounts into one. There are two different packages for that – and these are the kinds of things that make Meteor so fantastic! I can’t think of another platform where something like that is so easy to set up. Setting this up has some interesting challenges in terms of user flow, and it’s probably worthy of a blog post or two.
  • My onboarding process is a mess in the current iteration. I hope to fix that with the link and merge packages mentioned above. I may also play around with creating an anonymous user for anyone who comes to the site and isn’t logged in. That way they can just get started.
  • Finding players is another messy area so far. I basically only collect one bit of information from users – a username. To start a game with other players, you are presented with a giant list of every player. This clearly needs work. Eventually I’d like to add Facebook friend support, and maybe even a friends list internal to Quint Dice. I’ll also add more profile data and some way to search on that (this is on my short list).
  • Push notifications are relatively easy to set up on Android, and relatively more complicated on iOS, but I should have that out soon (this is the only thing holding up an iOS release). I did figure out how to get a nice black and white notification icon to work, and that may warrant its own blog post (see this Meteor forums post for now). I’m using the raix:push package in Meteor for that – there’s a quick sketch of sending a push after this list.
  • Meteor’s React support is built around a React mixin, which basically wraps a single method on a component to make it reactive (there’s a sketch of what that looks like after this list). This makes sense given that Meteor typically doesn’t enforce any kind of application architecture on the developer (a good thing IMHO), but I will probably switch to something more Flux-like. For non-reactive data sources and application state, I’m already using a Flux-like pattern/architecture (using SignalsLite.js), but I may look into something like Reflux (or maybe Redux, or Alt) and then figure out how to move my reactive Meteor handling to that level. This probably warrants a blog post or two.
  • I used Adobe Animate CC to create the animated dice roller (output as HTML5 Canvas, of course). CreateJS is pretty sweet, even on mobile. I may experiment with OpenFL for new dice packs, and see how well that runs. I’m thinking that custom dice packs will stay in HTML5, even if I eventually transition to React Native, so that they can be truly cross-platform. The only challenge with that might be an eventual port to Apple Watch and Apple TV, which don’t support WebViews. I’m curious whether there’s a way to take the JS behind my canvas-based mini-apps and render it through some HTML5 canvas wrapper from within a JavaScriptCore instance (is JSCore even available on watchOS and tvOS?). When I figure this out, I’ll almost certainly blog about it. Of course, I may not even need all that if I go with OpenFL, since it has a native C++ compiler.
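
Here’s a rough sketch of the off-the-shelf accounts starting point, for anyone curious – the config reflects Quint Dice only collecting a username; treat the exact packages and options as a starting point rather than gospel:

[sourcecode language="javascript"]
// meteor add accounts-password accounts-ui accounts-facebook
// Client-side: Quint Dice only asks for a username, so skip email entirely.
Accounts.ui.config({
    passwordSignupFields: "USERNAME_ONLY"
});

// The drop-in UI is a Blaze template ({{> loginButtons}}); in a React app
// you can wrap it, or build custom UI on top of Accounts.createUser and
// Meteor.loginWithPassword – which is roughly what I ended up doing.
[/sourcecode]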
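
And this is roughly what sending a push through raix:push looks like on the server – a minimal sketch from memory, so check the option names against the package docs; opponentId is a made-up variable:

[sourcecode language="javascript"]
// Server-side: let the other player know it's their turn.
Push.send({
    from: "quintdice",
    title: "Quint Dice",
    text: "It's your turn!",
    badge: 1,
    query: { userId: opponentId } // matches the token(s) registered for that user
});
[/sourcecode]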
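
To make the reactivity point concrete, here’s a rough sketch of the mixin approach – getMeteorData is the one wrapped method, it reruns whenever the reactive data it touches changes, and the result shows up on this.data (the Games collection and myGames subscription are made-up names):

[sourcecode language="javascript"]
GamesList = React.createClass({
    mixins: [ ReactMeteorData ],

    // The one reactive method: reruns when the subscription or cursor changes.
    getMeteorData() {
        var handle = Meteor.subscribe("myGames");
        return {
            loading: !handle.ready(),
            games: Games.find({ players: Meteor.userId() }).fetch()
        };
    },

    render() {
        if (this.data.loading) return <div>Loading…</div>;
        return (
            <ul>
                { this.data.games.map(function (game) {
                    return <li key={ game._id }>{ game.name }</li>;
                }) }
            </ul>
        );
    }
});
[/sourcecode]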

Going forward, I’ll try to post more, probably when I make an update. There are a ton of other important packages (aldeed:simple-schema and aldeed:collection2) and technologies to cover, and I’m sure I’ll mention them eventually.

SignalsLite.js, Unit Testing, and Buggy IE

I decided to finally learn unit testing, so I downloaded QUnit (after looking at the 20,000 different unit testing options), and figured I’d give porting tiny SignalsLite to JavaScript a try and see how the process went.

While doing that, I found a crazy IE7/IE8 JS bug that I’m sure has had me scratching my head in the past. Here is a quick unit test to show the problem:

[sourcecode language="javascript"]
test( "Basic Requirements", function testReqs() {
    expect(1);
    var T;
    (function makeT() {
        T = function T() {};
        T.prototype.test = 1;
    })();
    ok( (new T).test, "Instance of exported T should have prototype methods" );
});
[/sourcecode]

If you run that in IE7 or IE8, it’ll fail!

The cool thing is, without having created unit tests for SignalsLite.js, I would never have known that could be an issue, and would have continued to scratch my head when stuff like that broke in IE7/8. I found this because I was trying to export SignalLite from within a closure (I try to always define my stuff inside of closures to avoid namespace pollution), with this:

[sourcecode language="javascript"]
(function() { "use strict"; // standard header

    // Naming inline functions makes the debug console easier to read.
    window.SignalLite = function SignalLite() {
        // stuff
    };
    SignalLite.prototype = {
        // methods
    };

    // The fix is to use an anonymous function, or export elsewhere:
    // window.SignalLite = SignalLite;

})();
[/sourcecode]

That doesn’t work in IE7 and IE8 – as far as I can tell, JScript treats the named function expression as a declaration too, so the local name SignalLite ends up referring to a different function object than the one assigned to window.SignalLite, and the prototype gets set on the wrong one. Unit testing is crazy!
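
For reference, here’s the export pattern that does work – it’s just the fix from the comment above, spelled out: keep the name for the debugger, but export in a separate statement.

[sourcecode language="javascript"]
(function() { "use strict";

    // Named for the debug console, but defined as a plain declaration...
    function SignalLite() {
        // stuff
    }
    SignalLite.prototype = {
        // methods
    };

    // ...and exported separately. Now the local name and window.SignalLite
    // refer to the same function object, even in IE7/8.
    window.SignalLite = SignalLite;

})();
[/sourcecode]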

If you are interested, go fork SignalsLite.js on GitHub.

P.S. You can run the SignalsLite.js unit tests here to see the failure for yourself! (I’ve since disabled that test in the SignalsLite.js test suite.)

Backstage2D – the GPU Augmented Flash Display List

I’ve been playing with some 2D API ideas built on top of Flash’s Stage3D and ActionScript 3.0. I call it Backstage2D, the GPU-augmented Flash display list.

Currently, Backstage2D’s code base is mostly a playground for proving out some API ideas. Some things in this post may not match the git repo (for example, I’m still using “layer” instead of “surface” there). There’s a bunch left to do, but it works well enough to run a modified version of MoleHill_BunnyMark, which some folks from Adobe put together (I actually lifted most of my GPU code from that example code, heh). The BunnyMark example was adapted from Iain Lobb’s BunnyMark, with some additions from Philippe Elsass. You can view the Backstage2D version of BunnyMark here (and check out the original BunnyMark MoleHill here).

Fork Backstage2D at GitHub.

The rest of this post describes the thought process that went into Backstage2D.

The Flash AS3 display list API is not the best way to utilize the massively parallel capabilities of a GPU, or to deal with the other limitations of a CPU/GPU architecture. The display list’s deeply nestable DisplayObject metaphor, and all the fun filters and blend modes, just don’t translate well to a very parallel, flat GPU hardware renderer. All of this is especially true on mobile – iPhones, iPads, and Android devices – and that’s the primary target for Backstage2D.

With an API like the traditional Flash display list, it’s easy to create situations that can’t easily be batched, due to branching operations and other things which change the GPU state and break parallel processing – slowing everything down. You see this in Adobe AIR’s GPU render mode, where seemingly random things can have a huge negative impact on performance. Behind the scenes, AIR attempts to break the content into batches to speed things up. Using certain features, or normal features in certain ways, can drop you out of a batch. When performance degrades, it’s not always clear why. Because of that, to get great performance you must target just a subset of the normal features, and apply a lot of discipline to make sure everything keeps working smoothly.

I wanted to do something different. I wanted to play with an API that is intentionally unlike the Flash display list – one designed to help the implementor (Flash developer or designer) understand how to arrange their content so that it renders very quickly, even on mobile devices – and still get the benefits of all the glorious Flash stuff we are all used to.

Here are some of the primary principles I came up with, which impact the API:

  • In order to take advantage of the parallel nature of GPUs, we need to group many Quads (think DisplayObject) into batches. The API should make batches easy to understand and use, so there’s no guessing about what’s going on.
  • GPUs like shallow content – they draw a lot of triangles all at the same time. There is no nesting on the GPU, so while some form of organization is necessary, the infinite nesting model must be reined in.
  • Backstage2D shouldn’t do too much in an automagic kind of way. Guessing about the impact of nesting things a certain way, using a blend mode, or enabling certain features translates into extra effort and cost during production, because the negative impact on performance is unpredictable. Features should work as you expect them to, and the performance impact of doing certain things should be clear.
  • Think of the GPU as a remote server that you send instructions to. Uploading things like Textures to the GPU from system memory is slow (especially on mobile). Backstage2D should make these stress points clear.
  • Flash’s vector engine is tops, and working with Bitmaps (and sprite sheets) sucks! The API should enable the continued use of the display list, in GPU-friendly ways. Drawing vector art on the GPU is hard, and it’s ugly anyway. So leverage the CPU rasterizer, and make sure the API makes the GPU upload bandwidth and render-time overhead clear.
  • Backstage2D objects shouldn’t look like traditional display list objects – we’ll use names other than Sprite, MovieClip, DisplayObject, etc.

Of these, batching is the starting point, since it is the most necessary for good performance, and affects how data must be organized the most. You can draw each Quad (think Flash DisplayObject, or Sprite) individually by setting up the vertex, program, texture, etc. data for each quad, and calling drawTriangles for each and every Sprite. But the GPU can’t optimize to run in parallel if you do that – most of its processing cores end up underutilized in that model.

Batching lets more than one quad be drawn simultaneously, but there are limitations – every item in a batch must use the same vertex data (a giant array of x, y, and other data), a single texture, and other state information, like blend modes. Additionally, the entire batch must be drawn without interruption, which means you can’t insert items from other batches (with other state settings, like a different blend mode) in the middle of the batch.
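
Backstage2D itself is AS3 on Stage3D, but to keep the code in this post in JavaScript, here’s the same idea sketched against WebGL – every quad in a surface gets packed into one interleaved vertex array, sharing one texture atlas, so the whole surface goes down in a single draw call (gl, quads, surfaceBuffer, surfaceAtlas and pushQuad are all stand-ins, not real Backstage2D API):

[sourcecode language="javascript"]
// Pack every quad in the surface into one interleaved array:
// x, y, u, v per vertex, six vertices (two triangles) per quad.
var verts = [];
quads.forEach(function (q) {
    pushQuad(verts, q.x, q.y, q.width, q.height, q.uvs);
});

gl.bindBuffer(gl.ARRAY_BUFFER, surfaceBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(verts), gl.DYNAMIC_DRAW);

// One texture atlas and one blend mode for the whole batch...
gl.bindTexture(gl.TEXTURE_2D, surfaceAtlas);

// ...so the entire surface renders in a single call (the Stage3D
// equivalent would be one drawTriangles call), instead of one per quad.
gl.drawArrays(gl.TRIANGLES, 0, quads.length * 6);
[/sourcecode]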

So batches resemble layers, or surfaces. The model for Backstage2D will be a series of stacked surfaces, instead of a deeply nested tree structure starting at the root.

In this paradigm, the surface gets batched, and the children it contains get rendered in parallel – perfect for GPUs. To eliminate batch-breaking APIs, certain “state changing” operations – like blend mode settings, or adding and removing elements – can be applied only to an entire surface, not to each element. The limitations of the surface API should help the implementor understand the impact of doing certain things. If you need 100 elements, and every other element has a blend mode of Multiply while the one below it has a blend mode of Normal – in the traditional Flash API this is fine, and can actually run pretty well. On the GPU, all 100 elements must be rendered individually, in 100 distinct surfaces. Having that many surfaces feels heavy because it is heavy.
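
To give a feel for how that surface-level API might read (purely illustrative – these names don’t match the actual repo, which still says “layer”):

[sourcecode language="javascript"]
// State-changing settings live on the surface, not on individual elements.
var shadows = backstage.createSurface({ blendMode: "multiply" });
var actors  = backstage.createSurface({ blendMode: "normal" });

// Actors inherit their surface's state, so each surface stays one batch.
shadows.addActor(dropShadow);
actors.addActor(character);

// Wanting a different blend mode per element means wanting more surfaces –
// and the cost of doing that shows up right in the API.
[/sourcecode]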

Texture changes are one of the things that break batching – a shader can deal with only one texture (well, actually up to 8, but that makes the pixel shader more expensive to run), so a set of elements in a batch must be combined into a sprite sheet or texture atlas. If you’ve tried to use a texture atlas in another 2D rendering engine, you may have noticed these are a pain to deal with – they usually have to be set up manually, before compilation. This is one thing that Backstage2D handles for you – at runtime – in an automagic kind of way.

This feature was actually done for a bunch of reasons. One thing I’d like to add is a resolution (screen DPI) independent measurement mode, where assets get generated on each device an app runs on, from high quality vector art, for exactly the DPI the system is running at, and scaled to real-life sizes. Type specified at 12 points should truly measure 12 points on screen.
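
The underlying math is simple – a quick sketch of the point-to-pixel conversion (the function name is mine; 326 and 163 dpi are the iPhone 4 Retina and iPhone 3GS densities):

[sourcecode language="javascript"]
// 1 point = 1/72 of an inch, so: pixels = points * dpi / 72
function pointsToPixels(points, dpi) {
    return points * dpi / 72;
}

pointsToPixels(12, 326); // ≈ 54.3 px on an iPhone 4 Retina screen
pointsToPixels(12, 163); // ≈ 27.2 px on an iPhone 3GS
[/sourcecode]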

Additionally, Flash vector art looks great (especially with the new 16×16 quality modes), but it looks its best when rendered to match the screen exactly. Resizing prerendered vector art can ruin its beautiful anti-aliasing. Proper sizing can also help performance on older hardware like the iPhone 3GS, which is actually pretty capable, but doesn’t cope well with iPhone 4 Retina-sized material (4× more pixels than will be displayed).

Setting all this up is expensive – especially generating the sprite sheet. But just setting up vertex data and loading even predigested textures is already expensive enough that you wouldn’t want to do those tasks while your app is running some smooth animation – it will cause missed frames, and your users will notice. So Backstage2D’s API should guide the user away from doing expensive things while an app or animation is running. It exposes build, load, and/or upload commands per surface. That way, the implementer always knows when what they are doing is computationally expensive (down the road, the plan would be to move much of that into concurrency – more on that another time).

The characteristics of this are very different from normal Flash, where you load only the minimum of what’s needed, when it’s needed, and try to keep as much as you can off the display list. In the Backstage2D model (in the standard surface type, anyway), an entire surface and all its children (called “Actors”, to avoid colliding with AS3’s “Sprite”, etc.) gets rendered up front to a big TextureAtlas, and stored in memory or on disk. How to optimize and organize your assets to avoid running out of memory becomes an entirely different matter from optimizing for the CPU. A surface will have an associated sprite sheet BitmapData asset though, which can be measured.

With these restrictions in mind, the idea would be to create a variety of surface types to suit differing kinds of content. For classic content: a standard static Quads surface (done!), still-frame animations (sprite sheet animations, generated at runtime), tweened animations (inverse kinematics – the bone tool), and streaming animations (dynamic MovieClips, large MovieClips, or video) – maybe even some surfaces useful for standard UI, like scroll panes. For more advanced 2D assets, a variety of different mesh surface types could be added (that’s where GPU Stage3D programming gets fun!).

I’d love to flesh this out with more features, including an animation subsystem with a couple of different animation display types. Alas, free time is short, and I’ll probably never get to it. But I already spent a lot of time on this (I broke my foot and was couch-bound for a while), so I thought I’d share where I got to. 🙂