processing vs flash analysis

Working in two environments has made me think about how the two perform differently. I have tried to emulate results gained in Flash within Processing and vice versa, with only limited success.

This has led me to look at the hard data coming out of the environments.

  • I captured the data in the separate environments and copied the output into text files.
  • I then passed this data through a Flash tool to render the results as a bitmap.
  • Then I comped them together in Photoshop to produce a comparison graph.
  • I produced one graph based on white noise and one using a song, to give context.

I started this a little while ago but have been held back by a mistake I made in defining the frequency bands in Flash. There are some obvious differences straight away:

  • In order to fit the two traces on the same graph I have had to scale the Processing results down to 10% of the results taken in Flash. The upshot is that Processing delivers far greater numbers, which might mean better resolution than Flash. (It may be useful to include peak results next time round.)
  • The white noise produces graphs where it is hard to see the correlation between the two results, whereas the song produces some consistency.
  • With white noise, even though there is a bias towards low bass in both, the range of values Processing gives for each band seems tighter, i.e. the peaks for low bass are in a range closer to those for high treble than in Flash. By comparison I seem to be getting a lot more low bass than any other band in Flash (in both white noise and the song).
  • In the song I’m getting incredibly low values in the two treble bands (in both Flash and Processing). This is a surprise, as I had expected the reverse, seeing as it is these bands that hold the vast majority of the frequencies calculated (from 1024hz – 11025hz/12800hz [flash/processing]; in Flash this accounts for 225 of the 255 result bands in computeSpectrum).
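Rather than hand-scaling one trace to 10% of the other, a simpler way to get both onto one graph might be to normalise each trace to its own peak. A minimal sketch (the sample values below are made up for illustration, not real captures):

```java
// Sketch: normalise two spectrum traces to their own peaks so they can
// share one comparison graph. Sample arrays are hypothetical.
public class NormaliseTraces {
    static double[] normalise(double[] trace) {
        double peak = 0;
        for (double v : trace) peak = Math.max(peak, Math.abs(v));
        double[] out = new double[trace.length];
        if (peak == 0) return out; // a silent trace stays at zero
        for (int i = 0; i < trace.length; i++) out[i] = trace[i] / peak;
        return out;
    }

    public static void main(String[] args) {
        double[] flash = {0.1, 0.4, 0.2};         // hypothetical Flash values (0..1 range)
        double[] processing = {12.0, 48.0, 24.0}; // hypothetical Processing values, much larger
        double[] a = normalise(flash);
        double[] b = normalise(processing);
        // After normalisation both traces peak at 1.0 and can share an axis.
        System.out.println(a[1] + " " + b[1]);
    }
}
```

This would also make the peak values easy to report alongside the graph.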

I might have to do a little bit more on this to burrow down into the nitty gritty of what is going on.

definition of bass pt2

I’ve been working under a wrong assumption picked up from http://www.actionscript.org/forums/showthread.php3?t=176574. That thread says that the 256 bands resulting from Flash’s computeSpectrum are split over 22050hz, giving each a range of about 86hz. There is very little documentation from Adobe about the results from the fft option.

I recently found an article which gave a different result (43hz), so I built a test.

the raw files can be downloaded here.

The results bear out the information found in Ben Stucki’s The Math Behind Flash’s FFT Results, in that the range for each band is 43hz.

Key pieces of information are:

  • Flash further clips the data to 256 results, for a top frequency of around 11,025hz.
  • computeSpectrum’s fft computes its spectrum linearly, not logarithmically.

So what does that all mean? It means that the frequencies passed through each of the 256 bands go up in steps of about 43hz (11025/256). Following this through we get approximations of:

  • band 0-1 .. 20hz – 80hz = …….. Low Bass
  • band 2-7 .. 80hz – 320hz = …… Hi Bass, also referred to as Midbass.
  • band 8-29 .. 320hz – 1280hz = … Midrange
  • band 30-118 .. 1280hz – 5120hz = .. High Midrange/Low Treble
  • band 119-255 .. 5120hz – 11025hz = . High Treble (nominally up to 20840hz, but Flash clips at 11025hz)
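The band arithmetic above is easy to sanity-check in code. A sketch, assuming the post’s figures (256 results covering 0–11025hz, so about 43hz per result):

```java
// Sketch of the band arithmetic above: with 256 FFT results covering
// 0–11025hz, each result spans 11025/256 ≈ 43hz. Mapping a frequency
// to its result index is then a simple division.
public class FftBands {
    static final double TOP_FREQ = 11025.0; // Flash clips here (per the post)
    static final int NUM_BANDS = 256;
    static final double HZ_PER_BAND = TOP_FREQ / NUM_BANDS; // ≈ 43.07hz

    static int bandFor(double hz) {
        return (int) Math.floor(hz / HZ_PER_BAND);
    }

    public static void main(String[] args) {
        System.out.println(HZ_PER_BAND);   // ~43.07
        System.out.println(bandFor(80));   // low bass tops out around band 1
        System.out.println(bandFor(320));  // hi bass tops out around band 7
        System.out.println(bandFor(1280)); // midrange tops out around band 29
        System.out.println(bandFor(5120)); // low treble tops out around band 118
    }
}
```

Running this reproduces the band boundaries in the list above (0-1, 2-7, 8-29, 30-118, 119-255).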

spirogen in processing

I’ve been running into performance issues in Flash. Pretty obvious really: trying to move 2000 dots around the stage and process sound every frame is quite a task. But I also want to look at this as a dynamic visualiser rather than something that can only play an mp3 loaded into and played from within the Flash player. I want the spirogen to react to any sound coming in or out of the computer. As most Flash people know, Flash provides only limited functionality for sound input: http://www.getmicrophone.com/.

So this has been a good opportunity to experiment with Processing, which has a library called minim that should allow my experiments to go a bit further. Many thanks to Medwyn Jones for getting me started and putting up with all my naive questions (I still have a few more).

The image link takes you through to an html page with an embedded Java applet, or click here.

radio visualizer 1

Our dev team at work has been working on an mp3 radio for the group (ENGINE), and with a bit of downtime I had a look at how what I have been doing so far might fit within it. I have pushed this quite far on, and have also been looking at Processing as an alternate environment (mainly for performance and line-in reasons).

However, because the front-end of the radio is in Flash (they are working on an AIR app too – smart), it was obvious I needed to step back into the Flash side.

The radio project gave me the focus I needed to look at a load of “bits” I hadn’t got around to.

  • Instead of drawing the dots on stage, I draw them in a virtual movieclip and then draw the movieclip to a bitmap on the stage. Performance increase (though for this version I have taken the number of dots down to 800 to cater for more computers).
  • Using the bitmap I can do fade-downs over frames, thereby smoothing the appearance from frame to frame. It also creates a cool zooming effect (a bit old-school iTunes but fun).
  • I have introduced a ratio changer (i.e. jumping between 2-3-4-5 point shapes) based on the dominant frequency at each frame, to provide some variety.
  • I have brought in 5 frequency bands, giving more variables to attach change to (though it is getting a bit complicated now).
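The fade-down in the second bullet can be sketched outside Flash too. In Flash it would be BitmapData plus a ColorTransform; here is a minimal plain-Java equivalent on an ARGB pixel buffer (the buffer and fade factor are illustrative assumptions, not the actual radio code):

```java
// Sketch of the fade-down trick: multiply each colour channel by a
// factor (0..1) every frame so old frames darken towards black instead
// of being cleared outright, smoothing the frame-to-frame appearance.
public class FadeDown {
    static void fade(int[] argb, double factor) {
        for (int i = 0; i < argb.length; i++) {
            int p = argb[i];
            int a = (p >>> 24) & 0xFF; // alpha left untouched
            int r = (int) (((p >>> 16) & 0xFF) * factor);
            int g = (int) (((p >>> 8) & 0xFF) * factor);
            int b = (int) ((p & 0xFF) * factor);
            argb[i] = (a << 24) | (r << 16) | (g << 8) | b;
        }
    }

    public static void main(String[] args) {
        int[] buffer = {0xFFFFFFFF};            // one opaque white pixel
        fade(buffer, 0.9);                      // one frame of fade
        System.out.printf("%08X%n", buffer[0]); // channels drop from FF to E5
    }
}
```

The zooming effect comes from redrawing the faded bitmap slightly scaled each frame before adding the new dots on top.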

I have had to replace the radio link with an mp3 because of cross domain issues.
click here

a definition of bass pt1

Hurrah! At last, and after tons of searching, I have found an article (actually a forum thread) that gives me the starting point for the information I need.

Bass, Midrange, and Treble? What?

This article actually defines 5 frequency bands as follows:

  • 20hz – 80hz = …….. Low Bass
  • 80hz – 320hz = …… Hi Bass, also referred to as Midbass.
  • 320hz – 1280hz = … Midrange
  • 1280hz – 5120hz = .. High Midrange/Low Treble
  • 5120hz – 20840hz = . High Treble

so now I need to find out how Flash’s computeSpectrum distributes its frequencies across the 256 (per channel) bands. There’s not a lot of documentation from Adobe… more searching needed.

spirogen 03 crack out the glowsticks

OK, so this is what it really wants to do. For this one I’ve cut down on the alpha a bit so that I can have more dots.

(big fish, little fish, cardboard box) Click here

spirogen visualiser 02

Playing some more with the visualiser.

I’m managing to get a bit more definition into the shapes that emerge, but that is at the expense of the shape not changing so much (doh!) and not so much colour.

Let me explain… As I have previously written about for the spirograph tool, the spirograph goes through periods of definition and discord as the ratio of the inner and outer circles changes; definition occurs at points where the inner circle is a close multiple of the outer circle.

If you control this ratio with values from music, the majority of what you get is discord unless you restrict the amount of change available to the ratio.

But in doing so the spirograph will be stuck on or around one ratio (i.e. a 3-point shape or such) rather than sometimes having 3 points, then at other times 4 or 5 or 2.

I’m managing to increase the variety of shape by attaching the treble to control the offset of the spiral, making it extend past the point a physical spirograph would be able to reach (these appear like little flares – most appealing).
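The definition/discord behaviour falls out of the maths. A sketch using the standard hypotrochoid (spirograph) equations — the radii below are just example values:

```java
// Sketch of why definition needs "close multiples": a hypotrochoid with
// outer radius R and inner radius r closes after r / gcd(R, r) revolutions
// of the tracing angle — small for clean ratios (a crisp shape), large
// for awkward ones (a busy scribble, i.e. discord).
public class Spiro {
    static int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }

    // Revolutions of t (in multiples of 2*PI) before the curve repeats.
    static int revolutionsToClose(int R, int r) {
        return r / gcd(R, r);
    }

    // A single point on the curve, for completeness (d is the pen offset).
    static double[] point(double R, double r, double d, double t) {
        double x = (R - r) * Math.cos(t) + d * Math.cos((R - r) / r * t);
        double y = (R - r) * Math.sin(t) - d * Math.sin((R - r) / r * t);
        return new double[]{x, y};
    }

    public static void main(String[] args) {
        System.out.println(revolutionsToClose(90, 30)); // 1: crisp 3-point shape
        System.out.println(revolutionsToClose(90, 31)); // 31: a long, busy scribble
    }
}
```

Letting the pen offset d exceed the inner radius is what produces the flare-like extensions a physical spirograph can’t reach.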

The colour I think should be easier to get back.

I’m also frustrated that each spirogen so far has to be tweaked per song; I haven’t found a one-size-fits-all formula.
I suspect the way I’m splitting up the frequency bands could be improved, and maybe more bands are needed.
I still think there’s a long way to go to eke out the performance and maybe increase the number of dots available at runtime.

I’m still happy with the results but lots more effort needed.

Click here to view the visualiser

spirogen visualizer 01

From developing the tool it occurred to me that it was only a short step to change the input from sliders to another source, music being the obvious candidate. AS3 allows for on-the-fly analysis of waveform information, either as a float waveform or as frequencies (using fft). Using fft seemed a no-brainer to me, as I need various “bands” of values to make something interesting, and the idea of bass, mid and treble is a good place to start.

Things to do on this:

  • I’m not sure of the proper definitions of these bands (try typing “definition of bass” into Google and you’ll see the problem with quick research) and have opted for a “what works best?” approach. I’ll refine this if there is the need in the future.
  • I’m losing a lot of performance; calculating the sound spectrum is obviously eating up CPU, which means the number of dots I can render each frame is diminished.
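Turning the 256 per-channel results into a handful of band values is just averaging over index ranges. A sketch — the band edges here are the “what works best?” kind of guess, not authoritative definitions:

```java
// Sketch: averaging computeSpectrum-style output (256 values per channel)
// into a few named bands (bass, mid, treble). Edge indices are illustrative.
public class BandSplit {
    static final int[] EDGES = {0, 8, 30, 256}; // bass | mid | treble boundaries

    static double[] bandAverages(double[] spectrum) {
        double[] out = new double[EDGES.length - 1];
        for (int b = 0; b < out.length; b++) {
            double sum = 0;
            for (int i = EDGES[b]; i < EDGES[b + 1]; i++) sum += spectrum[i];
            out[b] = sum / (EDGES[b + 1] - EDGES[b]);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] spectrum = new double[256];
        java.util.Arrays.fill(spectrum, 0, 8, 1.0); // energy only in the bass bins
        double[] bands = bandAverages(spectrum);
        System.out.println(bands[0] + " " + bands[1] + " " + bands[2]); // 1.0 0.0 0.0
    }
}
```

Each averaged band then becomes one control value per frame for the visualiser.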

All that said it’s a very pleasing start.



For reasons of performance I am holding these examples away from the home page.
Click here to view the visualiser

spirogen 2


Using the new tool helped me to produce this sequence.

images using the new tool

spirogen of a 7 point shape with offset extending past the inner circle radius


movie on the way


maths, code, input, image

explore, develop, but have fun