EmberJS – ongoing – part 2

Your app depends on an API or some data source to generate dynamic tabs using Bootstrap.

I had no idea how to simply print href="#National" when the string "National" is part of an object that is in some array.

In Angular you'd use ng-repeat and template the output: href="{{item.name}}" : BOOM

EmberJS and emblem solution:


each g in geographySelection
  li class=""
    a href="##{unbound g.name}" click="selectGeo g.data" data-toggle="tab" #{unbound g.name}



  1. If you don't use the unbound keyword, a bunch of script tags will be generated, and you don't want that!
  2. click is a special attribute for Emblem (aka Slim for Ember) that converts to an observed listener for onClick events; EmberJS will propagate the event through its cycle. Each event has a unique id, and if you open the Developer Console you can inspect its properties.
  3. unbound allows you to use a property as a plain string.
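The selectGeo action referenced in the template has to live somewhere; a hypothetical controller sketch might look like this (the geographySelection and selectGeo names come from the snippet above, everything else is assumed for illustration):

```javascript
// Hypothetical controller backing the tab template above (Ember 1.x style).
App.GeographyController = Ember.Controller.extend({
  geographySelection: [
    { name: "National", data: { scope: "national" } },
    { name: "Regional", data: { scope: "regional" } }
  ],
  actions: {
    // Receives g.data from the clicked tab in the template.
    selectGeo: function (geoData) {
      this.set("currentGeo", geoData);
    }
  }
});
```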


coding, javascript

EmberJS – ongoing – part 3

Here is a situation:

API – Built on Rails, follows the ActiveModel naming pattern (e.g. first_name)

EmberJS App – model uses camelized style for properties (e.g. firstName)

EmberJS requires an adapter for pulling data from some source (e.g. an API).
We have to use an adapter that will deal with the ActiveModel pattern:

ApplicationAdapter = DS.ActiveModelAdapter.extend

Here is the caveat: Upgrading to Ember 1.8.1 will cause this error:

TypeError: undefined is not a function
at Object.func (http://localhost:4200/assets/vendor.js:49499:18)
at Object.Cache.get (http://localhost:4200/assets/vendor.js:25100:38)
at decamelize (http://localhost:4200/assets/vendor.js:49541:31)
at RESTSerializer.extend.keyForAttribute (http://localhost:4200/assets/vendor.js:63962:16)
at apply (http://localhost:4200/assets/vendor.js:32847:27)
at superWrapper [as keyForAttribute] (http://localhost:4200/assets/vendor.js:32419:15)
at null.<anonymous> (http://localhost:4200/assets/vendor.js:66421:31)
at null.<anonymous> (http://localhost:4200/assets/vendor.js:68910:20)
at cb (http://localhost:4200/assets/vendor.js:29082:22)
at OrderedSet.forEach (http://localhost:4200/assets/vendor.js:28880:13)

You might have to override the ActiveModelSerializer to handle the issue; for now, stick to Ember 1.7 and it'll work.
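If you do need to stay on 1.8.x, a sketch of such an override might look like this (hypothetical workaround; the App namespace and the regex-based underscoring are my assumptions, not something from Ember's docs):

```javascript
// Hypothetical sketch: bypass the decamelize call that blows up in 1.8.1
// by underscoring attribute keys manually in the serializer.
App.ApplicationSerializer = DS.ActiveModelSerializer.extend({
  keyForAttribute: function (attr) {
    // firstName -> first_name, without going through Ember's cached decamelize
    return attr.replace(/([a-z\d])([A-Z])/g, '$1_$2').toLowerCase();
  }
});
```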

Not loving the experience with Ember/EmberCLI



EmberJS – ongoing

The strangest things happen when developing projects in EmberJS (specifically Ember CLI).

I have plenty of experience developing AngularJS-based applications (mobile and desktop). EmberCLI is something a client wished for based on their stack preference.

I won't go into the history of EmberJS, but it was a turbulent one (at one point it was in beta for over a year with few updates). It's a lot heavier (than Angular) on dependencies and development approach (everything is a route; forget building small apps).


Odd errors:


Uncaught SyntaxError: Duplicate data property in object literal not allowed in strict mode


If you see the error above: I resolved it by commenting out these two computed properties:

proxiedSelectedStates: Ember.computed.filterBy('proxiedStates', 'checked', true)
selectedStates: Ember.computed.mapBy('proxiedSelectedStates', 'content')


Strangely enough, I don't have duplicate code, and when I renamed these properties on the controller, everything worked. Magic!





AngularJS – being aware of environment


Single Page Apps can be awesome

A Single Page App is not a myth; it's real and it's awesome.

Strong, fast tools are definitely something a developer should always use. Grunt and Gulp are extremely useful for running tasks that optimize web resources, run automated tests, monitor changes and refresh your browser as you tinker with the next big thing, but also for running any custom task you desire.

I had a challenge with an AngularJS app that was strictly using an API from another domain, and we had several environments (local, stage, prod; typical, right?). Clearly those environments use different settings and have different URLs.

AngularJS provides nice ways of making variables available across your app through injection.

Here is an example that creates a module 'app.config' (or an app name of your choice) and sets a constant (using this string you can inject and use it).

Define constants within AngularJS app

angular.module("app.config", [])

.constant("ENV", {
  "name": "development",
  "apiEndpoint": "http://localhost:5000",
  "basePath": "http://app.dev"
});

Inject and use variables within the app (CoffeeScript)


angular.module('app.controllers', ['restangular', 'app.config'])
.config (RestangularProvider, ENV)->

At this point I can inject and use ENV throughout controllers and, as you can see above, in the config phase of initializing an Angular app. Since I'm using Restangular to interact with an API, I needed to set a base URL.
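Setting the Restangular base URL from the injected constant looks roughly like this (a plain-JavaScript sketch of the CoffeeScript config above; setBaseUrl is Restangular's provider method):

```javascript
// Use the injected ENV constant to point Restangular at the right API
// for the current environment.
angular.module('app.controllers', ['restangular', 'app.config'])
  .config(function (RestangularProvider, ENV) {
    RestangularProvider.setBaseUrl(ENV.apiEndpoint);
  });
```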

What happens on automated deployments? Solution

I've had a chance to use Codeship and Jenkins for building the codebase, running tests, etc. Deploying to staging or production requires that you replace the config module with variables dependent on the environment. You don't have to do anything manually. Here is the solution if you're using Grunt.

NPM package you need to install: grunt-ng-constant


This is your Gruntfile.coffee (just the portion that is of interest: loading and using ng-constant):



ngconstantConfig =
  # Options for all targets
  options:
    space: "  "
    wrap: "\"use strict\";\n\n{%= __ngModule %}"
    name: "app.config"

  # Environment targets
  development:
    dest: "<%= yeoman.app %>/scripts/shared/config.js"
    constants:
      ENV:
        name: "development"
        apiEndpoint: "http://localhost:5000"

  stage:
    dest: "<%= yeoman.app %>/scripts/shared/config.js"
    constants:
      ENV:
        name: "stage"
        apiEndpoint: "http://staging-api.someurl.com"

Set the configuration for grunt:

     ngconstant: ngconstantConfig

In regular JavaScript this is simply :


grunt.initConfig({ ngconstant: ngconstantConfig });


One more simple step (setting up the task)

env = process.env.NODE_ENV || 'development'

grunt.registerTask "build", ["clean:dist","ngconstant:"+env, "copy:ngconfig"...

This will register a build task; in a development environment it runs "ngconstant:development", which triggers the NPM package grunt-ng-constant to generate a config.js file including all the variables you defined in the ngconstantConfig object for that specific environment.
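For the stage target, the generated config.js would look roughly like this (a sketch based on the wrap, name and constant settings in the config above):

```javascript
"use strict";

angular.module("app.config", [])
  .constant("ENV", {
    "name": "stage",
    "apiEndpoint": "http://staging-api.someurl.com"
  });
```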


Possible issue and solution

At least in my project (CoffeeScript), config.js was generated but too late, and wasn't picked up by the other task that packaged everything into one file. My solution was to write a small task named "copy:ngconfig" that simply copied the generated file explicitly.

This simply copies config.js into a temporary location, which is then available for other tasks to package.

copy:
  ngconfig:
    files: [
      src: ["<%= yeoman.app %>/scripts/shared/config.js"]
      dest: ".tmp/scripts/shared/config.js"
    ]


Voila, your application will have environment-specific values in any environment.

Next time your code is pulled onto stage or production, it can run "grunt build" and the code will be optimized, packaged into the "dist" folder and deployed, optimally to an Amazon CDN.



architecture, bigdata, coding

Processing data using your GPU cores – with or without OpenCL

I've been personally fascinated by the progress big data computing is making these days. For a few weeks I've been experimenting with h2o (in-memory, cluster-wide math in Java) that processes all kinds of algorithms across multiple clusters, ALL in memory.

What eluded me was what's happening with those GPUs we have in our everyday laptops and desktops.


I recently reached out to the lead engineer for AMD's Aparapi project and asked him what's happening with the project; after all, there hasn't been a release in over a year!

Gary Frost, lead engineer and contributor to Aparapi, wrote:

It is active, but mostly in the lambda branch.   if you like AMD GPU's (well APU's) you will love the lambda branch it allows you to use Java 8 lambda features *AND* allows you to execute code on HSA enabled devices (no OpenCL required).  This means no buffer transfers and much less restrictions on the type of code you can execute. 

So the new API's map/match the new Java 8 stream APIs

Aparapi.range(0,100).parallel().forEach(id -> square[i]=in[i]*in[i]);

If you drop the 'parallel' the lambda is executed sequentially, if you include the parallel Aparapi looks for HSA, then OpenCL then if neither exist will fall back to a thread pool. 

The reason that there are less 'checkins' in trunk, and no merges from lambda into trunk is because we can't check the lambda stuff into trunk without forcing all users to move to Java 8. 

This is really exciting and interesting !

In a more practical use case, what I would imagine doing is running map reduce with data provided (e.g. Hazelcast or another data grid) for each node and utilizing AMD's finest GPUs to process data much quicker than costly 12-core Xeons can. This provides affordable scalability.

For the time being, there will still be a need for Hadoop's data/name nodes and a job tracker to control which piece of data is processed and where, since GPUs won't be able to share data between their remote nodes (at least for now).

Next steps to try it out:

Check out the "lambda" branch

then compile and run:

Aparapi.range(0,100).parallel().forEach(i -> square[i]=in[i]*in[i]);

There are plenty of other examples in the project’s source.





coding, ruby

Map Reduce plus filter using Ruby

The previous two articles were dedicated to JavaScript map reduce and filtering of somewhat large data, elegance of code, etc.

Going forward, I'd like to evaluate other languages doing the same exact thing.

And I'm curious about performance too.


Here is my version of the same code but in Ruby (Ruby 2.1.1 was used). It didn't even run in JRuby 1.7, unfortunately.


class Time
  def to_ms
    (self.to_f * 1000.0).to_i
  end
end

total = 300 * 10000
data = Array.new

total.times.each { |x|
  data.push({name: 'it', salary: 33 * x})
}

data.push({name: "it", salary: 100})
data.push({name: "acc", salary: 100})

def timeMe
  start = Time.now.to_ms
  yield
  endtime = Time.now.to_ms
  puts "Time elapsed #{endtime - start} ms"
end

timeMe do
  boom = data.map { |j| j[:salary] if j[:name] == 'it' }.compact.reduce(:+)
  puts "and  boom: #{boom} "
end


and  boom: 148499950500100
Time elapsed 1194 ms

Not bad, but not as fast as JavaScript, probably due to the array containing Hashes, as opposed to one of the core types in JavaScript (a simple object).

I'm not entirely sure why JRuby didn't run; I'd love to learn.

Update (April 18, 2014): Java 1.6 with the needed JVM parameters posted results of around 600 ms; JVM 1.8 didn't seem to work at the moment.

On another note, Ruby's syntax is simply lovely:


data.map {|j|  j[:salary] if j[:name] =='it' }.compact.reduce(:+)



coding, javascript, nodejs

Async map reduce filter using NodeJS and callbacks in parallel

Following up on a series I started earlier.


Writing clean code is indeed paramount in our industry, and we all aspire to be better at it. With the popularization of NodeJS we face another challenge.

Our first challenge was to process a large set of JSON objects, filter it by the name property and get a total for that group.

This is a traditional JavaScript blocking way of doing it.

var data = [];

while (data.length < 100) {
  data.push({name: "it", salary: 33 * data.length});
}
data.push({name: "accounting", salary: 100});
data.push({name: "acc", salary: 100});

var sum = data.filter(function (val) {
  return val.name == "it";
}).map(function (curr) {
  return curr.salary;
}).reduce(function (prev, curr) {
  return prev + curr;
});

I thought, well, this can be done in an asynchronous way. I've had great production use of the 'async' library, which works mainly in NodeJS but also in the browser.

To ramp up the numbers, we'll create 3,000,000 objects.

> Finished iterating , took: 656 Sum 148499950500100

It took 656 ms. That’s pretty quick.

Here is my implementation using async. A few comments:

Control is passed using callbacks. Iterators in most cases receive an object and a callback. filter is a special case that does not follow the typical NodeJS (err, data) callback pattern.

async.filter(data, function (item, cb) {
  item.name == "it" ? cb(true) : cb(false);
}, function (results) {
  async.map(results, function (item, cb) {
    return cb(null, item.salary);
  }, function (err, results2) {
    async.reduce(results2, 0, function (memo, item, cb2) {
      // reduce runs its steps in a series
      setImmediate(function () {
        cb2(null, memo + item);
      });
    }, function (err, sum) {
      end = +new Date();
      var diff = end - start; // time difference in milliseconds
      console.log(" Finished iterating , took: " + diff + " Sum " + sum);
    });
  });
});


Pretty cool, but the numbers... not so good: 9.8 seconds. JEEZ.

 Finished iterating , took: 9835 Sum 148499950500100

Here is the series problem: reduce is executed in series, meaning it is sequential in getting to the final result; that's a performance bottleneck.

Don’t be alarmed, there is a way and i absolutely tested it.

var sum = 0;
async.each(data, function (item, cb) {
  if (item.name == "it")
    sum += item.salary;
  cb();
}, function (err) {
  end = +new Date();
  var diff = end - start; // time difference in milliseconds
  console.log(" Finished iterating , took: " + diff + " Sum " + sum);
});

Async’s each is the most commonly used method for executing in parallel.


Finished iterating , took: 446 Sum 148499950500100

 Much faster!

async provides a lot of useful methods: really useful ones are sortBy, eachSeries (which executes in sequence), and the most important method, async.parallel([functions to be executed in parallel], callback).
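To illustrate the async.parallel callback contract, here is a minimal plain-Node sketch of its behavior (an illustration, not the library's actual implementation):

```javascript
// Minimal plain-Node sketch of the async.parallel contract:
// run every task, collect results in order, call done once all finish.
function parallel(tasks, done) {
  var results = new Array(tasks.length);
  var pending = tasks.length;
  var failed = false;
  tasks.forEach(function (task, i) {
    task(function (err, value) {
      if (failed) return;
      if (err) { failed = true; return done(err); }
      results[i] = value;
      if (--pending === 0) done(null, results);
    });
  });
}

parallel([
  function (cb) { setImmediate(function () { cb(null, 1); }); },
  function (cb) { setImmediate(function () { cb(null, 2); }); }
], function (err, results) {
  console.log(results); // [ 1, 2 ]
});
```

Note that results are stored by index, so the output order matches the task order even when tasks finish out of order.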


Voila & Thanks




coding, javascript

Map Reduce is fun and practical in JavaScript

I'll be honest: I've never used map/reduce in JavaScript. I've written it in Java and Ruby (so far). So I had to try, and I had a challenge in front of me that I needed to complete.

I turned to Mozilla for their wonderful JavaScript documentation.

This is an implementation of <array>.map

if (!Array.prototype.map) {
  Array.prototype.map = function(fun /*, thisArg */) {
    "use strict";

    if (this === void 0 || this === null)
      throw new TypeError();

    var t = Object(this);
    var len = t.length >>> 0;
    if (typeof fun !== "function")
      throw new TypeError();

    var res = new Array(len);
    var thisArg = arguments.length >= 2 ? arguments[1] : void 0;
    for (var i = 0; i < len; i++) {
      // NOTE: Absolute correctness would demand Object.defineProperty
      //       be used.  But this method is fairly new, and failure is
      //       possible only if Object.prototype or Array.prototype
      //       has a property |i| (very unlikely), so use a less-correct
      //       but more portable alternative.
      if (i in t)
        res[i] = fun.call(thisArg, t[i], i, t);
    }

    return res;
  };
}

Now the fun part: how do I FILTER the data, then map, then reduce and get the result back?



1. A bunch of data with objects such as this: ({name: "it", salary: 100})

2. Filter the data by name "it"

3. Provide a total sum of all salaries for that name


Clearly this can be achieved with a simple data.forEach(function(item) ...), but with map reduce + filter it's a lot more elegant, though probably not as fast.

Here is my solution (after I sat down and refactored what I wrote during the challenge earlier):


var data = [];

while (data.length < 100) {
  data.push({name: "it", salary: 33 * data.length});
}
data.push({name: "accounting", salary: 100});
data.push({name: "acc", salary: 100});

var sum = data.filter(function (val) {
  return val.name == "it";
}).map(function (curr) {
  return curr.salary;
}).reduce(function (prev, curr) {
  return prev + curr;
});

I generated a bunch of data, and it prints the sum of all salaries for the name "it".
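For comparison, the plain forEach version mentioned above, computing the same total over the same generated data (a quick sketch):

```javascript
// Same data generation as above, summed with a plain forEach loop.
var data = [];
while (data.length < 100) {
  data.push({ name: "it", salary: 33 * data.length });
}
data.push({ name: "accounting", salary: 100 });
data.push({ name: "acc", salary: 100 });

var sum = 0;
data.forEach(function (item) {
  if (item.name === "it") sum += item.salary;
});
console.log(sum); // 163350
```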

For some reason I thought that map and reduce would execute in parallel and have a callback, but that just shows how heavily I am into NodeJS. In the next post, I'll share how I truly write async code and how sorting, filtering and map/reduce can be achieved with callbacks.


Thanks and happy coding.




coding, ruby

Ruby interview challenge

I had the pleasure of interviewing with an upcoming startup (I won't disclose which one). Besides implementing fizz buzz in Ruby, I was asked to write a method that would check whether its input is a palindrome.

A palindrome is a word, phrase, number, or other sequence of symbols or elements whose meaning may be interpreted the same way either forward or backward.

Keep in mind: using reverse is not allowed 🙂

I wrote two versions, since I wasn't pleased with my first one.


Rspec – test-driven development

require 'spec_helper'
require 'blah'

describe "Blah" do

  it "should match reversed order of the word" do
    palindrome("abba").should == true
    palindrome("abcba").should == true
  end

  it "should reject if reversed order doesnt match" do
    palindrome("abbac").should_not == true
  end

  it "should handle empty string with passing" do
    palindrome("").should == true
  end

  it "should handle various cases" do
    palindrome("AbbA").should == true
  end

  it "should handle empty spaces" do
    palindrome("   Ab  bA").should == true
  end
end

Version 1

def palindrome2(word)
  i = 0
  last = -1
  word.each_char do |c|
    if word[i] != word[last]
      return false
    end
    i += 1
    last -= 1
  end
  return true
end



Version 2

def palindrome(word)
  word = word.downcase.gsub(" ", "").chars
  word.each { |c| return false if word.shift != word.pop }
  true
end

I have a feeling there is a better way of writing this.



bigdata, coding, javascript, nodejs

Using sumo logic to query bigdata

The main selling point of Sumologic is near real-time big data forensic capability.

"Log data is the fastest-growing and most under-utilized component of Big Data. And no one puts your machine-generated Big Data to work like Sumo Logic."


At Inpowered we used Sumologic extensively; our brave and knowledgeable DevOps folks managed Chef scripts that installed Sumologic's agents on most instances. What's great about this:

  • Any application that writes any sort of log, be it a Tomcat log (catalina.out)
    or a custom log file (I wrote tons of JSON); basically any data, structured or otherwise, is welcome
  • Sumologic processes your data seamlessly behind the scenes (with the help of Hadoop
    and other tools in the background), and you deal with your data using a SQL-like language
  • Sumologic can retain gigabytes of data, although there are limits as to what is kept monthly
  • Sumologic has a robust set of functions, from basic avg, sum and count
    to pct (percentile): pct(ntime, 90) gives you the 90th percentile of some column
  • Sumo has a search API, allowing you to run your search query,
    suspend the process in the background and return for the results
  • Sumo's agent can be installed on hundreds of your EC2 machines (or whatever),
    and each machine can have multiple collectors (think of a collector as a source of logs)
  • Besides easy access to your data (through collectors on hundreds of machines),
    the dashboard with an autocomplete field for your query is very easy to use
  • Another cool feature is "summarizing" within your search query,
    allowing you to group data via some sort of pattern into clusters
  • Oh! And you get to use timeslicing when dealing with your data


Getting started guide can be found here 

A high-level overview of how Sumologic processes data behind the scenes (img from Sumologic)


 How could we live without an API?!


Sumologic wouldn't be great if it didn't offer us the ability to run queries ourselves using whatever tools we want.

This can be achieved fairly easily using their Search Job API. Here is an example that parses log files containing entries like "10.343 sec ---- <action name>". A somewhat common use case: an app logs these things, and I want to know which actions are the slowest, what the 90th percentile is, and what the actions were within a certain time range, sliced by hour so that I don't get too much data. Just an example, written in NodeJS.

query_messages – a query that will return all the messages with actions that were slow

query – a query that will give you statistics and the 90th percentile, sorted

var request = require('request'),
    qs = require('querystring'),
    util = require('util'),
    username = "[email protected]",
    password = "somepass",
    url = "https://api.sumologic.com/api/v1/logs/search",
    query_messages = '_collector=somesystem-prd* _source=httpd "INFO"| parse "[* (" as ntype  | parse "--> *sec" as time | num(time) as ntime | timeslice by 1h |  where ntime > 7 | where !(ntype matches "Dont count me title")   | sort by ntime',
    query = '_collector=somesystem-prd* _source=httpd "INFO"| parse "[* (" as ntype  | parse "--> *sec" as time | num(time) as ntime | timeslice by 1h |  where ntime > 7 | where !(ntype matches "dont count me title")  | max(ntime), min(ntime), pct(ntime, 90)  by _timeslice | sort by _ntime_pct_90 desc';

var from = "2014-01-21T10:00:00";
var to = "2014-01-21T17:00:00";

var params = qs.stringify({
  q: query_messages,
  from: from,
  to: to
});
url = url + "?" + params;

request.get({
  url: url,
  auth: {
    user: username,
    pass: password,
    sendImmediately: false
  }
}, function (error, response, body) {
  if (!error && response.statusCode == 200) {
    var json = JSON.parse(body);
    insp(json);
  } else {
    console.log(">>> Error " + error + " code: " + response.statusCode);
  }
});

function insp(obj) {
  console.log(util.inspect(obj, false, null));
}
Now you have an example, and you can work with your data: transform it, send a cool notification to a team, etc.

Thanks, and enjoy Sumologic (free with 500 MB daily).

