coding, javascript, nodejs

Async map reduce filter using NodeJS and callbacks in parallel

Following up on a series I started earlier:

http://jeveloper.com/map-reduce-is-fun-and-practical-in-javascript/

Writing clean code is paramount in our industry, and we all aspire to be better at it. With the popularization of NodeJS we face another challenge.

Our first challenge was to process a large set of JSON objects, filter it by the name property, and get a total for that group.

This is the traditional, blocking JavaScript way of doing it:

var data = [];

while (data.length < 100) {
   data.push({name: "it", salary: 33 * data.length});
}
data.push({name: "accounting", salary: 100});
data.push({name: "acc", salary: 100});

var sum = data.filter(function(val){
	return val.name == "it";
})
.map(function(curr){
	return curr.salary;
})
.reduce(function(prev, curr){
	return prev + curr;
});

console.log(sum);

I thought, well, this can be done asynchronously. I've had great production use of the 'async' library, which works mainly in NodeJS but also in the browser.

To ramp up the numbers, we’ll create 3000000 objects.
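The post doesn't show the setup that produced these numbers, so here is a minimal sketch of it: generating the 3,000,000 objects (same shape as above) and taking a `start` timestamp, which the later snippets reference.

```javascript
// Generate 3,000,000 sample objects with the same shape as before.
var data = [];
while (data.length < 3000000) {
  data.push({ name: "it", salary: 33 * data.length });
}

// Millisecond timestamp; the callbacks below compute `end - start`.
var start = +new Date();
```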

> Finished iterating , took: 656 Sum 148499950500100

It took 656 ms. That’s pretty quick.

Here is my implementation using async. A few comments:

Control is passed using callbacks. In most cases the iterator receives an item and a callback. filter is a special case: its callback does not follow the typical NodeJS (err, data) pattern; it takes a plain boolean.

async.filter(data, function(item, cb){
	item.name == "it" ? cb(true) : cb(false);
}, function(results){
	async.map(results, function(item, cb){
		return cb(null, item.salary);
	}, function(err, results2){
		async.reduce(results2, 0, function(memo, item, cb2){
			// reduce runs its iterator in series
			setImmediate(function(){
				cb2(null, memo + item);
			});
		}, function(err, sum){
			end = +new Date();
			var diff = end - start; // time difference in milliseconds
			console.log("Finished iterating, took: " + diff + " Sum " + sum);
		});
	});
});

Pretty cool, but the numbers are not so good: 9.8 seconds. JEEZ.

 Finished iterating , took: 9835 Sum 148499950500100

Here is the problem: reduce executes in series, meaning each step must finish before the next one starts in order to produce the final result, and that is the performance bottleneck.

Don't be alarmed, there is a faster way, and I absolutely tested it.

var sum = 0;
async.each(data, function(item, cb){
	if (item.name == "it")
		sum += item.salary;
	cb();
}, function(err){
	end = +new Date();
	var diff = end - start; // time difference in milliseconds
	console.log("Finished iterating, took: " + diff + " Sum " + sum);
});

Async’s each is the most commonly used method for executing in parallel.

Result:

Finished iterating , took: 446 Sum 148499950500100

 Much faster!

Async provides a lot of useful methods: sortBy, eachSeries (which executes in sequence), and, most important, async.parallel([functions to be executed in parallel], callback).
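For illustration, the contract async.parallel implements can be sketched in plain Node (a simplified stand-in, not the library's actual code): start every task at once, collect results in task order, and call the final callback after the last one finishes.

```javascript
// Minimal sketch of the async.parallel contract.
// (Empty task lists and multiple errors are not handled here.)
function parallel(tasks, done) {
  var results = new Array(tasks.length);
  var pending = tasks.length;
  var failed = false;
  tasks.forEach(function (task, i) {
    task(function (err, result) {
      if (failed) return;
      if (err) { failed = true; return done(err); }
      results[i] = result;          // results keep task order,
      if (--pending === 0) {        // not completion order
        done(null, results);
      }
    });
  });
}

parallel([
  function (cb) { setImmediate(function () { cb(null, "first"); }); },
  function (cb) { cb(null, "second"); }
], function (err, results) {
  console.log(results.join(",")); // first,second
});
```

Note that even though the second task finishes before the first, the results come back in the order the tasks were declared.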

 

Voila & Thanks

 

 


coding, javascript

Map Reduce is fun and practical in JavaScript

I'll be honest, I've never used map/reduce in JavaScript. I have written it in Java and Ruby (so far). So I had to try, and I had a challenge in front of me that I needed to complete.

I turned to Mozilla for their wonderful JavaScript documentation.

This is Mozilla's reference implementation (polyfill) of Array.prototype.map:

if (!Array.prototype.map)
{
  Array.prototype.map = function(fun /*, thisArg */)
  {
    "use strict";

    if (this === void 0 || this === null)
      throw new TypeError();

    var t = Object(this);
    var len = t.length >>> 0;
    if (typeof fun !== "function")
      throw new TypeError();

    var res = new Array(len);
    var thisArg = arguments.length >= 2 ? arguments[1] : void 0;
    for (var i = 0; i < len; i++)
    {
      // NOTE: Absolute correctness would demand Object.defineProperty
      //       be used.  But this method is fairly new, and failure is
      //       possible only if Object.prototype or Array.prototype
      //       has a property |i| (very unlikely), so use a less-correct
      //       but more portable alternative.
      if (i in t)
        res[i] = fun.call(thisArg, t[i], i, t);
    }

    return res;
  };
}

Now the fun part: how do I filter the data, then map, then reduce, and get the result back?

 

Challenge:

1. A bunch of data with objects such as this: ({name: "it", salary: 100})

2. Filter the data by name "it"

3. Provide a total sum of all salaries for that name

 

Clearly this can be achieved with a simple data.forEach(function(item…)), but with map/reduce + filter it's a lot more elegant, though probably not as fast.
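For comparison, the forEach version mentioned above might look like this (a sketch using a small inline data set): one pass that filters, projects, and accumulates in a single loop instead of three chained calls.

```javascript
// Same result with a single forEach pass.
var data = [
  { name: "it", salary: 100 },
  { name: "it", salary: 200 },
  { name: "accounting", salary: 100 }
];

var sum = 0;
data.forEach(function (item) {
  if (item.name === "it") sum += item.salary;
});

console.log(sum); // 300
```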

Here is my solution (after I sat down and refactored what I wrote during the challenge earlier):

 

var data = [];

while (data.length < 100) {
   data.push({name: "it", salary: 33 * data.length});
}
data.push({name: "accounting", salary: 100});
data.push({name: "acc", salary: 100});

var sum = data.filter(function(val){
	return val.name == "it";
})
.map(function(curr){
	return curr.salary;
})
.reduce(function(prev, curr){
	return prev + curr;
});

console.log(sum);

I generated a bunch of data, and it prints the sum of all salaries for the name "it".

For some reason I thought that map and reduce would execute in parallel and would take a callback, but that just shows how heavily I am into NodeJS. In the next post, I'll share how I write truly async code and how sorting, filtering, and map/reduce can be achieved with callbacks.

 

Thanks and happy coding.

 

 


bigdata, coding, javascript, nodejs

Using sumo logic to query bigdata

Sumologic's main selling point is near-real-time big data forensic capability.

> Log data is the fastest-growing and most under-utilized component of Big Data. And no one puts your machine-generated Big Data to work like Sumo Logic

 

At Inpowered we used Sumologic extensively; our brave and knowledgeable DevOps folks managed Chef scripts that installed Sumologic's agents on most instances. What's great about this:

  • Any application that writes any sort of log, be it a Tomcat log (catalina.out)
    or a custom log file (I wrote tons of JSON); basically any data, structured or otherwise, is welcome
  • Sumologic processes your data seamlessly behind the scenes (with the help of Hadoop
    and other tools), and you work with it using an SQL-like language
  • Sumologic can retain gigabytes of data, although there are limits on what is kept monthly
  • Sumologic has a robust set of functions, from basic avg, sum, and count
    to pct (percentile): pct(ntime, 90) gives you the 90th percentile of some column
  • Sumo has a search API, allowing you to start a search query,
    suspend the process in the background, and return for the results
  • Sumo's agent can be installed on hundreds of your EC2 machines (or whatever),
    and each machine can have multiple collectors (think of a collector as a source of logs)
  • Besides easy access to your data (through collectors on hundreds of machines),
    there is a very useful dashboard with an autocomplete field for your query
  • Another cool feature is "summarizing" within your search query,
    allowing you to group data into clusters by some sort of pattern
  • Oh! And you get timeslicing when dealing with your data

 

Getting started guide can be found here 

High-level overview of how Sumologic processes data behind the scenes (img from Sumologic):


 How could we live without an API?!

 

Sumologic wouldn't be great if it didn't let us run queries ourselves using whatever tools we want.

This can be achieved fairly easily using their Search Job API. Here is an example that parses log files containing entries like --> 10.343 sec along with an action name. It's a fairly common use case: an app logs these timings, and I want to know which actions are the slowest, what the 90th percentile is, and which actions fell within a certain time range, sliced by hour so that I don't get too much data. Just an example written in NodeJS.

query_messages – a query that returns all the messages for actions that were slow

query – a query that returns statistics with the 90th percentile, sorted

var request = require('request'),
    qs = require('querystring'),
    util = require('util'),
    username = "[email protected]",
    password = "somepass",
    url = "https://api.sumologic.com/api/v1/logs/search",
    query_messages = '_collector=somesystem-prd* _source=httpd "INFO"| parse "[* (" as ntype  | parse "--> *sec" as time | num(time) as ntime | timeslice by 1h |  where ntime > 7 | where !(ntype matches "Dont count me title")   | sort by ntime',
    query = '_collector=somesystem-prd* _source=httpd "INFO"| parse "[* (" as ntype  | parse "--> *sec" as time | num(time) as ntime | timeslice by 1h |  where ntime > 7 | where !(ntype matches "dont count me title")  | max(ntime), min(ntime), pct(ntime, 90)  by _timeslice | sort by _ntime_pct_90 desc';

var from = "2014-01-21T10:00:00";
var to = "2014-01-21T17:00:00";

var params = qs.stringify({
    q: query_messages,
    from: from,
    to: to
});
url = url + "?" + params;

request.get(url, {
    'auth': {
        'user': username,
        'pass': password,
        'sendImmediately': false
    }
}, function (error, response, body) {
    if (!error && response.statusCode == 200) {
        var json = JSON.parse(body);
        insp(json);
    } else {
        console.log(">>> Error " + error + " code: " + (response && response.statusCode));
    }
});

function insp(obj){
    console.log(util.inspect(obj, false, null));
}

Now you have an example, and you can work with your data: transform it, send a cool notification to a team, and so on.
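As a rough sketch of that "transform it" step (the ntype and ntime fields below are assumptions mirroring the query above, not documented response fields), picking out the slowest actions from the parsed messages might look like this:

```javascript
// Hypothetical post-processing: given parsed messages that carry an
// `ntime` field (an assumed shape), keep the n slowest actions.
function slowest(messages, n) {
  return messages
    .slice() // copy so the original array is not mutated by sort
    .sort(function (a, b) { return b.ntime - a.ntime; })
    .slice(0, n);
}

var sample = [
  { ntype: "checkout", ntime: 10.3 },
  { ntype: "login", ntime: 7.2 },
  { ntype: "search", ntime: 12.9 }
];
console.log(slowest(sample, 2)); // search (12.9) first, then checkout
```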

Thanks, and enjoy Sumologic (free with 500 MB daily).

 


coding, javascript, ruby

Date Time mess with JavaScript is a breeze in Ruby world

Those of you who are seasoned in web development know what I mean when I say date/time parsing and formatting can be ugly and time consuming. You turn to DateJS or MomentJS (my current favourite); you always look (you should) at the last time the project was updated, and you make sure it's not dependent on jQuery or something else. You also hope that it works nicely within your NodeJS app.

 

Then there comes the time when you need a nice date-time picker. There are many, but some forget about time picking. I've used a few, moved to a different one, etc. Sometimes these date pickers have their own date-time format. GREAT! Now you have to be careful that parsing on the front end is okay and that parsing on the backend works too.

Here is why it's a pleasure having the DateTime class baked right into Ruby, for both parsing and formatting.

Here is a handy URL:

http://apidock.com/ruby/DateTime/strftime


architecture, javascript

Publishing to AWS SNS topic

SNS is wonderful, supporting HTTP, email (no HTML markup), email-json, SQS, and SMS (US only for now).

Basically it allows you, or an app subscribed via various protocols, to receive various types of notifications.

Let's bypass the discussion of how to subscribe to a topic; it's pretty trivial and very flexible (a third party can receive an HTTP call based on a message that arrived; awesome, right?).

Here is an example of publishing a message using NodeJS and the AWS node library.

var AWS = require('aws-sdk');
// don't hard-code your credentials 🙂
AWS.config.update({accessKeyId: 'ZZZ', secretAccessKey: 'ZZZ', region: 'us-east-1'});

var sns = new AWS.SNS();
sns.publish({
    TopicArn: 'arn:aws:sns:us-east-1:283fdfdf-warning',
    Message: "Just testing for now ",
    Subject: "testing "
}, function(err, data){
    if (err){
        console.log("Error sending a message " + err);
    } else {
        console.log("Sent message " + data.MessageId);
    }
});

 


javascript

Helping organize NodeJS meetup in Toronto

I love networking through meetups; it's a great way to learn and to meet smart people.

I had the pleasure of speaking at the last NodeJS meetup on the topic of clustering NodeJS, and tonight InPowered (where I currently work) is hosting another NodeJS meetup with amazing speakers.

 

Toronto Node.js Meetup: The Summer Edition

Tuesday, Jul 23, 2013, 7:00 PM


Greetings Nodesters. We've got a last-minute summer meetup for you, with two great speakers from out of town: Benjamin Lupton, the creator of DocPad, and Matthew Dobson from Apigee. We have a new venue this meetup, 24 Duncan Street, courtesy of InPowered Inc. Benjamin Lupton is the founder of Bevry, a Sydney-based company dedicated to empowering dev…


 

We'll have the creator of DocPad, Benjamin Lupton, talking about his web framework, and Matthew Dobson (software engineer at Apigee), who will be talking about Node.js as your API layer.

I will have the pleasure of introducing and talking about InPowered and our product.


architecture, coding, javascript, mongodb

Skedul.In project is wonderful – the power of the Google API and Ruby

You don't always get to work with wonderful clients who know what they want, but I have. Skedul.In, soon to launch in production (currently in beta), is a simple yet great idea: have a single place to create all your events at once, select your Google calendar, use your Google contacts, and you are done; all events are created, invitations sent, and your calendar updated.

 

Techie stuff:

  • Ruby 2.0
  • Rails 3.2.x
  • MongoDB – can't live without it; also used for session storage
  • Completely relies on OAuth; you have to log in with your Google Account
  • Google Calendar API and Contacts API
  • Hosted on Heroku

 

A couple of screenshots:



coding, javascript

CoffeeScript way of checking for undefined

I enjoy CoffeeScripting.

There is some learning curve, but an experienced developer will pick it up. There is one thing I am not fond of: the indentation for blocks and function definitions.

Here is something to keep in mind when checking for a variable's existence.

Normal JavaScript:

if (typeof data !== "undefined" && data !== null) {

The CoffeeScript way (awesome):

if data?

An alternative way (the same thing):

if typeof data isnt "undefined" and data isnt null
