Tuesday, May 16, 2017

Google Launches New Tools to Help Developers Build High-Performance Daydream Apps

Daydream apps will be a striking feature of the Android Nougat release. Building a VR experience is a mission-critical task for any developer involved; it's like explaining the entire Game of Thrones story in an hour, with countless possibilities and corner cases.
The development process also has to be scalable: the apps must not consume all of a device's resources or cause the device to overheat.

To help developers working on this, Google has announced a set of brand-new tools for producing high-performance Daydream apps. To begin with, let's have a look at Daydream Renderer.

Rendering is an important aspect of developing VR apps, because rendering is what makes the experience feel real to the user, and the secret to that impact is the art of placing light and shadow in the right parts of a scene. According to Google, Daydream Renderer is a set of optimized tools that lets developers produce dynamic lighting and shadowing that gives visuals an authentic impact.

Developers building games for the Daydream platform will benefit the most from Daydream Renderer, since lighting and shadowing are essential for keeping players immersed in a game. According to Google, this tool will take care of exactly that.

Next up: Instant Preview

Normally, a developer writing a mobile application follows a cycle of writing code, compiling it, uploading the change to a mobile device, and testing whether the change works. By the end of the day, a developer will have spent many minutes idle during this process.

Instant Preview, now introduced by Google, shortens this cycle to seconds, saving the developer's time and increasing productivity. It also improves quality by allowing developers to do more iterations in less time.

Google has also introduced performance-monitoring tools: GAPID and PerfHUD.
However great a VR app looks, it will only reach a wide range of users if it performs in an optimized way: the device the app runs on must not overheat, and the app must run reliably regardless of the device or the environmental conditions.

GAPID lets developers perform deep GPU profiling, providing insight into how the hardware and software interact to drive performance. It also helps developers track down corner cases that could bring the overall performance down.

PerfHUD is another extraordinary tool, one that helps developers pinpoint exactly which areas of their games and apps push the device's hardware too hard.

Looking to the future, VR is set to be the next big game-changer for the games industry, letting users experience virtual reality right from their smartphones.

How to build an iOS App with Xcode 8 and Swift 3

Hello developers! In this tutorial, I'm going to show you how to build an iOS app with Xcode 8 and Swift 3.

What is Xcode and Swift ?

     Xcode is an integrated development environment (IDE) developed by Apple for building iOS, macOS, watchOS and tvOS apps. It comes with the iOS SDK, compilers, debugging tools, simulators and more. You can install Xcode from the Mac App Store or download it from here. Note: to install Xcode, you need a Mac.

     Swift is a programming language developed by Apple Inc. Its creator, Chris Lattner, began work on it in 2010, and Apple first released it in 2014. Swift comes with Playgrounds, where you write Swift code and see the results instantly in the right-hand pane, with no need to click a run button. Interesting, isn't it? Swift has a lot of features and a huge set of libraries, and Swift source files use the .swift extension.

     Okay now let's create our new project and start to build our first iOS App in Xcode.

Creating a Project

  • Open Xcode and create a new project from the File menu.
  • Select Single View Application template and click Next.
  • Enter a product name, organization name and organization identifier. See the image below for reference.

Adding Views 

  • Open the Main.storyboard file.
  • Drag and drop a Label and a Button from the Object Library onto the View, as shown in the image below.

  • Now open the Assistant Editor in Xcode: click the double-circle icon in the top-right corner (see the image below), or press the Cmd + Option + Enter shortcut.

  • Then Control-drag the label and the button into the view controller (the ViewController.swift file).

  • A tiny window will then appear in which you configure the connection; click Connect. For the button, change the connection type from Outlet to Action. For the label, leave the connection as Outlet (it's Outlet by default).

View Controller Code

import UIKit

class ViewController: UIViewController {

    @IBOutlet var helloWorldLabel: UILabel!

    @IBAction func clickMeBtnPressed(_ sender: Any) {
        // Change the label text from "Hello World" to "Welcome to iOS World"
        helloWorldLabel.text = "Welcome to iOS World"
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        print("viewDidLoad function is called")
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}

Run the App

      Now build the project and run it; the app will open in the Simulator.

Boom!!! Just click the "Click Me" button and the label will change from "Hello World" to "Welcome to iOS World". We've completed our first Hello World iOS app in Xcode using Swift 3. I hope you've followed along, and if you have any doubts about this tutorial, just drop a comment. See you in the next tutorial...

Friday, April 28, 2017

Working with PDF.js - Render PDFs Natively in the Browser Tutorial

Posting a web tutorial after a long time, with a lot of energy and possibilities. There are a few things in life that are life-changing and support you even in the worst of times: yes, of course, my blog and my readers, who have supported me throughout my journey.



Why PDF.js?

PDF.js is an awesome JavaScript library, supported by Mozilla and individual contributors, that helps make the web a beautiful place to visit and get work done. Every modern browser can view PDF files, but what is special about PDF.js is that you can control the PDF from JavaScript code: rendering, switching pages, and more. The developer has full power over how a PDF loaded from the server is rendered on the client. PDFs are everywhere in business: invoices, payment-processing documents, official agreements, documentation and much more.

Use cases:

When you're building an application that depends heavily on showing PDFs to the customer and needs to render them as part of the application, PDF.js is the right choice. It lets you control the contents of the PDFs with session handling (premium vs. free customers) and personalized PDF rendering.

Getting Started:

Integrating PDF.js into your web or mobile web application is straightforward, but it requires some knowledge of JavaScript promises, which were introduced to write clean code and avoid callback-style nesting.


You can either use a pre-built version, or clone the source code from GitHub and build it with the gulp command. I prefer to build the JavaScript library from source; if you want to use it directly, you can refer to the pastebin reference code.

Build commands:
Make sure you have Node.js installed on your system with sudo access. Open a shell and run the following commands. First of all, clone the GitHub repo.
 > git clone https://github.com/mozilla/pdf.js.git
 > cd pdf.js
 > npm install
 > gulp generic

By running the above commands, you build the source code into a distributable library in the build directory. You can then use it as a JavaScript library by importing it with a script tag.


You can simply include pdf.js in a script tag along with the pdf.worker.js file. Once that is set up, you write your application logic on top of it, based on your web application's needs.


<title>Pdf.js Example Application | i-visionblog</title>
<script src="build/pdf.js"></script>
<script src="build/pdf.worker.js"></script>
<script src="js/app.js"></script>

So, this will import the whole of PDF.js, and it's recommended to switch to the minified version when moving into production.

PDF.js API to render PDFs:

PDF.js uses Ajax to load the PDF from the server, and it reduces the memory footprint by loading the PDF page by page instead of all at once. In fact, loading all of the PDF content at once is bad practice unless it is really necessary. PDF.js also uses a separate worker to download and render the content into the HTML DOM (Document Object Model). We can call the getPage(index) method to load a page and the render() function to draw it into the DOM; a render context specifies the height, width, scale and container element. It all works with promises, to avoid callback-style code. Here is a sample JavaScript snippet that loads all the pages into canvases.
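Here is one way such a snippet might look, sketched against the global PDFJS API of the generic build from around this time; the file name sample.pdf, the scale, and appending canvases to document.body are illustrative choices, not requirements:

```javascript
// Assumes build/pdf.js and build/pdf.worker.js are already loaded on the page.
PDFJS.getDocument('sample.pdf').then(function (pdf) {
  // Render every page into its own canvas, one promise per page.
  for (var pageNo = 1; pageNo <= pdf.numPages; pageNo++) {
    pdf.getPage(pageNo).then(function (page) {
      var scale = 1.5;
      var viewport = page.getViewport(scale);

      var canvas = document.createElement('canvas');
      canvas.width = viewport.width;
      canvas.height = viewport.height;
      document.body.appendChild(canvas);

      // The render context tells pdf.js where and how big to draw the page.
      page.render({
        canvasContext: canvas.getContext('2d'),
        viewport: viewport,
      });
    });
  }
});
```

Because getPage() returns a promise, the pages arrive asynchronously, which is exactly the page-by-page loading described above.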

Final Words:

You can download the whole application project from here. The above code simply loads the PDF from the local server and then renders all pages into the HTML DOM as canvas images. It's up to you to add your own click listeners to load the next and previous pages as your application logic requires.

Hope you've liked this article. Subscribe for more tutorials and follow me on Google+/Facebook for updates. If you have any doubts, chat with me or drop me a mail. Feel free to comment below. Sharing is caring.

Tuesday, April 25, 2017

How to Make an Explainer Video

Creating an explainer video is a great way to attract the attention of prospective customers. By giving people solutions to common problems, you build trust, which often translates into business over time. Your company may be able to solve the problems or provide the solutions they need, but to get this message across, you need to explain exactly what your business is about. This is where a good explainer video comes in.
Consumers are bombarded with information, and as a result they have relatively short attention spans. Explaining what your company does in text might take 600 words or more, eating up several minutes of a person's precious time. By creating a video, you can deliver the same information more quickly and in a more engaging way.
In order to create the best explainer video you can, there are several steps you need to take.

Write a compelling script

The video making process should start with a script. Your video should follow a logical, linear narrative, and it should contain all of the essential information you need to communicate in a way that will be engaging and informative. By writing a script, you can be sure that you’re creating the best explainer video possible from the very beginning of the process.

Create a storyboard

Once you have a completed script, you can use it to start putting the visual aspects of your video in place — with a storyboard. Think of your storyboard as a visual sketch of what your video is going to look like. This gives you a tactile method of experimenting with different scenes and visuals before you start the video production process. Try out different movements, transitions and angles on your storyboard until you’re happy that what you’ve created will deliver the right message.

Enlist the services of a narrator or voice-over

The narrator or voice-over tells the story to complement the visuals in your video. This is where your script comes to life, so it’s important that you choose someone with an engaging and clear voice. It’s definitely worth spending some money on hiring a professional voice-over, as the right voice will make your audience feel more engaged with the story — and your company.

Animate your video

This is probably the most important task you will need to complete during the entire process. As videos on most social media platforms play on mute until they are clicked, your visuals need to be professional, visually engaging and interesting. You need to capture the attention of prospective customers within a few seconds, so what they see on the screen has to be impressive. Don’t cut corners here: hire a professional animator to create visuals that will stun your audience. A talented animator will take your script and transform it into a visual story that flows seamlessly. Music, sound effects and text will also add to the overall aesthetic — creating a video that will represent your brand in the best possible way.

Publish and track the results

Publish your video on your own website, relevant third-party websites and on all of the main social media platforms. It’s vital that you track how your video is performing — both in terms of click-through rates, conversion rates and the general feedback of your audience. Depending on the results, you might want to tweak your video slightly. Consider implementing A/B testing on two different videos in order to see which approach delivers the best results.

If you can create an informative and engaging explainer video, you should be able to use it to drive high-quality traffic to your website. This not only grows your business, it builds your brand.

Hope you've liked this post. If you have any doubts, feel free to comment below or chat with me on Google+/Facebook. Sharing is caring.

Wednesday, March 15, 2017

Why The Hell Would I Use Node.js? A Case-by-Case Tutorial

We're happy to publish our first guest post, from the Toptal team, on the rise of Node.js as a server-side platform. To publish your own guest post on our blog, just drop us a mail.


JavaScript’s rising popularity has brought with it a lot of changes, and the face of web development today is dramatically different. The things that we can do on the web nowadays with JavaScript running on the server, as well as in the browser, were hard to imagine just several years ago, or were encapsulated within sandboxed environments like Flash or Java Applets.
Before digging into Node.js, you might want to read up on the benefits of using JavaScript across the stack which unifies the language and data format (JSON), allowing you to optimally reuse developer resources. As this is more a benefit of JavaScript than Node.js specifically, we won’t discuss it much here. But it’s a key advantage to incorporating Node in your stack.
As Wikipedia states: “Node.js is a packaged compilation of Google’s V8 JavaScript engine, the libuv platform abstraction layer, and a core library, which is itself primarily written in JavaScript.” Beyond that, it’s worth noting that Ryan Dahl, the creator of Node.js, was aiming to create real-time websites with push capability, “inspired by applications like Gmail”. In Node.js, he gave developers a tool for working in the non-blocking, event-driven I/O paradigm.
After over 20 years of stateless-web based on the
stateless request-response paradigm,
we finally have web applications with real-time, two-way connections.
In one sentence: Node.js shines in real-time web applications employing push technology over websockets. What is so revolutionary about that? Well, after over 20 years of stateless-web based on the stateless request-response paradigm, we finally have web applications with real-time, two-way connections, where both the client and server can initiate communication, allowing them to exchange data freely. This is in stark contrast to the typical web response paradigm, where the client always initiates communication. Additionally, it’s all based on the open web stack (HTML, CSS and JS) running over the standard port 80.
One might argue that we’ve had this for years in the form of Flash and Java Applets—but in reality, those were just sandboxed environments using the web as a transport protocol to be delivered to the client. Plus, they were run in isolation and often operated over non-standard ports, which may have required extra permissions and such.
With all of its advantages, Node.js now plays a critical role in the technology stack of many high-profile companies who depend on its unique benefits. The Node.js Foundation has consolidated all the best thinking around why enterprises should consider Node.js in a short presentation that can be found on the Node.js Foundation’s Case Studies page.
In this post, I’ll discuss not only how these advantages are accomplished, but also why you might want to use Node.js—and why not—using some of the classic web application models as examples.

How Does It Work?

The main idea of Node.js: use non-blocking, event-driven I/O to remain lightweight and efficient in the face of data-intensive real-time applications that run across distributed devices.
That’s a mouthful.
What it really means is that Node.js is not a silver-bullet new platform
that will dominate the web development world. Instead,
it’s a platform that fills a particular need.
What it really means is that Node.js is not a silver-bullet new platform that will dominate the web development world. Instead, it’s a platform that fills a particular need. And understanding this is absolutely essential. You definitely don’t want to use Node.js for CPU-intensive operations; in fact, using it for heavy computation will annul nearly all of its advantages. Where Node really shines is in building fast, scalable network applications, as it’s capable of handling a huge number of simultaneous connections with high throughput, which equates to high scalability.
How it works under-the-hood is pretty interesting. Compared to traditional web-serving techniques where each connection (request) spawns a new thread, taking up system RAM and eventually maxing-out at the amount of RAM available, Node.js operates on a single-thread, using non-blocking I/O calls, allowing it to support tens of thousands of concurrent connections (held in the event loop).
Diagram of traditional vs. Node.js server thread
A quick calculation: assuming that each thread potentially has an accompanying 2 MB of memory with it, running on a system with 8 GB of RAM puts us at a theoretical maximum of 4,000 concurrent connections (calculations taken from Michael Abernethy’s article “Just what is Node.js?”, published on IBM developerWorks in 2011; unfortunately, the article is not available anymore), plus the cost of context-switching between threads. That’s the scenario you typically deal with in traditional web-serving techniques. By avoiding all that, Node.js achieves scalability levels of over 1M concurrent connections, and over 600k concurrent websockets connections.
There is, of course, the question of sharing a single thread between all client requests, and it is a potential pitfall of writing Node.js applications. Firstly, heavy computation could choke up Node's single thread and cause problems for all clients (more on this later), as incoming requests would be blocked until said computation completed. Secondly, developers need to be really careful not to allow an exception to bubble up to the core (topmost) Node.js event loop, which would cause the Node.js instance to terminate (effectively crashing the program).
The technique used to avoid exceptions bubbling up to the surface is passing errors back to the caller as callback parameters (instead of throwing them, like in other environments). Even if some unhandled exception manages to bubble up, tools have been developed to monitor the Node.js process and perform the necessary recovery of a crashed instance (although you probably won’t be able to recover the current state of the user session), the most common being the Forever module, or using a different approach with external system tools upstart and monit, or even just upstart.
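The error-first callback convention described above looks like this in practice; loadUser here is a made-up function for illustration, not a real API:

```javascript
// Node convention: the callback's first argument is the error (or null),
// and the result follows. Errors are passed back to the caller rather than
// thrown across the asynchronous boundary.
function loadUser(id, callback) {
  setImmediate(() => {
    if (id <= 0) {
      callback(new Error('invalid user id')); // error path
      return;
    }
    callback(null, { id: id, name: 'user' + id }); // success path
  });
}

loadUser(42, (err, user) => {
  if (err) {
    console.error('failed:', err.message);
    return;
  }
  console.log('loaded', user.name);
});
```

Because the error travels as a value, a bug in one request's handler does not become an uncaught exception that takes down the whole event loop.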

NPM: The Node Package Manager

When discussing Node.js, one thing that definitely should not be omitted is built-in support for package management using the NPM tool that comes by default with every Node.js installation. The idea of NPM modules is quite similar to that of Ruby Gems: a set of publicly available, reusable components, available through easy installation via an online repository, with version and dependency management.
A full list of packaged modules can be found on the npm website, or accessed using the npm CLI tool that automatically gets installed with Node.js. The module ecosystem is open to all, and anyone can publish their own module that will be listed in the npm repository. A brief introduction to npm can be found in a Beginner’s Guide, and details on publishing modules in the npm Publishing Tutorial.
Some of the most useful npm modules today are:
  • express - Express.js, a Sinatra-inspired web development framework for Node.js, and the de-facto standard for the majority of Node.js applications out there today.
  • hapi - a very modular and simple to use configuration-centric framework for building web and services applications
  • connect - Connect is an extensible HTTP server framework for Node.js, providing a collection of high performance “plugins” known as middleware; serves as a base foundation for Express.
  • socket.io and sockjs - Server-side component of the two most common websockets components out there today.
  • pug (formerly Jade) - One of the popular templating engines, inspired by HAML, a default in Express.js.
  • mongodb and mongojs - MongoDB wrappers to provide the API for MongoDB object databases in Node.js.
  • redis - Redis client library.
  • lodash (underscore, lazy.js) - The JavaScript utility belt. Underscore initiated the game, but got overthrown by one of its two counterparts, mainly due to better performance and modular implementation.
  • forever - Probably the most common utility for ensuring that a given node script runs continuously. Keeps your Node.js process up in production in the face of any unexpected failures.
  • bluebird - A full featured Promises/A+ implementation with exceptionally good performance
  • moment - A lightweight JavaScript date library for parsing, validating, manipulating, and formatting dates.
The list goes on. There are tons of really useful packages out there, available to all (no offense to those that I’ve omitted here).

Examples of Where Node.js Should Be Used


Chat

Chat is the most typical real-time, multi-user application. From IRC (back in the day), through many proprietary and open protocols running on non-standard ports, to the ability to implement everything today in Node.js with websockets running over the standard port 80.
The chat application is really the sweet-spot example for Node.js: it’s a lightweight, high traffic, data-intensive (but low processing/computation) application that runs across distributed devices. It’s also a great use-case for learning too, as it’s simple, yet it covers most of the paradigms you’ll ever use in a typical Node.js application.
Let’s try to depict how it works.
In the simplest example, we have a single chatroom on our website where people come and can exchange messages in one-to-many (actually all) fashion. For instance, say we have three people on the website all connected to our message board.
On the server-side, we have a simple Express.js application which implements two things: 1) a GET ‘/’ request handler which serves the webpage containing both a message board and a ‘Send’ button to initialize new message input, and 2) a websockets server that listens for new messages emitted by websocket clients.
On the client-side, we have an HTML page with a couple of handlers set up, one for the ‘Send’ button click event, which picks up the input message and sends it down the websocket, and another that listens for new incoming messages on the websockets client (i.e., messages sent by other users, which the server now wants the client to display).
When one of the clients posts a message, here’s what happens:
  1. Browser catches the ‘Send’ button click through a JavaScript handler, picks up the value from the input field (i.e., the message text), and emits a websocket message using the websocket client connected to our server (initialized on web page initialization).
  2. Server-side component of the websocket connection receives the message and forwards it to all other connected clients using the broadcast method.
  3. All clients receive the new message as a push message via a websockets client-side component running within the web page. They then pick up the message content and update the web page in-place by appending the new message to the board.
Diagram of client and server websockets in a Node.js application
This is the simplest example. For a more robust solution, you might use a simple cache based on the Redis store. Or in an even more advanced solution, a message queue to handle the routing of messages to clients and a more robust delivery mechanism which may cover for temporary connection losses or storing messages for registered clients while they’re offline. But regardless of the improvements that you make, Node.js will still be operating under the same basic principles: reacting to events, handling many concurrent connections, and maintaining fluidity in the user experience.


API on Top of an Object DB

Although Node.js really shines with real-time applications, it’s quite a natural fit for exposing the data from object DBs (e.g. MongoDB). JSON stored data allow Node.js to function without the impedance mismatch and data conversion.
For instance, if you’re using Rails, you would convert from JSON to binary models, then expose them back as JSON over the HTTP when the data is consumed by Backbone.js, Angular.js, etc., or even plain jQuery AJAX calls. With Node.js, you can simply expose your JSON objects with a REST API for the client to consume. Additionally, you don’t need to worry about converting between JSON and whatever else when reading or writing from your database (if you’re using MongoDB). In sum, you can avoid the need for multiple conversions by using a uniform data serialization format across the client, server, and database.


If you’re receiving a high amount of concurrent data, your database can become a bottleneck. As depicted above, Node.js can easily handle the concurrent connections themselves. But because database access is a blocking operation (in this case), we run into trouble. The solution is to acknowledge the client’s behavior before the data is truly written to the database.
With that approach, the system maintains its responsiveness under a heavy load, which is particularly useful when the client doesn’t need firm confirmation of a the successful data write. Typical examples include: the logging or writing of user-tracking data, processed in batches and not used until a later time; as well as operations that don’t need to be reflected instantly (like updating a ‘Likes’ count on Facebook) where eventual consistency (so often used in NoSQL world) is acceptable.
Data gets queued through some kind of caching or message queuing infrastructure (e.g., RabbitMQZeroMQ) and digested by a separate database batch-write process, or computation intensive processing backend services, written in a better performing platform for such tasks. Similar behavior can be implemented with other languages/frameworks, but not on the same hardware, with the same high, maintained throughput.
Diagram of a database batch-write in Node.js with message queuing
In short: with Node, you can push the database writes off to the side and deal with them later, proceeding as if they succeeded.
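A toy version of that pattern, with one in-memory array standing in for the message queue and another for the database; in production the queue would be RabbitMQ or ZeroMQ and the flush would run in a separate worker process:

```javascript
// Writes are acknowledged instantly; the database only sees periodic batches.
const queue = [];
const database = []; // stand-in for the real (slow, blocking) database

function trackEvent(event) {
  queue.push(event); // O(1), never blocks the event loop
  return { queued: true }; // the client gets its acknowledgement right away
}

function flushBatch() {
  // A separate batch-write process would drain the queue like this,
  // e.g. on a timer or when the batch reaches a certain size.
  const batch = queue.splice(0, queue.length);
  if (batch.length > 0) database.push(batch);
  return batch.length;
}

trackEvent({ type: 'pageview', path: '/home' });
trackEvent({ type: 'like', postId: 17 });
flushBatch(); // both events reach the database in a single batch write
```

The client-facing path touches only the queue, which is why responsiveness survives a write-heavy load.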


Data Streaming

In more traditional web platforms, HTTP requests and responses are treated like isolated events; in fact, they’re actually streams. This observation can be utilized in Node.js to build some cool features. For example, it’s possible to process files while they’re still being uploaded, as the data comes in through a stream and we can process it in an online fashion. This could be done for real-time audio or video encoding, and proxying between different data sources (see next section).


Proxy

Node.js is easily employed as a server-side proxy where it can handle a large amount of simultaneous connections in a non-blocking manner. It’s especially useful for proxying different services with different response times, or collecting data from multiple source points.
An example: consider a server-side application communicating with third-party resources, pulling in data from different sources, or storing assets like images and videos to third-party cloud services.
Although dedicated proxy servers do exist, using Node instead might be helpful if your proxying infrastructure is non-existent or if you need a solution for local development. By this, I mean that you could build a client-side app with a Node.js development server for assets and proxying/stubbing API requests, while in production you’d handle such interactions with a dedicated proxy service (nginx, HAProxy, etc.).


Brokerage - Trading Software

Let’s get back to the application level. Another example where desktop software dominates, but could easily be replaced with a real-time web solution, is brokers’ trading software, used to track stock prices, perform calculations/technical analysis, and create graphs/charts.
Switching to a real-time web-based solution would allow brokers to easily switch workstations or working places. Soon, we might start seeing them on the beach in Florida.. or Ibiza.. or Bali.


Application Monitoring Dashboard

Another common use-case in which Node-with-web-sockets fits perfectly: tracking website visitors and visualizing their interactions in real-time.
You could be gathering real-time stats from your user, or even moving it to the next level by introducing targeted interactions with your visitors by opening a communication channel when they reach a specific point in your funnel. (If you’re interested, this idea is already being productized by CANDDi.)
Imagine how you could improve your business if you knew what your visitors were doing in real-time—if you could visualize their interactions. With the real-time, two-way sockets of Node.js, now you can.


System Monitoring Dashboard

Now, let’s visit the infrastructure side of things. Imagine, for example, a SaaS provider that wants to offer its users a service-monitoring page (e.g., GitHub’s status page). With the Node.js event-loop, we can create a powerful web-based dashboard that checks the services’ statuses in an asynchronous manner and pushes data to clients using websockets.
Both internal (intra-company) and public services’ statuses can be reported live and in real-time using this technology. Push that idea a little further and try to imagine a Network Operations Center (NOC) monitoring applications in a telecommunications operator, cloud/network/hosting provider, or some financial institution, all run on the open web stack backed by Node.js and websockets instead of Java and/or Java Applets.
Note: Don't try to build hard real-time systems in Node (i.e., systems requiring consistent response times). Erlang is probably a better choice for that class of application.

Where Node.js Can Be Used


Server-Side Web Applications

Node.js with Express.js can also be used to create classic web applications on the server-side. However, while possible, this request-response paradigm in which Node.js would be carrying around rendered HTML is not the most typical use-case. There are arguments to be made for and against this approach. Here are some facts to consider:
  • If your application doesn’t have any CPU-intensive computation, you can build it in JavaScript top-to-bottom, even down to the database level if you use a JSON-storage object DB like MongoDB. This eases development (including hiring) significantly.
  • Crawlers receive a fully-rendered HTML response, which is far more SEO-friendly than, say, a Single Page Application or a websockets app run on top of Node.js.
  • Any CPU-intensive computation will block Node.js’s responsiveness, so a threaded platform is a better approach. Alternatively, you could try scaling out the computation [*].
  • Using Node.js with a relational database is still quite a pain (see below for more detail). Do yourself a favour and pick any other environment like Rails, Django, or ASP.NET MVC if you’re performing relational operations.
[*] An alternative to these CPU-intensive computations is to create a highly scalable MQ-backed environment with back-end processing, keeping Node as a front-facing ‘clerk’ that handles client requests asynchronously.

Where Node.js Shouldn’t Be Used


Comparing Node.js with Express.js against Ruby on Rails, for example, there is a clear decision in favour of the latter when it comes to relational data access.
Relational DB tools for Node.js are still in their early stages; they’re rather immature and not as pleasant to work with. Rails, on the other hand, automagically provides data access setup right out of the box, together with DB schema migration support tools and other Gems (pun intended). Rails and its peer frameworks have mature and proven Active Record or Data Mapper data access layer implementations, which you’ll sorely miss if you try to replicate them in pure JavaScript.[*]
Still, if you’re really inclined to remain JS all-the-way (and ready to pull out some of your hair), keep an eye on Sequelize and Node ORM2—both are still immature, but they may eventually catch up.
[*] It’s possible and not uncommon to use Node solely as a front-end, while keeping your Rails back-end and its easy-access to a relational DB.


When it comes to heavy computation, Node.js is not the best platform around. No, you definitely don’t want to build a Fibonacci computation server in Node.js. In general, any CPU-intensive operation annuls all the throughput benefits Node offers with its event-driven, non-blocking I/O model, because any incoming requests will be blocked while the thread is occupied with your number-crunching.
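A tiny demonstration of that blocking effect (the Fibonacci function is the article’s own example; the timer is added here just to make the stall visible):

```javascript
// Naive recursive Fibonacci: pure CPU work, with no I/O to yield on.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// A timer due "immediately" still cannot fire until the synchronous
// computation releases the single thread.
let timerFired = false;
setTimeout(() => { timerFired = true; }, 0);

fib(30);                  // blocks the event loop for the whole call
console.log(timerFired);  // false: the callback never got a turn
```

Replace the timer with an incoming HTTP request and you have the scenario described above: every client waits while one request crunches numbers.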
As stated previously, Node.js is single-threaded and uses only a single CPU core. When it comes to adding concurrency on a multi-core server, the Node core team has done some work in the form of a cluster module [ref: http://nodejs.org/api/cluster.html]. You can also run several Node.js server instances fairly easily behind a reverse proxy such as nginx.
Even with clustering, you should still offload all heavy computation to background processes written in a more appropriate environment, and have them communicate via a message queue server like RabbitMQ.
Even though your background processing might run on the same server initially, such an approach has the potential for very high scalability: those background processing services can easily be distributed out to separate worker servers without any need to reconfigure the load on the front-facing web servers.
Of course, you’d use the same approach on other platforms too, but with Node.js you get that high reqs/sec throughput we’ve talked about, as each request is a small task handled very quickly and efficiently.


We’ve discussed Node.js from theory to practice, beginning with its goals and ambitions, and ending with its sweet spots and pitfalls. When people run into problems with Node, it almost always boils down to the fact that blocking operations are the root of all evil; 99% of Node misuses come as a direct consequence.
Remember: Node.js was never created to solve the compute scaling problem. It was created to solve the I/O scaling problem, which it does really well.
Why use Node.js? If your use case does not contain CPU intensive operations nor access any blocking resources, you can exploit the benefits of Node.js and enjoy fast and scalable network applications. Welcome to the real-time web.
This post was originally published on the Toptal Engineering Blog. We at i-visionblog.com are reposting it courtesy of the Toptal team. Want to publish your post on our blog? Drop us a mail or chat with me on Facebook/Google+. Sharing is caring.