Amazon EC2 Windows AMI Password Re-enable


              Recently, I had a scenario where I needed to launch a custom EBS-backed Windows AMI on Amazon EC2 and log in to the machine. Unfortunately, I didn't have the password for that AMI. If you right-click the instance and request the password (which triggers the "Get Windows Password" API call to the AWS service), you get the result "Password – Not available yet." It never asked me for the key file associated with the instance to decrypt the password, because password regeneration was disabled by default on this AMI. There is an EC2Config XML file whose 'Ec2SetPassword' element can be toggled between 'Enabled' and 'Disabled'. If you set this state to 'Enabled', it will regenerate a password for you; if the state is 'Disabled', you have to use the password that was set before the AMI was bundled.


      &lt;Plugin&gt;
        &lt;Name&gt;Ec2SetPassword&lt;/Name&gt;
        &lt;State&gt;Disabled&lt;/State&gt;
      &lt;/Plugin&gt;
      &lt;Plugin&gt;
        &lt;Name&gt;Ec2SetComputerName&lt;/Name&gt;
        &lt;State&gt;Disabled&lt;/State&gt;
      &lt;/Plugin&gt;
    ...

So here is what I did: I stopped the instance, detached the root volume, attached it to a Linux instance, and modified the Ec2ConfigService config file. The steps are detailed below.

  • Stopped the Windows instance and detached the root volume (/dev/sda1)
  • Launched a micro Linux instance, attached and mounted the NTFS root volume (mount -t ntfs-3g /dev/xvdf2 /mnt/temp/) on the Linux machine, opened the config file (/mnt/temp/Program Files/Amazon/Ec2ConfigService/Settings/config.xml), changed the "Ec2SetPassword" state from "Disabled" to "Enabled", and saved the file
  • Detached the volume from Linux and terminated the Linux machine
  • Attached the root volume to the Windows machine again and started the machine
  • A few seconds later, "Get Windows Password" brought up the password generation window, asked for the ".pem" file, and the password was generated

Afterwards, the XML looks like:


      &lt;Plugin&gt;
        &lt;Name&gt;Ec2SetPassword&lt;/Name&gt;
        &lt;State&gt;Enabled&lt;/State&gt;
      &lt;/Plugin&gt;
      ...

Done everything!!!

Akka – Really mountains of Concurrency


Akka – Concurrent & Fault-tolerant framework

Nowadays, concurrency, fault tolerance, and scalability are required in most applications. Most of the time we use the concurrency utilities in Java to achieve concurrency, but it is very hard to work at that level of abstraction: we need to manage thread locking and synchronization to avoid race conditions. Then I found the Akka framework; I would say it is a very thin layer for implementing concurrent processing in our applications. In Akka everything is an "Actor"; actors form a hierarchy with their child actors, and each actor also supervises its children.

How is fault tolerance accomplished in Akka? Through supervision: a failing actor is reported back to its parent, which decides how to recover. An actor is a lightweight process that performs its task in parallel as well as in a distributed manner.

By the way, from the Akka documentation about Actors:

Actors give you:

• Simple and high-level abstractions for concurrency and parallelism.
• Asynchronous, non-blocking and highly performant event-driven programming model.
• Very lightweight event-driven processes (approximately 2.7 million actors per GB RAM).

(Images: Akka flow – Example 1 and Akka flow – Example 2. Image courtesy – http://typesafe.com/)

Actor System

The Actor System is the top node in the tree. It is a best practice to keep one Actor System per application, since it is heavyweight and is also responsible for the initial configuration of your actor model. The actor system, as a collaborating ensemble of actors, is the natural unit for managing shared facilities like scheduling services, configuration, logging, etc. An actor is a container for State, Behavior, a Mailbox, Children and a Supervisor Strategy, all of which is encapsulated behind an Actor Reference; all of these pieces also come into play when an actor terminates.
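To make this concrete, here is a minimal sketch of defining an actor and creating it from an Actor System, using Akka's classic Java API; the Greeter actor and the message it handles are hypothetical, purely for illustration:

    import akka.actor.ActorRef;
    import akka.actor.ActorSystem;
    import akka.actor.Props;
    import akka.actor.UntypedActor;

    public class HelloAkka {

        // A hypothetical actor: its state, behavior and mailbox all live
        // behind the ActorRef handed out by the system.
        public static class Greeter extends UntypedActor {
            @Override
            public void onReceive(Object message) throws Exception {
                if (message instanceof String) {
                    System.out.println("Hello, " + message);
                } else {
                    unhandled(message);
                }
            }
        }

        public static void main(String[] args) {
            // One ActorSystem per application; it owns configuration,
            // logging and scheduling for the whole actor tree.
            ActorSystem system = ActorSystem.create("MySystem");
            ActorRef greeter = system.actorOf(Props.create(Greeter.class), "greeter");
            greeter.tell("Akka", ActorRef.noSender()); // asynchronous, non-blocking send
            system.shutdown();
        }
    }

Note that "MySystem" and "greeter" are just illustrative names; tell() returns immediately and the message is queued in the actor's mailbox.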

Durable Mailboxes

Akka supports a set of durable mailboxes. A durable mailbox is a replacement for the standard actor mailbox that persists its contents. What this means in practice is that if there are pending messages in the actor's mailbox when the node the actor resides on crashes, then when you restart the node, the actor will be able to continue processing as if nothing had happened, with all pending messages still in its mailbox.

The durable mailboxes currently supported are:

• FileBasedMailbox – backed by a journaling transaction log on the local file system
• RedisBasedMailbox – backed by Redis
• ZooKeeperBasedMailbox – backed by ZooKeeper
• BeanstalkBasedMailbox – backed by Beanstalkd
• MongoBasedMailbox – backed by MongoDB

We can set the priority order in which to process the messages from the mailbox.
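As a sketch of that, Akka ships an UnboundedPriorityMailbox you can extend; the mailbox class below and the string priorities it inspects are illustrative (the pattern follows the Akka documentation):

    import akka.actor.ActorSystem;
    import akka.dispatch.PriorityGenerator;
    import akka.dispatch.UnboundedPriorityMailbox;
    import com.typesafe.config.Config;

    // A mailbox that serves 'highpriority' messages before everything else.
    public class MyPrioMailbox extends UnboundedPriorityMailbox {

        public MyPrioMailbox(ActorSystem.Settings settings, Config config) {
            super(new PriorityGenerator() {
                @Override
                public int gen(Object message) {
                    if ("highpriority".equals(message)) return 0; // processed first
                    if ("lowpriority".equals(message)) return 2;  // processed last
                    return 1;                                     // everything else
                }
            });
        }
    }

The mailbox is then attached to an actor through its dispatcher/mailbox entry in the configuration.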

Actor Configurations

In Akka we can define, for every actor, its execution behavior, and we can also specify a cluster of actors in the configuration, scaling out at any point of time without modifying the code.

 ServerSys {
   include "common"
   akka {
     actor {
       provider = "akka.remote.RemoteActorRefProvider"
     }
     remote {
       transport = "akka.remote.netty.NettyRemoteTransport"
       netty {
         hostname = "127.0.0.1"
         port = 2552
       }
     }
   }
 }

 ClientSys {
   include "common"
   akka {
     actor {
       provider = "akka.remote.RemoteActorRefProvider"
       deployment {
         /remoteServerActor {
           remote = "akka://ServerSys@127.0.0.1:2552"
         }
       }
     }
   }
 }

For cluster configuration, the model is similar to the DHT model used in Cassandra; we can add / remove seeds at any time.

 akka {
   cluster {
     # Initial contact points of the cluster.
     # Nodes to join at startup if auto-join = on.
     # Comma separated full URIs defined by a string on the form of
     # "akka://system@hostname:port"
     # Leave as empty if the node should be a singleton cluster.
     seed-nodes = []

     # How long to wait for one of the seed nodes to reply to the initial join request.
     seed-node-timeout = 5s

     # Automatically join the seed-nodes at startup.
     # If seed-nodes is empty it will join itself and become a single node cluster.
     auto-join = on

     # Should the 'leader' in the cluster be allowed to automatically mark
     # unreachable nodes as DOWN?
     # Using auto-down implies that two separate clusters will automatically be
     # formed in case of network partition.
     auto-down = off
     ...
   }
 }

Message Dispatchers

This is the heart of the system; it handles inter-actor communication. Every ActorSystem has a default dispatcher that is used if nothing else is configured for an Actor. The default dispatcher can be configured, and by default it is a Dispatcher with a "fork-join-executor", which gives excellent performance in most cases.

There are four types of message dispatchers:

Dispatcher

This is an event-based dispatcher that binds a set of Actors to a thread pool. It is the default dispatcher used if one is not specified.

PinnedDispatcher

This dispatcher dedicates a unique thread for each actor using it; i.e. each actor will have its own thread pool with only one thread in the pool.

BalancingDispatcher

This is an executor-based, event-driven dispatcher that will try to redistribute work from busy actors to idle actors. It assumes that all the actors are in the same pool.

CallingThreadDispatcher

This dispatcher runs invocations on the current thread only. This dispatcher does not create any new threads, but it can be used from different threads concurrently for the same actor.
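For illustration, here is a hedged sketch of pinning an actor to its own dispatcher; the dispatcher id 'my-pinned-dispatcher' and its application.conf entry are assumptions, not part of any project discussed here:

    import akka.actor.ActorRef;
    import akka.actor.ActorSystem;
    import akka.actor.Props;
    import akka.actor.UntypedActor;

    public class DispatcherDemo {

        public static class Worker extends UntypedActor {
            @Override
            public void onReceive(Object message) throws Exception {
                System.out.println(getSelf() + " got: " + message);
            }
        }

        public static void main(String[] args) {
            // Assumed entry in application.conf:
            //   my-pinned-dispatcher {
            //     type = PinnedDispatcher
            //     executor = "thread-pool-executor"
            //   }
            ActorSystem system = ActorSystem.create("DispatcherSys");
            ActorRef worker = system.actorOf(
                    Props.create(Worker.class).withDispatcher("my-pinned-dispatcher"),
                    "worker");
            worker.tell("job-1", ActorRef.noSender());
            system.shutdown();
        }
    }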

Software Transactional Memory

This keeps a transactional dataset in memory with begin/commit/rollback semantics. It is very useful because, when a child actor fails, the parent can roll back the transaction.
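Akka's STM support is built on ScalaSTM; as a rough sketch using ScalaSTM's Java API (the account-transfer scenario is invented for illustration), an atomic block either commits all of its writes or rolls them all back:

    import scala.concurrent.stm.Ref;
    import static scala.concurrent.stm.japi.STM.atomic;
    import static scala.concurrent.stm.japi.STM.newRef;

    public class TransferDemo {

        // Two transactional, in-memory references (illustrative balances).
        static final Ref.View<Integer> from = newRef(100);
        static final Ref.View<Integer> to = newRef(0);

        public static void main(String[] args) {
            // Either both updates commit, or (if the block throws) both roll back.
            atomic(new Runnable() {
                public void run() {
                    from.set(from.get() - 30);
                    to.set(to.get() + 30);
                }
            });
            System.out.println(from.get() + " / " + to.get()); // prints 70 / 30
        }
    }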

Some example use-cases:

Transaction processing (online gaming, finance, statistics, social media, telecom, …), Service backend (any industry, any app), Batch processing ( any industry ), Communications hub ( telecom, web media, mobile media ), Game server (online gaming), BI/datamining/general purpose crunching and more similar.

To get more familiar with the Akka framework, please try the examples on GitHub –
https://github.com/write2munish/Akka-Essentials

Node.js Event driven versus traditional Multi Thread


Traditional Multi Threading
            Multi-threading – I love to hate this mechanism nowadays, since most of the time a performance war is happening inside the server. Earlier, I was happy that I had implemented highly parallel solutions, and of course they scaled very well. But the flip side, on the server end, is that the application container fights over memory and I/O. So threads have to be kept under a certain limit; heap allocation, reserving an amount of memory for every thread, is all big overhead.

Another important caveat is continuous polling for requests. Let me explain my own use-case here. I have a daemon thread which listens on a queue for new messages at regular intervals. Each message contains the URL of a file on a remote server. For every message, the parent thread spawns a new child thread, which reads the message, downloads the file from the remote server, extracts the content and then indexes it. Spawning of threads is controllable from a configuration property. It is all mostly I/O operations inside the server, and such a polling loop is a CPU hog: it can spike the CPU at 100% for the entire duration of the program that processes the messages. Though it is multi-threaded, each core still processes one thing at a time; hyper-threading is a dirty lie.
Event-Driven approach
What is event-driven? Simply put, an event is a type of signal to the program that something has happened. As a real-world scenario, let's take a school admissions program. A famous school has opened admissions for children, and people are trying to get admission for their kids.
There is a big queue in front of the admission office. The procedure is to fill in the form and pay the fee to get the seat. Assume there is one administrative officer (one thread) handling the admission formalities. Every person blocks him or her from serving other people until the form is filled in. The only real way to scale a thread-based system is to add more officers. This, however, has financial implications, in that you have to pay more people, and physical implications, in that you have to make room for the additional officer windows. In an event-based system, the administrative officer gives you the form and tells you to come back once you complete it. You step out of the queue, complete the form, then rejoin the same queue. Meanwhile, the officer can serve others in the same manner. So it is faster and more available.
Node.js with event-driven callbacks
Asynchronous non-blocking I/O is one of the main advantages of Node.js. Why is this approach so fast? It runs only one thread; whenever an I/O call happens, it is made in asynchronous mode, and Node.js gets a notification from the operating system (using epoll on Linux). So Node.js never waits for an I/O call to complete, since it works on an event callback mechanism; during that time, it serves other requests. It allocates a small amount of heap memory per event and does not pile up stacks in memory even at high concurrency levels. It scales very well without such overheads.
It is a pleasure to use a lightweight yet very powerful framework like Node.js and avoid deadlock situations, race conditions, heavy context-switching headaches and big memory-consumption issues.

Play framework and Twitter’s Bootstrap – tutorial


              Play is a highly efficient, high-productivity framework; it allows users to develop applications in Java or Scala, or both. It genuinely helps in developing modern web applications in a very effective manner. I don't want to list all the features this framework offers; you can refer to the Play framework website to learn more about it. The features that attracted me most:
  • Async HTTP Programming
  • Template Engine
  • In-built Cache
  • In-built Webservice binder
  • Easy integration with AKKA framework
  • WebSocket support
    Twitter's Bootstrap
              Twitter's Bootstrap provides flexible JavaScript and stylesheets. I have never seen such portable JavaScript & CSS before. All we need to do is download Bootstrap's CSS and JavaScript, import them into our HTML, and we can use its features. If we want to integrate the Fluid Grid,
             * Include the CSS & JavaScript files
             * Define the Fluid Grid classes in the div
          <div class="row-fluid">
            <div class="span4">...</div>
            <div class="span8">...</div>
         </div>
     
The tutorial below explains:

1. How to create an asynchronous HTTP method using Akka in a Play application?
2. How to expose the result as a JSON response?
3. How to call the async HTTP service from jQuery?
4. What does the template engine do?
5. Also, how can we effectively utilize Bootstrap's components?

The application below is hosted on GitHub; this sample simply shows a message in a popup when the button on the page is clicked.
        Play follows an MVC framework structure; the sample project looks like:

Play Package Structure

Under "controllers", 'Application' is the class which controls both the model and the view. Controller classes have the responsibility of serving data to the views via web services. All the web-bound methods are static and emit a "Result" as the HTTP response.

   public class Application extends Controller {

       /**
        * Renders the home page with the given string
        */
       public static Result index() {
           return ok(asyncworld.render("User"));
       }
       ...
   }

    
The method above returns 'ok' with the string "User". In the "views" package you can find a file called "asyncworld.scala.html", which renders the HTML along with the string it receives.
Below is the Scala HTML template, which is tightly integrated with 'Application.java':
  @(name: String)
<!DOCTYPE html>

<html>
    <head>
        <title>Welcome to Play Application demo</title>
        <link rel="stylesheet" media="screen" href="@routes.Assets.at("stylesheets/main.css")">
        <link rel="shortcut icon" type="image/png" href="@routes.Assets.at("images/favicon.png")">
        <link rel="stylesheet" media="screen" href="@routes.Assets.at("stylesheets/bootstrap.min.css")">
        <link rel="stylesheet" media="screen" href="@routes.Assets.at("stylesheets/bootstrap.css")">
         ...
         ...
         <body>
	    <h2>Welcome @name!</h2>
          ...
          ...
    </html>
In the template above, @name gets its value from the controller. You can see the value passed in the controller snippet below:
   return ok(asyncworld.render("User"));
The asyncworld template accepts a String, so we pass the string value "User" to it. At runtime, @name is replaced with the value "User".
In our example, we have a welcome message and a button in the window. On clicking the button, jQuery calls the service, which is bound to an async method in the controller class. This method sets the response as a JSON object whose values are a sample username and the current timestamp. Play is really amazing here: for binding to a JSON response we don't need to depend on any third-party libraries; many options are built in. The async method is below:
   /**
    * Asynchronous method to provide the JSON result
    */
   public static Result fetchPopuMessage(final String text) {
       return async(Akka.future(new Callable<String>() {

           public String call() throws Exception {
               return text;
           }
       }).map(new Function<String, Result>() {

           @Override
           public Result apply(String data) throws Throwable {
               Map<String, String> jsonData = new HashMap<String, String>();

               jsonData.put(AsyncConstants.USER_NAME, data);
               jsonData.put(AsyncConstants.CURRENT_TIME, new Date().toString());

               return ok(toJson(jsonData));
           }
       }));
   }
   
The asynchronous method gets its input from the service request parameter. The call() method returns the data to Play's Function, whose apply() method binds the data into a JSON object using the 'toJson(..)' method.
The routes file is the configuration template for the service URIs.
   GET   /                         controllers.Application.index()
   GET   /asyncresponse/:name      controllers.Application.fetchPopuMessage(name: String)
  
Now we will see how to consume this service and render the data in Bootstrap's popup using jQuery. Bootstrap's popup div is shown here:
   <!-- This is Bootstrap's modal popup plugin. Used to populate the response
                      by clicking on the button -->

       <div class="modal hide" id="profile_modal" style="width:350px;height:200px">
         <div class="modal-header">
           <button type="button" class="close" data-dismiss="modal">x</button>
           <div class="popup_header" style="margin-top:0">
             <div id="myheader"><h4>Welcome to Play Application</h4></div>
           </div>
         </div>
         <div class="modal-body">
           <div id="mypopup"></div>
         </div>
       </div>
  
The jQuery script is the important code block, which fills the data into the popup. This JavaScript calls the web service and fetches the JSON response; the JSON data is then parsed and rendered into the popup div. The JavaScript snippet is below:
    <script type="text/javascript">
           $(function() {
        	   $("#click").click(function(){
        		   $("#mypopup").empty();
        		   $('#profile_modal').modal('show')
        		   $.get("asyncresponse/RaghavPrabhu", function(data, textstatus){
        			   $("#mypopup").append("<h4>Hello "+ data.username+",</h4><br>");
        			   $("#mypopup").append("<h5>Current Date/Time is : "+data.currenttime+",</h5>")
        		   });

        	   });
           });
          </script>
  
      Below is the home screen. The value "User" is rendered by the Scala template engine via 'Welcome @name!'.
Bootstrap’s popup message output,

This project is hosted on GitHub. Use this sample tutorial as a starting point for your real-world applications using Play and Bootstrap.
Please post your feedback!!!

MongoDB / Cassandra JPA Service using Kundera


           Earlier, I discussed several NoSQL datastores in a previous blog post. Schema-less datastores are really needed to make an application scale at any point of time. MySQL fits only up to a small dataset; when the data grows, it is very hard to scale MySQL using shards/clusters. I chose MongoDB, one of the best document-oriented, schema-less stores, as the immediate migration path from SQL, because MongoDB is very easy to install. For querying complex data in other NoSQL DBs, you need to write a separate Map/Reduce program to segregate your data, but in MongoDB we can simply write queries to analyze it.
Java object mapping for a NoSQL datastore works much like SQL ORM (Object-Relational Mapping) via JPA. I am sure developers will be happier using JPA with NoSQL. I found a NoSQL JPA library named Kundera. The beauty of this library is that it supports many NoSQL and relational DBs, which we can switch between by simply changing the configuration. The table below lists the DBs Kundera supports.
 
I wrote a core API using this Kundera library for performing any set of operations irrespective of the datastore. It uses JPA to persist the data, and all the CRUD (Create, Read, Update & Delete) operations can be performed via a RESTful web service. Kundera's persistence manager is configurable, so we can map any datastore's details. The Entity Manager connection object is set up from the persistence unit as below.
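As a rough sketch of that setup (the persistence-unit name "mongo_pu" is an assumption for illustration; the real name and the Kundera provider/host details live in persistence.xml):

    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class JpaBootstrap {

        public static void main(String[] args) {
            // "mongo_pu" is an assumed persistence-unit name; in persistence.xml
            // it would reference Kundera's persistence provider and the MongoDB
            // host/port, which is all you change to switch datastores.
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("mongo_pu");
            EntityManager em = emf.createEntityManager();

            // ... CRUD via em.persist(...), em.find(...), em.merge(...), em.remove(...)

            em.close();
            emf.close();
        }
    }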

 JPA's persistence manager keeps the connection to the MongoDB datastore. We can easily switch over to Cassandra, HBase or any RDBMS by simply modifying the connection URL and DB parameters in "persistence.xml". In my API, I have a class called "AbstractJPA" which does all the CRUD operations.
   
 The database mapping object class maps the table name, column fields and id column to the Java class name and its property fields respectively. The mappings are all annotation-based in the Java class.
The entity name is the actual DB table name, mapped in the annotation header of the data-object class. The "@Id" annotation marks the primary-key column of the table, and "@Column" describes the column details. In this API, the generic DAO & Spring MVC patterns are used for the datastore operations, and the web service is exposed through a Spring controller & service. The abstract flow is shown below.
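As an aside, a hypothetical entity mapping might look like the sketch below; the table, schema and column names are assumptions (Kundera's @Table schema attribute takes a "database@persistence-unit" form):

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Table;

    // Hypothetical mapping: a "users" collection in the "mydb" database
    // of the assumed "mongo_pu" persistence unit.
    @Entity
    @Table(name = "users", schema = "mydb@mongo_pu")
    public class User {

        @Id
        @Column(name = "user_id")
        private String userId;   // primary-key column

        @Column(name = "user_name")
        private String userName; // regular column

        public String getUserId() { return userId; }
        public void setUserId(String userId) { this.userId = userId; }
        public String getUserName() { return userName; }
        public void setUserName(String userName) { this.userName = userName; }
    }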

Finally, the Spring controller exposes all the operations as a service.
You can find the code on GitHub.

Export war in GWT using maven


GWT is a framework for building large, high-performance rich internet applications (RIAs) in a rapid development mode. But the structure of the project is not similar to a traditional J2EE web application project, and the one main problem is that exporting the GWT project as a war from Eclipse is tricky.
The simple solution is to build the project under Maven, which solves the problem; you also inherit Maven's features in your GWT project. I assume the reader of this post has basic knowledge of Maven. The tutorial and screenshots below help you create a GWT-Maven project within minutes. For this tutorial I used the Spring Tool Suite (STS); you can use other Eclipse IDEs as well.
 In STS/Eclipse, choose File -> New -> Other -> Maven -> Maven Project
       
 Choose the GWT Maven archetype from the available list

       In the next wizard, under archetype parameters, give the Group Id, Artifact Id and Module name & value.
 The module value that you give in the wizard will be your web module name, and the HTML & CSS files will be named like 'mygwt.html' & 'mygwt.css'. The project structure will look similar to the screen below.
Maven will create the three default packages – client, server & shared. The plugin has a problem creating the Async Java file by default, so your code will throw an error.
No need to worry: there are GWT Maven goals available to get it into running mode. The goals are,
                                       * gwt:generateAsync
                                       * gwt:i18n
Execute the above two goals by right-clicking on the project and opening the 'Run Configuration' wizard, then creating new Maven configurations.
 Create the goal for i18n just like the generateAsync goal. These goals will automatically create the classes "GreetingServiceAsync.java" & "Messages.java" under ../target/generated-sources/…/client/
Copy these files and paste them into the client package. Now the IDE resolves the error by recognizing these classes, and your Maven-structured GWT project is ready to test and package as a war file. There is one more Maven goal that enables the GWT development environment: "gwt:run". The development mode and the web app are shown running in the screens below.
Copy the URL or click the "Launch Default Browser" button, and you will be able to see the application running in the browser.

Now that development is complete, you need to package your project as a war file. This is very simple to build using Maven: execute the goals 'clean compile package' in the 'Run Configuration' wizard. This set of goals will clean your target, compile the packages – all permutations – and build the package as a war.
The packaged war will have the proper structure and can be deployed without any issues on the Catalina (Tomcat) container.


Linux Cron Jobs

In certain cases, you need to run background tasks. In Linux, the default scheduled tasks are called "Cron Jobs". To view the list of cron jobs configured on the machine, use the following command:

crontab -l
     # m h dom mon dow command
     33 11 * * * touch /mnt/test.sample >> /var/log/test.log
     0 8 * * * /mnt/CloudSmart/Start.sh > /var/log/cloudsmart_start.out 2>&1
     0 21 * * * /mnt/CloudSmart/Stop.sh > /var/log/cloudsmart_stop.out 2>&1
 

  To edit or add new cron jobs, use the command below:

      crontab -e

The first time, it will ask you to choose an editor such as vi, nano etc. Every job is represented by five time fields – the '5 stars' – followed by the command.

e.g.,  * * * * * touch /mnt/test.sample >> /var/log/test.log

5 stars : scheduling fields

             1. Minute field – 0-59
             2. Hour field – 0-23
             3. Day of month – 1-31
             4. Month field – 1-12
             5. Day of week – 0-6 (Sun – 0, Mon – 1, Tue – 2, Wed – 3, Thu – 4, Fri – 5, Sat – 6)
 
           Note: the fields are separated by whitespace – a space or a tab both work; just keep each field clearly separated in your cron entry.
 

   In my scenario above, I want to execute the jobs on a daily basis: start the cloud machine (a virtual machine) daily at 8.00 AM and stop it at 9.00 PM. My cron jobs:

 
                0 8 * * * /mnt/CloudSmart/Start.sh > /var/log/cloudsmart_start.out 2>&1
                     0 – at 0 minutes
                     8 – at 8 AM
                     * – every day of the month (1-31)
                     * – every month (1-12)
                     * – every day of the week
                     /mnt/CloudSmart/Start.sh – shell script that performs the task
                     /var/log/cloudsmart_start.out 2>&1 – log output (stdout and stderr)
              
                0 21 * * * /mnt/CloudSmart/Stop.sh > /var/log/cloudsmart_stop.out 2>&1
                     0 – at 0 minutes
                     21 – at 9 PM
                     * – every day of the month (1-31)
                     * – every month (1-12)
                     * – every day of the week
                     /mnt/CloudSmart/Stop.sh – shell script that performs the task
                     /var/log/cloudsmart_stop.out 2>&1 – log output (stdout and stderr)
                     

    If I want to execute the tasks only on weekdays:

               0 8 * * 1-5 /mnt/CloudSmart/Start.sh > /var/log/cloudsmart_start.out 2>&1
               0 21 * * 1-5 /mnt/CloudSmart/Stop.sh > /var/log/cloudsmart_stop.out 2>&1
                           – these jobs execute only on Monday to Friday
 

            When you save and exit, crontab installs the updated jobs automatically. If you edit the system-wide crontab files directly, you can restart the cron service to be safe:

           /etc/init.d/cron restart