Friday, October 3, 2014

PostgreSQL Commands

Following are some of the commands I thought would be very useful for newbie developers.
  • Connect to the DB
/usr/local/pgsql/bin/psql -U postgres [DB]
  • Execute postgres functions
\i /test/query/run.sql
  • How to get the difference between two schemas
             Here I have used apgdiff
/usr/java/[JDK Version]/bin/java -jar apgdiff-xxxxx.jar target_db.sql source_db.sql > diff.sql
  • How to get the dump
/usr/local/pgsql/bin/pg_dump --file=/home/uploads/schema_backup/[file name].sql --schema=[schema name] -U [user name] [DB Name]
  • How to restore a schema 
/usr/local/pgsql/bin/psql -U [user name] --pset pager -f /home/uploads/pgdump/[file name].sql [DB Name]
  • How to connect to the database as a specific user
/usr/local/pgsql/bin/psql -U [user name] [DB Name]
  • How to backup only the selected table
pg_dump --file=/rezsystem/backup/[file name].sql --host=[connect IP] --table=[Table name] --username=[user name] --schema=[schema name] [DB Name]
  • How to backup only the table structure
pg_dump --file=$1 --host=$2 -U $3 --schema=$4 --schema-only $5 > $6 2>&1
           ex. pg_dump --file=/rezsystem/rm_upload/[file name].sql --host=localhost -U [user name] --schema=[schema name] --schema-only [DB Name] > /rezsystem/rm_upload/[file name].sql 2>&1
  • How to run bulk DDL statements from a single SQL file
psql --file=/rezrelmgmt/uploads/Lsi/[file name].sql --host=localhost --username=postgres --quiet --tuples-only [DB Name]
  • How to check the running database processes and their status
ps auxww | grep ^postgres
  • How to kill the exact process
kill -INT [process ID]
  • How to check the total connections
ps ax | grep post | wc -l
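
Putting two of these together, a typical dump-and-restore cycle for a single schema looks like the following (the schema, database, and file names here are illustrative):

/usr/local/pgsql/bin/pg_dump --file=/home/uploads/schema_backup/hotel_live.sql --schema=hotel_live -U postgres mydb
/usr/local/pgsql/bin/psql -U postgres --pset pager -f /home/uploads/schema_backup/hotel_live.sql mydb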

Monday, June 30, 2014

How to Implement Audit Functionality In PostgreSQL

As I remember, six years back in December 2008, we were researching a better audit module to audit all transactions (create, modify, delete) in PostgreSQL 8.0.3. It was a Friday when I started to implement this, and I spent a couple of days, including Saturday and Sunday, to get it completed. By Monday morning I was able to add the audit module to the existing product. From then until the time I'm writing this blog, the audit functionality has been working perfectly without any issues, even through upgrades of the PostgreSQL versions.

So I thought I'd share this experience with you all: when we are eager to do something, and when we have the determination and courage, we are very keen to get things done, whether it is the weekend or 1 - 2 am in the morning. I know most software development enthusiasts would agree with me.

While I was researching on this, I found a way to implement the audit module with PL/Tcl language.
PL/Tcl is a loadable procedural language for the PostgreSQL database system that enables the Tcl language to be used to write functions and trigger procedures.

Please allow me to explain how we can get this implemented easily. Steps 1 - 3 have to be implemented only once. Thereafter, if you need to audit a new table, just follow steps 4 - 5 (create the audit table and the trigger). Pretty easy, huh? Let's start...

  • Step 1 - In order to implement the audit module, install the PL/Tcl language.
 
createlang -U postgres pltcl <DB-Name>
  • Step 2 - Create the Function -  log_to_audit_table
Function: log_to_audit_table
Arguments: (none)
Returns: trigger
Programming language: pltcl
Definition
 
#This function is created to audit the data for all the modules
#Created By : Kosala De Silva
#Created On : 12 - Dec - 2008

spi_exec "SELECT CURRENT_USER AS tguser"
spi_exec "SELECT c.relname AS tgname,n.nspname AS schema_name
           FROM pg_class c , pg_namespace n
           WHERE n.oid = c.relnamespace
           AND relfilenode = $TG_relid"

 
if {[string equal -nocase $tgname audit_table]} { return OK }

set pk_name ""

 spi_exec "SELECT a.attname AS pk_name FROM pg_class c, pg_attribute a, pg_index i
 WHERE c.relname = '$tgname'
 AND c.oid=i.indrelid
 AND a.attnum > 0
 AND a.attrelid = i.indexrelid
 AND i.indisprimary='t'"

  spi_exec " select substring('$schema_name',0,instr('$schema_name','_', 1))||'_admin'
  AS main_schema"

 spi_exec "SELECT audit_table_name AS m_aud_tbl_name
 FROM $main_schema.audit_table_mapping
 where schema_name = '$schema_name'
 and table_name = '$tgname'"

switch $TG_op {
INSERT {
  set pk_value ""

  #get PK value
  foreach field $TG_relatts {
    if {[string equal -nocase [lindex [array get NEW $field] 0] $pk_name]} {
      set pk_value [lindex [array get NEW $field] 1]
      break;
    }
  }
  #log inserted row values
  foreach field $TG_relatts {
    if {! [string equal -nocase [lindex [array get NEW $field] 0] $pk_name]} {
      set modified_field [lindex [array get NEW $field] 0]
      set current_value [lindex [array get NEW $field] 1]
      spi_exec -array C "INSERT INTO $schema_name.$m_aud_tbl_name
         (time_stamp,pg_user,tbl_name,fld_name,primarykey_name,primarykey_value,modify_type,old_value,new_value)
        VALUES (CURRENT_TIMESTAMP, '$tguser', '$tgname', '$modified_field', '$pk_name', '$pk_value', '$TG_op', NULL, '$current_value')"
    }
  }
}
UPDATE {
  set pk_value ""

  #get PK value
  foreach field $TG_relatts {
    if {[string equal -nocase [lindex [array get NEW $field] 0] $pk_name]} {
      set pk_value [lindex [array get NEW $field] 1]
      break;
    }
  }
  #log changed row values
  foreach field $TG_relatts {
    #check changed fields
    if {[string equal -nocase [array get NEW $field] [array get OLD $field]] == 0} {
      set modified_field [lindex [array get OLD $field] 0]
      if {[string compare $modified_field ""] == 0} {
        set modified_field [lindex  [array get NEW $field] 0]
      }
      set previous_value [lindex [array get OLD $field] 1]
      set current_value  [lindex [array get NEW $field] 1]
      spi_exec -array C "INSERT INTO $schema_name.$m_aud_tbl_name
        (time_stamp,pg_user,tbl_name,fld_name,primarykey_name,primarykey_value,modify_type,old_value,new_value)
        VALUES (CURRENT_TIMESTAMP, '$tguser', '$tgname', '$modified_field', '$pk_name', '$pk_value', '$TG_op', '$previous_value', '$current_value')"
    }
  }
}
DELETE {
  set pk_value ""

  #get PK value
  foreach field $TG_relatts {
    if {[string equal -nocase [lindex [array get OLD $field] 0] $pk_name]} {
      set pk_value [lindex [array get OLD $field] 1]
      break;
    }
  }
  #log deleted row values
  foreach field $TG_relatts {
    if {! [string equal -nocase [lindex [array get OLD $field] 0] $pk_name]} {
      set modified_field [lindex [array get OLD $field] 0]
      set previous_value [lindex [array get OLD $field] 1]
      spi_exec -array C "INSERT INTO $schema_name.$m_aud_tbl_name
        (time_stamp,pg_user,tbl_name,fld_name,primarykey_name,primarykey_value,modify_type,old_value,new_value)
        VALUES (CURRENT_TIMESTAMP, '$tguser', '$tgname', '$modified_field', '$pk_name', '$pk_value', '$TG_op', '$previous_value', NULL)"
    }
  }
}
}
return OK


Note : 
The main function should be created in the public schema, so it can be used for audit tracking in any system. ex. "public"."log_to_audit_table" ()
This should be executed only once per DB server.
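
For reference, a minimal sketch of how the definition above can be wrapped into a CREATE FUNCTION statement (the body is elided; paste the PL/Tcl definition from Step 2 between the dollar quotes):

CREATE OR REPLACE FUNCTION "public"."log_to_audit_table" () RETURNS trigger AS
$body$
# ... paste the PL/Tcl definition from Step 2 here ...
$body$
LANGUAGE pltcl;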

  • Step 3 - Create a table called audit_table_mapping
CREATE TABLE "<Schema Name>"."audit_table_mapping" (
  "id" INTEGER NOT NULL, 
  "schema_name" VARCHAR(100), 
  "table_name" VARCHAR(200), 
  "audit_table_name" VARCHAR(200), 
  CONSTRAINT "audit_table_mapping_pkey" PRIMARY KEY("id")
);
Note : 
When you implement the audit for each table, there should be a mapping record in this table, as one audit mapping table can be used to audit multiple tables.
The DDL script should be executed only once.
A new record should be inserted for each audited table.
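
For example, a mapping record for a hypothetical reservation table (the schema and table names here are illustrative) could be inserted like this:

INSERT INTO "<Schema Name>"."audit_table_mapping" ("id", "schema_name", "table_name", "audit_table_name")
VALUES (1, 'hotel_live', 'reservation', 'audit_reservation');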


  • Step 4 - Create the audit table into which the audited data will be written

    CREATE TABLE "<Schema Name>"."<audit_Table Name>" (
      "time_stamp" TIMESTAMP WITH TIME ZONE,
      "pg_user" VARCHAR(30),
      "tbl_name" VARCHAR(30),
      "fld_name" VARCHAR(30),
      "primarykey_name" VARCHAR(30),
      "primarykey_value" VARCHAR(40),
      "modify_type" CHAR(6),
      "old_value" TEXT,
      "new_value" TEXT
    ) ;
    Note:
    You can use the same DDL script to create the other audit tables as well. All you have to do is change the audit table name. ex. audit_[table name]

    • Step 5 - Create the Trigger
     
    CREATE TRIGGER "trig_audit_<audit_Table Name>"
    AFTER INSERT OR DELETE OR UPDATE ON "<Schema Name>"."<Table Name>"
    FOR EACH ROW
    EXECUTE PROCEDURE "log_to_audit_table" ();
    
    Note:
    The trigger should be named as follows: trig_audit_[table name]

    Once the above steps are done you are good to go. Whenever you insert / modify / delete a row, the change will be recorded in the audit table. The advantage of using this functionality is that the audit table holds only the modified fields, with both the old value and the new value.
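
    As a quick sanity check (continuing with the illustrative names from the Step 3 example; the status and reservation_id columns are assumptions), an update like the following should produce one audit row per changed column:

    UPDATE "hotel_live"."reservation" SET "status" = 'CANCELLED' WHERE "reservation_id" = 100;

    SELECT "fld_name", "old_value", "new_value", "modify_type"
    FROM "hotel_live"."audit_reservation"
    ORDER BY "time_stamp" DESC;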
    
    

    Tuesday, June 17, 2014

    CVS Commands


    Following are the basic CVS commands

  • Update the project - Go to the relevant folder and issue the following command
            Option 1 :
            cvs -d:pserver:user@cvsServer:/usr/local/cvsroot -Q update
            Option 2 :
            cvs -d:pserver:user@cvsServer:/usr/local/cvsroot update -d Project_V1
    • Update a file
          cvs -d:pserver:user@cvsServer:/usr/local/cvsroot update -d "ProConfirmSv.java"
    • Update and pull down new folders
          cvs -d:pserver:user@cvsServer:/usr/local/cvsroot update -d
    • Add and commit a file
    cvs -d:pserver:user@cvsServer:/usr/local/cvsroot add ModificationUtil.java
      • then go to the relevant folder and issue the following command
           cvs -d:pserver:user@cvsServer:/usr/local/cvsroot commit -m '[Message]'
    • Remove a branch 
    Option 1
         cvs -d:pserver:user@cvsServer:/usr/local/cvsroot rtag -d -B [TagName] [Project]
    Option 2
         cvs "-q" "checkout" "DocLibrary"
    • Remove a tag
         cvs -d:pserver:user@cvsServer:/usr/local/cvsroot rtag -d Project_V1 [Module]
    cvs rtag -d Project_V1 [Module]
    • Checkout a project - checkout HEAD
    cvs -d:pserver:user@cvsServer:/usr/local/cvsroot checkout Project
    • Checkout a branch 
    1. go to the workspace
    cvs -d:pserver:user@cvsServer:/usr/local/cvsroot checkout -r Project_V1_QA Project
    2. to check out into a renamed folder, use -d
    cvs checkout -d [folder name] [module name]
    ex
    cvs -d:pserver:user@cvsServer:/usr/local/cvsroot checkout -r Project_V1 Project
    • Check the status of files, per file or per folder - cvs status
    A summary of CVS commands:
            add          Add a new file/directory to the repository
            admin        Administration front end for rcs
            annotate     Show last revision where each line was modified
            checkout     Checkout sources for editing
            commit       Check files into the repository
            diff         Show differences between revisions
            edit         Get ready to edit a watched file
            editors      See who is editing a watched file
            export       Export sources from CVS, similar to checkout
            history      Show repository access history
            import       Import sources into CVS, using vendor branches
            init         Create a CVS repository if it doesn't exist
            kserver      Kerberos server mode
            log          Print out history information for files
            login        Prompt for password for authenticating server
            logout       Removes entry in .cvspass for remote repository
            pserver      Password server mode
            rannotate    Show last revision where each line of module was modified
            rdiff        Create 'patch' format diffs between releases
            release      Indicate that a Module is no longer in use
            remove       Remove an entry from the repository
            rlog         Print out history information for a module
            rtag         Add a symbolic tag to a module
            server       Server mode
            status       Display status information on checked out files
            tag          Add a symbolic tag to checked out version of files
            unedit       Undo an edit command
            update       Bring work tree in sync with repository
            version      Show current CVS version(s)
            watch        Set watches
            watchers     See who is watching a file
    (Specify the --help option for a list of other help options)
    
    

    Friday, May 30, 2014

    Git - Resolve Merge conflicts


    When you pull changes from the Git server you might get conflicts in your code. Let's see how to resolve those conflicts.

    You can use git mergetool to inspect the conflicts. It launches a merge conflict resolution tool to help you resolve them.
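
    When a pull stops on a conflict, Git marks each conflicting region in the file as shown below; git checkout --ours keeps the side above the ======= marker, and --theirs keeps the side below it:

    <<<<<<< HEAD
    our local change
    =======
    their incoming change
    >>>>>>> master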

    Additionally, you can resolve conflicts in a file by running the commands below.

    git checkout --ours <your File name> 
    git checkout --theirs <your File name> 
    git add <your File name> 
    git commit -m "Resolved conflicts"
    And then pull the changes with

    git pull origin master

    You will see the message similar to below

    From ssh://[git URL]/projectname
    
     * branch            master     -> FETCH_HEAD
    
    Already up-to-date
    
    


    Sunday, May 4, 2014

    Apache Camel - EIP Pattern Splitter

    The Splitter from the EIP patterns allows you to split a message into a number of pieces and process them individually.

    You need to specify a Splitter as split(). In earlier versions of Camel, you needed to use splitter().

    Below are samples of how we can implement the Splitter with the Java DSL.

    Sample 1 - Split the message body on the @ sign.

     CamelContext context = new DefaultCamelContext();
    
      ProducerTemplate camelTemplate = context.createProducerTemplate();
    
      context.addRoutes(new RouteBuilder() {
    
       @Override
       public void configure() throws Exception {
        // this route starts from the direct:start endpoint
        // the body is then split based on the @ separator
        // the splitter in Camel supports InOut as well, and for that we
        // need to be able to aggregate what response we need to send back,
        // so we provide our own strategy with the class AggregationStrategy.
        from("direct:start")
          .split(body().tokenize("@"), new AggregationStrategy())
          // each split message is then sent to this bean where
          // we can process it
          .to("bean:common.SearchRequestService?method=handleOrder")
          // this is important to end the splitter route as we do
          // not want to do more routing
          // on each split message
          .end()
          // after we have split and handled each message we
          // want to send a single combined
          // response back to the original caller, so we let this
          // bean build it for us
          // this bean will receive the result of the aggregation
          // strategy: AggregationStrategy
          .to("bean:common.SearchRequestService?method=buildCombinedResponse");
    
       }
      });
      context.start();
      camelTemplate.sendBodyAndHeader("direct:start", "A@B@C",
        "searchCriteria", "headingValue");
    
      context.stop();
    
    
    The aggregate method of the AggregationStrategy class:
     public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
      // put hotel search criteria together in old exchange 
      // by adding the search criteria from new exchange
    
      if (oldExchange == null) {
       // the first time we aggregate we only have the new exchange,
       // so we just return it
       return newExchange;
      }
    
      String requests = oldExchange.getIn().getBody(String.class);
      String newLine = newExchange.getIn().getBody(String.class);
    
      logger.info("Aggregate old requests: " + requests);
      logger.info("Aggregate new requests: " + newLine);
    
      // put Request together separating by semi colon
      requests = requests + ";" + newLine;
      // put combined Request back on old to preserve it
      oldExchange.getIn().setBody(requests);
    
      // return old as this is the one that has all the Request gathered until
      // now
      return oldExchange;
     }
    
    Sample 2 - Split the message with split().method()
      CamelContext context = new DefaultCamelContext();
    
      ProducerTemplate camelTemplate = context.createProducerTemplate();
      context.addRoutes(new RouteBuilder() {
       @Override
       public void configure() throws Exception {
        from("direct:start").to("log:+++before+++?showHeaders=true")
          .split().method(MySplitterBean.class, "splitBody")
          .streaming().to("log:+++after+++?showHeaders=true")
          .choice().when(header("foo").contains("bar"))
          .to("mock:mock").otherwise().to("mock:error");
       }
      });
      context.start();
      camelTemplate.sendBodyAndHeader("direct:start", "msg1,msg2", "foo",
        "bar");
    
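    Sample 2 references a MySplitterBean, which is not shown in the post. Here is a minimal sketch of what such a bean could look like (the class and method names follow the route above; splitting on a comma is an assumption based on the "msg1,msg2" body):

      import java.util.Arrays;
      import java.util.List;

      public class MySplitterBean {

       // Camel invokes this method via split().method(MySplitterBean.class, "splitBody");
       // each element of the returned list becomes the body of a separate Exchange.
       public List<String> splitBody(String body) {
        return Arrays.asList(body.split(","));
       }
      }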

    Wednesday, April 9, 2014

    Apache Camel - Request / Reply Pattern with Java DSL

    Apache Camel is a powerful yet lightweight integration framework. It implements all the EIPs, and you can easily integrate different applications using the required patterns. You can use Java, Spring XML, Scala or Groovy. Almost every technology you can imagine is available, for example HTTP, FTP, JMS, EJB, JPA, RMI, JMX, LDAP, Netty, and many, many more (of course most ESBs also offer support for them).

    You can get more information on Apache Camel by going through the following link - Apache Camel

    There are many ways that we can use message routing, and the article below will illustrate one way to use the Request / Reply pattern with the Java DSL. Here I'm using ActiveMQ as my message broker.

    Apache ActiveMQ is one of the most popular and powerful open source messaging and integration patterns servers. Apache ActiveMQ is fast, supports many cross-language clients and protocols, and comes with easy-to-use Enterprise Integration Patterns. I like it because it has a lot of features and useful tools. Also, the latest ActiveMQ (5.9.0) is bundled with the hawtio web console, with which you can monitor all your queues.

    With hawtio itself you can monitor all your Camel contexts, routes, ActiveMQ queues, etc.

    You can get more information on Apache ActiveMQ by going through following link - Apache ActiveMQ

    Now let's check the sample code.

    Producer Sample

     public void myMethod() throws Exception {
    
      CamelContext context = null;
      ProducerTemplate camelTemplate = null;
      
      context = new DefaultCamelContext();
      context.getProperties().put(Exchange.LOG_DEBUG_BODY_STREAMS, "true");
      
      // Connect to embedded ActiveMQComponent JMS broker
      ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(
        "failover:(tcp://localhost:61616,tcp://localhost:61617)?randomize=true"
          + "&priorityBackup=true&timeout=40000");
      context.addComponent("jms",
        JmsComponent.jmsComponentAutoAcknowledge(connectionFactory));
    
      //Create the Template
      camelTemplate = context.createProducerTemplate();
      Object response = camelTemplate
        .requestBodyAndHeader(
          "jms:myQueue.queue?exchangePattern=InOut&requestTimeout=40000&timeToLive=40000"
            + "&asyncConsumer=true&asyncStartListener=true&concurrentConsumers=10"
            + "&useMessageIDAsCorrelationID=true",
            "mBodyMsg", "HeaderString", "HeaderValue");
      
      camelTemplate.stop();
      context.stop();
    
     }
    

    If you are implementing the InOut pattern, you need to ensure that the processed data is set back on the same message queue. In order to achieve this, you need to ensure that there is a RouteBuilder implemented which listens to the above queue and processes the data, and also sets the response back to the same queue.

    Below is the RouteBuilder configure method implementation.

     @Override
     public void configure() throws Exception {
    
      CamelContext context = null;
    
      try {
       // create CamelContext
       context = new DefaultCamelContext();
       ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(
         "failover:(tcp://localhost:61616,tcp://localhost:61617)?randomize=true&priorityBackup=true");
    
       context.addComponent("jms",
         JmsComponent.jmsComponentAutoAcknowledge(connectionFactory));
    
       context.addRoutes(new RouteBuilder() {
        public void configure() {
         // Error Handler with deadLetterChannel
         errorHandler(deadLetterChannel("jms:queue:dead"));
         GenerateRandomNumber.getInstance();
         int num = GenerateRandomNumber.randInt(1, 100000);
         from("jms:myQueue.queue")
           .setHeader("JMSMessageID", constant("ID : " + num))
           .setHeader("JMSReplyTo",
             constant("myQueue.queue"))
           .process(new RequestProcess());
        }
       });
       context.start();
      } catch (Exception e) {
       e.printStackTrace();
      } finally {
      }
     }
    
    
    Data processing happens in the RequestProcess class, which should implement the Apache Camel Processor interface. Below is the process method implementation.
     public void process(Exchange exchange) throws Exception {
      String body = exchange.getIn().getBody(String.class);
      /***
       * Process the data, get the response, and set the response on the
       * Exchange body.
       */
      exchange.getOut().setBody(body + 
        "response; ID : " + exchange.getExchangeId());
     }
    

    Tuesday, March 4, 2014

    Maven - Deploy war in Tomcat 7 & 8

    This article will explain how to deploy a war file into Tomcat 7 through a Maven build.

    Note : I have tested the same settings with Tomcat 8 as well, so the settings below can be applied to both versions.

    • Configure tomcat7-maven-plugin in pom.xml
       <!-- Tomcat plugin -->
       <plugin>
        <groupId>org.apache.tomcat.maven</groupId>
        <artifactId>tomcat7-maven-plugin</artifactId>
        <version>2.2</version>
        <configuration>
         <path>/${project.build.finalName}</path>
         <update>true</update>
      <url>http://localhost:8080/manager/text</url>
         <username>tomcat</username>
         <password>tomcatuser</password>
        </configuration>
       </plugin>
    
    
    • Configure user rights in tomcat-users.xml, which is in the Tomcat conf directory.
       <role rolename="tomcat" />
       <role rolename="manager-gui" />
       <role rolename="manager-script" />
       <role rolename="admin-gui" />
       <user username="tomcat" password="tomcatuser"
        roles="tomcat,manager-gui,admin-gui,manager-script" />
    
    •  Once the above steps are followed, you can deploy the war by running the following Maven goal
    mvn tomcat7:redeploy

    For more details on the Apache Tomcat Maven Plugin, refer to maven-plugin-2.2


    Monday, February 17, 2014

    Java EE 7 is here! What’s New in Java EE 7

              The most important goal of the Java EE 7 platform is to simplify development by providing a common foundation for the various kinds of components in the Java EE platform. Developers benefit from productivity improvements with more annotations and less XML configuration, more Plain Old Java Objects (POJOs), and simplified packaging. The Java EE 7 platform includes the following new features:

    The main features of the new specifications:

    • Java API for WebSocket - WebSocket is an application protocol that provides full-duplex communications between two peers over TCP. The Java API for WebSocket enables Java EE applications to create endpoints using annotations that specify the configuration parameters of the endpoint and designate its lifecycle callback methods. A minimal endpoint sketch is given after this list.
    • Java API for JSON Processing - JSON is a text-based data exchange format derived from JavaScript that is used in web services and other connected applications. The Java API for JSON Processing (JSON-P) enables Java EE applications to parse, transform, and query JSON data using the object model or the streaming model. JSON-P is new to the Java EE 7 platform. The Java EE 7 platform requires JSON-P 1.0.
    • Batch Applications for Java Platform - Allows for description of a Batch Job using Job Specification Language defined by an XML schema. It defines a complete execution sequence of the jobs.
    • Concurrency Utilities for Java EE - The Concurrency Utilities for Java EE is a standard API for providing asynchronous capabilities to Java EE application components through the following types of objects: managed executor service, managed scheduled executor service, managed thread factory, and context service.
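
    As a quick illustration of the WebSocket API mentioned above, a minimal annotated server endpoint might look like this (the /echo path and the class name are assumptions for illustration):

      import javax.websocket.OnMessage;
      import javax.websocket.server.ServerEndpoint;

      // A minimal Java API for WebSocket endpoint (Java EE 7 sketch).
      @ServerEndpoint("/echo")
      public class EchoEndpoint {

       // Invoked by the container for each text message received;
       // the return value is sent back to the peer.
       @OnMessage
       public String onMessage(String message) {
        return "Echo: " + message;
       }
      }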

    The main features of the updated specifications :

    • Java API for RESTful Web Services - The Java API for RESTful Web Services (JAX-RS) defines APIs for the development of web services built according to the Representational State Transfer (REST) architectural style. A JAX-RS application is a web application that consists of classes packaged as a servlet in a WAR file along with required libraries.
    • Java Message Service - The Java Message Service (JMS) API is a messaging standard that allows Java EE application components to create, send, receive, and read messages. It enables distributed communication that is loosely coupled, reliable, and asynchronous.
    • Expression Language - The Expression Language (also referred to as the EL) provides an important mechanism for enabling the presentation layer (web pages) to communicate with the application logic (managed beans). The EL is used by both JavaServer Faces technology and JavaServer Pages (JSP) technology. The EL represents a union of the expression languages offered by JavaServer Faces technology and JSP technology.
    • Enterprise JavaBeans - Support for EJB 2.1, EJB QL, and JAX-RPC-based Web Service endpoints and client view is now optional.
    • Servlets - Defines a standard mechanism to upgrade an existing HTTP connection to a different protocol using HttpUpgradeHandler, offers nonblocking request and response processing for async servlets, and defines rules for which HTTP methods are covered by <security-constraint>
    • JavaServer Faces - Faces Flow provides an encapsulation of related views/pages with application defined entry and exit points.
    • Java Persistence - The Java Persistence API provides Java developers with an object/relational mapping facility for managing relational data in Java applications.
    • Interceptors - Interceptors are used in conjunction with Java EE managed classes to allow developers to invoke interceptor methods on an associated target class, in conjunction with method invocations or lifecycle events. Common uses of interceptors are logging, auditing, and profiling.
    • Contexts and Dependency Injection - Contexts and Dependency Injection for Java EE (CDI) is one of several Java EE features that help to knit together the web tier and the transactional tier of the Java EE platform.
    • Bean Validation - Validating input received from the user to maintain data integrity is an important part of application logic.
    • Java Transaction - @Transactional provides the application the ability to declaratively control transaction boundaries on CDI-managed beans, as well as classes defined as managed beans by the Java EE specification, at both the class and method level, where method-level annotations override those at the class level.
    • JavaMail - @MailSessionDefinition and @MailSessionDefinitions define a MailSession to be registered with JNDI
    • Java EE Connector Architecture - Provides @AdministeredObjectDefinition, @AdministeredObjectDefinitions, @ConnectionFactoryDefinition, and @ConnectionFactoryDefinitions to define a connector administered object and connection factory to be registered in JNDI

             You can find the Java EE 7 samples on the Java GitHub site. Go to javaee7-samples

    Sunday, February 2, 2014

    Install nodejs and Grunt in Ubuntu 13.10

    You can install nodejs, Grunt and bower using the commands below.


    • Install nodejs
          sudo apt-get install npm
    • Install Grunt 
         npm install -g grunt-cli
    • Install bower
     npm install -g bower

    After installing the above libraries, run the grunt command. If you see an error, you can run the commands below to overcome it in Ubuntu 13.10.
    sudo apt-get install nodejs 
    sudo ln -s /usr/bin/nodejs /usr/bin/node 
    curl https://npmjs.org/install.sh | sudo sh
    

    Thursday, January 16, 2014

    MySQL Commands

    Important MySQL Commands.


    • How to check if multiple instances are running on the server
    sudo netstat -tap | grep mysql
    • Kill MySQL processes
    pkill mysqld
    • Start , Stop or Restart MySQL
    /etc/init.d/mysql start|stop|restart|reload|force-reload|status
    • Backup MySQL Database
    mysqldump -u root -p DB > database_dump.txt
    • Restore MySQL Database
    mysql -u [uname] -p[pass] [db_to_restore] < [backupfile.sql]
    • Back up MySQL Database with Compress
    mysqldump -u [uname] -p[pass] [dbname] | gzip -9 > [backupfile.sql.gz]
    • Restore compressed backup files
    gunzip < [backupfile.sql.gz] | mysql -u [uname] -p[pass] [dbname]
    • Restore a database that already exists
    mysqlimport -u [uname] -p[pass] [dbname] [backupfile.sql]
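
    Putting the backup and restore commands together, a full cycle with hypothetical names might look like this:

    mysqldump -u root -p mydb > mydb_dump.sql
    mysql -u root -p mydb < mydb_dump.sql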