Friday, 27 December 2013

Java Versions, Features and History

Java Version SE 7

Codenamed Dolphin and released on July 28, 2011.

New features in Java SE 7

  • Strings in switch Statement
  • Type Inference for Generic Instance Creation (the diamond syntax)
  • Multi-catch: handling multiple exception types in a single catch block
  • Support for Dynamic Languages (the invokedynamic instruction)
  • Try with Resources
  • NIO.2 File API (the java.nio.file package)
  • Binary Literals, underscores in numeric literals

(The automatic null-handling operator originally proposed for this release was dropped before it shipped.)
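
Several of these features appear in the following minimal sketch (input.txt is only a placeholder file name):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class Java7Demo {
    public static void main(String[] args) {
        // Strings in switch
        String day = "SATURDAY";
        switch (day) {
            case "SATURDAY":
            case "SUNDAY":
                System.out.println("Weekend");
                break;
            default:
                System.out.println("Weekday");
        }

        // Type inference for generic instance creation (diamond syntax)
        List<String> names = new ArrayList<>();
        names.add("Dolphin");

        // Binary literals and underscores in numeric literals
        int mask = 0b1010_1010;
        System.out.println(names + " " + mask);

        // try-with-resources: the reader is closed automatically;
        // multi-catch: two exception types handled by one catch block
        try (BufferedReader reader = new BufferedReader(new FileReader("input.txt"))) {
            System.out.println(reader.readLine());
        } catch (IOException | RuntimeException e) {
            System.out.println("Failed to read: " + e.getMessage());
        }
    }
}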

Java Version SE 6

Codenamed Mustang and released on December 11, 2006.

New features in Java SE 6

  • Scripting Language Support
  • JDBC 4.0 API
  • Java Compiler API
  • Pluggable Annotations
  • Native PKI, Java GSS, Kerberos and LDAP support.
  • Integrated Web Services.
  • Many more enhancements across the platform; the scripting support is sketched below.
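
A minimal sketch of the scripting support, using the Rhino JavaScript engine bundled with Java SE 6:

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class ScriptingDemo {
    public static void main(String[] args) throws ScriptException {
        // Java SE 6 ships with a JavaScript engine registered as "JavaScript"
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
        engine.put("version", "Mustang");
        // Evaluate a script that reads the Java variable we just bound
        System.out.println(engine.eval("'Hello from ' + version"));
    }
}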

J2SE Version 5.0

Codenamed Tiger and released on September 30, 2004.

New features in J2SE 5.0

  • Generics
  • Enhanced for Loop
  • Autoboxing/Unboxing
  • Typesafe Enums
  • Varargs
  • Static Import
  • Metadata (Annotations)
  • Instrumentation
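
A minimal sketch touching several of these features (generics, the enhanced for loop, autoboxing, varargs, a typesafe enum, and a static import):

import static java.lang.Math.max;

import java.util.Arrays;
import java.util.List;

public class TigerDemo {

    // Typesafe enum
    enum Size { SMALL, MEDIUM, LARGE }

    // Varargs: sum takes any number of ints
    static int sum(int... numbers) {
        int total = 0;
        for (int n : numbers) { // enhanced for loop
            total += n;
        }
        return total;
    }

    public static void main(String[] args) {
        // Generics: the list is typed, no casting needed
        List<Integer> values = Arrays.asList(1, 2, 3);

        // Autoboxing/unboxing: Integer -> int happens implicitly
        int first = values.get(0);

        // Static import lets us call Math.max without the class prefix
        System.out.println(max(first, sum(4, 5, 6))); // prints 15
        System.out.println(Size.MEDIUM);
    }
}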

J2SE Version 1.4

Codenamed Merlin and released on February 6, 2002 (first release under the JCP).

New features in J2SE 1.4

  • XML Processing
  • Java Print Service
  • Logging API
  • Java Web Start
  • JDBC 3.0 API
  • Assertions
  • Preferences API
  • Chained Exception
  • IPv6 Support
  • Regular Expressions
  • Image I/O API
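
A minimal sketch of two of these features, assertions and regular expressions (run with java -ea to enable assertion checking):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MerlinDemo {
    public static void main(String[] args) {
        // Assertions: checked only when the JVM runs with -ea
        int port = 8080;
        assert port > 0 : "port must be positive";

        // Regular expressions via the java.util.regex package
        Pattern digits = Pattern.compile("\\d+");
        Matcher matcher = digits.matcher("J2SE 1.4 shipped in 2002");
        while (matcher.find()) {
            System.out.println(matcher.group()); // prints 1, 4, 2002
        }
    }
}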

J2SE Version 1.3

Codenamed Kestrel and released on May 8, 2000.

New features in J2SE 1.3

  • Java Sound
  • Jar Indexing
  • Numerous enhancements across almost every area of the Java platform.

J2SE Version 1.2

Codenamed Playground and released on December 8, 1998.

New features in J2SE 1.2

  • Collections framework.
  • String memory map for constants (the string constant pool).
  • Just In Time (JIT) compiler.
  • Jar Signer for signing Java ARchive (JAR) files.
  • Policy Tool for granting access to system resources.
  • Java Foundation Classes (JFC) which consists of Swing 1.0, Drag and Drop, and Java 2D class libraries.
  • Java Plug-in
  • Scrollable result sets, BLOB, CLOB, batch update, user-defined types in JDBC.
  • Audio support in Applets.

JDK Version 1.1

Released on February 19, 1997.

New features in JDK 1.1

  • JDBC (Java Database Connectivity)
  • Inner Classes
  • Java Beans
  • RMI (Remote Method Invocation)
  • Reflection (introspection only)

JDK Version 1.0

Codenamed Oak and released on January 23, 1996.

Wednesday, 18 December 2013

Reading properties file in Spring

weekendplan_prod.properties
activation.user.notvalid.msg = Requested user is not authorized.
 
applicationContext.xml
 
<bean id="kleverlinksProperties"
      class="com.kleverlinks.common.utils.KleverlinksPropertyutil">
    <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE"/>
    <property name="ignoreResourceNotFound" value="true"/>
    <property name="locations">
        <list>
            <value>classpath:weekendplan_prod.properties</value>
        </list>
    </property>
    <property name="ignoreUnresolvablePlaceholders" value="true"/>
</bean>
weekendplan_prod.properties is the name of the property file.
 
KleverlinksPropertyutil.java
package com.kleverlinks.common.utils;

import java.io.IOException;
import java.util.Properties;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.config.PropertyPlaceholderConfigurer;

/**
 * Loads properties from the property file and exposes them
 * through a static lookup method.
 */
public class KleverlinksPropertyutil extends PropertyPlaceholderConfigurer {

    private static final Log log = LogFactory.getLog(KleverlinksPropertyutil.class);

    private static Properties properties;

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory)
            throws BeansException {
        try {
            // mergeProperties() reads and merges all configured locations
            properties = mergeProperties();
            convertProperties(properties);
            processProperties(beanFactory, properties);
        } catch (IOException ex) {
            log.error("Exception while reading the properties file", ex);
        }
    }

    public static String getStringValue(String key) {
        return properties.getProperty(key);
    }
}
How to use properties from the property file
System.out.println( KleverlinksPropertyutil.getStringValue("activation.user.notvalid.msg"));  
OUTPUT:
Requested user is not authorized.
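
Note that the static lookup only works after Spring has created the KleverlinksPropertyutil bean, because postProcessBeanFactory() is what populates the static Properties field. A minimal sketch, assuming applicationContext.xml is on the classpath:

import org.springframework.context.support.ClassPathXmlApplicationContext;

import com.kleverlinks.common.utils.KleverlinksPropertyutil;

public class PropertyDemo {
    public static void main(String[] args) {
        // Building the context runs postProcessBeanFactory(), which fills
        // the static Properties field before any lookup is made
        new ClassPathXmlApplicationContext("applicationContext.xml");
        System.out.println(KleverlinksPropertyutil.getStringValue("activation.user.notvalid.msg"));
    }
}
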
Another way to configure the property file:

<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location">
        <value>database.properties</value>
    </property>
</bean>

Tuesday, 17 December 2013

How to access CSS/JS files in JSPs using the context path

PROBLEM:
Project Structure is

webapp
   ||
   ||==css
   ||==images
   ||==js
   ||==WEB-INF

I have given

<link type="text/css" rel="stylesheet" href="/css/accordian.css" />

With a leading slash the browser resolves the path against the server root, so the request becomes http://localhost:8080/css/accordian.css and the project name is missing.

If I remove "/css" and place "css" (no leading slash), the browser resolves the path relative to the current page URL. On

http://localhost:8080/EmployDetailsSpring/controller/userauthentication/login

it removes the last segment and appends css/accordian.css:

http://localhost:8080/EmployDetailsSpring/controller/userauthentication/css/accordian.css

SOLUTION:
Place the project name before the path.

    <img alt="" src="/EmployDetailsSpring/images/banner.png" />

EmployDetailsSpring is my Project name.

But if the project name changes, it is a headache to update all the paths. To overcome that, we use
<%=request.getContextPath()%>, which evaluates to the application's context path (the project name) at runtime.

For IMAGE
  <img alt="" src="<%=request.getContextPath()%>/images/banner.png" />
For JS
<script type="text/javascript" src="<%=request.getContextPath()%>/js/cycle-plugin.js"></script>
For CSS
<link type="text/css" rel="stylesheet" href="<%=request.getContextPath()%>/css/accordian.css" />

All of the above resolve to the same URLs, but if the project name changes we do not need to update them manually.
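
If you prefer to avoid scriptlets, the JSP Expression Language (JSP 2.0 and later) provides the same value:

<link type="text/css" rel="stylesheet" href="${pageContext.request.contextPath}/css/accordian.css" />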

Tuesday, 10 December 2013

Hadoop: What it is, how it works, and what it can do

Anyone concerned with information technology needs to know about Hadoop.

The seeds of Hadoop were first planted in 2002.

What it is:
 The Hadoop platform was designed to solve problems where you have a lot of data — perhaps a mixture of complex and structured data — and it doesn’t fit nicely into tables.

Examples: images, audio, video, PDF, Word documents, XML, and any other kind of unstructured data.

Hadoop is a data storage and processing system. It is scalable, fault-tolerant and distributed. The software was originally developed by the world’s largest internet companies to capture and analyze the data that they generate. Unlike older platforms, Hadoop is able to store any kind of data in its native format and to perform a wide variety of analyses and transformations on that data. Hadoop stores terabytes, and even petabytes, of data inexpensively. It is robust and reliable and handles hardware and system failures automatically, without losing data or interrupting data analyses.

Hadoop is designed to store big data cheaply on a distributed file system across commodity servers. How you get that data there is your problem. And it’s a surprisingly critical issue because Hadoop isn’t a replacement for existing infrastructure, but rather a tool to augment data management and storage capabilities. Data, therefore, will be continually going in and out.

How it works:

Hadoop runs on clusters of commodity servers. Each of those servers has local CPU and storage. Each can store a few terabytes of data on its local disk.

The two critical components of the Hadoop software are:

The Hadoop Distributed File System, or HDFS

HDFS is the storage system for a Hadoop cluster. When data arrives at the cluster, the HDFS software breaks it into pieces and distributes those pieces among the different servers participating in the cluster. Each server stores just a small fragment of the complete data set, and each piece of data is replicated on more than one server.

A distributed data processing framework called MapReduce

Because Hadoop stores the entire dataset in small pieces across a collection of servers, analytical jobs can be distributed, in parallel, to each of the servers storing part of the data. Each server evaluates the question against its local fragment simultaneously and reports its results back for collation into a comprehensive answer.

MapReduce is the plumbing that distributes the work and collects the results.
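
As an illustration of the programming model (this sketch is not from the original article; class names and paths are placeholders), here is the classic word-count job written against Hadoop's org.apache.hadoop.mapreduce API:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // The map step runs on each server against its local fragment of the
    // data, emitting (word, 1) pairs
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // The reduce step collates the partial results into a total per word
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Packaged as a jar, it would typically be submitted with something like: hadoop jar wordcount.jar WordCount /input/books /output/counts
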
Both HDFS and MapReduce are designed to continue working in the face of system failures. The HDFS software continually monitors the data stored on the cluster. If a server becomes unavailable, a disk drive fails or data is damaged, whether due to hardware or software problems, HDFS automatically restores the data from one of the known good replicas stored elsewhere on the cluster. When an analysis job is running, MapReduce monitors the progress of each of the servers participating in the job. If one of them is slow in returning an answer or fails before completing its work, MapReduce automatically starts another instance of that task on another server that has a copy of the data. Because of the way that HDFS and MapReduce work, Hadoop provides scalable, reliable and fault-tolerant services for data storage and analysis at very low cost.

Hadoop stores any type of data, structured or complex, from any number of sources, in its natural format. No conversion or translation is required on ingest. Data from many sources can be combined and processed in very powerful ways, so that Hadoop can do deeper analyses than older legacy systems. Hadoop integrates cleanly with other enterprise data management systems. Moving data among existing data warehouses, newly available log or sensor feeds and Hadoop is easy. Hadoop is a powerful new tool that complements current infrastructure with new ways to store and manage data at scale.

What it can do:
Hadoop solves the hard scaling problems caused by large amounts of complex data. As the amount of data in a cluster grows, new servers can be added incrementally and inexpensively to store and analyze it. Because MapReduce takes advantage of the processing power of the servers in the cluster, a 100-node Hadoop instance can answer questions on 100 terabytes of data just as quickly as a ten-node instance can answer questions on ten terabytes.

Of course, many vendors promise scalable, high-performance data storage and analysis. Hadoop was invented to solve the problems that early internet companies like Yahoo! and Facebook faced in their own data storage and analysis. These companies and others actually use Hadoop today to store and analyze petabytes (thousands of terabytes) of data. Hadoop is not merely faster than legacy systems; in many instances, the legacy systems simply could not do these analyses.