
Friday, February 25, 2011

How to Buy Used Books on Amazon

Instructions

Sign In

  1. Go to Amazon.com.
  2. Click "New Customer? Start Here" or sign in to your existing account.
  3. Fill in your sign-in information and log in.

Create a New Account

  1. Enter your email address and click "Sign in using our secure server."
  2. Fill in the registration info: name, email and password. Click "Continue."
  3. Your Amazon account is now set up.

Buy Used Books

  1. Go to the Amazon.com home page.
  2. In the browse toolbar, click "Books."
  3. Search for your book by genre, title or author. You can also use the "Browse Keyword" toolbar categories.
  4. When you find an item that interests you, click its title or name to see its product detail page.
  5. Click the "New and Used" link to see the list of copies available in used condition.
  6. Click the "Used" tab.
  7. Check each listing's description of the book's condition and the seller info.

Shopping Cart

  1. After finding the book you are looking for, click "Add to Cart."
  2. If you have more shopping to do, repeat the search steps above.
  3. If you are done shopping, click the shopping cart link at the top of the page.
  4. Confirm the items in your shopping cart. Enter a quantity if it differs from the default of 1, and update the shopping cart if you made any changes to your order.
  5. Click the yellow "Proceed to Checkout" button in the right toolbar. Sign in, then enter and verify your shipping address. Click "Continue."
  6. Select a shipping method and click "Continue."
  7. Select your payment method, fill out your payment information, and click "Continue."
  8. Review the details of the products you ordered, your billing info, and your shipping info. Click "Place Your Order."
  9. Check your order status for current information about shipping and delivery times.

Wednesday, February 16, 2011

How to Develop an Amazon Web Services Client


Introduction


Sun Microsystems' Java 2 Enterprise Edition (J2EE) platform, coupled with the Sun ONE Studio 4 Enterprise Edition product, provides an environment that greatly simplifies the process of developing a Web Services client. The J2EE 1.3 platform incorporates the Java Web Services Developer Pack (Java WSDP), which includes the Java technologies needed to develop and deploy Web Services and their clients. In addition to simplifying the development process, Sun ONE Studio 4 takes care of many of the administrative tasks associated with developing and deploying a Web Service client application. In contrast to other IDEs on the market, Studio tightly integrates the various Web Service technologies.
The example in this article uses a Web Service that Amazon.com -- a large seller of books, CDs, and other products -- has created so that client applications can browse their product catalog and place orders. Your client can access the Amazon Web Services using either XML over HTTP or a remote procedure call API with a Simple Object Access Protocol (SOAP) interface. Both techniques return structured data (product name, manufacturer, price, and so forth) about products at Amazon.com, based on such parameters as keywords and browse tree nodes. To make it possible to use the service, Amazon.com has provided a Web Services Definition Language (WSDL) file, which contains the definition of the Web Service, the messages that can be sent to the service, and so forth. A developer with access to this WSDL file can write a client application to use the Amazon Web Service.
This article shows you how to use Sun ONE Studio 4 to create a client that accesses the Amazon Web Service. A Web Service client can be a simple client based on JavaServer Pages (JSP) technology, or it can be more sophisticated and use the Swing APIs. This article shows you how to use the tool to develop a Swing-based client that communicates with the Amazon Web Service through proxy or stub classes generated from the WSDL file. The communication is via SOAP and relies on remote procedure calls implemented with the Java API for XML-based Remote Procedure Call (JAX-RPC) runtime. It is this mechanism that carries a remote procedure call from the client to the remote server.

What's Covered in this Article

  • Example application
  • Background
  • Setting up the environment, including installing and downloading all necessary software
  • Setting up Sun ONE Studio
  • Using Sun ONE Studio to generate SOAP messages and proxy classes
  • Writing the Swing components for the client
  • Writing the proxy class to connect to the Web Service

The Example Web Services Client Application

The example client application is a Swing client that searches the Amazon catalog for books whose subject matches a user-selected topic. The application displays ten books that match the chosen topic, and shows the author name, book title, list price, Amazon discount price, and the cover icon. The user may optionally view one review per displayed title.
You will write five Java classes for this application.
  • AmazonProxy.java--This class includes the code to communicate with the JAX-RPC proxy or stub classes generated for you by Sun ONE Studio.
  • AmazonServices.java--This class is the main Swing component. In addition to its graphic functions, it instantiates the AmazonProxy class.
  • BookDetailsPanel.java--This Swing component displays individual book detail information.
  • CategoryPanel.java--This Swing component lets the user select categories of books from Amazon's catalog.
  • EditorialReviewPanel.java--This Swing component displays review comments on a particular book.
Note that the four Swing component classes have corresponding .form files that Sun ONE Studio generates to depict the GUI you design.
This application has been intentionally kept simple to illustrate how to create Web Service clients. It is not intended to showcase the entire range of functionality available to an application through the Amazon Web Services, nor is it intended to illustrate all the capabilities of a Swing client.
Figure 1 gives a quick view of the application's GUI interface. In the first screen, the user selects from the various available book categories.

Figure 1: Client Application Category Selection Screen
Figure 2 shows a sample of the kind of data that the client can retrieve from the Amazon Web Service.

Figure 2: Matching Book Display
If the user clicks a title's See Review button, the example client application displays a review of that book, shown in Figure 3.

Figure 3: Book Review

Background

This example application uses Java bindings or classes generated from the WSDL description and the JAX-RPC runtime for the remote access to the service (JAX-RPC over SOAP), rather than using the option of XML over HTTP. By using JAX-RPC and SOAP, you're shielded from XML processing details, particularly parsing and transforming the XML document, and the application can be written entirely using Java technology. Sun ONE Studio provides extensive support for accessing Web Services via JAX-RPC over SOAP, because the SOAP access approach allows a richer API, and potentially a deeper Web Service provider access, compared to the HTTP access approach.
The XML over HTTP option also passes information between the Web Service and the client as XML documents. The client must transform these XML documents, using XSLT, so that the data can be displayed on a Web page.
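To make the XML-over-HTTP path concrete, here is a small sketch using the JAXP transformation API (javax.xml.transform, part of the JDK). The ProductInfo/Details XML and the stylesheet are made-up miniatures of what a real response and page template might look like, not Amazon's actual schema:

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.StringReader;
import java.io.StringWriter;

public class XsltClientDemo {
    // A made-up miniature of a product-catalog XML response.
    static final String XML =
        "<ProductInfo><Details><ProductName>Core J2EE Patterns</ProductName></Details>"
      + "<Details><ProductName>Effective Java</ProductName></Details></ProductInfo>";

    // A tiny stylesheet that renders each product name as an HTML list item.
    static final String XSL =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:output method='html'/>"
      + "<xsl:template match='/'><ul><xsl:for-each select='ProductInfo/Details'>"
      + "<li><xsl:value-of select='ProductName'/></li>"
      + "</xsl:for-each></ul></xsl:template></xsl:stylesheet>";

    // Transforms the XML response into an HTML fragment for display.
    static String transform() {
        try {
            Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(XSL)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(XML)), new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            return "";
        }
    }

    public static void main(String[] args) {
        System.out.println(transform()); // an HTML <ul> listing both titles
    }
}
```

This parsing-and-transforming work is exactly what the JAX-RPC approach described next lets you avoid.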
JAX-RPC defines Java APIs that Java applications use to develop and access Web Services regardless of the platforms upon which they are deployed or the language in which they are coded. In JAX-RPC, remote procedure calls are represented by an XML infoset -- SOAP -- carried over some network transport (in this case, HTTP). SOAP is a lightweight, XML-based protocol for exchanging information in a distributed environment. The core of this protocol, a SOAP envelope, defines a framework for describing a message's contents and how to process it.
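As a concrete illustration of that framework, the sketch below parses a minimal SOAP 1.1 envelope with the JDK's namespace-aware DOM parser. The KeywordSearchRequest body element is purely illustrative, not Amazon's actual message schema:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class SoapEnvelopeDemo {
    // A minimal SOAP 1.1 envelope wrapping a hypothetical keyword-search call.
    static final String ENVELOPE =
        "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
      + "<soap:Body>"
      + "<KeywordSearchRequest><keyword>EJB</keyword></KeywordSearchRequest>"
      + "</soap:Body>"
      + "</soap:Envelope>";

    // Parses the envelope and returns the local name of its root element.
    static String rootLocalName() {
        try {
            DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
            f.setNamespaceAware(true);
            Document d = f.newDocumentBuilder()
                .parse(new ByteArrayInputStream(ENVELOPE.getBytes(StandardCharsets.UTF_8)));
            return d.getDocumentElement().getLocalName();
        } catch (Exception e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(rootLocalName()); // prints "Envelope"
    }
}
```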
WSDL specifies an XML format for describing a Web Service as a set of endpoints operating on messages. WSDL describes the Web Service in a standard format: it specifies the service's location and the operations exposed by the service.

Setting Up the Environment

Before you can get started, you must install the necessary software and obtain a copy of Amazon's Web Services WSDL file (links below). You should also obtain a copy of the Amazon developer's license, since you need to pass this license token as a parameter when making a request to the service.

Install Sun ONE Studio Enterprise Edition and Related Software

To follow the example presented in this article, you should install the Sun ONE Studio 4 update 1 Enterprise Edition development environment. If you don't already have this product, you can download a trial copy at no charge. You can go directly to the product page and select the Enterprise Edition for Java product from the Trial Downloads page. The download page explains what you need to do to download the Sun ONE Studio products. It also tells you what other software you might need to install, such as the Java 2 Platform, Standard Edition (J2SE), v 1.4.1 software, and how to obtain that software. Download the correct version of Sun ONE Studio (and the J2SE software, if need be) for your system and follow the installation instructions that come with these products.
Note that installing Sun ONE Studio also installs the J2EE 1.3 software, the Java WSDP software, and a default server, Apache Tomcat 4. It also includes the JAX-RPC runtime (part of the Java WSDP), which is essential for this example.
The example uses the Apache Tomcat 4 server, which is included in the software package and is available to everyone -- so it's a good server to use for test implementations such as this. However, for production purposes, you may prefer to use the Sun ONE Application Server or any other J2EE-compliant Web server.

Obtain the Amazon Web Services WSDL File

Through its Web Service, Amazon.com has expanded its Web site so that it is not limited to browsing by individual customers, and its partner businesses are not restricted to merely linking to the site. The Amazon Web Service lets Web applications dynamically make calls to the Amazon database (its catalog), extract information about its complete product line, and purchase products. Applications make the calls to the Amazon Web Service and receive responses containing the very latest data in real time.
Amazon has made available an entire toolkit, including documentation and sample programs, for accessing their Web Services. You can download this toolkit from their site. For this article, you only need to obtain a copy of the Amazon Web Services Definition Language (WSDL) file, called AmazonWebServices.wsdl. The WSDL file is an XML file that describes the Web Service, and is necessary for passing messages to and from the service using SOAP.
Be sure to register with the Amazon Web Services and obtain a developer license or token. Copy the Amazon WSDL file to your local directory through your browser. Point your browser to http://soap.amazon.com/schemas2/AmazonWebServices.wsdl and use the save option from your browser to save the WSDL file to your local directory.
This example places the WSDL file in the top directory for its Web Services client: D:\amazonwebservice.

Developing the Web Client Application

Although you can manually develop a Web Services client using the J2EE platform and the Java WSDP APIs and scripts, this article shows you how to create your Web client application using Sun ONE Studio 4 Enterprise Edition. This way of creating a Web Services client is much easier and less prone to error.

Set the Development Options in Sun ONE Studio

Begin by starting Sun ONE Studio, since the development work is done from within it. Once Sun ONE Studio is running, verify that the development options are set correctly.
Set the Default Web Server
You must have a Web server container installed and running for this example to work. This example deploys the Web Services client to the Apache Tomcat 4.0 Web server, so make sure Apache Tomcat 4.0 is set to be the default server.
To set Tomcat as the default server:
  1. Select the Runtime tab in the Explorer window.
  2. Expand Server Registry, then Default Servers.
  3. Right click the Web Tier Applications and select the Set Default Server option.
  4. Once Tomcat is set as the default Web tier server, start the server.
  5. Expand Installed Servers, then Tomcat 4.0
  6. Right click Internal [Not Running] and select the Start Server option.
Figure 4 shows the screen for setting the default application server and starting it.

Figure 4: Setting Default Server and Starting the Server
Mount the Directory
Prior to starting Sun ONE Studio, you should have copied the AmazonWebServices.wsdl file to the local directory in which you plan to place the Web Services client (as described earlier). Mount this directory so that Sun ONE Studio includes it in the required class paths. A directory that has been mounted appears in the Explorer [Filesystems] window. As mentioned previously, this example mounts the directory called amazonwebservice, which happens to be located on its D drive (D:\amazonwebservice).
From the main window's File pull-down menu, select Mount Directory. This opens a wizard screen, from which you select Local Directory. Then, within the wizard, navigate to the directory you want to mount, select it, and click Finish.
Create a Client Folder
You must create a folder to hold the Web client. Later, you use this folder as the package name for your client. Right click the directory you are using for your Web Services client, then select New->Folder. (Figure 5.) You are prompted for a name for the new folder. This example creates a new folder, which it calls myamazonclient, within its amazonwebservice directory.

Figure 5: Create a New Folder for the Client
The Explorer window shows the new folder, myamazonclient, and the WSDL file, AmazonWebServices.wsdl, in the directory amazonwebservice. (Figure 6.)

Figure 6: Directory Containing Client Folder and WSDL File

Create a Web Service Client

You are now ready to create the Web Service client. The Sun ONE Studio wizard guides you through these steps.
  1. In the Explorer [Filesystems] window, right click the folder for the client. This example selects the myamazonclient folder in the directory D:\amazonwebservice.
  2. Select New->All Templates..., then select Web Services->Web Service Client. (Figure 7.) The wizard opens a template for creating a Web Service client.


    Figure 7: Starting the New Web Service Client Wizard


  3. Specify the client name, its package, and the option to use the local WSDL file. In the template, enter the name of the new Web Services client and select its package by choosing a folder using the browse function. Use whatever name you want for the client. Click the option to use the local WSDL file. The example named the client AmazonClient and selected myamazonclient for the package. (Figure 8.)


    Figure 8: Specifying the Client Name, Package, and Local WSDL File


  4. Select the WSDL file that you copied to your local directory. The example selects the AmazonWebServices.wsdl file in the amazonwebservice directory. (Figure 9.)


    Figure 9: Select the WSDL File
    Sun ONE Studio creates the new Web Services client in the specified package. (Figure 10.) Right click the client and view its properties.
    Be sure that the SOAP runtime property is set to JAXRPC. The JAX-RPC runtime is the means by which the Swing client makes remote calls to the Web Services. The Welcome Page property shows you a simple HTML page that Sun ONE Studio has generated, and which you can use later to access the different methods exposed by the WSDL file on Amazon's Web Service.
    The Amazon developers designed their Web Services to be accessed either via XML over HTTP or using a remote procedure call API with SOAP. When data is retrieved using XML over HTTP, it must be parsed or transformed to extract information (such as by using XSLT, or Extensible Stylesheet Language Transformations) and reformatted into a Web page. Data can also be retrieved using JAX-RPC and sending and receiving SOAP-encoded messages. SOAP messages are well-suited to XML data binding; the messages can be represented as Java objects, making them easier to manipulate programmatically with JAX-RPC.

    Figure 10: AmazonClient Web Service Client and its Properties


  5. To generate the Client Proxy, right click the Web Service client and select Generate Client Proxy. (See Figure 11.)


    Figure 11: Generating the Client Proxy
    Sun ONE Studio reads the information in the WSDL file and generates two sets of files. (Figure 12.)

    Figure 12: Generated Client Proxies
    One set of files consists of SOAP documents that let you test the Amazon Web Service via a browser and HTML. Sun ONE Studio generates a test client -- a simple JSP page -- that lets you invoke these operations. It places these files in the subdirectory AmazonClient$Documents.
    Sun ONE Studio also generates a set of Java classes or objects from the WSDL file, which it places in the subdirectory AmazonClientGenClient. These classes, considered client-side implementation classes, are stub files that enable the client application to get a handle to the Amazon Web Service endpoint and to invoke the operations exposed by the WSDL file on Amazon's Web Service interface. The example Swing client uses these generated Java classes, via JAX-RPC, to access the Web Service.
  6. It's very simple to execute the HTML client. Just right click the client for which you just generated the client proxy, and select Execute.

Figure 13: Execute the HTML Client
Sun ONE Studio completes whatever assembly and deployment tasks are required, starts the server (if it is not already running), and displays the simple test HTML page (the welcome page for the client) in a browser window. This is useful for testing a Web Service. You could also use these generated SOAP documents as a start to creating a more interesting JSP-based client. (For this test JSP page to work correctly, you must complete the request parameters according to Amazon's guidelines, which you can find in their toolkit documentation.)

Figure 14: Keyword Search Request Form
For example, to perform a keyword search (as shown in Figure 14), you need to enter values for these parameters as indicated in the following list. (Note that your application client will set these same parameter values programmatically. See Set the parameter values, below.)
  • keyword: the key for the search, such as "food" or "EJB." The keyword only applies to this type of request.
  • page: an integer selecting which page of matching results the search request returns. One page contains up to 10 matching results.
  • mode: the Amazon product line, such as books, music, electronics, and so forth.
  • tag: the Amazon Associates identifier, if you have one. Otherwise, you can use the default identifier: webservices-20 or your developer's 14-digit token.
  • type: set type to either lite or heavy, depending on how much information you want returned in the response document.
  • dev-tag: the Amazon developer's 14-digit token (or 14 zeros).
  • sort: set to null.
  • variation: set to null.
  • locale: set to null.
Note: You should change the proxy settings if you are working behind a firewall.
Figure 15 shows an example of a completed product keyword search request form.

Figure 15: Completed Product Keyword Request
After you enter these values for the respective parameters, select Invoke, which appears at the bottom of the browser page. The request executes and the browser page shows a table of attributes and their values. (Figure 16)

Figure 16: Response to Product Keyword Request

Creating the Application as a Swing Client

Now, let's look at how to create this Web client application as a Swing client. As noted earlier, the application consists of five classes: four classes are Swing components that pertain to the application GUI, and one class serves as a proxy to invoke the Web Services methods on the generated Java classes that are in the AmazonClientGenClient subdirectory. These Swing components have corresponding .form files which are generated by Sun ONE Studio's Form Editor. You can find all the Java classes and form files in the AmazonClientCode zip file. Unzip this file into the myamazonclient directory.
You create the classes for the client application in the client folder. This example creates them in the myamazonclient folder.
Note that this article assumes that you know how to code these GUI classes. If you are not already familiar with the Java Foundation Classes and the Swing components, look here for more information. You can also find help on using Sun ONE Studio to create JFC classes and Swing components through the Sun Developer Connection. Rather than take you through the details of developing the Swing classes, this article focuses on the code you need to write to enable your Swing client to access the Amazon Web Service.
Create the Main Application Class and Swing Components
The main client application class, AmazonServices.java, is a javax.swing.JFrame class that initializes the form for the client. This class, along with the three classes that extend javax.swing.JPanel, creates the look and feel of the client application.
To create the AmazonServices.java class, right click the client folder, myamazonclient, then select New->GUI Form->JFrame. (Figure 17.)

Figure 17: Create a Swing Component
Sun ONE Studio creates a new Java class that extends JFrame and includes any required methods and default values. Use the associated wizard to define the class, or enter the code directly in the source editor. Sun ONE Studio has a form editor through which you can design the layout for the GUI screen, place buttons on the screen, and so forth.
In a similar fashion, use Sun ONE Studio to create the classes that control the various panel displays within the screen and that render the returned data. These classes (BookDetailsPanel.java, CategoryPanel.java, and EditorialReviewPanel.java) extend javax.swing.JPanel.
To create these panel classes, right click the client folder and select New->GUI Form->JPanel. (Figure 17.) Complete the code for the three panel classes and the main frame class. Feel free to try out the Sun ONE Studio form editor, which opens automatically when you double-click a Swing component class, and change the look and feel of the application.
Instantiating the Amazon Proxy
AmazonServices starts the client application, initializes it, and controls the application's progression through its several screens. From the perspective of a Web Service client, the most interesting thing is that AmazonServices instantiates an AmazonProxy object. It is AmazonProxy that enables access to the JAX-RPC generated methods and subsequently to Amazon's Web Services.
AmazonServices instantiates a new AmazonProxy within its showScreen method:
private void showScreen() {
  ...
  myamazonclient.AmazonProxy amp = new AmazonProxy();
  ...
}

Create the Proxy Class
The proxy class -- AmazonProxy.java in this example -- is of most interest, since this class invokes the methods in the generated stub classes to access the Amazon Web Service. The proxy class uses the JAX-RPC runtime through the javax.xml.rpc.Stub class.
Create a new Java class for AmazonProxy, in a similar fashion to the way you created the JFrame and JPanel classes. Right click the myamazonclient client folder, then select the File->New->Classes->Class option in the Explorer window. (Figure 18.) Sun ONE Studio creates a Java class for you.

Figure 18: Create a Java Class
Write the Proxy Code
To write the code in the proxy class, you need to know how to use the JAX-RPC API to access a Web Service. This article takes you through the code in the AmazonProxy class, which you write in Sun ONE Studio's Source Editor. For a more in-depth explanation, refer to the Java Web Services Tutorial and other articles on Web Services.
The code in AmazonProxy must do the following:
Get access to the JAX-RPC API and runtime. Import the javax.xml.rpc.Stub class so that you have access to the JAX-RPC API and runtime.
import javax.xml.rpc.Stub;
Import the generated Amazon classes. The example imports the generated classes in the AmazonClientGenClient subdirectory.
import myamazonclient.AmazonClientGenClient.*;
Get access to the Amazon service's port. Clients access a Web Service via the service's port, which in turn passes a client request to the service's endpoint, ultimately invoking the service's method on the service interface. To gain access to the Web Service port, the client must first get a handle to the service's generated stub file, AmazonSearchPort_Stub.java, by using methods in the impl file, AmazonSearchService_Impl.java. The stub class includes the location of the Amazon service port: http://soap.amazon.com/onca/soap2. The impl file includes the getAmazonSearchPort method, which returns a handle to the stub file and access to the port.
To do this, create an instance of the generated AmazonSearchService_Impl class and invoke its getAmazonSearchPort method. You must cast the method's returned object to a javax.xml.rpc.Stub type. The example invokes getAmazonSearchPort after instantiating AmazonSearchService_Impl, casts its return value to Stub, and sets the object into stub.
Stub stub = (Stub)
  (new AmazonSearchService_Impl().getAmazonSearchPort());
Then, cast the Stub object returned by getAmazonSearchPort to an AmazonSearchPort type and set an AmazonSearchPort object equal to this cast Stub object. The example casts stub to an AmazonSearchPort type, then sets the AmazonSearchPort object asp equal to stub.
AmazonSearchPort asp = (AmazonSearchPort) stub;
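This double cast works because the generated stub class implements both the Stub marker interface and the service's port interface. The following self-contained sketch mimics that arrangement with hypothetical stand-in classes; they mirror the generated names but are not the real JAX-RPC classes:

```java
public class PortCastDemo {
    // Hypothetical stand-ins for the JAX-RPC generated classes (NOT the real ones).
    interface Stub { }  // stands in for javax.xml.rpc.Stub
    interface AmazonSearchPort {
        String keywordSearchRequest(String keyword);
    }
    // The generated stub implements BOTH the Stub marker and the port interface,
    // which is what makes the cast sequence in the article legal.
    static class AmazonSearchPort_Stub implements Stub, AmazonSearchPort {
        public String keywordSearchRequest(String keyword) {
            return "results for " + keyword;  // canned response for the sketch
        }
    }
    static class AmazonSearchService_Impl {
        AmazonSearchPort getAmazonSearchPort() {
            return new AmazonSearchPort_Stub();
        }
    }

    // Same sequence as in the article: get the port, hold it as a Stub,
    // then cast it back to the port interface to invoke operations.
    static AmazonSearchPort lookupPort() {
        Stub stub = (Stub) new AmazonSearchService_Impl().getAmazonSearchPort();
        return (AmazonSearchPort) stub;
    }

    public static void main(String[] args) {
        System.out.println(lookupPort().keywordSearchRequest("EJB"));
        // prints "results for EJB"
    }
}
```

Holding the port as a Stub first is useful in real JAX-RPC code because Stub exposes _setProperty for configuring things such as the endpoint address.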
Set the parameter values. Set up the parameters for the Amazon Web Service methods that the client application will invoke. In this example, the client requests that the product mode books be selected by a particular keyword. AmazonProxy sets up the parameters by instantiating a new KeywordRequest object; the KeywordRequest class was generated from the WSDL description. KeywordRequest's constructor sets the appropriate values into the new object, using the values passed in when AmazonProxy constructs it. (In the same manner, the client can use other generated classes to search the catalog by a key phrase, browse by nodes, get shopping cart data, and so forth.)
KeywordRequest kwr = new KeywordRequest
  (type,"1","books","D3HW0PG66IPLAM","heavy",
    "D3HW0PG66IPLAM","");
When instantiating the object, you must be careful to correctly set the parameters that KeywordRequest (or any other Amazon Web Service request) expects. Here, AmazonProxy sets the variable type to the category type selected by the user. This is the keyword for the search. The number 1 refers to the page of matching book titles that will be returned. The third value, books, is the product mode for the search. The associate's identifier and/or developer token (which you were given when you registered with Amazon) is passed as the fourth and sixth parameters. Amazon allows two types of searches -- heavy and lite -- and this is indicated by the fifth parameter. The last parameter, left as an empty string here, lets you specify a sort option.
Invoke the search request. Invoke the Web Service's keywordSearchRequest method through the AmazonSearchPort stub reference, passing it the KeywordRequest object you just instantiated. This example invokes the search method, passing it the kwr object, through the asp object. The Amazon Web Service returns the information as a ProductInfo type. (ProductInfo.java is also a generated class.)
ProductInfo pinfo = asp.keywordSearchRequest(kwr);
Extract the data from the response. Extract the individual pieces of data that you are interested in from the returned object. Use the generated helper class Details.java to extract the returned data into an array. The Details.java file contains an array with an element for each piece of data from the Amazon catalog. This example retrieves the data into an array of Details objects.
Details[] details = pinfo.getDetails();
Once the information is in the Details array, set up a loop to extract the elements of interest for each book in the array. Keep in mind that the array potentially holds data on up to ten different books. (There may be data on fewer than ten books, if there were fewer than ten matches to the search parameters.)
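That extraction loop can be sketched as follows. The Details class here is a hypothetical miniature of the generated one; the real class exposes many more accessors (authors, prices, image URLs, and so on):

```java
public class DetailsLoopDemo {
    // Hypothetical miniature of the generated Details class.
    static class Details {
        private final String productName;
        private final String review;
        Details(String productName, String review) {
            this.productName = productName;
            this.review = review;
        }
        String getProductName() { return productName; }
        String getReview() { return review; }
    }

    // Loop over the (up to ten) returned Details entries, copying the fields
    // the GUI needs into a parallel array, as AmazonProxy does.
    static String[] extractTitles(Details[] details) {
        String[] books = new String[details.length];
        for (int i = 0; i < details.length; i++) {
            books[i] = details[i].getProductName();
        }
        return books;
    }

    public static void main(String[] args) {
        Details[] d = {
            new Details("Core J2EE Patterns", "A solid pattern catalog."),
            new Details("Effective Java", "Essential reading.")
        };
        System.out.println(extractTitles(d)[0]); // prints "Core J2EE Patterns"
    }
}
```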
Here is the portion of the AmazonProxy class that accesses the Web Service, and sends the request and receives the response:
package myamazonclient;
import myamazonclient.AmazonClientGenClient.*;
import javax.xml.rpc.Stub;
import java.util.Vector;
public class AmazonProxy {
  public AmazonProxy() {
  }
  public Vector getBookTypes() {
    Vector v = new Vector();
    v.addElement("Blue Prints");
    v.addElement("Web Services");
    v.addElement("Wireless");
    v.addElement("J2EE");
    v.addElement("Solaris");
    v.addElement("J2SE");
    v.addElement("EJB");
    v.addElement("JMS");
    return v;
  }
  public String[] getBookNames(String type) {
    String[] books = null;
    try {
      Stub stub = (Stub) (new
        AmazonSearchService_Impl().getAmazonSearchPort());
      AmazonSearchPort asp = (AmazonSearchPort) stub;
      KeywordRequest kwr = new
        KeywordRequest(type, "1", "books", "D3HW0PG66IPLAM",
          "heavy", "D3HW0PG66IPLAM", "");
      ProductInfo pinfo = asp.keywordSearchRequest(kwr);
      Details[] details = pinfo.getDetails();
      books = new String[details.length];
      imageURL = new String[details.length];
      bookReviews = new String[details.length];
      for (int i = 0; i < details.length; i++) {
        // extract the book name, image URL, and review from each Details entry
      }
    } catch (Exception e) {
      // handle or log the exception
    }
    return books; // null if the request failed
  }
  public String[] getImageURL() {
    return imageURL;
  }
  public String[] getBookReviews() {
    return bookReviews;
  }
  private String[] imageURL;
  private String[] bookReviews;
}

Compile, Build, and Execute the Client Application

With Sun ONE Studio, you can do the compilation, build, and execute steps separately, or you can have Sun ONE Studio do all the steps at once. Use Execute to compile, build, and execute the application in one step. To perform these steps separately, use Compile to compile the Java classes and Build to assemble and deploy the application.
Before executing this code, you should check your proxy setting if you are behind a firewall. To set the proxy, follow these steps:
  • Right click AmazonServices in the Explorer window, then select Properties.
  • Select the Execution tab and choose the External Execution option for the executor.
  • Click on the property's ellipsis (...) button and select External Process.
  • Click again on the property's ellipsis (...) and add
    -Dhttp.proxyHost=yourproxyhostname -Dhttp.proxyPort=yourproxyport
    at the beginning of the Arguments text box.

  • Click OK.
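As an aside, the same proxy settings can also be applied from code, by setting the standard http.proxyHost and http.proxyPort system properties before the first request is made. The host and port below are placeholders for your own proxy:

```java
public class ProxyConfigDemo {
    // Setting these JVM-wide system properties has the same effect as passing
    // -Dhttp.proxyHost and -Dhttp.proxyPort on the command line.
    static void configureProxy(String host, String port) {
        System.setProperty("http.proxyHost", host);
        System.setProperty("http.proxyPort", port);
    }

    public static void main(String[] args) {
        configureProxy("proxy.example.com", "8080"); // placeholder values
        System.out.println(System.getProperty("http.proxyHost"));
        // prints "proxy.example.com"
    }
}
```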
This example uses Execute. Right click AmazonServices in the Explorer window, then select Execute.

Figure 19: Compiling and Executing the Client Application
Sun ONE Studio compiles the five classes in the application. It then performs the necessary assembly and deployment tasks, starts the Tomcat server if necessary, and runs the client.

Conclusion

This article showcases the Sun ONE Studio 4 product from Sun Microsystems, Inc. It illustrates Sun ONE Studio's powerful and comprehensive functionality that allows for rapid application development, while at the same time shielding developers from the complexities of the Web Service infrastructure. Using an example of creating a Swing-based Web Service client with Sun ONE Studio, it shows how to set up and create the client, and illustrates how you can use the classes generated by Sun ONE Studio to write code that accesses a Web Service such as Amazon's. The article demonstrates how Sun ONE Studio simplifies many of the development and deployment tasks for you so that you can concentrate on writing the application code.
With the knowledge gained from this article as a foundation, you can use Sun ONE Studio to write production-quality clients that can be deployed on any J2EE-compliant Web server, such as the Sun ONE Application Server. Sun ONE Studio can handle most, if not all, of the packaging and deployment details for you, especially when the target platform is the Sun ONE Application Server. For deploying to other Web servers, you should refer to their specific product documentation.

Tuesday, February 15, 2011

How To Learn Amazon’s Elastic Block Store (EBS)

Amazon’s Elastic Block Store explained


Now that Amazon’s Elastic Block Store is live, I thought it’d be helpful to explain all the ins and outs as well as how to use it. The official information about EBS is on the AWS site; I’ve written about the significance of EBS before, and I’ll follow up with a post about some of the new use cases it enables.

The Basics

EBS starts out really simple: you create a volume from 1GB to 1TB in size, attach it to a device (like /dev/sdj) on an instance, format it, and off you go. Later you can detach it, let it sit for a while, and then reattach it to a different instance. You can also snapshot the volume to S3 at any time, and if you want to restore a snapshot you can create a fresh volume from it. Sounds simple, eh? It is, but the devil is in the details!
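With the EC2 API command-line tools, that lifecycle looks roughly like the following transcript (the volume, snapshot, and instance IDs here are invented for illustration):

```
$ ec2-create-volume -s 10 -z us-east-1a
VOLUME  vol-4d826724  10  us-east-1a  creating
$ ec2-attach-volume vol-4d826724 -i i-07612d6e -d /dev/sdj
ATTACHMENT  vol-4d826724  i-07612d6e  /dev/sdj  attaching
$ mkfs.xfs /dev/sdj          # on the instance
$ mount /dev/sdj /vol
$ umount /vol                # later: unmount, detach, snapshot
$ ec2-detach-volume vol-4d826724
$ ec2-create-snapshot vol-4d826724
SNAPSHOT  snap-6ff84a06  vol-4d826724  pending
```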

Amazon Elastic Block Store features

Reliability

EBS volumes have redundancy built-in, which means that they will not fail if an individual drive fails or some other single failure occurs. But they are not as redundant as S3 storage which replicates data into multiple availability zones: an EBS volume lives entirely in one availability zone. This means that making snapshot backups, which are stored in S3, is important for long-term data safeguarding.
I know that folks at Amazon have thought long and hard about how to characterize the reliability of EBS volumes, so here’s their explanation taken from the EC2 detail page:
Amazon EBS volumes are designed to be highly available and reliable. Amazon EBS volume data is replicated across multiple servers in an Availability Zone to prevent the loss of data from the failure of any single component. The durability of your volume depends both on the size of your volume and the percentage of the data that has changed since your last snapshot. As an example, volumes that operate with 20 GB or less of modified data since their most recent Amazon EBS snapshot can expect an annual failure rate (AFR) of between 0.1% – 0.5%, where failure refers to a complete loss of the volume. This compares with commodity hard disks that will typically fail with an AFR of around 4%, making EBS volumes 10 times more reliable than typical commodity disk drives.
From a practical point of view what this means is that you should expect the same type of reliability you get from a fully redundant RAID storage system. While it may be technically possible to increase the reliability by, for example, mirroring two EBS volumes in software on one instance, it is much more productive to rely on EBS directly. Focus your efforts on building a good snapshot strategy that ensures frequent and consistent snapshots, and build good scripts that allow you to recover from many types of failures using the snapshots and fresh instances and volumes.

Volume performance

Our performance observations are based on the pre-release EBS volumes, so some variation on the production systems should be expected. On the one hand, our pre-release tests were probably running on a small infrastructure with fewer users; on the other hand, many of those users were also running stress tests, so it’s really hard to tell how all this will carry over. Only time will tell.
EBS volumes are network attached disk storage and thus take a slice off the instance’s overall network bandwidth. The speed of light here is evidently 1 Gbps, which means that the peak sequential transfer rate is about 120 MBytes/sec. “Any number larger than that is an error in your math.” We see over 70MB/sec using sysbench on an m1.small instance, which is hot! Presumably we didn’t get much network contention from other small instances on the same host when running the benchmarks. For random access we’ve seen over 1000 I/O ops/sec, but those types of workloads are much more difficult to benchmark. The bottom line, though, is that performance exceeds what we’ve seen for filesystems striped across the four local drives of x-large instances.
With EBS it is possible to increase the I/O transaction rate further by mounting multiple EBS volumes on one instance and striping filesystems across them. For streaming performance this doesn’t seem worthwhile as the limit of the available instance network bandwidth is already reached with one volume, but it can increase the performance of random workloads as more heads can be seeking at a time.

Snapshot backups

Snapshot backups are simultaneously the most useful and the most difficult to understand feature of EBS. Let me try to explain. A snapshot of an EBS volume can be taken at any time; it causes a copy of the data in the volume to be written to S3, where it is stored redundantly in multiple availability zones (like all data in S3). The first peculiarity is that snapshots do not appear in your S3 buckets, so you can’t access them using the standard S3 API. You can only list snapshots using the EC2 API, and you restore a snapshot by creating a new volume from it. The second peculiarity is that snapshots are incremental: to create a subsequent snapshot, EBS only saves to S3 the disk blocks that have changed since previous snapshots.
How the incremental snapshots work conceptually is depicted in the diagram below. Each volume is divided up into blocks. When the first snapshot of a volume is taken all blocks of the volume that have ever been written are copied to S3, and then a snapshot table of contents is written to S3 that lists all these blocks. Now, when the second snapshot is taken of the same volume only the blocks that have changed since the first snapshot are copied to S3. The table of contents for the second snapshot is then written to S3 and lists all the blocks on S3 that belong to the snapshot. Some are shared with the first snapshot, some are new. The third snapshot is created similarly and can contain blocks copied to S3 for the first, second and third snapshots.

Illustration of EBS snapshots showing incremental storage of snapshot blocks in Amazon S3
There are two nice things about the incremental nature of the snapshots: it saves time and space. Taking subsequent snapshots can be very fast because only changed blocks need to be sent to S3, and it saves space because you’re only paying for the S3 storage of the incremental blocks. What is difficult to answer is how much space a snapshot uses, or, to put it differently, how much space would be saved if a snapshot were deleted. If you delete a snapshot, only the blocks referenced exclusively by that snapshot (i.e., by no other snapshot’s table of contents) are deleted.
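The reference counting can be sketched with a toy model (the block names and snapshot contents below are made up; this is not how EBS stores anything, just an illustration of why deleting a snapshot frees only its unshared blocks):

```shell
# Toy model: each snapshot's "table of contents" lists the S3 blocks it
# references. snap2 rewrote blocks b3 and b4, so it references new copies.
snap1="b1 b2 b3 b4"
snap2="b1 b2 b5 b6"
# Deleting snap1 frees only the blocks no other snapshot references:
freed=""
for b in $snap1; do
  case " $snap2 " in
    *" $b "*) ;;                 # shared with snap2: kept on S3
    *) freed="$freed$b " ;;      # referenced only by snap1: freed
  esac
done
echo "deleting snap1 would free: $freed"   # prints: b3 b4
```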
Something to be very careful about with snapshots is consistency. A snapshot is taken at a precise moment in time even though the blocks may trickle out to S3 over many minutes. But in most situations you will really want to control what’s on disk vs. what’s in-flight at the moment of the snapshot. This is particularly important when using a database. We recommend you freeze the database, freeze the file system, take the snapshot, then unfreeze everything. At the file system level we’ve been using xfs for all the large local drives and EBS volumes because it’s fast to format and supports freezing. Thus when taking a snapshot we perform an xfs freeze, take the snapshot, and unfreeze. When running mysql we also “flush all tables with read lock” to briefly halt writes. All this ensures that the snapshot doesn’t contain partial updates that need to be recovered when the snapshot is mounted. It’s like USB dongles: if you pull the dongle out while it’s being written to “your mileage may vary” when you plug it back into another machine…
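As a sketch, the sequence for a MySQL database on an xfs volume mounted at /vol might look like this (the volume ID is invented, and a real script must hold the MySQL read lock in an open session until the snapshot has been initiated; these one-shot commands only show the order of operations):

```
mysql -e "FLUSH TABLES WITH READ LOCK"    # keep this session open!
xfs_freeze -f /vol                        # freeze the filesystem
ec2-create-snapshot vol-4d826724          # initiate the snapshot
xfs_freeze -u /vol                        # unfreeze
mysql -e "UNLOCK TABLES"                  # release the lock (same session)
```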
Snapshot performance appears to be pretty much gated by the performance of S3, which is around 20MBytes/sec for a single stream. The three big bonuses here are that the snapshot is incremental, that the data is compressed, and that all this is performed in the background by EBS without much affecting the instance on which the volume is mounted. Obviously the data needs to come off the disks, so some contention is to be expected, but compared to having to do the transfer from disk through the instance to S3 it is like night and day.

Availability Zones

EBS volumes can only be mounted on an instance in the same availability zone, which makes sense when you think of availability zones as being equivalent to datacenters. It would probably be technically possible to mount volumes across zones, but from a network latency and bandwidth point of view it doesn’t make much sense.
The way you get a volume’s data from one zone into another is through a snapshot: you snapshot one volume and then immediately create a new volume in a different zone from the snapshot. We have really gotten away from the idea of unmounting a volume from one instance and remounting it on the next: we always go through a snapshot, for a variety of reasons. The way we think and operate is as follows:
  • You create a volume, mount it on an instance, format it, and write some data to it.
  • Then you periodically snapshot the volume for backup purposes.
  • If you don’t need the instance anymore, you may terminate it; after unmounting the volume, you always take a final snapshot. If the instance crashes instead of terminating properly, you also take a final snapshot of the volume as it was left.
  • When you launch a new instance on which you want the same data, you create a fresh volume from your snapshot of choice. This may be the last snapshot, but it could also be a prior one if it turns out that the last one is corrupt (e.g. in the case of an instance crash or of some software failure).
By creating a volume from the snapshot you achieve two things: one, you are independent of the availability zone of the original volume, and second, you have a repeatable process in case mounting the volume fails, which can easily happen especially if the unmount wasn’t clean.
Now, of course, in some situations you can directly remount the original volume instead of creating a new volume from a snapshot, as an optimization. This applies if the new instance is in the same availability zone, the volume corresponds to the snapshot you’d like to mount, and the volume is guaranteed not to have been modified since (e.g., by a failed prior mount). It’s best to think of the volume as a high-speed cache for the snapshot.
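Concretely, moving data from a volume in us-east-1a onto a fresh volume in us-east-1b goes through a snapshot like this with the EC2 API tools (all IDs invented for illustration):

```
$ ec2-create-snapshot vol-4d826724
SNAPSHOT  snap-6ff84a06  vol-4d826724  pending
$ ec2-create-volume --snapshot snap-6ff84a06 -z us-east-1b
VOLUME  vol-9a3b12f0  10  snap-6ff84a06  us-east-1b  creating
$ ec2-attach-volume vol-9a3b12f0 -i i-3b783452 -d /dev/sdj
ATTACHMENT  vol-9a3b12f0  i-3b783452  /dev/sdj  attaching
```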

Price

Estimating the costs of EBS is really quite tricky. The easy part is the storage cost of $0.10 per GB per month: once you create a volume of a certain size, you’ll see the charge. The $0.10 per million I/O transactions is much harder to estimate. To get a rough estimate you can look at /proc/diskstats on your servers, which will include something like this:
   8  160 sdk 9847 77 311900 56570 1912664 3312437 160672914 211993229 0 1597261 212049797
   8  176 sdl 333 86 4561 1538 895 51 19002 20131 0 4043 21669
which is just a pile of numbers. Following the explanation of the columns, sum the first number after the device name (reads completed) and the fifth (writes completed) to arrive at the number of I/O transactions (9847+1912664 for /dev/sdk above). This is not 100% accurate but should be close (I believe subtracting the 2nd and 6th numbers gets you closer yet, but I prefer an over-estimate). As a point of reference, our main database server is pretty busy and chugs along at an average of 17 transactions per second, which totals around $4.40 per month. But our monitoring servers, prior to some recent optimizations, hammered the disks as fast as they would go at over 1000 random writes per second sustained 24×7. That would end up costing over $250 per month! As far as I can tell, for most situations the EBS transaction costs will be in the noise, but you can make them expensive if you’re not careful.
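As a sketch, the arithmetic looks like this in the shell, using the sdk line above and the $0.10-per-million price:

```shell
line='8  160 sdk 9847 77 311900 56570 1912664 3312437 160672914 211993229 0 1597261 212049797'
# field 4 = reads completed, field 8 = writes completed
# (fields 1-3 are the device major/minor numbers and name)
ios=$(echo "$line" | awk '{ print $4 + $8 }')
echo "I/O requests so far: $ios"          # 1922511
# a sustained 17 requests/sec over a 30-day month at $0.10 per million:
cost=$(awk 'BEGIN { printf "%.2f", 17 * 86400 * 30 / 1000000 * 0.10 }')
echo "estimated cost: \$$cost per month"  # about $4.41
```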
The cost of snapshots is harder to estimate due to their incremental nature. First, only the blocks written are captured on S3 (i.e., blocks on the volume that have never been written are not stored on S3). Second, it’s tricky to talk about the cost of an individual snapshot because of the incremental sharing of blocks between snapshots.

Summing it up

All in all it’s amazing how simple EBS is, yet how complex a universe of options it opens. Between snapshots, availability zones, pricing, and performance there are many options to consider and a lot of automation to provide. Of course at RightScale we’re busy working out a lot of these for you, but beyond that it is not an overstatement to say that Amazon’s Elastic Block Store brings cloud computing to a whole new level. I’ll repeat what I’ve said before: if you’re using traditional forms of hosting it’s gonna get pretty darn hard for you to keep up with the cloud, and you’ve probably already fallen behind at this point!

Friday, February 11, 2011

How To Use Elastic IP to Identify Internal Instances on Amazon EC2


Elastic IP

Amazon EC2 supports Elastic IP Addresses to implement the effect of having a static IP address for public servers running on EC2. You can point the Elastic IP at any of your EC2 instances, changing the active instance at any time, without changing the IP address seen by the public outside of EC2.
This is a valuable feature for things like web and email servers, especially if you need to replace a failing server or upgrade or downgrade the server’s hardware capabilities. But read on for an insider’s secret way to use Elastic IP addresses for non-public servers.

Internal Servers

Not all servers should be publicly accessible. For example, you may have an internal EC2 instance which hosts your database server accessed by other application instances inside EC2. You want to architect your installation so that you can replace the database server (instance failure, resizing, etc) but you want to make it easy to get all your application servers to start using the new instance.
There are a number of design approaches which people have used to accomplish this, including:
  1. Hard code the internal IP address into the applications and modify it whenever the internal server changes to a new instance (ugh and ouch).
  2. Run your own DNS server (or use an external DNS service) and change the IP address of the internal hostname to the new internal IP address (extra work and potentially extra failover time waiting for DNS propagation).
  3. Store the internal IP address in something like SimpleDB and change it when you want to point to a new EC2 instance (extra work and requires extra coding for clients to keep checking the SimpleDB mapping).
The following approach is the one I use and is the topic of the rest of this article:
  1. Assign an Elastic IP to the internal instance and use the Elastic IP’s external DNS name. To switch servers, simply re-assign the Elastic IP to a new EC2 instance.
This last option uses a little-known feature of the Elastic IP Address system as implemented by Amazon EC2:
When an EC2 instance queries the external DNS name of an Elastic IP, the EC2 DNS server returns the internal IP address of the instance to which the Elastic IP address is currently assigned.
You may need to read that a couple times to grasp the implications as it is non-obvious that an “external” name will return an “internal” address.

Setting Up

You can create an Elastic IP address in a number of ways, including the EC2 Console or the EC2 API command line tools. For example:
$ ec2-allocate-address 
ADDRESS 75.101.137.243
The address returned at this point is the external Elastic IP address. You don’t want to use this external IP address directly for internal server access since you would be charged for network traffic.
The next step is to assign the Elastic IP address to an EC2 instance (which is going to be your internal server):
$ ec2-associate-address -i i-07612d6e 75.101.137.243
ADDRESS 75.101.137.243  i-07612d6e
Once the Elastic IP has been assigned to an instance, you can describe that instance to find the external DNS name (which will include the external Elastic IP address in it):
$ ec2-describe-instances i-07612d6e | egrep ^INSTANCE | cut -f4
ec2-75-101-137-243.compute-1.amazonaws.com
This is the permanent external DNS name for that Elastic IP address no matter how many times you change the instance to which it is assigned. If you query this DNS name from outside of EC2, it will resolve to the external IP address as shown above:
$ dig +short ec2-75-101-137-243.compute-1.amazonaws.com
75.101.137.243
However, if you query this DNS name from inside an EC2 instance, it will resolve to the internal IP address for the instance to which it is currently assigned:
$ dig +short ec2-75-101-137-243.compute-1.amazonaws.com
10.254.171.132
You can now use this external DNS name in your applications on EC2 instances to communicate with the server over the internal EC2 network and you won’t be charged for the network traffic as long as you’re in the same EC2 availability zone.

Changing Servers

If you ever need to move the service to a new EC2 instance, simply reassign the Elastic IP address to the new EC2 instance:
$ ec2-associate-address -i i-3b783452 75.101.137.243
ADDRESS 75.101.137.243  i-3b783452
and the original external DNS name will immediately resolve to the internal IP address of the new instance:
$ dig +short ec2-75-101-137-243.compute-1.amazonaws.com
10.254.59.50
Existing connections will fail, but new connections to the external DNS name will automatically be made to the new instance.

Using CNAME

It is not entirely intuitive to have your application use names like ec2-75-101-137-243.compute-1.amazonaws.com but you can make it clearer by creating a permanent entry in your DNS which points to that name with a CNAME alias. For example, using bind:
db.example.com.    CNAME    ec2-75-101-137-243.compute-1.amazonaws.com.
You can then use db.example.com to refer to the server internally and still not have to update your DNS when you change instances.

Thursday, February 10, 2011

How To Install and Configure an FTP Server in an Amazon EC2 Instance



For many users, running an FTP server in an Amazon EC2 instance is a headache at first; you need to experiment before being able to transfer data. The main problems are the Ingress firewall in the Amazon environment and NAT traversal.
Here I’m using the vsftpd server, which is one of the most popular and easiest to configure. The instance is running from a base Fedora 4 AMI, but the setup should be nearly identical on other Red Hat based distros.
Install vsftpd FTP server, if not installed earlier:
# yum install vsftpd
It’s up to you whether to use active or passive FTP. The problem with active mode is that your computer sends a request to the server’s port 21, and then all of a sudden the server attempts to initiate a connection back to your computer from port 20. Since communication on port 21 does not imply communication on port 20, it appears as if some unauthorized host has attempted to initiate a new connection with your computer. Kind of sounds like a hack, right? Your firewall may think so too (or your NAT router may have no idea which computer to route the request to). Active mode is no longer the default transfer method in many FTP clients these days.
On the other hand, since the Ingress firewall is running in AWS, from the firewall’s standpoint the following communication channels need to be opened to support passive mode FTP:
FTP server’s port 21 from anywhere (Client initiates connection).
FTP server’s port 21 to ports > 1023 (Server responds to client’s control port).
FTP server’s ports > 1023 from anywhere (Client initiates data connection to random port specified by server).
FTP server’s ports > 1023 to remote ports > 1023 (Server sends ACKs (and data) to client’s data port).
That third part is the problem: the FTP server listens on a random port and hands it back to the client, so the client initiates a connection to a random server port, which you must allow.
Opening up all ports > 1023 isn’t good for security. You could allow the ports through the distributed firewall and then set up your own filtering inside your instance, but it is better to open a fixed range of ports (such as 1024 to 1048) and configure your FTP server to use only those ports.
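If you do open the range to the world at the EC2 level, you can still restrict it inside the instance with iptables; for example (203.0.113.0/24 is a placeholder for your clients’ network):

```
# iptables -A INPUT -p tcp --dport 1024:1048 -s 203.0.113.0/24 -j ACCEPT
# iptables -A INPUT -p tcp --dport 1024:1048 -j DROP
```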
Check whether the required ports are open in your EC2 security group (if you haven’t created a new security group, it will be ‘default’):
# ec2-describe-group
This command prints all ports that are currently open. If you don’t see ports 20, 21, and 1024-1048, you need to open them. But if you don’t find the command itself, i.e.
# ec2-describe-group
-bash: ec2-describe-group: command not found

You need to install the EC2 command line tools; see the AWS developer site for the download and the setup/configuration instructions.
Open the ports now:
# ec2-authorize default -p 20-21
# ec2-authorize default -p 1024-1048

Here, ‘default’ is the name of the security group. You can also open the ports for specific IPs only. For ease of use, you may want to install ElasticFox, a Firefox extension for managing EC2.
At this moment you can start your FTP server, but if you try to connect to it, the transfer will fail. Checking the client logs, you should find something like:
Status: Connected
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/" is current directory.
Command: TYPE A
Response: 200 Type set to A
Command: PASV
Response: 227 Entering Passive Mode (216,182,238,73,129,75).
Command: LIST
Error: Transfer channel can't be opened. Reason: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Error: Could not retrieve directory listing

Time to configure the vsftpd.conf file:
# vi /etc/vsftpd/vsftpd.conf
---Add the following lines at the end of the file---
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=Public IP of your instance

Put the public IP of your EC2 instance there, then save the file. Now restart the server:
# /etc/init.d/vsftpd restart
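Rather than typing the address by hand, you can script these lines. On a real instance the public IP is available from the instance metadata service; the fallback value below is a documentation placeholder so this sketch runs anywhere:

```shell
# On EC2: PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
PUBLIC_IP=${PUBLIC_IP:-203.0.113.10}    # placeholder when not on EC2
conf="pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=$PUBLIC_IP"
echo "$conf"                            # append to /etc/vsftpd/vsftpd.conf
```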