RESTful or RESTless

At first glance, implementing REST (REpresentational State Transfer) looks simple and easy. When I created my first RESTful web service, I was like, wow! You can do so much with so little effort. The jMaki framework makes REST even more powerful. But what about the big stuff? Creating enterprise web services demands more than just simplicity.

In the enterprise world, everything is defined in terms of business processes, which often require more than one resource to complete successfully. As soon as more than one resource is involved, orchestration and choreography enter the picture and make business process execution more complex.

BPEL comes to the rescue: it is well ahead of the older WSCI and BPML in adopting new standards and retiring old ones. However, BPEL depends heavily on the traditional way of implementing web services: WSDL and SOAP. So there is no BPEL for REST; does that mean no choreography or orchestration for REST?

At present there are several projects working in this direction, such as Apache ODE and BPEL Light, but we are still at least a year away from a complete enterprise REST implementation.


A Cross Platform VMI Implementation for Multiple Retailers and Vendors using BPEL

Report | Presentation

The traditional supply chain often includes more than one company in a series of supplier-customer relationships. It is often defined as the series of links and shared processes that involve all activities from the acquisition of raw materials to the delivery of finished goods to the end consumer. Raw materials enter into a manufacturing organization via a supply system and are transformed into finished goods. The finished goods are then supplied to customers through a distribution system. Generally several companies are linked together in this process, each adding value to the product as it moves through the supply chain.
Effective supply chain management is the act of optimizing all activities throughout the supply chain, and it is the key to a competitive business advantage. Consequently, an organization’s ability to gain a competitive advantage is heavily dependent on coordination and collaboration with its supply chain partners. Yet, even today, a typical supply chain is too often a sequence of disconnected activities, both within and outside of the organization. To remedy this situation, it is important that an organization and its suppliers, manufacturers, customers, and other third-party providers engage in joint strategic planning and operational execution with an eye to minimizing cost and maximizing value across the entire supply chain.
Advances in information system technology have had a huge impact on the evolution of supply chain management. As a result of such technological advances, supply chain partners can now work in tight coordination to optimize the chain-wide performance, and the realized return may be shared among the partners. The underlying enabler of supply chain integration is the fast and timely exchange of information between supply chain partners. This information may take the form of transactional documents such as purchase orders, ship notices, and invoices, as well as planning-related documents like demand forecasts, production plans and inventory reports. It is this sharing and coordination of information and planning activities that can enable cost reduction, value enhancement, and the execution of advanced collaborative planning activities.
In the past, the cost and complexity of executing electronic data interchange (EDI) transactions made this type of information exchange suitable for only the largest corporations. The ubiquity of Internet-based communication tools now makes it possible for organizations of all sizes to exchange information. However, challenges still exist and being able to successfully deal with all the new technologies is one of these challenges. The good news is that this data exchange challenge can be overcome; and the opportunities become endless once companies are able to exchange information efficiently with their suppliers, customers, and partners.
Vendor Managed Inventory (VMI) is a supply chain practice where the inventory is monitored, planned and managed by the vendor on behalf of the consuming organization, based on the expected demand and on previously agreed minimum and maximum inventory levels. Traditionally, success in supply chain management derives from understanding and managing the tradeoff between inventory cost and the service level. Types of information that can be shared between supply chain partners in a VMI partnership include inventory levels and position, sales data and forecasts, order status, production and delivery schedules and capacity, and performance metrics. Sharing information yields many benefits to supply chain members.
A service-oriented architecture (SOA) is a style of design that guides all aspects of creating and using business services throughout their lifecycle (from conception to retirement), as well as defining and provisioning the IT infrastructure that allows different applications to exchange data and participate in business processes regardless of the operating systems or programming languages underlying those applications. An important goal of an SOA is to help align IT capabilities with business goals. Another goal of an SOA is to provide an agile technical infrastructure that can be quickly and easily reconfigured as business requirements change. The key organizing concept of an SOA is a service. The processes, principles, and methods defined by the SOA are oriented toward services (sometimes called service-oriented development). The development tools selected by an SOA are oriented toward creating and deploying services.
UML is used for designing the SOA; web services are used to simulate the buyer and supplier scenario; and BPEL (Business Process Execution Language) is used to implement the business logic for the multiple-supplier, multiple-retailer case. WAPT 3.0 (Web Application Load, Stress and Performance Testing) is used for performance testing; it can simulate multiple clients invoking the web services and measure their performance.
Composite Web Services
A composite Web Service (WS) provides higher-level functionality by utilizing one or more (composite or non-composite) individual WS, which it invokes in a well-defined order. In general, each WS defines its own schema for its input and output messages based on the data model of its underlying implementation. Consequently, a composite WS must be able to interpret the message definition schema of every WS it invokes and transform those messages into either its own messages or the messages of other WS it subsequently invokes, enabling frictionless data flow between them. Since the message definitions of each WS generally follow a different data model based on a specific underlying ontology, messages require transformation in order for the composite WS to be semantically meaningful.
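As a concrete illustration, the field renaming a composite WS performs between two partners' message schemas can be sketched as below. The field names and the mapping are hypothetical, not taken from any actual service definition.

```python
# Sketch: schema mapping inside a composite web service. The composite
# service receives a message in one partner's schema and renames the
# fields to the other partner's schema before forwarding it.

RETAILER_TO_SUPPLIER = {
    "itemCode": "sku",
    "qtyOnHand": "stock_level",
    "reorderQty": "reorder_point",
}

def transform(message: dict, mapping: dict) -> dict:
    """Rename fields of an incoming message to the target schema."""
    return {mapping[k]: v for k, v in message.items() if k in mapping}

retailer_msg = {"itemCode": "A-100", "qtyOnHand": 42, "reorderQty": 50}
supplier_msg = transform(retailer_msg, RETAILER_TO_SUPPLIER)
print(supplier_msg)
```

In a real deployment this mapping would be expressed as an XSLT or BPEL assign step over the WSDL message types rather than a Python dictionary.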
A Laboratory-Scale Implementation
The following figure shows the complete structure of the database tables required on both sides. To simulate customers on the retailer side, we decrease the StockLevel in table R1 at random; this information is made available to the supplier through a web service, which the supplier accesses using a client. The supplier updates table S1 at regular intervals, and when the StockLevel becomes less than or equal to the ReorderLevel (shown in table S4), the supplier sends consignment stock to the retailer and records it in his consignment stock table S2. When the retailer receives this information, he stores it in his consignment stock table R2, and when the StockLevel in table R1 reaches zero, the retailer starts using the new consignment stock and changes its status in table R2 from unused to using. This status is made available to the supplier through a web service, and the supplier updates the status in table S2. Every time the status changes from unused to using, the supplier generates an invoice, which the retailer receives and stores in invoice table R3. When the retailer pays, the supplier receives an acknowledgement.
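The retailer-side simulation loop described above can be sketched roughly as follows. The table names in the comments refer to the figure, but the quantities, the reorder level, and the single-item scope are illustrative assumptions, not values from the report.

```python
import random

# Sketch of the laboratory-scale VMI loop: random customer demand
# decreases the retailer's stock level, and the supplier ships
# consignment stock once the level falls to the reorder level.

def simulate(stock_level=100, reorder_level=20, consignment_qty=80, seed=1):
    rng = random.Random(seed)
    consignment = []  # retailer's consignment stock table (R2 analogue)
    while stock_level > 0:
        stock_level -= rng.randint(1, 5)   # random customer demand (R1 analogue)
        stock_level = max(stock_level, 0)
        # Supplier polls the stock level via a web service and checks
        # it against the ReorderLevel (S4 analogue).
        if stock_level <= reorder_level and not consignment:
            consignment.append({"qty": consignment_qty, "status": "unused"})
    # Stock exhausted: the retailer starts using the consignment stock,
    # which triggers invoice generation on the supplier side.
    if consignment:
        consignment[0]["status"] = "using"
    return consignment

print(simulate())
```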

Towards implementation of multiple retailer and multiple vendor case:
The new architecture for the multiple-retailer, multiple-supplier case is given below. Compared to the previous architecture adopted for the single-supplier, single-retailer case, it consists of three new components:
BPEL: Used to implement the business logic for the web services.
UDDI: Used as the web service registry.
WS-Security: Used to secure the web services.

Choreography and orchestration are two ways of combining web services and defining the order in which they execute. In orchestration, which is usually used for private business processes, a central process takes control of the involved web services and coordinates the execution of their operations. In choreography there is no central coordinator; each participating service knows when to execute its operations and with whom to interact.

BPEL (Business Process Execution Language)
BPEL builds on the foundation of XML and Web services; it is an XML-based language that supports the Web services technology stack, including SOAP, WSDL and UDDI. BPEL is mainly used for orchestration and choreography among web services. Creating a composite web service requires a BPEL engine, which interacts with web services through three core activities: <receive>, <reply>, and <invoke>.
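To show only the control flow that these three activities realize (a real BPEL process is written in XML and deployed to an engine, not written in Python), a minimal sketch:

```python
# The three BPEL activities mirrored as plain Python, purely to show
# the control flow a BPEL engine realizes. The partner service names
# and payloads are hypothetical stand-ins for WSDL partner links.

class Process:
    def __init__(self, partner_services):
        self.partners = partner_services  # name -> callable

    def run(self, request):
        # <receive>: the process is instantiated by an incoming message
        order = request
        # <invoke>: call a partner web service and wait for its response
        stock = self.partners["StockInfo"](order["item"])
        # <reply>: send the result back to the original caller
        return {"item": order["item"], "stock": stock}

proc = Process({"StockInfo": lambda item: 42})
print(proc.run({"item": "A-100"}))
```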
Information flow:
All three clients continuously check for changes at their respective web services. When the client CSInfoClient requests consignment stock information from the composite web service CWS0, CWS0 invokes the StockInfo() web service to get the current stock information. If the stock level is less than or equal to the reorder point, CWS0 invokes the CSInfo() web service, gets the consignment information, and sends it back to CSInfoClient. Similarly, InvoiceInfoClient sends a request to the composite web service CWS1 to get the invoice information. CWS1 invokes the CSStatus() web service (which keeps track of when consignment stock usage begins), and if the quantity used equals the consignment stock quantity, then it invokes the InvoiceGen() web service (which generates the invoice), gets the invoice information, and sends the data to InvoiceInfoClient.
After getting the invoice information, the retailer updates the invoice acknowledgement database. The client InvAckClient receives the invoice acknowledgement by sending a request to, and receiving a response from, the InvAck() web service.
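The CWS0 flow just described (stock check, then conditional consignment lookup) can be sketched as follows. The stand-in functions, their return values, and the reorder point are hypothetical; in the actual system these are web service invocations orchestrated by BPEL.

```python
# Hypothetical sketch of CWS0's composite logic: check the stock level
# and return consignment information only when the reorder point has
# been reached.

def stock_info():        # stand-in for the StockInfo() web service
    return 15

def cs_info():           # stand-in for the CSInfo() web service
    return {"consignment_qty": 80}

REORDER_POINT = 20

def cws0():
    """Composite service: orchestrates StockInfo() and CSInfo()."""
    level = stock_info()
    if level <= REORDER_POINT:
        return cs_info()   # reorder point reached: return consignment info
    return None            # nothing to report yet

print(cws0())
```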

Performance Testing of the Proposed Architecture
Tests were performed on two web services hosted on the retailer side. The tool used for this purpose was WAPT 3.0, which simulated a number of clients accessing these services. The following are the results obtained from stress testing the two web services, CSAck and TestInfo.

Performance of CSAckService

Performance of TestInfoService

The figures above show that as the number of clients increases, the web transaction time increases. Transaction time also depends on the size of the data received by a client: the CSAck service returns less data than the TestInfo service, so its average web transaction time is smaller, although it too rises as the number of clients grows. Both web services could handle a maximum of 176 simultaneous clients; beyond that, the web server shut down.

Enterprise Integration using Web Services: A Vendor Managed Inventory (VMI) Implementation

Report | Presentation

In this project I have considered one supplier and one retailer, acting as two different enterprises. Both the supplier and the retailer have their own information systems within their organizations, and these systems are hosted on two different software platforms: we assume the retailer uses the .NET platform while the supplier uses the Java EE platform. The main objective is to provide a method by which they can share information across platforms. The project thus focuses on the interoperability of two existing architectures, .NET and Java EE; we are not only implementing a cross-platform technology but also enabling communication between two widely used programming frameworks. The following figure shows the web services architecture adopted for the VMI implementation.

At present, the main frameworks for writing web services are Java EE and .NET. WSDL is used for the service description, and SOAP is used as the communication protocol; other protocols exist, but they do not provide the flexibility in data transfer that SOAP does. HTTP is used as the transport protocol, which also allows communication through a browser.

Retailer Side Development:
As the sequence diagram in Section 3.1 shows, the retailer needs to send data three times and receive data from the supplier twice in a VMI system. The web services implementation therefore needs three web services and two clients on the retailer side. The three web services send the data on the "stock level", the "acknowledgement on consignment stock" and the "confirmation about invoice". The two clients receive the "information on consignment stock" and the "invoice information" from the supplier.

Supplier Side Development:
From the sequence diagram we can also see that the supplier needs to receive data from the retailer three times and send data to the retailer twice in a VMI system. Two web services and three clients are therefore required on the supplier side. The two web services send the "information on consignment stock" and the "invoice information", and the three clients receive the data on the "stock level", the "acknowledgement on consignment stock" and the "confirmation about invoice".

Information flow over the network

This web services implementation of VMI was carried out successfully on the two different platforms. Both the supplier and the retailer were able to share information without any difficulty. The client interfaces of both sides are shown below:

Retailer side client to consume supplier’s web services

Supplier side client window for consuming retailer’s web services

Application of HCI Techniques for Developing a Web-based User Interface using AJAX


The main technologies which were adopted for this project were HCI and AJAX.
HCI stands for Human-Computer Interaction; it is also referred to as Computer-Human Interaction (CHI) and, loosely, as software ergonomics. HCI mainly focuses on user-centered design, which is achieved when "people need not have to change the way that they use a system in order to fit in with it. Instead, the system should be designed to match their requirements." HCI follows these principles:

1. Simple and natural dialogue
2. Speak the users’ language
3. Minimize user’s memory load
4. Be consistent
5. Provide feedback
6. Provide clearly marked exits
7. Provide shortcuts
8. Minimize the user's slips and errors
9. Provide help

AJAX stands for Asynchronous JavaScript And XML. AJAX is a web development technique for creating interactive web applications. The intent is to make web pages feel more responsive by exchanging small amounts of data with the server, so that the entire page does not have to be reloaded each time the user makes a change. This increases the page's interactivity, speed, and usability, which are the main factors in making a design user-centered. AJAX is not a single technology but a technique that combines a group of technologies. Its main component is the "XMLHttpRequest" object in JavaScript, used for exchanging data asynchronously as well as synchronously. AJAX also helps keep data up to date in near real time: any change made on the server side is reflected on the client side almost immediately.

Other Technologies
Other technologies which were used for this project were XHTML, XML, JavaScript and CSS.

Development of the User-Interface
The development of the user interface can be divided into three main parts:
1. Dashboard design
2. Sidebar and Main display panel design
3. Header design

The main page of the user interface is shown in the figure; this is the first page after user login. The interface is designed so that a user does not need more than three clicks to navigate to any level. The dashboard is loaded synchronously when the whole page loads. The sidebar and header areas are static, since the user needs them at all times, while the dashboard area acts as a container for other pages and data; new data is loaded into the dashboard area asynchronously.
Any window can be maximized by double-clicking on it or by clicking the link provided in the dashboard module named Dashboard. The maximized window loads on top of the interface, saving the user time by avoiding unnecessary page loads, and shows the content of the smaller window in detail.

The following figure shows the area where the new content will load after clicking "Schedule New Scan". In the next figure we can see the content that was requested by clicking the "Schedule New Scan" button.

This whole loading process takes very little time, and the user stays engaged with the interface. Users also find the application easy to work with when all the required information is available quickly, in one place, and with a minimal number of changes to the user interface. This reduces user idle time and enhances the continuity of the interaction between the user and the interface. This is also the deepest level of navigation required in the application; if further data is needed, it loads on top of the interface, as shown in the next figure. Clicking the "[Click here to select]" link loads a small window without modifying the present interface.

Besides designing a user-centered interface, some other important parameters were also taken care of:

  • The web application is cross-browser: it runs in any browser without any change to the basic structure of the application, which helps users run it on different platforms.
  • The application works at any screen resolution. The problem of resizing when the screen resolution changes is also handled, and the application shows no inconsistency at various resolutions.

A Semantic Web based Framework for Knowledge Management System


The following figure shows the proposed framework for a knowledge management system on the Semantic Web. It reflects the variety of knowledge transformations in this distributed environment: knowledge can be collected from various sources in different formats, stored in a common representation formalism, processed to compute interdependencies between knowledge items (for example, the relationship between bird and kiwi) or to resolve conflicts (for example: a kiwi is a bird; birds can fly; a kiwi cannot fly), shared and searched, and finally used for problem solving. The approach therefore comprises the following processes:

  • Knowledge Capturing
  • Knowledge Representation
  • Knowledge Processing
  • Knowledge Sharing
  • Using of Knowledge

Knowledge Capturing: We identify four types of knowledge sources that can be treated in the knowledge capturing phase: (a) expert knowledge, (b) legacy (rule-based) systems, (c) metadata repositories and (d) documents. DSpace can be used for knowledge capturing; it is an open-source project developed jointly by HP Labs and MIT, and it can index and crawl the captured metadata. Because of its high flexibility, DSpace can be further modified to capture expert knowledge (via an editor) as well as to convert legacy systems. This information is then converted to RDF rules using a converter.
Knowledge Repository: The knowledge repository is a relational database organized to enable efficient storage of and access to RDF metadata; it can be seen as an RDF repository.
Knowledge Processing: The knowledge processing component enables efficient manipulation of the stored knowledge, especially graph-based processing for knowledge represented in the form of rules, e.g. deriving a dependency graph or checking consistency.
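A toy consistency check in the spirit of the kiwi example mentioned earlier can be sketched as follows. The rule representation here is invented for illustration and is far simpler than an RDF rule base: a simple inheritance reasoner derives "kiwi can fly" from the class hierarchy and flags the conflict with the explicit exception.

```python
# Rules from the example: a kiwi is a bird; birds can fly; a kiwi
# cannot fly. The reasoner walks the is-a chain to collect inherited
# abilities, then intersects them with the explicit exceptions.

is_a = {"kiwi": "bird"}
can = {"bird": {"fly"}}
cannot = {"kiwi": {"fly"}}

def derived_abilities(thing):
    """Collect abilities inherited along the is-a chain."""
    abilities = set()
    while thing is not None:
        abilities |= can.get(thing, set())
        thing = is_a.get(thing)
    return abilities

def conflicts(thing):
    """Derived abilities that contradict an explicit exception."""
    return derived_abilities(thing) & cannot.get(thing, set())

print(conflicts("kiwi"))
```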
Knowledge Sharing: Knowledge sharing is realized by searching for rules that satisfy the query conditions. In the RDF repository, rules are represented as reified RDF statements, and since in RDF any statement is considered an assertion, we can view an RDF repository as a set of ground assertions of the form (subject, predicate, object). Rules are also related to a domain ontology, which contains domain axioms used for deriving new assertions. The search is therefore realized as an inference process.
Using of Knowledge: The main advantage of this approach is the use of conditional statements for the semantic annotation of knowledge sources. In this way we put the statements used in an annotation into the context of one another, which leads to efficient searching for knowledge. Moreover, annotating knowledge resources with Precondition-Action statements enables semantic hyperlinking of any two resources that satisfy the condition that the Precondition part of one annotation subsumes the Action part of the annotation of another resource. Querying for a problem can then result in a composition of documents that together cover the problem's solution. This is a very important process in knowledge management and e-learning search.
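The Precondition-Action chaining idea can be sketched as a small search over annotated resources: a document becomes reachable once the actions of earlier documents satisfy its precondition, so a query is answered by a composed sequence of documents. The documents and their annotations below are invented for illustration.

```python
# Each resource is annotated with a Precondition (what must already
# hold) and an Action (what it establishes). Chaining links resources
# whose Actions satisfy the Preconditions of later ones.

resources = {
    "doc1": {"pre": set(),            "act": {"install_db"}},
    "doc2": {"pre": {"install_db"},   "act": {"load_schema"}},
    "doc3": {"pre": {"load_schema"},  "act": {"run_reports"}},
}

def chain(goal):
    """Compose resources whose annotations chain toward the goal."""
    path, provided = [], set()
    while True:
        step = next((name for name, r in resources.items()
                     if name not in path and r["pre"] <= provided), None)
        if step is None:
            return path            # goal unreachable with these resources
        path.append(step)
        provided |= resources[step]["act"]
        if goal in provided:
            return path

print(chain("run_reports"))
```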