Hi guys! I own a domain now, and my blog can now be found at http://www.vikasing.com
Thanks for visiting!
Do computers understand the Web?
Until 2001 the answer to the above question was a big ‘NO’. The Semantic Web is an attempt by Tim Berners-Lee (the father of the Web) to turn that big ‘NO’ into a big ‘YES’.
The Semantic Web means a highly interconnected network of data that can be easily accessed and understood by any computer or handheld device. To be frank, the Semantic Web is not separate from the World Wide Web; it is just an enhancement that gives the Web far greater utility. It comes to life when people immersed in a certain field or vocation, whether it be genetic research or hip-hop music, agree on common schemes for representing the information they care about. As more groups develop these taxonomies, Semantic Web tools allow them to link their schemes and translate their terms, gradually expanding the number of people and communities whose Web software can understand one another automatically.
Let’s try to understand this rather dry definition in a practical manner. Suppose you have to travel immediately from Bangalore to New Delhi to attend an important business meeting at 5:00 PM. You log in to your Semantic Web agent (an application!) on your handheld device (PDA) and type “book the cheapest flight from Bangalore to New Delhi leaving between 11:30 AM and 12:30 PM”. The agent will search for flights and, if it finds one, fetch your pre-stored credit card number and book the flight. The job of the Semantic Web is to convert the instruction into a machine-readable format and look for the information across the Web.
Figure 1 shows the flow in which the Semantic Web agent performs the task. The main instruction (I) is broken down into three smaller instructions (I-1, I-2, I-3) which are easier for a computer to understand. Still, you may ask how a computer can distinguish between two different meanings of “book”, i.e. ‘a physical object consisting of a number of pages’ versus ‘to reserve’. To understand this, let’s consider a few sentences:
Ravi booked a flight.
Hari read a non-fiction book.
The meaning of the word “book” is different in the two sentences: in the first it is a verb, in the second a noun. How do we distinguish them? We can assign a unique identifier to each word, but before that we need to make sure a computer can recognize the similarity between book, booked, booking, and so on. A little XML can help: the code snippet below (Listing 1) shows the relation between reserve and book, and another snippet (Listing 2) shows the noun book and similar items. The code is written in RDF (Resource Description Framework), an XML-based language used primarily in Semantic Web applications. Now let’s revisit the sentences above. From Listing 1, “Ravi booked a flight” means “Ravi reserved a flight”. And for the second sentence, “Hari read a non-fiction book”, Listing 2 tells us that book is a noun of type physical object.
<concepts id="reservation-similar-terms" version="20090225">
  <concept id="reservation" created-by="firstname.lastname@example.org" on="20090225">
    <definition xml:lang="en">Words with the meaning reservation</definition>
    <verb tense="present" meaning="reserve-1" id="res-101">reserve</verb>
    <verb tense="present" meaning="reserve-1" id="res-102">book</verb>
    <verb tense="past" meaning="reserve-2" id="res-103">reserved</verb>
    <verb tense="past" meaning="reserve-2" id="res-104">booked</verb>
  </concept>
</concepts>
<concepts id="book-object" version="20090225">
  <concept id="book" created-by="email@example.com" on="20090225">
    <definition xml:lang="en">Items similar to a book</definition>
    <noun type="physicalObject" meaning="book-1" id="b01">textbook</noun>
    <noun type="physicalObject" meaning="book-1" id="b02">novel</noun>
    <noun type="physicalObject" meaning="book-1" id="b03">non-fiction</noun>
  </concept>
</concepts>
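To make the idea concrete, here is a minimal sketch (not the article's actual agent) of how an agent could load listings like these and look up word senses. The noun entry for "book" itself is my addition, implied by the text but not present in Listing 2; everything else mirrors the listings above.

```python
import xml.etree.ElementTree as ET

# Listing 1: verbs with the meaning "reserve", as in the article.
RESERVATION_XML = """
<concepts id="reservation-similar-terms" version="20090225">
  <concept id="reservation">
    <verb tense="present" meaning="reserve-1" id="res-101">reserve</verb>
    <verb tense="present" meaning="reserve-1" id="res-102">book</verb>
    <verb tense="past" meaning="reserve-2" id="res-103">reserved</verb>
    <verb tense="past" meaning="reserve-2" id="res-104">booked</verb>
  </concept>
</concepts>
"""

# Listing 2: nouns similar to a book. The entry b04 ("book") is my
# hypothetical addition so that the noun/verb ambiguity shows up.
BOOK_XML = """
<concepts id="book-object" version="20090225">
  <concept id="book">
    <noun type="physicalObject" meaning="book-1" id="b01">textbook</noun>
    <noun type="physicalObject" meaning="book-1" id="b02">novel</noun>
    <noun type="physicalObject" meaning="book-1" id="b03">non-fiction</noun>
    <noun type="physicalObject" meaning="book-1" id="b04">book</noun>
  </concept>
</concepts>
"""

def build_senses(*xml_docs):
    """Map each surface word to every (part of speech, meaning id) it can take."""
    senses = {}
    for doc in xml_docs:
        for el in ET.fromstring(doc).iter():
            if el.tag in ("verb", "noun"):
                senses.setdefault(el.text, []).append((el.tag, el.get("meaning")))
    return senses

senses = build_senses(RESERVATION_XML, BOOK_XML)
```

Looking up senses["booked"] yields only the past-tense verb sense reserve-2, while senses["book"] is ambiguous between the verb and the noun, which is exactly the distinction the agent has to resolve from context.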
So both questions (different words with the same meaning, and the same word with different meanings) are answered by the example above. Once the agent understands the meaning of “book” in the present context, and also that “cheapest” means minimum price, it will search for flights through third-party airline web services; on success it will book the flight using your pre-stored credit card number and send a confirmation to your PDA.
With ontologies this example becomes even simpler. OWL (Web Ontology Language) is a sister language of RDF used for creating ontologies. A computer is a dumb machine; it requires very specific information in order to understand the logical meaning of any instruction given to it, and taxonomies and ontologies help it understand better. I’ll talk about them in future articles.
I installed the pre-beta release of Windows 7 (Build 6956) yesterday, and it didn’t surprise me the way Vista (or Longhorn) did two years back. Windows 7 (code name Vienna) borrows a lot from Vista; its interface is similar. There are a few new features, which are actually improvements of existing features in Vista.
Here are a few screenshots:
MS Paint has a new look, and the start-up panel has new functionality: it displays the recent files opened by each program. Squared programs are a new addition to Windows Accessories.
The Window Flip of Vista is absent in Windows 7, but the new feature of displaying all similar program windows in a tab bar is better than Window Flip: you can close a window without switching to it.
This is the next release of Windows Media Player, i.e. version 12. There are a few minor enhancements, but overall it looks the same as version 11, which is what we currently use.
My day starts somewhere between 9:00 and 10:00 AM, when I get up after wasting seven hours in sleep. Why wasted? Because I can’t remember anything that happened in that duration; it’s a time loss, and according to The X-Files, whenever a time loss happens, you were abducted by aliens. I don’t know any other theory of time loss, so I’ll go with the X-Files one. So, after getting abducted by aliens, I wake up after 9:00 AM.
I hurry as much as I can, not caring about the cold water, the electric-shock-giving taps and the dark (basin-, mirror- and shower-less) bathroom. Around 10:15 AM I hear the everyday morning call of “Vikash” from my flatmate (flaty). I shout “one minute”, hurriedly enqueue all the new stuff in uTorrent, and grab two objects of the never-forget-otherwise-you’ll-get-fucked category: my ID card and the flat key. While I lock the door, my flaty picks his Indiatimes up from the floor and starts reading it. He reads it the whole way to the bus, holding it with both hands and amusing every passerby. His crusade of reading the whole newspaper and showing himself off as a very busy person ends just before the bus stop, which is only 10 minutes from my flat. We let a couple of crowded buses pass because my flaty has crowded-bus-phobia. After waiting 5-10 minutes we get a bus or a cab or a car or whatever, which drops us at a stop called College, just in front of my office building. We walk to the elevator area without talking; we get to our cubicles without uttering a single word to each other. In this whole process we make a far better version of the movie Gerry.
Usually I don’t have much work, so I read something, or watch something, or go to the pantry to read the newspaper, or look at some stupid magazines to watch models, or go to CCD or Barista or the cafeteria; the whole day disappears just like that. Officially I should leave the office at 7:30 PM, but I can’t, because the busiest person on the planet, my flaty, always has some work to finish. Around 8:30 PM he pings me to leave. I leave my cubicle exhausted and irritated, ready to face another pain-in-the-ass nautanki from my flaty. I go to his cubicle to find him busy in his so-called work. After a few minutes he closes his Java editor and starts searching for movies on the company’s LAN; 5-6 minutes later he transfers some crappy movies onto his pen drive, which takes another 5-10 minutes. After the transfer he goes to the bathroom to wash his coffee mug, comes back in 2-3 minutes, packs up all the things he spilled during the day, and we leave; but just before the exit he suddenly ducks into another bathroom and comes back after another 3-4 minutes. Finally we leave around 9:00-9:20 PM.
As usual we let the crowded buses pass, this time a lot of them, not just a couple. As we get closer to our dining place, my flaty’s song of hygiene and same-taste starts. This is where I curse Evolution: why don’t we have ear-lids when we can have eyelids and lips? What actually went wrong?? Despite all the singing, we dine at the same place every day.
We walk back to our flat after dinner, finishing the remaining part of our far better version of Gerry. I watch the new episodes and movies I had enqueued for download in the morning, and then around 3 AM I once again prepare myself for another alien abduction.
That’s all for the day. Good night!!
Today I was trying to install the latest version (2.6.2) of multi-user WordPress (WPMU) on one of my domains. I copied the files into the root directory, browsed the URL and got this PITA:
Couldn’t find constant VHOST … path/wp-settings.php on line 33
which eventually reads
if( constant( 'VHOST' ) == 'yes' )
I tried to find a solution on the net, on the WPMU forum http://mu.wordpress.org/forums, and on other forums. I got some stupid replies, and none of them solved it; the only thing that worked was using a different, older version of WPMU.
At first glance, an implementation of REST (REpresentational State Transfer) looks very simple and easy. When I created my first RESTful web service I was like, WOW!! You can do so much with so little effort. The jMaki framework makes REST even more powerful. But what about the big stuff? Creating enterprise web services demands more than just simplicity.
In the enterprise world we have everything defined in various business processes, and a business process may require more than one resource for its successful completion. The moment we say more than one resource, orchestration and choreography jump into the picture and make business process execution more complex.
BPEL comes to the rescue; it is well ahead of the existing WSCI and BPML in terms of adopting new standards and eliminating old ones. However, BPEL depends heavily on the traditional way of implementing web services: WSDL and SOAP. So no BPEL for REST; does that mean no choreography or orchestration in the case of REST(?)
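Without BPEL, one can still orchestrate RESTful resources by hand, with a plain program acting as the central process. Here is a hedged sketch of that idea using the flight example from earlier; the two resource functions are mocks standing in for real HTTP calls (say, GET /flights and POST /bookings), and all names, routes and prices are invented for illustration.

```python
# Hand-rolled orchestration of two (mock) REST resources: no BPEL engine,
# just a plain function playing the "central process" role.

def search_flights(src, dst, window):
    """Mock for GET /flights; src/dst are ignored in this toy catalogue."""
    flights = [
        {"id": "AI-505", "price": 4200, "dep": "11:45"},
        {"id": "9W-810", "price": 3800, "dep": "12:10"},
    ]
    # String comparison works here because times are zero-padded HH:MM.
    return [f for f in flights if window[0] <= f["dep"] <= window[1]]

def book_flight(flight_id, card):
    """Mock for POST /bookings."""
    return {"status": "confirmed", "flight": flight_id}

def orchestrate(src, dst, window, card):
    """Search, pick the cheapest candidate, then book it."""
    candidates = search_flights(src, dst, window)
    if not candidates:
        return {"status": "no-flight"}
    cheapest = min(candidates, key=lambda f: f["price"])
    return book_flight(cheapest["id"], card)

booking = orchestrate("Bangalore", "New Delhi", ("11:30", "12:30"), card="4111-xxxx")
```

This works for one fixed flow, but it is exactly the kind of ad-hoc glue code that a standard like BPEL was meant to replace with a declarative, portable process description.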
The traditional supply chain often includes more than one company in a series of supplier-customer relationships. It is often defined as the series of links and shared processes that involve all activities from the acquisition of raw materials to the delivery of finished goods to the end consumer. Raw materials enter into a manufacturing organization via a supply system and are transformed into finished goods. The finished goods are then supplied to customers through a distribution system. Generally several companies are linked together in this process, each adding value to the product as it moves through the supply chain.
Effective supply chain management is the act of optimizing all activities throughout the supply chain, and it is the key to a competitive business advantage. Consequently, an organization’s ability to gain a competitive advantage is heavily dependent on coordination and collaboration with its supply chain partners. Yet, even today, a typical supply chain is too often a sequence of disconnected activities, both within and outside of the organization. To remedy this situation, it is important that an organization and its suppliers, manufacturers, customers, and other third-party providers engage in joint strategic planning and operational execution with an eye to minimizing cost and maximizing value across the entire supply chain.
Advances in information system technology have had a huge impact on the evolution of supply chain management. As a result of such technological advances, supply chain partners can now work in tight coordination to optimize the chain-wide performance, and the realized return may be shared among the partners. The underlying enabler of supply chain integration is the fast and timely exchange of information between supply chain partners. This information may take the form of transactional documents such as purchase orders, ship notices, and invoices, as well as planning-related documents like demand forecasts, production plans and inventory reports. It is this sharing and coordination of information and planning activities that can enable cost reduction, value enhancement, and the execution of advanced collaborative planning activities.
In the past, the cost and complexity of executing electronic data interchange (EDI) transactions made this type of information exchange suitable for only the largest corporations. The ubiquity of Internet-based communication tools now makes it possible for organizations of all sizes to exchange information. However, challenges still exist and being able to successfully deal with all the new technologies is one of these challenges. The good news is that this data exchange challenge can be overcome; and the opportunities become endless once companies are able to exchange information efficiently with their suppliers, customers, and partners.
Vendor Managed Inventory (VMI) is a supply chain practice where the inventory is monitored, planned and managed by the vendor on behalf of the consuming organization, based on the expected demand and on previously agreed minimum and maximum inventory levels. Traditionally, success in supply chain management derives from understanding and managing the tradeoff between inventory cost and the service level. Types of information that can be shared between supply chain partners in a VMI partnership include inventory levels and position, sales data and forecasts, order status, production and delivery schedules and capacity, and performance metrics. Sharing information yields many benefits to supply chain members.
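The min/max rule at the heart of VMI can be sketched in a few lines; the thresholds below are illustrative, not from any real agreement. The vendor monitors the consuming organization's stock and replenishes back up to the agreed maximum whenever the level falls to or below the agreed minimum.

```python
def vmi_replenish(stock_level, min_level, max_level):
    """Return the quantity the vendor should ship under a min/max VMI policy.

    Ships nothing while stock stays above the agreed minimum; otherwise
    tops the consuming organization back up to the agreed maximum.
    """
    if stock_level <= min_level:
        return max_level - stock_level
    return 0
```

For example, with an agreed band of 20-100 units, a monitored stock level of 15 triggers a shipment of 85 units, while a level of 50 triggers nothing.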
A service-oriented architecture (SOA) is a style of design that guides all aspects of creating and using business services throughout their lifecycle (from conception to retirement), as well as defining and provisioning the IT infrastructure that allows different applications to exchange data and participate in business processes regardless of the operating systems or programming languages underlying those applications. An important goal of an SOA is to help align IT capabilities with business goals. Another goal of an SOA is to provide an agile technical infrastructure that can be quickly and easily reconfigured as business requirements change. The key organizing concept of an SOA is a service. The processes, principles, and methods defined by the SOA are oriented toward services (sometimes called service-oriented development). The development tools selected by an SOA are oriented toward creating and deploying services.
UML is used for designing the SOA; web services are used to simulate the buyer and supplier scenario; and BPEL (Business Process Execution Language) is used to implement the business logic in the multiple-suppliers, multiple-retailers case. For performance testing of the web services, WAPT 3.0 (Web Application Load, Stress and Performance Testing) is used; WAPT can simulate multiple clients invoking a web service and measure its performance.
Composite Web Services
A composite Web Service (WS) provides higher-level functionality by utilizing one or more (composite or non-composite) individual WSs, which it invokes in a well-defined order. In general, each WS defines its own schema for its input and output messages, based on the data model of its underlying implementation. Consequently, a composite WS must be able to interpret the message definition schema of every WS it invokes, and has to transform those messages into either its own messages or the messages of other WSs it subsequently invokes, to enable frictionless data flow between them. Since the message definitions of each WS in general follow different data models, based on specific underlying ontologies, messages require transformation in order for the composite WS to be semantically meaningful.
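The transformation step can be pictured as a small mapping function inside the composite service. This is only a sketch: the field names on both sides are invented for illustration and do not come from the project's actual schemas.

```python
def retailer_to_supplier(msg):
    """Map a retailer-schema stock message onto the supplier's schema.

    Stands in for the schema translation a composite WS performs between
    the message definitions of the services it invokes.
    """
    return {
        "itemCode": msg["sku"],
        "quantityOnHand": msg["stock_level"],
        "reportedAt": msg["timestamp"],
    }

supplier_msg = retailer_to_supplier(
    {"sku": "ITEM-42", "stock_level": 7, "timestamp": "20090225"}
)
```

In practice this mapping is often expressed declaratively (e.g. as an XSLT stylesheet over the SOAP payloads) rather than in code, but the job is the same: one schema in, another schema out.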
A Laboratory-Scale Implementation
The following figure shows the complete structure of the database tables required by both sides. To simulate customers on the retailer side, we decrease the StockLevel in table R1 in a random manner; this information is made available to the supplier through a web service, and the supplier accesses it using a client. The supplier updates table S1 at regular intervals, and when the StockLevel becomes less than or equal to the ReorderLevel (shown in table S4), the supplier sends consignment stock to the retailer and records it in his consignment stock table S2. When the retailer receives this information he stores it in his consignment stock table R2, and when the StockLevel in table R1 reaches zero the retailer starts using the new consignment stock and changes the status in table R2 from unused to using. This status is made available to the supplier through a web service, and the supplier updates the status in table S2. Every time the status changes from unused to using, the supplier generates an invoice, which is received by the retailer and stored in invoice table R3. On payment by the retailer, an acknowledgement is received by the supplier.
Towards implementation of multiple retailer and multiple vendor case:
The new architecture for the multiple-retailers and multiple-suppliers case is given below. It adds three new components compared to the architecture adopted in the single-supplier, single-retailer case:
BPEL: Used for implementing the business logic across web services.
UDDI: Used as the web service registry.
WS-Security: Used to secure the web services.
Choreography and orchestration are used for combining web services and executing them in sequence. In orchestration, which is usually used in private business processes, a central process takes control of the involved web services and coordinates the execution of the different operations on them.
BPEL (Business Process Execution Language)
BPEL builds on the foundation of XML and Web services; it is an XML-based language that supports the Web services technology stack, including SOAP, WSDL and UDDI. BPEL is mainly used for orchestration and choreography among web services. To create a composite web service we need a BPEL engine, which handles web services through three activities: <receive>, <reply>, and <invoke>.
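The three activities can be pictured as a simple control flow. The sketch below mimics in plain Python what a BPEL engine does declaratively in XML; the message fields, the CSInfo partner and the business rule are illustrative stand-ins, not the project's actual definitions.

```python
def bpel_process(request, partners):
    """Simulated receive/invoke/reply flow of a BPEL process.

    A real engine instantiates the process from <receive>, calls partner
    services via <invoke>, and returns the result via <reply>; here the
    same shape is written out by hand.
    """
    # <receive>: the process is instantiated by an incoming request
    stock = request["stock_level"]
    # <invoke>: call a partner web service when the business rule fires
    if stock <= request["reorder_level"]:
        consignment = partners["CSInfo"]()
        reply = {"action": "ship-consignment", "consignment": consignment}
    else:
        reply = {"action": "none"}
    # <reply>: hand the result back to the original caller
    return reply

# A stub partner service standing in for the CSInfo() web service.
partners = {"CSInfo": lambda: {"item": "ITEM-42", "qty": 40}}
reply = bpel_process({"stock_level": 5, "reorder_level": 10}, partners)
```

The point of BPEL is that this control flow lives in a portable XML document interpreted by the engine, rather than being hard-coded as above.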
All three clients continuously check for changes at their respective web services. When the client CSInfoClient requests consignment stock information from the composite web service CWS0, CWS0 invokes the StockInfo() web service to get the current stock information. If the stock level is at or below the reorder point, CWS0 invokes the CSInfo() web service, gets the consignment information and sends it back to CSInfoClient. In a similar manner, InvoiceInfoClient sends a request to the composite web service CWS1 to get the invoice information. CWS1 invokes the web service CSStatus() (which keeps track of when consignment stock usage begins), and if the quantity in use equals the consignment stock quantity it invokes the InvoiceGen() web service (which generates the invoice), gets the invoice information and sends the data to InvoiceInfoClient.
After getting the invoice information, the retailer updates the invoice acknowledgement database. The client InvAckClient receives the invoice acknowledgement by sending a request to, and receiving a response from, the web service InvAck().
Performance Testing of the Proposed Architecture
Tests were performed on two web services hosted on the retailer side. The tool used for this purpose was WAPT 3.0, which simulated a number of clients accessing these services. The following are the results obtained from stress testing the two web services, CSAck and TestInfo.
Performance of CSAckService
Performance of TestInfoService
From the figures shown above we can see that as the number of clients increases, the web transaction time increases. Transaction time also depends on the size of the data received by a client: the data size of the CSAck service is smaller than that of the TestInfo service, so the average web transaction time is smaller for CSAck, although it too rises as the number of clients increases. Both web services can bear a maximum of 176 simultaneous clients at a time; beyond that the web server shuts down.
In this project I have considered one supplier and one retailer, acting as two different enterprises. Both the supplier and the retailer have their own information systems within their organizations, and these information systems are hosted on two different software platforms: we have assumed that the retailer uses the .NET platform while the supplier uses the Java EE platform. Our main objective is to provide a method by which they can share information across platforms. The project mainly focuses on the interoperability of two existing architectures, i.e. .NET and Java EE. Here we are not only implementing a cross-platform technology; we are also making communication possible between two widely used programming frameworks. The following figure shows the Web Services architecture adopted for the VMI implementation.
In the present scenario, the frameworks for writing web services are Java EE and .NET. WSDL is used for service description, and SOAP as the communication protocol; there are other protocols, but they do not provide the flexibility in data transfer that SOAP does. For transport we use HTTP, which also enables communication through a browser.
Retailer Side Development:
As shown in the sequence diagram in section 3.1, the retailer needs to send data three times in a VMI system, and to receive data from the supplier twice. The web services implementation therefore needs three web services and two clients on the retailer side. The three web services send the data about “stock level”, “acknowledgement on consignment stock” and “confirmation about invoice”; the two clients receive the data about “information on consignment stock” and “invoice information” from the supplier.
Supplier Side Development:
From the sequence diagram we can see that the supplier needs to receive data from the retailer three times and send data to the retailer twice in a VMI system. Therefore two web services and three clients are required for communication. The two web services send the data about “information on consignment stock” and “invoice information”, and the three clients receive the data about “stock level”, “acknowledgement on consignment stock” and “confirmation about invoice”.
Information flow over the network
This web services implementation of VMI was carried out successfully on two different platforms. The supplier and the retailer were both able to share information without any complexity. The client interfaces of both sides are shown below:
Retailer side client to consume supplier’s web services
Supplier side client window for consuming retailer’s web services
The main technologies which were adopted for this project were HCI and AJAX.
HCI stands for Human-Computer Interaction; it is also referred to as Computer-Human Interaction (CHI), and, loosely, as Software Ergonomics. HCI mainly focuses on user-centered design, which is achieved when “people need not have to change the way that they use a system in order to fit in with it. Instead, the system should be designed to match their requirements.” HCI follows these principles:
1. Simple and natural dialogue
2. Speak the users’ language
3. Minimize user’s memory load
4. Be consistent
5. Provide feedback
6. Provide clearly marked exits
7. Provide shortcuts
8. Minimize the user’s slips and errors
9. Provide help
Development of the User-Interface
The development of the user interface can be divided into three main parts:
1. Dashboard design
2. Sidebar and Main display panel design
3. Header design
The main page of the user interface is shown in the figure; it is the first page after user login. The interface is designed so that a user never needs more than three clicks to navigate to any level. The dashboard is loaded synchronously when the whole page loads. The sidebar and header areas are static, since a user requires them all the time, while the dashboard area acts as a container for other pages and data. New data is loaded into the dashboard area asynchronously.
Any window can be maximized by double-clicking on it or by clicking the link provided in the dashboard module named Dashboard. The maximized window loads on top of the interface, saving the user’s time by avoiding unnecessary page loads, and shows the content of the smaller window in detail.
The following figure shows the area where new content will load after clicking “Schedule New Scan”. In the next figure we can see the content that was requested by clicking the “Schedule New Scan” button.
This whole loading process takes very little time, and the user stays engaged with the interface. The user also finds the application very easy to work with when all the required information arrives quickly and in one place, with a minimum of changes to the user interface. This reduces the user’s idle time and enhances the continuity of the interaction between the user and the interface. This is also the maximum level of navigation required in the application; if some further data is required, it loads on top of the interface, as shown in the next figure. When the “[Click here to select]” link is clicked, a small window loads without any modification to the present interface.
Besides designing a user-centered interface, there are some other important parameters which are also taken care of:
The following figure shows the proposed framework for a knowledge management system on the Semantic Web, which reflects the variety of knowledge transformations in this distributed environment: knowledge can be collected from various sources in different formats, stored in a common representation formalism, processed in order to compute interdependencies between knowledge items (for example, the relationship between bird and kiwi) or to resolve conflicts (for example: a kiwi is a bird; birds can fly; a kiwi cannot fly), shared and searched, and finally used for problem solving. This approach therefore comprises the following processes:
Knowledge Capturing: We identify four types of knowledge sources that can be treated in the knowledge capturing phase: (a) expert knowledge, (b) legacy (rule-based) systems, (c) metadata repositories and (d) documents. DSpace can be used for knowledge capturing; it is an open-source joint project of HP Labs and MIT with the ability to index and crawl the captured metadata. Because of its high flexibility, DSpace can be further modified to capture expert knowledge (via an editor) as well as to convert legacy systems. This information is then converted to RDF rules using a converter.
Knowledge Repository: The knowledge repository is a relational database organized in a way that enables efficient storage of, and access to, RDF metadata. It can be seen as an RDF repository.
Knowledge Processing: The knowledge processing component enables efficient manipulation of the stored knowledge, especially graph-based processing for knowledge represented in the form of rules, e.g. deriving a dependency graph or consistency checking.
Knowledge Sharing: Knowledge sharing is realized by searching for rules that satisfy the query conditions. In the RDF repository, rules are represented as reified RDF statements, and since in RDF any statement is considered an assertion, we can view an RDF repository as a set of ground assertions of the form (subject, predicate, object). Rules are also related to a domain ontology, which contains domain axioms used for deriving new assertions. The searching is therefore realized as an inference process.
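The "searching as inference" idea can be sketched with a toy triple store: a set of (subject, predicate, object) assertions plus one domain axiom (the transitivity of subClassOf) applied at query time. The bird/kiwi triples echo the example earlier in this section; a real system would use an RDF library and a full rule engine rather than this hand-rolled closure.

```python
# A toy RDF-style repository: ground assertions as (subject, predicate, object).
triples = {
    ("Kiwi", "subClassOf", "Bird"),
    ("Bird", "subClassOf", "Animal"),
}

def infer_subclasses(store):
    """Close the store under subClassOf transitivity (a sample domain axiom)."""
    derived = set(store)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(derived):
            for (c, p2, d) in list(derived):
                if p1 == p2 == "subClassOf" and b == c:
                    new = (a, "subClassOf", d)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

closed = infer_subclasses(triples)
```

A query such as "is a kiwi an animal?" then succeeds not because that triple was ever asserted, but because the inference step derived it, which is the sense in which searching becomes inferencing.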
Using the Knowledge: The main advantage of this approach is the use of conditional statements for the semantic annotation of knowledge sources. In that way we put the statements used in an annotation into each other’s context, which leads to efficient searching for knowledge. Moreover, annotating knowledge resources using Precondition-Action statements enables semantic hyperlinking of any two resources that satisfy the condition that the Precondition part of one resource’s annotation subsumes the Action part of the other’s. In that way, querying for a problem can result in a composition of documents which together cover the problem’s solution. This is a very important process in knowledge management and e-learning search.