National Integrity

ABSTRACT

Today, when we use a web search engine, the engine is not really able to understand our search. It looks for web pages that contain the keywords found in our search term, and it cannot tell whether a page is actually relevant to the query. With the emerging technology of the web, we will be able to sit back and let the Internet do the work for us. A search service can be used to narrow the parameters of the search; the browser then gathers, analyzes and presents the data in a way that makes comparison a snap.

It can do this because Web 3.0 will be able to understand information on the web. This paper describes a new type of search technology that gives users more interaction with, and control over, their Internet experience.

Keywords: Future Internet, Service Web 3.0, Service Web 2.0, Service Web 1.0, Semantic Web, Internet of Services, Ontology, Wiki, Software Agents.

1. INTRODUCTION

In 1989, the WWW was created as an interface for the Internet and a way for people to share information with one another.


The first generation of the web, i.e. Web 1.0, was used as a library: a source of information on the Internet to which we could not contribute and whose content we could not change in any way. Then Web 2.0 arrived, like a big group of friends and acquaintances: we still use it to receive information, but we also contribute to the conversation and make it a richer experience. Web 2.0 is an interactive and social web facilitating collaboration between people. Web 3.0 is the next fundamental change in how websites are created and, more importantly, how people interact with them. It is broadly referred to as the Semantic Web.

According to Wikipedia, "Web 3.0 is a third generation of Internet-based web services that collectively consists of the Semantic Web, microformats, natural-language search, data mining, machine learning and recommendation agents, and is known as artificial-intelligence technology or the Intelligent Web."

1 Assistant Professor, Department of Computer Science & Engineering, IIMT College of Engineering, G. Noida, Uttar Pradesh, INDIA. 2 Assistant Professor, Department of Master of Computer Applications, IIMT College of Engineering, G. Noida, Uttar Pradesh, INDIA. Correspondence: deependra1983@gmail.com

2. EVOLUTION OF THE WEB

The World Wide Web is shifting its focus from application users to data. This is why we need to understand the transition from Web 1.0 to Web 2.0 to Web 3.0. "Web 3.0 is the web of openness. A web that breaks the old silos, links everyone, everything, everywhere, and makes the whole thing potentially smarter." – Greg Boutan. Comparing Web 3.0 with Web 2.0: of course Web 3.0 will be better than Web 2.0, because it will contain everything that was present in Web 2.0, along with new concepts, new methodologies and, most importantly, new applications. But then, the transition that Web 2.0 brought over from Web 1.0 was also a significant one. Web 3.0 is going to be like having a personal assistant who knows practically everything about you and can access all the information on the Internet to answer any question.

As you search the web, the browser learns what you are interested in; the more you use the web, the more your browser learns about you. Search engines of the Web 3.0 era are expected to handle complex queries, queries typed in very much the way we speak. For example, a user might type: "I am moving from California to New Jersey and I am searching for accommodation. I am married and have a son and a daughter. What would be the cost of living in NJ?". The search engine will fetch information from different sites and return the right result pages within a fraction of a second. So one expectation is that websites will become more and more communicative. No doubt, websites did communicate in the Web 2.0 phase too.

A single click on a URL on one website could take you all the way to a new website. But in the Web 3.0 phase, sites will share information with each other to produce results that the user precisely wants. Another example is that a single login will allow you to set your status update on Facebook, Twitter and MySpace together. So much for the precise-search part; now, what do we mean by user-specific data? Concepts like that of iGoogle will become popular and enhanced, and the search results for each user will vary.

Search engines will keep track of which results a particular user is interested in and produce different search results for different users. Even the advertisements that a user views will differ from what another person sees for the same search query. It will be all about artificial intelligence: Web 3.0 applications will be designed so that, though not as intelligent as the human brain, they are far ahead of a text editor with word-prediction capability. Some may consider this a breach of their privacy, but search engines say otherwise. Nevertheless, we will not get into that discussion here.

3. CONCEPT OF SEMANTIC WEB ONTOLOGY

With the Semantic Web, computers will scan and interpret information on Web pages using software agents. These software agents will be programs that crawl through the Web, searching for relevant information. They will be able to do that because the Semantic Web will have collections of information called ontologies [1]. In terms of the Internet, an ontology is a file that defines the relationships among a group of terms.

For example, the term "cousin" refers to the familial relationship between two people who share one set of grandparents. A Semantic Web ontology might define each familial role like this:

- Grandparent: A direct ancestor two generations removed from the subject
- Parent: A direct ancestor one generation removed from the subject
- Brother or sister: Someone who shares the same parent as the subject
- Nephew or niece: Child of the brother or sister of the subject
- Aunt or uncle: Sister or brother to a parent of the subject
- Cousin: Child of an aunt or uncle of the subject
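As a rough illustration (not part of the original paper), such a family vocabulary can be written down as machine-readable statements. The sketch below uses Python with the rdflib library; the fam: namespace, the class fam:Person and the property names are invented for this example.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS

# Hypothetical namespace for the family vocabulary (not from the paper)
FAM = Namespace("http://example.org/family#")

g = Graph()
g.bind("fam", FAM)

# Each familial role becomes a property between two fam:Person resources,
# with the human-readable definition attached as an rdfs:comment.
roles = {
    FAM.grandparent: "A direct ancestor two generations removed from the subject",
    FAM.parent: "A direct ancestor one generation removed from the subject",
    FAM.sibling: "Someone who shares the same parent as the subject",
    FAM.cousin: "Child of an aunt or uncle of the subject",
}

for prop, definition in roles.items():
    g.add((prop, RDF.type, RDF.Property))
    g.add((prop, RDFS.domain, FAM.Person))  # the subject of the relation is a Person
    g.add((prop, RDFS.range, FAM.Person))   # the object of the relation is a Person
    g.add((prop, RDFS.comment, Literal(definition)))

print(g.serialize(format="turtle"))
```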

For the Semantic Web to be effective, ontologies have to be detailed and comprehensive. In Berners-Lee's concept, they would exist in the form of metadata: information included in the code of Web pages that is invisible to humans but readable by computers. "Ontology" is originally a term from philosophy, where it means the "theory of existence". Ontologies are considered one of the pillars of the Semantic Web. A (Semantic Web) vocabulary can be considered a special form of (usually lightweight) ontology, or sometimes merely a collection of URIs with a (usually informally) described meaning.

The architecture of the Semantic Web is illustrated in the figure below. The first layer, URI and Unicode, follows the important features of the existing WWW. Unicode is a standard for encoding international character sets, and it allows all human languages to be written and read on the web in one standardized form. A Uniform Resource Identifier (URI) is a string of a standardized form that uniquely identifies resources (e.g., documents).

Figure: Semantic Web Architecture in Layers

The Extensible Markup Language (XML) layer, with XML namespace and XML Schema definitions, ensures that a common syntax is used in the Semantic Web. XML is a general-purpose markup language for documents containing structured information. An XML document contains elements that can be nested and that may have attributes and content. XML namespaces allow different markup vocabularies to be used in one XML document, and an XML Schema expresses the schema of a particular set of XML documents. The core data representation format for the Semantic Web is the Resource Description Framework (RDF).
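As a small illustration (not from the paper) of how namespaces keep two vocabularies apart in one document, the sketch below parses an XML snippet with Python's standard xml.etree module; the namespace URIs and element names are placeholders.

```python
import xml.etree.ElementTree as ET

# A small XML document mixing two made-up vocabularies via namespaces
doc = """
<catalog xmlns:bk="http://example.org/book"
         xmlns:pr="http://example.org/price">
  <bk:book>
    <bk:title>An Example Book</bk:title>
    <pr:amount currency="EUR">19.50</pr:amount>
  </bk:book>
</catalog>
"""

root = ET.fromstring(doc)

# ElementTree expands each prefix to {namespace-URI}localname, so the two
# vocabularies cannot clash even if they reuse the same local names.
for title in root.iter("{http://example.org/book}title"):
    print("title:", title.text)
for amount in root.iter("{http://example.org/price}amount"):
    print("amount:", amount.text, amount.get("currency"))
```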

RDF is a framework for representing information about resources in graph form. It was primarily intended for representing metadata about WWW resources, such as the title, author and modification date of a Web page, but it can be used for storing any other data. It is based on subject-predicate-object triples that form a graph of data. All data in the Semantic Web use RDF as the primary representation language. The normative syntax for serializing RDF is XML, in the RDF/XML form, and a formal semantics of RDF is defined as well. RDF itself serves as a description of a graph formed by triples.
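The following minimal sketch (an assumption of ours, not the paper's code) builds exactly such metadata triples in Python with the rdflib library, using the standard Dublin Core vocabulary for the title, creator and date of a placeholder Web page, and serializes them in the RDF/XML form mentioned above.

```python
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import DC

g = Graph()
page = URIRef("http://example.org/index.html")  # placeholder Web page

# Subject-predicate-object triples describing the page's metadata
g.add((page, DC.title, Literal("An Example Home Page")))
g.add((page, DC.creator, Literal("A. Author")))
g.add((page, DC.date, Literal("2011-05-16")))

# RDF/XML is the normative serialization mentioned in the text
print(g.serialize(format="xml"))
```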

Anyone can define a vocabulary of terms to be used for more detailed descriptions. To allow standardized description of taxonomies and other ontological constructs, RDF Schema (RDFS) was created, together with its formal semantics, within RDF. RDFS can be used to describe taxonomies of classes and properties and to build lightweight ontologies from them. More detailed ontologies can be created with the Web Ontology Language OWL [10]. OWL is a language derived from description logics and offers more constructs than RDFS. It is syntactically embedded into RDF, so, like RDFS, it provides additional standardized vocabulary.
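A hedged sketch of the kind of lightweight taxonomy RDFS allows, again in Python with rdflib; the ex: namespace and the class names are invented for illustration.

```python
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/library#")  # invented vocabulary

g = Graph()
g.bind("ex", EX)

# A tiny taxonomy: every Novel is a Book, and every Book is a Document
g.add((EX.Document, RDF.type, RDFS.Class))
g.add((EX.Book, RDF.type, RDFS.Class))
g.add((EX.Novel, RDF.type, RDFS.Class))
g.add((EX.Book, RDFS.subClassOf, EX.Document))
g.add((EX.Novel, RDFS.subClassOf, EX.Book))

# A property restricted (via rdfs:domain) to instances of ex:Book
g.add((EX.hasAuthor, RDF.type, RDF.Property))
g.add((EX.hasAuthor, RDFS.domain, EX.Book))

print(g.serialize(format="turtle"))
```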

OWL comes in three species: OWL Lite for taxonomies and simple constraints, OWL DL for full description-logic support, and OWL Full for maximum expressiveness and the syntactic freedom of RDF. Since OWL is based on description logic, it is not surprising that a formal semantics is defined for this language. RDFS and OWL thus both have defined semantics, which can be used for reasoning over ontologies and knowledge bases described in these languages. To provide rules beyond the constructs available in these languages, rule languages are being standardized for the Semantic Web as well; two standards are emerging, RIF and SWRL. For querying RDF data, as well as RDFS and OWL ontologies with knowledge bases, the Simple Protocol and RDF Query Language (SPARQL) is available. SPARQL is an SQL-like language, but it uses RDF triples and resources both for matching part of the query and for returning the results of the query. Since both RDFS and OWL are built on RDF, SPARQL can be used to query ontologies and knowledge bases directly as well.
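A minimal sketch of a SPARQL query evaluated over a small RDF graph with rdflib in Python; the ex: vocabulary and the data are made up for the example.

```python
from rdflib import Graph

# A few triples in Turtle syntax, using an invented ex: vocabulary
data = """
@prefix ex: <http://example.org/library#> .
ex:book1 ex:title "First Book" ; ex:hasAuthor ex:alice .
ex:book2 ex:title "Second Book" ; ex:hasAuthor ex:bob .
"""

g = Graph()
g.parse(data=data, format="turtle")

# SPARQL matches graph patterns of triples and returns the bound variables
query = """
PREFIX ex: <http://example.org/library#>
SELECT ?title ?author
WHERE { ?book ex:title ?title ; ex:hasAuthor ?author . }
"""

for row in g.query(query):
    print(row.title, row.author)
```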

Note that SPARQL is not only a query language; it is also a protocol for accessing RDF data. It is expected that all the semantics and rules will be executed in the layers below Proof, and the results will be used to prove deductions. Formal proof, together with trusted inputs to the proof, will mean that the results can be trusted, as shown in the top layer of the figure above. For reliable inputs, cryptographic means are to be used, such as digital signatures for verifying the origin of the sources. On top of these layers, applications with user interfaces can be built.

4. HOW MIGHT WEB 3.0 WORK?

The Web is a place where documents are available for download on the Internet. Imagine if there were no hyperlinks among them: you would not be able to navigate among the web pages at all. And as we know, the data on the web as it is currently organized is not enough for the globally increasing number of Internet users.

So we need a proper infrastructure for a real web of data:

- The data available on the web must be accessible via standard Web technologies.
- The data should be interlinked over the Web, i.e., the data can be integrated over the web.

This is where Semantic Web technologies (Web 3.0) come into the picture. A web of data enables new applications such as artificial intelligence, automated reasoning, cognitive architectures, composite applications, distributed computing, knowledge representation, ontologies (in the computer-science sense), recombinant text, scalable vector graphics, the Semantic Web, semantic wikis and software agents. To deal with such a situation we need to do data integration as follows:

- Map the various data onto an abstract data representation, i.e., make the data independent of its internal representation.
- Merge the resulting representations.
- Start making queries on the whole: queries that are not possible on the individual data sets.

To introduce the main Semantic Web concepts we will use a simple example: two books, in different versions, that share some common properties.

(A) Start with the first book, with the dataset "A".

Step 1: Export the data as a set of relations.

- The relations form a graph; the nodes refer to the "real" data or contain some literal.
- How the graph is represented in the machine is immaterial for now.
- Data export does not necessarily mean physical conversion of the data: relations can be generated on the fly at query time via SQL "bridges", by scraping HTML pages, by extracting data from Excel sheets, etc. One can also export only part of the data.

(B) Now export the data of the other book, with the dataset "F".

Step 1: Export the second dataset as a set of relations.

(C) Start merging the data from datasets "A" and "F". The data of the two books from the two datasets is shown in the following figure.

Now we see that there is a common property, i.e., the same URL is present in both datasets, and we can use this common URL as the key to join the two datasets.

(D) After merging the datasets, the resulting dataset is shown in the following figure. This resulting dataset can now be used for more complex queries over the Internet. Moving towards making queries, for example: the user of dataset "F" can now ask queries like "give me the title of the original" (well, « donnes-moi le titre de l'original »).

This information is not in dataset "F", but it can be retrieved by merging it with dataset "A". We feel that a:author and f:auteur should be the same, but an automatic merge does not know that. So let us add some extra information to the merged data:

- a:author is the same as f:auteur;
- both identify a "Person", a term that a community may have already defined:
  - a "Person" is uniquely identified by his/her name and, say, homepage;
  - it can be used as a "category" for certain types of resources.

(E) After adding this extra knowledge to the merged dataset, we get the figure below, and we can start making richer queries.

The user of dataset "F" can now ask queries like "give me the home page of the original's author" (« donnes-moi la page d'accueil de l'auteur de l'original »). This information is in neither dataset "F" nor dataset "A", but it was made available by merging datasets "A" and "F" and adding three simple extra statements as "glue". It could become even more powerful: we could add further knowledge to the merged datasets, for example a full classification of various types of library data, geographical information, etc.
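The whole workflow can be sketched in a few lines of Python with rdflib. This is not the paper's actual data: the a: and f: namespaces, the property names and the literals are invented for illustration, and plain rdflib does not perform owl:sameAs reasoning, so the query below relies on the shared book URI as the join key while the sameAs statement is simply recorded as extra "glue".

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import OWL

A = Namespace("http://example.org/a#")  # vocabulary of dataset "A" (invented)
F = Namespace("http://example.org/f#")  # vocabulary of dataset "F" (invented)

# Dataset "A": the original book, its author, and the author's home page
g_a = Graph()
g_a.add((A.book, A.author, A.person1))
g_a.add((A.person1, A.name, Literal("Jane Writer")))
g_a.add((A.person1, A.homepage, Literal("http://example.org/jane")))

# Dataset "F": the other version, which points back at the original book
g_f = Graph()
g_f.add((F.livre, F.original, A.book))       # the common URI acts as the join key
g_f.add((F.livre, F.auteur, F.translator1))

# Merge the two datasets and record the extra "glue" statement.
# rdflib stores owl:sameAs as ordinary data; an OWL reasoner would exploit it.
merged = g_a + g_f
merged.add((A.author, OWL.sameAs, F.auteur))

# "Give me the home page of the original's author": answerable only after the merge
q = """
PREFIX a: <http://example.org/a#>
PREFIX f: <http://example.org/f#>
SELECT ?homepage
WHERE {
  ?livre f:original ?book .
  ?book a:author ?person .
  ?person a:homepage ?homepage .
}
"""
for row in merged.query(q):
    print(row.homepage)
```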

This is where ontologies, extra rules, etc., come in. Ontologies and rules can be relatively simple and small, or huge, or anything in between; the richer they are, the more powerful the queries that can be asked. The Semantic Web (Web 3.0) provides the technologies that make such integration possible.

5. FUTURE SCOPE

Once the Web 3.0 phase ends, we will enter the era of Web 4.0 [2]; the focus will return to the front end, and we will see thousands of new programs that use Web 3.0 as a foundation. Even though Web 3.0 is still more theory than reality, that has not stopped people from guessing what will come next.

Read on to learn about the far-flung future of the web.

- The Web will evolve into a three-dimensional environment. Rather than a Web 3.0, we will see a Web 3D. Combining virtual-reality elements with the persistent online worlds of massively multiplayer online role-playing games (MMORPGs), the Web could become a digital landscape that incorporates the illusion of depth.
- Some people believe the Web will be able to think, by distributing the workload across thousands of computers and referencing deep ontologies. The Web will become a giant brain capable of analyzing data and extrapolating new ideas based on that information.
- The Web will extend far beyond computers and cell phones. Everything from watches to television sets to clothing will connect to the Internet. Users will have a constant connection to the Web, and vice versa.

More research can be done on developing new and more advanced search engines. While this paper proposed one new way for people to interact with Web pages found on the Internet, future research can provide other innovative methods. Future research can also address many other aspects of building an advanced search engine, including, for example, advanced methods for Web page summarization.
