Ontology is the theory of what there is, in broad and general terms. Ontologists attempt to answer such questions as "Do numbers exist?", "Do properties exist?", "Do space and time exist?"

The most influential ontologist of the 20th century was W. V. Quine, whose essay "On What There Is" and book "Word and Object" established the standard ontological methodology for analytic philosophers.

Ontology is a branch of metaphysics concerned with the study of what is, and of what can be known to exist. An ontology is a theory or set of theories about knowledge and reality.

Two of the most commonplace uses of ontology are in curriculum selection (deciding what should and shouldn't be taught in schools) and in classification systems (such as the Library of Congress Classification System). These are encodings of what knowledge is worth passing on and of what knowledge exists, respectively.

Because of the close historical relationship between libraries and scholarship, classification systems have traditionally been influenced by the curricula of universities and colleges.

viterbiSearcher is right in stating that one of the commonplace uses of ontology is classification systems. What is missing is the fact that the internet has made this a very active field.

Most search engine companies employ ontologists to dream up their classification systems. The Open Directory Project is an attempt by volunteers to publish a directory of the web (with categories, like Yahoo). Go there and put "ontology" into their search engine and you'll find that ontology is indeed a hot topic these days.

A brief history of existence.

Here's a top-down list of different explanations of "what it's all about".


Thanks to JerboaKolinowski for pointing out Neutral Monism.

As dictionaries explain, ontology is the philosophical study of the nature of being: what exists and what can or cannot exist. It’s part of metaphysics, the branch of philosophy that tries to sort out the first principles of things, the ultimate roots of life, the universe and everything. What could be more basic than ‘being’ itself? Indeed, ontology is also known as ‘first philosophy’.

Until very recently, the word ontology was known to and used by only a minuscule fraction of humanity. It’s one of those obscure, irrelevant words whose meaning even the highly educated tend to forget because they never have occasion to use it, except maybe when trying to make an impression on the easily impressed.

Nevertheless, ontology is a term that has made the great leap up from esoteric philosophy directly into engineering, bypassing the intermediate stage of science almost altogether. It has recently been sucked into the geek glossary and become a buzzword in the fields of artificial intelligence, natural language processing, knowledge engineering and, of course, the World Wide Web, specifically in the Web 3.0 incarnation ("semantic Web").

How did this happen? Let’s start back at the beginning.

Traditional Philosophical Ontology

After several millennia of deep thought by minds so bright that no ordinary person could look directly at them, ontology has divided into two main streams. One stream follows Plato and his lot in believing that realia, the things that actually exist, are beyond experience. That’s classical idealism. The other stream flows from Aristotle, Hume and a bunch of other more recent professional thinkers called the logical positivists. They tell us, not surprisingly, just the opposite: the objects of our experience are the things that actually exist. That’s one kind of what is often called realism. There are a few other isms that stick to ontology as well: solipsism, monism and dualism, to name the main ones. They deal mostly with how many basic kinds of stuff exist and what that might mean.

It’s Full of Objects

The idea of objects, their characterization and their possible interrelations has always been a part of ontology. In Platonic realism, the categories of things and the properties of things have real existence as things in themselves and are based on a priori knowledge. Aristotle, on the other hand, held that some things exist only 'in the understanding', as mental phenomena built bottom-up from experience. Regardless of one's view of the nature of an object's existence, the shift in attention from the essence of being in itself to theorizing on objects and their properties, relationships and changes began to bleed ontology into epistemology, the philosophy of knowing and understanding. I would even say that this is where philosophy gives birth to what develops over the centuries into modern science. Science emphasizes analysis and the specification of relationships with the goal of making testable predictions; that process is considered to be the basis of understanding. The idealist tradition, on the other hand, can be seen as the foundation for mysticism and dualism. The strong tension between these two streams of thinking is very much alive and visible in our daily lives, but especially in fictional literature.

In the modern context, ontology has become mainly a theorizing on objects. The development of theories on the identification and definition of objects, their properties, their part-whole relationships, their class-member relationships, and other imaginable relationships has been the most active area in ontology in the last few centuries. In the last decade or so, ontology has been hijacked for use in making things understandable to machines.

Is It Reasonable?

Ontology took a big step toward becoming something actually useful and meaningful outside the knitting circles of philosophy with the introduction of formal logic in the 19th century. Edmund Husserl seems to have been the first to call the marriage of formal logic and ontology 'formal ontology'. The fruit of this marriage is that we can create formal systems that allow us to reason on the relations and properties of objects with special computer programs that are, appropriately enough, called 'reasoners' (see Cyc, for example). That lets us (and computers) make assertions that are necessarily true about things and states of affairs based on our ontology, and that is a powerful knowledge multiplier.

For example, if our ontology tells us that there is a category of animals called quadrupeds that is defined by the feature of having four legs and some guy says that a bulwargle is a quadruped, then you immediately know that a bulwargle is an animal that has four legs and shares all of the other defining features of quadrupeds, and animals, and living things, and so on.
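
To make that concrete, here's a minimal sketch in plain Python of the kind of inference a reasoner performs. The tiny taxonomy and the made-up word 'bulwargle' come from the example above; the data structures and function names are purely illustrative and have nothing to do with the internals of a real reasoner like Cyc.

    # A toy 'reasoner': each category points to its parent category and to the
    # features that define it. One asserted fact then yields a cascade of others.
    PARENT = {
        "quadruped": "animal",
        "animal": "living thing",
        "living thing": None,
    }

    DEFINING_FEATURES = {
        "quadruped": {"has four legs"},
        "animal": {"can move"},
        "living thing": {"is alive"},
    }

    def ancestors(category):
        """Yield the category itself and every broader category above it."""
        while category is not None:
            yield category
            category = PARENT[category]

    def inferred_features(category):
        """Collect every feature inherited from the category and its ancestors."""
        features = set()
        for c in ancestors(category):
            features |= DEFINING_FEATURES[c]
        return features

    # "Some guy says that a bulwargle is a quadruped."
    thing, category = "bulwargle", "quadruped"
    print(f"A {thing} is a: " + ", ".join(ancestors(category)))
    print(f"A {thing}: " + ", ".join(sorted(inferred_features(category))))

Running it prints that a bulwargle is a quadruped, an animal and a living thing, and that it has four legs, can move and is alive: one assertion, a whole chain of conclusions.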

"Wow!", you say. "Brilliant!" But you are being a bit of a sarcastic smartass. This is just common sense and you are not impressed.

What we need to remember, however, is that this is a formal system, which means that clever machines like your computer, tablet device, or smartphone can use it. This is a way for your computer (or more exactly, a program that runs on your computer) to learn and use common sense and to generate new knowledge on the basis of existing knowledge. This is the reason for all the buzz in fields such as natural language processing, knowledge engineering and artificial intelligence. The people who are engaged in designing and building practical systems in these areas are finally catching on to ways of efficiently representing knowledge structures that can be created, manipulated and shared by machines and used to interface with humans in a natural and useful manner.

You might be more impressed now. You might even be thinking of some cool ways an ontology could jazz up your favorite Web 2.0 community with a generous splash of Web 3.0.

So what does one look like?

What passes for an 'ontology' in practice can be as simple as a 'name space' for a controlled vocabulary of properties.
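
Here's a hypothetical Python sketch of just how bare-bones that can be: a made-up namespace URI plus a short list of allowed property names, used to describe a single resource. The 'http://example.org/vocab#' namespace and the property names are invented for illustration; on the Semantic Web the same idea is normally written down in RDF.

    # A controlled vocabulary: a namespace plus the only property names allowed.
    NAMESPACE = "http://example.org/vocab#"               # made-up namespace URI
    VOCABULARY = {"title", "creator", "subject", "date"}  # the controlled properties

    def qualify(prop):
        """Expand a short property name into its namespaced form, or refuse it."""
        if prop not in VOCABULARY:
            raise ValueError(f"'{prop}' is not in the controlled vocabulary")
        return NAMESPACE + prop

    # Describe one resource using only properties from the vocabulary.
    record = {
        qualify("title"): "On What There Is",
        qualify("creator"): "W. V. Quine",
        qualify("subject"): "ontology",
    }

    for prop, value in record.items():
        print(prop, "=", value)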

Remaining Hurdles

The current approach to creating a semantics for machines based on formal ontologies suffers from three big problems: domain dependence; the need for laborious, time-consuming construction of specific formal ontologies by humans; and the inflexibility of the resulting ontologies.

Domain dependence means that the vocabulary of the ontology only works for some relatively narrow area of knowledge, such as 'wine', 'nuclear disaster' or 'teledildonics'. Usually, a group of 'experts' in the domain get together and try to agree on a well-defined vocabulary and what the terms mean. The result usually involves subjective compromises and only very roughly approximates the range of expression humans actually use. The problem is that the same terms can mean different things in different specific contexts, so it is difficult for machines to interact across the specific and generally arbitrary domain boundaries that someone came up with in designing the ontologies.

The second problem, the labor-intensive hand-crafting of specific formal vocabularies ('ontologies') by experts, makes machine semantics very slow and expensive to implement and maintain.

The final problem is inflexibility. Once an ontology is designed and in use, it is very hard to modify. Any changes that need to be made require more hard work by experts, and the changes must not break what is already in place. Many important domains change frequently, if not constantly, so inflexibility is a serious problem.
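
To make the first of those problems concrete, here is an illustrative sketch of how domain dependence bites: two made-up mini-ontologies that both define the term 'port', and a naive merge that loses the boundary between them. Both vocabularies are invented for the example.

    # Two hypothetical domain vocabularies that happen to share a term.
    WINE_ONTOLOGY = {
        "port": "a fortified wine from the Douro valley",
        "vintage": "the year the grapes were harvested",
    }

    NETWORKING_ONTOLOGY = {
        "port": "a numbered endpoint for network connections",
        "packet": "a unit of data sent over a network",
    }

    merged = {}
    for source in (WINE_ONTOLOGY, NETWORKING_ONTOLOGY):
        for term, definition in source.items():
            # The later definition silently overwrites the earlier one; the
            # machine has no way of knowing which sense of 'port' was meant.
            merged[term] = definition

    print(merged["port"])   # the wine sense is gone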

Object-oriented Ontology

Accepting that ontology is about objects, it is reasonable to try to represent objects the way they are represented in the object-oriented computer programming paradigm. And when we do, magic occurs.
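
As a taste of what that mapping looks like, here is a minimal sketch that restates the toy quadruped taxonomy from earlier as ordinary Python classes; the class names and attributes are invented for illustration. Inheritance then quietly does the work that the formal ontology's class hierarchy did.

    # The toy taxonomy from above, expressed directly as a class hierarchy.
    class LivingThing:
        is_alive = True

    class Animal(LivingThing):
        can_move = True

    class Quadruped(Animal):
        legs = 4

    class Bulwargle(Quadruped):
        """'Some guy says that a bulwargle is a quadruped' -- so we subclass it."""

    b = Bulwargle()
    # Everything a quadruped, an animal and a living thing has, a bulwargle has too.
    print(isinstance(b, Animal), isinstance(b, LivingThing))   # True True
    print(b.legs, b.can_move, b.is_alive)                      # 4 True True

In this picture, subclassing, attribute lookup and isinstance checks give you a free, if crude, version of the subsumption reasoning described earlier; sharing those structures between machines is what formal languages like OWL then add.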

 



On*tol"o*gy (?), n. [Gr. the things which exist (pl.neut. of , , being, p.pr. of to be) + -logy: cf.F. ontologie.]

That department of the science of metaphysics which investigates and explains the nature and essential properties and relations of all beings, as such, or the principles and causes of being.

 

© Webster 1913.
