An ADT (abstract data type) is the name given to a form of data structure that is described by what it does rather than by how it is built, so it can be implemented in different ways, in different languages, and still achieve the same effect.

The best example would be the linked list, which in its most basic form in C would look something like this:

struct linked_list_node
{
    void *node_data;                             /* the information this node holds              */
    struct linked_list_node *next_node_in_list;  /* where the next node is found; NULL at the end */
};
But it doesn't necessarily have to look like this, and you don't necessarily have to do it in C, or even in a language with real "pointers", for it to still be a linked list; it's an ADT, meaning it isn't an implementation, but an idea. Abstractly, the above would simply be "some sort of structure consisting of a series of nodes where each node contains some kind of information, and the location where the next node in the list can be found".
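
For instance, here is one way (a purely illustrative, fixed-size sketch with made-up names) to get the same effect in C without any real pointers at all, by keeping the nodes in an array and recording the array index at which the next node can be found:

/* A linked list without pointers: the nodes live in a fixed array, and
   "the location of the next node" is simply an index into that array.
   Purely illustrative: fixed capacity, no allocation or error handling. */
#define MAX_NODES 100
#define NO_NODE   (-1)              /* plays the role of a NULL pointer     */

struct index_list_node
{
    int node_data;                  /* the node's information               */
    int next_node_in_list;          /* index of the next node, or NO_NODE   */
};

struct index_list_node nodes[MAX_NODES];
int first_node = NO_NODE;           /* index of the first node in the list  */

/* walking the list means following indices instead of pointers:
   for (int i = first_node; i != NO_NODE; i = nodes[i].next_node_in_list) ... */

Abstractly, it is still the same linked list.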

Advanced CS seems to consist mostly of learning new ADTs and new ways to use them. The most famous ADTs besides the linked list would be the doubly linked list, the queue, the stack, and probably most importantly the tree.

(computer science, data structures:)

An Abstract Data Type (which see) is specified by a list of axioms giving the legal operations defined on the data structure, along with semantics: what the operations actually do, and (possibly) bounds on time and space required to perform them.

In most cases, the specification and the implementation are conflated, and the actual data structure gets called the ADT. This is not a problem in "everyday" use.

The data type is abstract in the sense that nothing is specified about its particular implementation in the program. Thus, the Forth dictionary is a data structure of a Forth interpreter; a small hash table with open hashing is a common implementation; the ADT (or rather, the precise set of operations that this data structure allows!) is a mapping of words to their definitions, with few performance guarantees made. All are descriptions of the same thing, though.
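
To make that distinction concrete, the dictionary's operations can be written down without saying anything about hash tables at all. The following C sketch uses entirely hypothetical names and, purely for illustration, pretends a definition is just a string:

/* The dictionary as an ADT: a mapping of words to definitions.
   Nothing here says how the mapping is stored; all names are hypothetical,
   and a definition is shown as a plain string only for illustration. */
struct dictionary;                  /* opaque: the implementation is hidden */

struct dictionary *dict_create(void);
void dict_define(struct dictionary *d, const char *word, const char *definition);
const char *dict_lookup(struct dictionary *d, const char *word);   /* NULL if unknown */

Whether dict_lookup walks a hash table, a tree, or a plain list is an implementation detail; the ADT is only the set of operations and what they promise.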

Examples commonly given for ADTs include binary trees, hash tables, red-black trees, and arrays. As you can see, almost all of these are really descriptions of implementation strategies rather than of the ADT itself.

As an example, to better understand the definition above, have a look at a possible specification of a list ADT:

First, the signature of the ADT is defined. For that we need some sets and functions on these sets:

E = Possible list elements (e.g. int, String, ...)
L = E^0 ∪ E^1 ∪ E^2 ∪ ... = set of all possible finite tuples with elements from E
BOOL = { true, false }

new: -> L
add: L x E -> L
remove: L x E -> L
head: L -> E
tail: L -> L
isIn: L x E -> BOOL

So new is the operation which transforms nothing into a list, remove transforms a list and an element into a list, and so on.
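
As a rough sketch (the names and the choice of int for E are assumptions made purely for illustration), this signature could be rendered in C as a set of prototypes, with the empty list λ represented by a NULL pointer:

/* A possible C rendering of the signature above.
   E is taken to be int, L is struct list_node *, and λ is NULL. */
struct list_node;                                            /* L                    */

struct list_node *list_new(void);                            /* new:    -> L         */
struct list_node *list_add(struct list_node *l, int e);      /* add:    L x E -> L   */
struct list_node *list_remove(struct list_node *l, int e);   /* remove: L x E -> L   */
int list_head(struct list_node *l);                          /* head:   L -> E       */
struct list_node *list_tail(struct list_node *l);            /* tail:   L -> L       */
int list_is_in(struct list_node *l, int e);                  /* isIn:   L x E -> BOOL (0 or 1) */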

Second, the semantics of the functions need to be defined. This can be done, for example, by specifying some relationships between the functions (in the following, l is an element of L, and e and f are elements of E):

new() = λ
head(add(l,e)) = e
tail(add(l,e)) = l
remove(λ,e) = error
remove(add(l,e),e) = l
remove(add(l,f),e) = add(remove(l,e),f)
    (if f != e)
isIn(add(l,e),e) = true
isIn(λ,e) = false
isIn(add(l,f),e) = isIn(l,e)
    (if f != e)

So new creates the empty list (represented by the λ sign here). remove executed on an empty list causes an error; executed on a list onto which the element to be removed was just added, it returns the list without that element; and executed on a list to which a different element was added, it works its way further through the list.
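
The axioms alone are enough to work out concrete cases. For example, removing a from the list built up as add(add(new(),a),b), assuming a != b:

remove(add(add(new(),a),b),a)
    = add(remove(add(new(),a),a),b)    (third remove axiom, since b != a)
    = add(new(),b)                     (second remove axiom)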

This completely determines what the ADT will do when its functions are used, but not how this is achieved (i.e. how it is implemented). If a little bit more about the implementation is known, one can also give the bounds on time and space mentioned in the WU above. In this example, the time bounds could be (if implemented as a linked list):

add: O(1)
remove: O(n) (with n = number of elements in the list)
isIn: O(n)
head: O(1)
tail: O(1)
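
As a rough illustration only (one of many possible implementations, with hypothetical names and int chosen for E), the prototypes sketched earlier could be filled in with a singly linked list, which gives exactly these bounds:

#include <stdlib.h>

/* A singly linked list implementation of the list ADT above.
   E is int, the empty list λ is a NULL pointer. Sketch only:
   malloc failure is not handled. */
struct list_node
{
    int element;                    /* the stored element (E = int here)  */
    struct list_node *next;         /* rest of the list; NULL is λ        */
};

/* new: -> L ... the empty list is just NULL: O(1) */
struct list_node *list_new(void)
{
    return NULL;
}

/* add: L x E -> L ... prepend a node: O(1) */
struct list_node *list_add(struct list_node *l, int e)
{
    struct list_node *n = malloc(sizeof *n);
    n->element = e;
    n->next = l;
    return n;
}

/* head: L -> E ... first element: O(1). head(λ) is left undefined, as in the spec. */
int list_head(struct list_node *l)
{
    return l->element;
}

/* tail: L -> L ... everything after the first element: O(1) */
struct list_node *list_tail(struct list_node *l)
{
    return l->next;
}

/* isIn: L x E -> BOOL ... walk the list until e is found: O(n) */
int list_is_in(struct list_node *l, int e)
{
    for (; l != NULL; l = l->next)
        if (l->element == e)
            return 1;
    return 0;
}

/* remove: L x E -> L ... drop the most recently added occurrence of e: O(n).
   The spec says remove(λ,e) = error; this sketch simply returns λ instead. */
struct list_node *list_remove(struct list_node *l, int e)
{
    if (l == NULL)
        return NULL;
    if (l->element == e) {
        struct list_node *rest = l->next;
        free(l);
        return rest;
    }
    l->next = list_remove(l->next, e);
    return l;
}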
