E is the Greek numeral for 5.

In mathematics, e is approximately:
(each clump is 19 digits)
2.
7182818284590452353 6028747135266249775 7247093699959574966
9676277240766303535 4759457138217852516 6427427466391932003
0599218174135966290 4357290033429526059 5630738132328627943
4907632338298807531 9525101901157383418 7930702154089149934
8841675092447614606 6808226480016847741 1853742345442437107
5390777449920695517 0276183860626133138 4583000752044933826
5602976067371132007 0932870912744374704 7230696977209310141
6928368190255151086 5746377211125238978 4425056953696770785
4499699679468644549 0598793163688923009 8793127736178215424
9992295763514822082 6989519366803318252 8869398496465105820
9392398294887933203 6250944311730123819 7068416140397019837
6793206832823764648 0429531180232878250 9819455815301756717
3613320698112509961 8188159304169035159 8888519345807273866
7385894228792284998 9208680582574927961 0484198444363463244
9684875602336248270 4197862320900216099 0235304369941849146
3140934317381436405 4625315209618369088 8707016768396424378
1405927145635490613 0310720851038375051 0115747704171898610
6873969655212671546 8895703503540212340 7849819334321068170
1210056278802351930 3322474501585390473 0419957777093503660

(source: Mathematica: N[E,1100])

The Naperian or natural base of logarithms. "Naperian" does not mean that Napier created it; in fact his concept of a logarithm is different from the modern concept, but he is credited with inventing logarithms, hence this extremely useful number is identified with him. e is approximately 2.718281828459045 to fifteen decimal places, and can be expressed in many ways:
it is the limit of the slowly converging expression
       /     1  \
 lim  |  1 + -   | ^ n
n->oo  \     n  /
or the more quickly converging series
 oo
---
\     1
 |   ---
/     n!
---
n=0
or as the number satisfying d/du (e^u) = (e^u). Euler gave this number the letter it is associated with; some say it is because e is the first letter of his name, but Euler said that all others were taken up to e. Euler discovered the relationship e^(i*theta) = cos(theta) + i*sin(theta), which is one of the most important formulas in complex analysis, and is responsible for the amazing relationship between the five most important numbers in mathematics: e^(i*pi) + 1 = 0.
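These identities are easy to check numerically. Here is a quick Python sanity check (floating-point confirmation, not a proof):

```python
import cmath
import math

# e^(i*pi) + 1 should be 0, up to floating-point error
euler_identity = cmath.exp(1j * math.pi) + 1
print(abs(euler_identity))  # tiny, on the order of 1e-16

# e^(i*theta) = cos(theta) + i*sin(theta) for an arbitrary angle
theta = 0.7
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
print(abs(lhs - rhs))
```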

The number e (Euler's number, not to be confused with the Euler-Mascheroni constant) can be expressed very prettily as an infinite continued fraction, following the pattern [1 0 1 1 2 1 1 4 1 1 6 1 1 8 1 ...] where the (3n+2)th term is 2n and all other terms equal 1. This describes the following fraction:

1 + 1
   _______
    0 + 1
       _______
        1 + 1
           _______
            1 + 1
               _______
                2 + 1
                   _______
                    1 + 1
                       _______
                        1 + 1
                           _______
                            4 + 1
                               _______
                                1 + 1
                                   _______
                                    1 + 1
                                       _______
                                        6 + ...

Euler was probably the first to discover this (he used the expansion to prove e's irrationality, as well as that of its square). The zero up the top is of course somewhat redundant; however, the alternative is to express the fraction as [2 1 2 1 1 4 1 1 6 1 1 8 1], which is not quite so homogeneous a pattern and somewhat conceals the identity's beauty. (This "zero trick" was invented by Bill Gosper to smooth out what he saw as a glitch in the original representation.)
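For the curious, here's a small Python sketch (mine, not Euler's) that generates the [1 0 1 1 2 1 1 4 ...] terms and evaluates the continued fraction by folding it from the back:

```python
from math import e

def e_cf_terms(n):
    """First n terms of [1; 0, 1, 1, 2, 1, 1, 4, 1, 1, 6, ...]:
    the (3k+2)th term (1-indexed) is 2k, all others are 1."""
    terms = []
    for i in range(1, n + 1):
        if i % 3 == 2:                 # i = 3k+2, so the term is 2k
            terms.append(2 * ((i - 2) // 3))
        else:
            terms.append(1)
    return terms

def fold_cf(terms):
    """Evaluate a continued fraction [t0; t1, t2, ...] from the back."""
    x = terms[-1]
    for t in reversed(terms[:-1]):
        x = t + 1 / x
    return x

approx = fold_cf(e_cf_terms(20))
print(approx, e)  # the convergents approach e very quickly
```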


 ____________________________________
|                                    |
|   oo                               |
|  ---                               |
|  \     1                           |
|   |   ---   =   lim ( 1 + 1/n )^n  |
|  /     n!      n->oo               |
|  ---                               |
|  n=0                               |
|____________________________________|
I did a proof (demonstration, actually; convergence isn't formally considered) of this while paying far too little attention in grade 11 math... I don't know whose it actually is. Probably no-one's; it's really straightforward. It's done from a high school perspective, so excuse the conceptual inelegance.

Now, since n is approaching infinity, applying the Binomial Theorem to the right side of the above equation looks promising: it will give a sum of infinitely many terms. Let's take a look.

The Binomial Theorem:
              n
             ---
             \        n!
(a+b)^n =     |   ----------- * a^(n-k) * b^k
             /     k!*(n-k)!
             ---
             k=0
Let's calculate a few terms of the expansion of e, using the theorem, and starting with t_n (k = n).

t_n (k = n)
 ________________________________________________
|                                                |
|                 n!                             |
|  t_n = lim  ----------- * (1/n)^(n-n) * (1)^n  |
|        n->oo  n!*(n-n)!                        |
|________________________________________________|
Clean it up, and it all comes out to 1 (= 1/0!).

t_(n-1) (k = n-1)
 ____________________________________________________________________
|                                                                    |
|                         n!                                         |
|  t_(n-1) = lim  ------------------- * (1/n)^(n-(n-1)) * (1)^(n-1)  |
|            n->oo  (n-1)!*(n-(n-1))!                                |
|____________________________________________________________________|

                     n(n-1)!
t_(n-1) = lim   ------------- * (1/n)^1 * (1)^(n-1)
          n->oo   (n-1)!*1!

t_(n-1) = lim   n * (1/n) * 1 = 1
          n->oo
...and 1 is 1/1!.

t_(n-2) (k = n-2)
 ____________________________________________________________________
|                                                                    |
|                         n!                                         |
|  t_(n-2) = lim  ------------------- * (1/n)^(n-(n-2)) * (1)^(n-2)  |
|            n->oo  (n-2)!*(n-(n-2))!                                |
|____________________________________________________________________|

                  n(n-1)(n-2)!
t_(n-2) = lim   -------------- * (1/n)^2 * (1)^(n-2)
          n->oo    (n-2)!*2!

                  n(n-1)     1
t_(n-2) = lim    -------- * ----- * 1
          n->oo     2!       n^2
We can use the limit product law to rearrange this and split it into two separate limits:

                  1           n(n-1)
t_(n-2) = lim    --- *  lim  -------- * 1
          n->oo   2!   n->oo    n^2
The first limit is simply 1/2!, by the constant limit theorem. The second is easily shown to be 1: dividing through by n^2 gives n(n-1)/n^2 = 1 - 1/n, which approaches 1 as n approaches infinity.

A similar argument can be used for all of the following terms:
                                                      oo
                                                     ---
                        1     1     1     1          \     1
 lim   ( 1 + 1/n )^n = --- + --- + --- + --- + ... =  |   ---
n->oo                   0!    1!    2!    3!         /     n!
                                                     ---
                                                     n=0
Whee.
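The identity can also be checked numerically in Python. With n finite, (1 + 1/n)^n only approximates the limit, so the two sides agree to within roughly 1/n:

```python
import math

n = 10**6
limit_side = (1 + 1/n) ** n                                 # (1 + 1/n)^n, truncated at n = 10^6
series_side = sum(1/math.factorial(k) for k in range(20))   # sum of 1/k!, truncated at 20 terms

print(limit_side)    # close to e, off by about e/(2n)
print(series_side)   # agrees with e to machine precision
```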
Guitar chords: E major
E
 O   OO
 ======
 |||1||
 ------
 |23|||
 ------
 ||||||
 ------
 ||||||
 ------
 EBEGBE
    #

Notes: 1   : E 
       3   : G#
       5   : B  

Everything Guitar Project : The Everything Guide to Guitar Chords: E

High and funky, strictly an E/B chord:

-12- 
-9- 
-9- 
-9- 
-X- 
-X- 
E is one of the five authors (J, E, D, P, and R) that Biblical scholars have hypothesized for the Pentateuch or Torah, the first five books of the Bible. S/he probably lived around 850 or 800 BC, though some place the date contemporaneously with or earlier than J.

E is short for the Elohist, so named because E consistently refers to God by the name Elohim, though E is not the only author to do so. By comparison, J consistently uses the name Yahweh.

E is credited with the story of Abraham in Genesis 12 and may have written the prose version of the story of the warrior woman Deborah in the book of Judges.
As seen in the other write-ups there are many ways to describe the number e. But how do they fit together? Let's begin by using a definition which shows why e is such an important constant. The proofs explain a fair deal about exponential functions.

Definition:
e is the unique number such that d(e^x)/dx = e^x.

Proof of uniqueness and existence:
Suppose that e_1, e_2 both have the desired property. Consider the function f(x) = e_1^x * e_2^(-x). f'(x) = 0, so e_1/e_2 = f(1) = f(0) = 1, and thus e_1 = e_2. So e is unique.
The existence is ascertained by the propositions below. QED

How do we find the value of e? Well, we know that e^x describes exponential growth where the growth rate per unit time is equal to the current value. We can approximate this exponential growth by dividing a unit time interval into n equal subintervals. In each subinterval the increase is approximately by a factor of 1 + 1/n. Thus we would expect the expression (1 + 1/n)^n to give a good approximation of e for large n. Hence we are led to

Proposition:
e = lim_(n→∞) (1 + 1/n)^n

Proof:
First we need to show that the limit exists.

By Bernoulli's inequality, (1 - 1/(n+1)^2)^(n+1) > 1 - 1/(n+1). Since 1 - 1/(n+1)^2 = (1 + 1/(n+1))/(1 + 1/n), this gives

(1 + 1/(n+1))^(n+1) > (1 + 1/n)^n

so the sequence is increasing. Using the binomial expansion we find that

(1 + 1/n)^n < SUM(k = 0, n)(1/k!) < SUM(k = 0, ∞)(1/k!)

for all n, so the sequence is bounded above. Thus the limit exists, and we may call it d.
Consider f : R -> R+, f(x) = d^x. This is a continuous strictly increasing function, and therefore has a continuous inverse which we call ln.
Since ln is the inverse we have

1 = ln d = ln(lim_(n→∞)(1 + 1/n)^n) = lim_(n→∞) n*ln(1 + 1/n)

Hence as h → 0, (ln(1+h))/h → 1 and (d^h - 1)/h → 1.
Differentiating f from first principles gives

f'(x) = (d^(x+h) - d^x)/h = d^x(d^h - 1)/h = d^x

as h → 0.
So d satisfies the condition that defines e, and hence e = d = lim_(n→∞)(1 + 1/n)^n. QED.

The other approach to take when exploring exponentials is to use power series.

Proposition:
e = SUM(k = 0, ∞)(1/k!)

Proof:
Define exp: R -> R by exp x = SUM(k = 0, ∞)(x^k/k!). Termwise differentiation gives that exp' x = exp x.
Consider f(x) = (exp x)(exp(a+b-x)). f'(x) = 0, so (exp a)(exp b) = f(a) = f(0) = exp(a+b). We can consider this as a functional equation, and using that exp is continuous we find that exp x = (exp 1)^x.
Thus exp 1 satisfies the condition that defines e and e = exp 1 = SUM(k = 0, ∞)(1/k!). QED.
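Both the functional equation and its consequence exp x = (exp 1)^x are easy to illustrate numerically. A short Python sketch, truncating the series at 30 terms:

```python
import math

def exp_series(x, terms=30):
    """Partial sum of exp x = SUM(k = 0, oo)(x^k / k!)."""
    return sum(x**k / math.factorial(k) for k in range(terms))

# the functional equation: (exp a)(exp b) = exp(a+b)
a, b = 0.3, 0.7
print(exp_series(a) * exp_series(b) - exp_series(a + b))  # essentially zero

# its consequence: exp x = (exp 1)^x
x = 2.5
print(exp_series(x) - exp_series(1.0) ** x)               # essentially zero
```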

Moniker of Mark Oliver Everett, a musician best known for his work fronting the band Eels. Everett was born in 1963, the son of famous physicist Dr. Hugh Everett III. Mark showed very little interest in math or physics, instead delving deeply into music, spending his childhood banging out tunes on the drums and piano.

Mark's teenage years were troubled, leading to drugs and trouble with the law. When Mark was nineteen, his father died of a heart attack. A year later, Mark packed up everything and moved to L.A. where he would begin his music career.

Taking the name 'E,' he went on to release two albums, A Man Called (E) in 1992, and Broken Toy Shop in 1993. These albums sold poorly, and after a three year span of writing songs, E teamed up with drummer Jonathan Norton and bassist Tommy Walters to form Eels.

Eels released their first album, Beautiful Freak in 1996, which found success and airplay for the single Novocaine for the Soul.

At the same time, tragedy struck E: his sister committed suicide, and his mother became terminally ill with lung cancer. In 1998, Eels released the powerful album Electro-Shock Blues, both sad and beautiful, an expression of his feelings about his departed family.

Eels' most recent releases have been Daisies of the Galaxy (2000), and Souljacker (2001).

source: http://eels.artistdirect.com/biography/index.shtml

The definition

By definition, for all real numbers x, f(x) = e^x is the function whose rate of change (derivative) is equal to f(x), i.e., f'(x) = f(x) for each real x. Thus a Taylor polynomial would be easy to construct (and use to approximate f(x)) if we were given the value of f(x) for some specific x. So let f(0) = a, an arbitrary (non-zero) real number. Then the nth Taylor polynomial is:

                                                              (n)
 (n)                f'(0)       1    f''(0)       2          f   (0)       n
T   (x,0) = f(0) + -------·(x-0)  + --------·(x-0)  + ... + ---------·(x-0)
                      1!               2!                       n!

               a       1    a       2          a       n
        = a + ---·(x-0)  + ---·(x-0)  + ... + ---·(x-0)
               1!           2!                 n!

               n
              --- /  i \
              \   | x  |
        = a · /   | -- |
              --- \ i! /
              i=0

Now, it would be nice if f(0)=1, because then a=1 and there are no extra constants floating around. Notice that a=0 is possible, but highly uninteresting, since then the function and all its derivatives are zero. It is also worth noting that the above formula is only an approximation; the exact value of f(x) is obtained only in the limit, by summing the infinitely many terms of the series.
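The nth Taylor polynomial above is easy to evaluate directly. A small Python sketch, with the constant a left as a parameter as in the derivation:

```python
import math

def taylor_poly(x, n, a=1.0):
    """T^(n)(x, 0) = a * (sum of x^i / i! for i = 0..n)."""
    return a * sum(x**i / math.factorial(i) for i in range(n + 1))

# with a = 1 this approximates e^x; the error shrinks as n grows
for n in (2, 5, 10):
    print(n, taylor_poly(1.0, n), math.exp(1.0))
```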

An alternate definition

e^x can also be defined as the inverse of ln(x), the natural logarithm of x. All the properties of powers of numbers apply to e^x, which can be easily shown using the definition of ln(x).

E is a <deep breath> cross-platform scripting language with capability-based security and builtin support for distributed programming. It's called E because, while there are languages called A, B, and C, there is none called D†; thus, the original designers decided that D must be bad luck, and moved on to E. The current implementation is written in Java, and comes with ELib, which lets Java programs interact and cooperate with ones written in E. It is closely related to the CapIDL and EROS projects, among several others, and is worked on by some very smart people.

I'm not going to even try to explain most of the language in this writeup; instead I'm going to explain a few core concepts which will hopefully get some few of you interested enough to learn the language.

One of the neat things about E is its mechanisms for deadlock-free distributed programming, which are inspired by real life. Let's say that you have to get a document from Bob over in accounting. You walk over and find that Bob hasn't finished it yet, so what to do? In many distributed systems, you would stand there and wait for him to finish it (block). Instead, of course, you tell him you'll be back later, and go do something else. This is how E works.

The key operation is the 'eventually operator', which is <-. To tell a car object to move, you might say:


car <- moveToHere(5,7)
// The car will eventually move, but not right now.

But we don't even have to have an actual thing to make calls on with the eventually operator. We can do that just based on the promise that, at some point in the future, we will have the appropriate object:


// carVow is a promise that the car maker will send us a car
def carVow := makeCar <- ("Skyline")
// once we get the car, tell it to move
carVow <- moveTo(2,3)
// but we keep going, even if the car isn't built yet

You can also ask to be notified about a promise, running code once the thing really is done:


when (carVow) -> done(car) {
   println("Right on, we've got wheels!")
   car <- goTo("The 7-11")
} catch(prob) {
   // Something bad happened, and now we don't get our car
   println("Dude, where's my car? Answer: " + prob)
}
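E's promises are similar in spirit to futures in other languages, though Python's futures lack E's promise pipelining. Here's a loose asyncio analogy of the examples above (the car maker and method names are my own inventions, not E's or Python's API):

```python
import asyncio

async def make_car(model: str) -> dict:
    # Stand-in for a remote car maker; takes some time to finish.
    await asyncio.sleep(0.01)
    return {"model": model, "pos": (0, 0)}

async def main():
    # Like `def carVow := makeCar <- ("Skyline")`: start the work,
    # get a promise (Task) back immediately, and keep going.
    car_vow = asyncio.create_task(make_car("Skyline"))
    print("still running while the car is built...")
    try:
        # Like `when (carVow) -> done(car) { ... }`
        car = await car_vow
        car["pos"] = (2, 3)
        print("Right on, we've got wheels!", car)
    except Exception as prob:
        # Like the `catch(prob)` clause
        print("Dude, where's my car? Answer:", prob)

asyncio.run(main())
```

Unlike E's eventually operator, `await` here does suspend the caller; the analogy only goes so far.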

It's impossible for me to write too much more without just replicating everything in E in a Walnut, so I will direct you there for all further information, including the 'normal' programming stuff for single process systems. E really is cool; check it out.

† Alas, StrawberryFrog informs me that there is a programming language named D (http://www.digitalmars.com/d/). In defence of both myself and the E programming team, E existed before D. :P

‡ Most examples either stolen from or heavily inspired by ones in E in a Walnut.


See also: Æ, æ, Œ, œ, È, è, É, é, Ê, ê, Ë, ë, Ε, ε, Ē, ē, Ĕ, ĕ, Ė, ė, Ę, ę, Ě, ě, Ə, Ǽ, ǽ, Ȁ, ɘ, ə, ɛ, ɜ, ɞ, Έ, Ξ, έ, ξ, Ё, Є, Е, Э, е, э, ѐ, ѓ, Ә, ә, Ӛ, ӛ, ế


Values and Representations of "E" and "e"

e is an irrational real number that has interesting properties, much like that number pi that seems to be so popular.

How do we define e?

For the purposes of this writeup, we'll say that e is an irrational number that follows the definition that d/dx(e^x) = e^x.

What does this definition of e mean and imply?

Returning to arithmetic and functions, this definition depends on the nature of an exponent. By raising e to a certain exponent x, written e^x, one is simply multiplying e by itself x times, at least when x is a positive integer. (See writeups on exponents for more details on this.)

The statement above refers to the graph of e^x. Because it is a continuous and differentiable function of x, at any point on the graph a line can be drawn tangent to the graph, touching it at that point only. Think of a ball resting on flat ground: the ball is tangent to the ground at the point of contact. According to this definition of e, the slope of the line tangent to the graph of f(x) = e^x at any point is equal to the value of f(x) at that point. It's crazy, but it works.
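You can watch this happen numerically: approximate the tangent slope with a symmetric difference quotient and compare it with f(x) itself. A quick Python sketch:

```python
import math

h = 1e-6  # a small step for the difference quotient
for x in (0.0, 1.0, 2.5):
    # slope of the tangent at x, approximately
    slope = (math.exp(x + h) - math.exp(x - h)) / (2 * h)
    print(x, slope, math.exp(x))  # slope matches e^x closely
```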

Another interesting thing about e

The alternate definition of e is as the base of the natural logarithm, ln(x), which is defined as the integral of 1/t dt from 1 to x. This is an interestingly simple definition that has many applications in calculus.

So how do we calculate e?

This number, like π, has a simple definition, but cannot be represented as a simple rational or the value of an easily evaluable elementary function.

The answer lies within a concept called Taylor series, and more specifically, Maclaurin series (knowledge of some calculus required). A Taylor series polynomial attempts to reconstruct the behavior of a function around a certain point. A Maclaurin series polynomial is a Taylor series polynomial with its "seed point" being chosen as x=0.

An informal definition of a Maclaurin series is that you continually differentiate the function, attempting to reproduce its behavior with an x^i term for each i from 0 to ∞.

       ∞
     ____
     \   `  f[i](0)   i
P(x)= )     ------- * x
     /___,     i!
      i=0

Where f[i](x) is the i-th derivative of the function f(x), here defined as f(x) = e^x. Note the term i! in the denominator. This is called the factorial operation, and is necessary in the denominator to counteract the factorial-like growth of the coefficients produced by repeated differentiation. Every derivative of this function is still f(x), because of its definition (see above). Also, f(0) is 1, because any number (except 0) raised to the 0 power is 1. Finally, we are looking for P(1), so we can replace x with 1.

       ∞
     ____
     \   `   1
P(1)= )     -- * 1
     /___,  i!
      i=0

    ∞
  ____
  \   `   1
e= )     --
  /___,  i!
   i=0

And so we have the definition of a series that can calculate the value of e, quite possibly the most important irrational number in math, science, and engineering, rivaled only by π and half the square root of two.
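And the series converges fast enough to be practical. Here's a short Python sketch (mine, not from any of the writeups above) that uses the factorial series with scaled integer arithmetic to produce as many digits of e as you like:

```python
def e_digits(n):
    """First n+1 digits of e (including the leading 2),
    via e = sum of 1/k!, computed with scaled integers."""
    scale = 10 ** (n + 10)          # 10 guard digits absorb rounding
    term, total, k = scale, 0, 0
    while term:                     # terms vanish once k! > scale
        total += term
        k += 1
        term //= k                  # term is now scale // k!
    return str(total)[: n + 1]

print(e_digits(20))  # matches the digits at the top of this node
```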

The 'e' language (not to be confused with 'E'), designed by Verisity, is a programming language targeted at hardware verification, implemented by their Specman Elite tool (which is, for the record, really quite expensive). It's ideally suited for creating testbenches for HDL models, and testing those models, thanks to several interesting features of the language:

  1. Builtin interface to logic simulator packages such as NCVerilog, ModelSim, etc.
  2. Random generation of arbitrary data types, with linear constraint solving to generate data that meets some user-specified validity requirement.
  3. Event handling and processing, including a sophisticated temporal expression evaluation system to specify events in terms of sequences of other events.
  4. Functional coverage collection and analysis, driven by the event system.

The language provides a bare minimum of object oriented features, with no encapsulation and a form of inheritance which actually comes via its aspect oriented feature set.

Syntax

The syntax of the core language is based heavily on C, with some simplification. Uniquely, the language contains elements which can be used to extend the syntax of the language: its "preprocessor" is somewhere between cpp and yacc. This is used in the standard system to provide some syntactic sugar, and also some rather neat constructs including things borrowed from functional programming languages.

Lists are supported as a builtin data type (actually implemented by vectored dynamic arrays), as are keyed lists (almost, but not quite, a hash), and pleasingly they use a syntax for list literals which is identical to the syntax of structure members and statements within blocks in C: { item1; item2; }. Also, a semicolon is required after compound statement blocks, which keeps everything nice and regular.

Objects and Aspects

The object (or 'struct') model in e is superficially similar to C++ or Java (although data declarations are similar to Pascal or Ada). A declaration of the struct contains definitions of the struct's data members and methods:

  struct myStruct {
    myvar: int;
    getMyVar(): int is {
      result = myvar;
    };
  };

However, things soon get more complicated. The aspect oriented feature set includes the ability to extend previously declared structure types with additional data members (aspect variables as AspectJ would call them) and methods, and also extend the methods themselves. It's a very useful paradigm for use in a verification context, because verification is composed almost entirely of crosscutting issues.

Conditional extension, however, is where it starts to get hairy. And this is also how inheritance is implemented in e. Let's jump straight in at the deep end, and extend our 'myStruct' type from above so that we have a subtype which also has a 'myOtherVar' integer.

  extend myStruct {
    hasMyOtherVar : bool;
    when TRUE'hasMyOtherVar {
      myOtherVar: int;
      getMyOtherVar(): int is {
        result = myOtherVar;
      };
    };
  };

If you followed that, your mind might be boggling by now. What we did was:

  1. We decided to extend the definition of a previously defined struct ('myStruct'). This code could be (and in fact almost certainly would be) in a completely different source file to the original definition.
  2. We add another data member, 'hasMyOtherVar'. This is unconditional, so every instance of myStruct in the system will also have this data member.
  3. We introduce a conditional part of the definition of myStruct. All of the following definitions only have effect when hasMyOtherVar is TRUE. So long as there are no other members of the struct (at least, none that are visible to this aspect) which could have the value TRUE (ie. no other bools) we could have simplified the condition to simply 'when TRUE'. Which is kinda neat, when you think about it.
  4. Inside the conditional section, we now have a 'myOtherVar'. This is only valid if hasMyOtherVar is TRUE, and is otherwise completely invisible.
  5. Define a new method, which only exists on the struct when hasMyOtherVar is TRUE and otherwise has no meaning.

In effect, we've subclassed myStruct. And in the process we've learned that trying to think in terms of classical object oriented design in e isn't going to get us very far.

Enumerated types can be extended by adding new values, which comes in particularly useful for exactly the situation we saw above. Methods can be extended, too:

  extend myStruct {
    getMyVar(): int is first {
      print("Retrieving value of myVar!\n");
    };
  };

And of course, all of these things can happen inside when conditional extensions. It's a risky way of doing things. Because structures and methods can be defined cumulatively in almost arbitrary order within and between source files, tracing execution and semantics becomes difficult. It can create serious maintenance headaches where more than one person is working with code, since the implicit interface definition between 'interface' and 'implementation' no longer exists as a tangible part of the language, and must be artificially (and consciously) reconstructed.

Generation and Constraint Solving

One of the most useful concepts in e is generation. The main focus of the language's designers is on functional verification, a large proportion of which is concerned with the generation of tests and test data.

Generating tests is hard for complex systems. For example, to test a microprocessor, we'd need to, essentially, generate code. We'd have to generate code which would behave in a well-defined manner (we don't want it reading any memory that's not physically implemented, and there will likely be large sections of its instruction set architecture which have unpredictable results). And ideally, we'd like a test program to actually reach some defined "end" point at which we could say the test has passed or failed. Simply filling the program memory with random data and hitting 'Go' is unlikely to be an effective use of simulation time.

Instead, we'd like to define a set of constraints (along the lines of "make memory references point to some memory that really exists"), and e's keep construct allows us to do just that. With keep, we can specify an expression which the constraint solver will hold to be true while randomly generating the data items for a struct.

If we were to request the generation of a random myStruct, its 'myvar' values would be evenly spread over the integer range. Approximately half the time it would have a 'myOtherVar', which would also be spread over the integer range. But let's say, for the sake of example, that we want to keep myvar between zero and ten, and that if myvar happens to be 5, we really need the myStruct to have a myOtherVar. It's a somewhat arbitrary set of constraints, but it's precisely 22:42 at night, and if I can't be arbitrary at some arbitrary time of night, then when can I?

  extend myStruct {
    keep myvar >= 0 && myvar <= 10;
    keep (myvar == 5) => hasMyOtherVar;
  };

Lo and behold, every myStruct we generate from that moment on will satisfy those constraints. Unless, of course, we've accidentally specified a set of constraints that doesn't make sense, or which is contradicted by a later set of constraints, in which case the constraint solver detects the contradiction and gives us a ticking off for it.
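The effect of those keep constraints can be mimicked crudely in Python with rejection sampling. This is just an analogy for illustration: Specman's generator uses a real constraint solver, not rejection, and the field names below are from the running example:

```python
import random

def gen_my_struct():
    """Generate (myvar, has_my_other_var, my_other_var) satisfying:
         keep myvar >= 0 && myvar <= 10;
         keep (myvar == 5) => hasMyOtherVar;"""
    while True:
        myvar = random.randint(-100, 100)
        has_other = random.choice([True, False])
        if not (0 <= myvar <= 10):
            continue                      # first keep violated: reject
        if myvar == 5 and not has_other:
            continue                      # implication keep violated: reject
        other = random.randint(-100, 100) if has_other else None
        return myvar, has_other, other

sample = [gen_my_struct() for _ in range(1000)]
assert all(0 <= m <= 10 for m, _, _ in sample)     # every myvar in range
assert all(h for m, h, _ in sample if m == 5)      # myvar == 5 implies the flag
```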

Simulation interface

One of the things that makes e such a Swiss Army knife in the toolkit of a verification engineer is its ability to run as a PLI or VPI library inside a logic simulator, completely transparently to pre-existing e code. Having defined a struct with constraints to generate valid test data, we can then take that struct and apply its values to the actual design under simulation, simply by enclosing the hierarchical signal name in single quotes inside Specman and assigning to it as if it were any e expression.

The same thing works in reverse, of course: reading a signal name enclosed in single quotes returns the value of that signal at the current simulation time.

Events, Threads and Time-Consuming Methods

To match up with the simulation semantics and simulated time, as well as the interface imposed by PLI and VPI, e has a notion of time. Much like in Verilog, imperative code is considered to take no simulation time to execute. Since e is fundamentally imperative, though, with a top-level single thread of control (and doesn't inherently respond passively to events on signals: they have to be sampled explicitly), e introduces a cooperative multitasking thread model to allow signals to be monitored or polled in an imperative fashion.

To do this, an 'event' data type and a new type of method, a "time-consuming method", are introduced. The execution of a time-consuming method (TCM) spans more than one simulation event. A conventional method cannot call a TCM since this would imply the normal method might occupy simulation time.

Time passes in a TCM when a wait @event statement is executed. The syntax and semantics are suspiciously similar to those of Verilog. A lot like a yield call in a cooperative multitasking environment, the wait statement causes the interpreter to check any other TCMs and execute them if possible, and if not, hand control back to the logic simulator to allow simulation to proceed until a TCM's wait condition is satisfied.
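The TCM/wait model can be caricatured in Python with generators: each "TCM" yields the name of the event it is waiting on, and a toy scheduler (standing in for the simulator's event loop) resumes it when that event fires. All names here are my own, not e's:

```python
from collections import defaultdict, deque

log = []

def monitor(name, n):
    """A toy TCM: waits on the 'clock' event n times."""
    for i in range(n):
        yield "clock"                     # like `wait @clock`
        log.append((name, i))

def run(tcms, ticks):
    """A toy scheduler: fires the 'clock' event once per tick."""
    waiting = defaultdict(deque)
    for t in tcms:
        waiting[next(t)].append(t)        # park each TCM on its first wait
    for _ in range(ticks):
        ready, waiting["clock"] = waiting["clock"], deque()
        for t in ready:
            try:
                waiting[t.send(None)].append(t)   # resume until next wait
            except StopIteration:
                pass                      # this TCM has finished

run([monitor("a", 2), monitor("b", 3)], ticks=4)
print(log)
```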

The similarity in simulation semantics to Verilog means that it's viable to define a synthesizable subset of e, which can be converted into hardware. Significant work has been done in this direction, in fact: a single testbench environment for simulation, emulation and physical testing is highly valuable.

Temporal Expressions

The event system in e is actually quite rich, and a notation known as temporal expressions exists for specifying events in terms of other events, with temporal sequencing. They share a lot of common ideas and indeed syntax with regular expressions, and are very useful particularly for functional coverage analysis.

Functional Coverage

Handily enough, e provides facilities for specifying and measuring functional coverage of conditions in the design under test, and the test data applied to it.

The coverage system is driven by the event system: on a specified event, the system records any values which are significant to coverage, and adds them to its database, writing out the coverage data to disk at the end of its run.

Items to be covered on specified events can be specified individually, or the cartesian product of different parameters can be specified. Want to be sure you've seen every combination of A with every combination of B? Take the cross product! Some of those values are actually illegal and will never occur in the system? You can tell e this, and if they actually occur, it will flag an error.

Coupled to the coverage collection, specman comes with a tool to view and interactively explore the coverage data, showing which functional coverage points have not yet been tested and indicating which areas to direct further testing activity at.

Summing up...

Underneath all the "interesting" features, e is actually a fairly neat little language with a consistent syntax, which is actually a lot more fun to use than it has any right to be. The additional features, however, make the language pragmatic and insanely useful. If you happen to be doing functional verification, that is.

E (ee).

1.

The fifth letter of the English alphabet. It derives its form, name, and value from the Latin, the form and value being further derived from the Greek, into which it came from the Phenician, and ultimately, probably, from the Egyptian. Its etymological relations are closest with the vowels i, a, and o, as illustrated by to fall, to fell; man, pl. men; drink, drank, drench; dint, dent; doom, deem; goose, pl. geese; beef, OF. boef, L. bos; and E. cheer, OF. chiere, LL. cara.

The letter e has in English several vowel sounds, the two principal being its long or name sound, as in eve, me, and the short, as in end, best. Usually at the end of words it is silent, but serves to indicate that the preceding vowel has its long sound, where otherwise it would be short, as in mane, cane, mete, which without the final e would be pronounced man, can, met. After c and g, the final e indicates that these letters are to be pronounced as s and j, respectively, as in lace, rage.

See Guide to Pronunciation, §§ 74-97.

2. Mus.

E is the third tone of the model diatonic scale. Eb (E flat) is a tone which is intermediate between D and E.

 

© Webster 1913.
