Saturday 20 June 2015

A25.Inglish BCEnc. Blauwe Kaas Encyclopedie, Duaal Hermeneuties Kollegium.

Inglish Site.25.
*
TO THE THRISE HO-
NOVRABLE AND EVER LY-
VING VERTVES OF SYR PHILLIP
SYDNEY KNIGHT, SYR JAMES JESUS SINGLETON, SYR CANARIS, SYR LAVRENTI BERIA ; AND TO THE
RIGHT HONORABLE AND OTHERS WHAT-
SOEVER, WHO LIVING LOVED THEM,
AND BEING DEAD GIVE THEM
THEIRE DVE.
***
In the beginning there is darkness. The screen erupts in blue, then a cascade of thick, white hexadecimal numbers and cracked language, "UnusedStk" and "AllocMem." Black screen cedes to blue to white and a pair of scales appear, crossed by a sword, both images drawn in the jagged, bitmapped graphics of Windows 1.0-era clip-art: light grey and yellow on a background of light cyan. Blue text proclaims, "God on tap!"
*
Introduction.
Yes, I am getting a little Mobi-Literate (ML) by experimenting with literature on my mobile phone. People call it Typographical Laziness (TL).
The first accidental entries for this part of this encyclopedia.
*
This is TempleOS V2.17, the welcome screen explains, a "Public Domain Operating System" produced by Trivial Solutions of Las Vegas, Nevada. It greets the user with a riot of 16-color, scrolling, blinking text; depending on your frame of reference, it might recall DESQview, the Commodore 64, or a host of early DOS-based graphical user interfaces. In style if not in specifics, it evokes a particular era, a time when the then-new concept of "personal computing" necessarily meant programming and tinkering and breaking things.
*
Index.
*
102.Heka.
103.Guṇa.
99.Differential Calculus.
*
102.Heka.
Heka can refer to:
1.Heka (god), the deification of magic in Egyptian mythology
2.Heka, the snake of Scarab (Harris Stone) in Mummies Alive!, named after the god.
3.Lambda Orionis, a star in the constellation of Orion, also known by the traditional names "Meissa" and "Heka"
4.AMD Phenom II, core name for a triple-core in the Phenom II CPU-line from AMD.
Heka (/ˈhɛkə/; Egyptian: ḥkꜣ; also spelt Hike) was the deification of magic in ancient Egypt, his name being the Egyptian word for "magic". According to Egyptian writing (Coffin text, spell 261), Heka existed "before duality had yet come into being." The term "Heka" was also used for the practice of magical ritual. The Coptic word "hik" is derived from the Ancient Egyptian.
Heka literally means activating the Ka, the aspect of the soul which embodied personality. Egyptians thought activating the power of the soul was how magic worked. "Heka" also implied great power and influence, particularly in the case of drawing upon the Ka of the gods. Heka acted together with Hu, the principle of divine utterance, and Sia, the concept of divine omniscience, to create the basis of creative power both in the mortal world and the world of the gods.
As the one who activates Ka, Heka was also said to be the son of Atum, the creator of things in general, or occasionally the son of Khnum, who created specific individual Ba (another aspect of the soul). As the son of Khnum, his mother was said to be Menhit.
The hieroglyph for his name featured a twist of flax within a pair of raised arms; however, it also vaguely resembles a pair of entwined snakes within someone's arms. Consequently, Heka was said to have battled and conquered two serpents, and was usually depicted as a man choking two giant entwined serpents. Medicine and doctors were thought to be a form of magic, and so Heka's priesthood performed these activities.
Egyptians believed that with Heka, the activation of the Ka, an aspect of the soul of both gods and humans (and the divine personification of magic), they could influence the gods and gain protection, healing and transformation. Health and wholeness of being were sacred to Heka. There is no word for religion in the ancient Egyptian language; mundane and religious world views were not distinct, and thus Heka was not a secular practice but rather a religious observance. Every aspect of life, every word, plant, animal and ritual was connected to the power and authority of the gods.
In ancient Egypt, medicine consisted of four components; the primeval potency that empowered the creator-god was identified with Heka, who was accompanied by magical rituals known as Seshaw, held within sacred texts called Rw. In addition Pekhret, medicinal prescriptions, were given to patients to bring relief. This magic was used in temple rituals as well as informal situations by priests. These rituals, along with medical practices, formed an integrated therapy for both physical and spiritual health. Magic was also used for protection against the angry deities, jealous ghosts, foreign demons and sorcerers who were thought to cause illness, accidents, poverty and infertility.
*
103.Guṇa.
Guṇa (Sanskrit: गुण) depending on the context means 'string, thread or strand', or 'virtue, merit, excellence', or 'quality, peculiarity, attribute, property'.
The concept originated in Samkhya philosophy but is now a key concept in various schools of Hindu philosophy. According to this worldview, there are three guṇas that have always been and continue to be present in all things and beings in the world. These three gunas are called: sattva (goodness, constructive, harmonious), rajas (passion, active, confused), and tamas (darkness, destructive, chaotic). All three gunas are present in everyone and everything; it is their proportion that differs, according to the Hindu worldview. The interplay of these gunas defines the character of someone or something and of nature, and determines the progress of life.
In some contexts, it may mean 'a subdivision, species, kind, quality', or an operational principle or tendency of something or someone. In human behavior studies, Guna means personality, innate nature and psychological attributes of an individual.
There is no single-word English translation for the concept of guna. The usual, but approximate, translation is "quality".
Guna appears in many ancient and medieval era Indian texts. Depending on the context, it means:
1.string or thread, rope, sinew, chord (music, vowel phonology and arts literature)
2.virtue, merit, excellence (dharma and soteriological literature)
3.quality, peculiarity, tendency, attribute, property, species (sastras, sutras, the Epics, food and analytical literature)
The root and origins.
Guṇa is both a root and a word in the Sanskrit language. Its different context-driven meanings are derived from either the root or the word. In verse VI.36 of the Nirukta by Yāska, a 1st-millennium BC text on Sanskrit grammar and language that preceded Panini, Guṇa is declared to be derived from another root, Gaṇa, which means "to count, enumerate". This meaning has led to its use in speciation, subdivision, classification of anything by peculiarity, attribute or property. This meaning has also led to its use with prefixes such as Dviguna (twofold), Triguna (threefold) and so on.
In another context, such as phonology, grammar and arts, "Guṇa-" takes the meaning of āmantraṇā (addressing, invitation) or abhyāsa (habit, practice). In the Mahabharata Book 6 Chapter 2, the meaning of guna similarly comes in the sense of addressing each part (the root implying āmantraṇā), and thereby it means avayava (member, subdivision, portion). In Sanskrit treatises on food and cooking, guna means quality, tendency and nature of an ingredient. Ancient South Indian commentators, such as Lingayasurin, explain that the meaning of guna as "thread, string" comes from the root guna- in the sense of repetition (abhyāsa), while the Telugu commentator Mallinatha explains the root guna- is to be understood in Sisupalavadha as āmreḍana (reiteration, repetition). Larson and Bhattacharya suggest that the "thread" metaphor relates to that which connects and runs between what we objectively observe and the tattva (elementary property, principle, invisible essence) of someone or something.
In the context of philosophy, morality and understanding nature, "Guṇa-" with more dental na takes the meaning of addressing quality, substance, tendency and property. In abstract discussion, it includes all hues of qualities - desirable, neutral or undesirable; but if unspecified, it is assumed with good faith to be good and divine in Indian philosophy. Thus, Guṇī from the root "Guṇa-" means someone or something with "divine qualities", as in Śvetāśvatara Upanishad hymn VI.2.
*
99.Differential Calculus.
In mathematics, differential calculus is a subfield of calculus concerned with the study of the rates at which quantities change. It is one of the two traditional divisions of calculus, the other being integral calculus.
The primary objects of study in differential calculus are the derivative of a function, related notions such as the differential, and their applications. The derivative of a function at a chosen input value describes the rate of change of the function near that input value. The process of finding a derivative is called differentiation. Geometrically, the derivative at a point is the slope of the tangent line to the graph of the function at that point, provided that the derivative exists and is defined at that point. For a real-valued function of a single real variable, the derivative of a function at a point generally determines the best linear approximation to the function at that point.
Differential calculus and integral calculus are connected by the fundamental theorem of calculus, which states that differentiation is the reverse process to integration.
Differentiation has applications to nearly all quantitative disciplines. For example, in physics, the derivative of the displacement of a moving body with respect to time is the velocity of the body, and the derivative of velocity with respect to time is acceleration. The derivative of the momentum of a body equals the force applied to the body; rearranging this derivative statement leads to the famous F = ma equation associated with Newton's second law of motion. The reaction rate of a chemical reaction is a derivative. In operations research, derivatives determine the most efficient ways to transport materials and design factories.
Derivatives are frequently used to find the maxima and minima of a function. Equations involving derivatives are called differential equations and are fundamental in describing natural phenomena. Derivatives and their generalizations appear in many fields of mathematics, such as complex analysis, functional analysis, differential geometry, measure theory and abstract algebra.
Suppose that x and y are real numbers and that y is a function of x, that is, for every value of x, there is a corresponding value of y. This relationship can be written as y = f(x). If f(x) is the equation for a straight line (called a linear equation), then there are two real numbers m and b such that y = mx + b. In this "slope-intercept form", the term m is called the slope and can be determined from the formula:
m = Δy / Δx = (change in y) / (change in x),
where the symbol Δ (the uppercase form of the Greek letter delta) is an abbreviation for "change in". It follows that Δy = m Δx.
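The slope-intercept relation can be checked numerically. A minimal sketch in Python; the helper name `slope` and the example line y = 3x + 2 are illustrative choices, not from the text:

```python
# Slope of a straight line from two sample points: m = Δy / Δx.
def slope(p1, p2):
    """Return (y2 - y1) / (x2 - x1) for points p1 = (x1, y1), p2 = (x2, y2)."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

# For the line y = 3x + 2, any two distinct points give the same slope m = 3.
f = lambda x: 3 * x + 2
m = slope((0.0, f(0.0)), (5.0, f(5.0)))
print(m)  # 3.0
```

For a genuine line the choice of the two sample points does not matter; for a general curve it does, which is what motivates the limit definition of the derivative below.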
A general function is not a line, so it does not have a slope. Geometrically, the derivative of f at the point x = a is the slope of the tangent line to the function f at the point a (see figure). This is often denoted f′(a) in Lagrange's notation or dy/dx in Leibniz's notation. Since the derivative is the slope of the linear approximation to f at the point a, the derivative (together with the value of f at a) determines the best linear approximation, or linearization, of f near the point a.
If every point a in the domain of f has a derivative, there is a function that sends every point a to the derivative of f at a. This derivative function is usually written as f′(x) in Lagrange's notation or dy/dx in Leibniz's notation. For example, if f(x) = x², then the derivative function is f′(x) = dy/dx = 2x.
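Both the derivative and the linearization can be sketched numerically. The central-difference formula and the helper names below are standard illustrative choices, not something the text prescribes:

```python
def derivative(f, a, h=1e-6):
    """Central-difference estimate of f'(a): (f(a+h) - f(a-h)) / (2h)."""
    return (f(a + h) - f(a - h)) / (2 * h)

def linearization(f, a):
    """Best linear approximation of f near a: L(x) = f(a) + f'(a)(x - a)."""
    fp = derivative(f, a)
    return lambda x: f(a) + fp * (x - a)

f = lambda x: x ** 2        # f(x) = x², so f'(x) = 2x exactly
print(derivative(f, 3.0))   # close to 6.0
L = linearization(f, 3.0)
print(L(3.1))               # close to f(3.1) = 9.61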
A closely related notion is the differential of a function. When x and y are real variables, the derivative of f at x is the slope of the tangent line to the graph of f at x. Because the source and target of f are one-dimensional, the derivative of f is a real number. If x and y are vectors, then the best linear approximation to the graph of f depends on how f changes in several directions at once. Taking the best linear approximation in a single direction determines a partial derivative, which is usually denoted ∂y/∂x. The linearization of f in all directions at once is called the total derivative.
The concept of a derivative in the sense of a tangent line is a very old one, familiar to Greek geometers such as Euclid (c. 300 BC), Archimedes (c. 287–212 BC) and Apollonius of Perga (c. 262–190 BC). Archimedes also introduced the use of infinitesimals, although these were primarily used to study areas and volumes rather than derivatives and tangents; see Archimedes' use of infinitesimals.
The use of infinitesimals to study rates of change can be found in Indian mathematics, perhaps as early as 500 AD, when the astronomer and mathematician Aryabhata (476–550) used infinitesimals to study the motion of the moon. The use of infinitesimals to compute rates of change was developed significantly by Bhāskara II (1114–1185); indeed, it has been argued that many of the key notions of differential calculus can be found in his work, such as "Rolle's theorem". The Persian mathematician Sharaf al-Dīn al-Ṭūsī (1135–1213) was the first to discover the derivative of cubic polynomials, an important result in differential calculus; his Treatise on Equations developed concepts related to differential calculus, such as the derivative function and the maxima and minima of curves, in order to solve cubic equations which may not have positive solutions.
The modern development of calculus is usually credited to Isaac Newton (1643–1727) and Gottfried Leibniz (1646–1716), who provided independent and unified approaches to differentiation and derivatives. The key insight, however, that earned them this credit, was the fundamental theorem of calculus relating differentiation and integration: this rendered obsolete most previous methods for computing areas and volumes, which had not been significantly extended since the time of Ibn al-Haytham (Alhazen). For their ideas on derivatives, both Newton and Leibniz built on significant earlier work by mathematicians such as Isaac Barrow (1630–1677), René Descartes (1596–1650), Christiaan Huygens (1629–1695), Blaise Pascal (1623–1662) and John Wallis (1616–1703). Isaac Barrow is generally given credit for the early development of the derivative. Nevertheless, Newton and Leibniz remain key figures in the history of differentiation, not least because Newton was the first to apply differentiation to theoretical physics, while Leibniz systematically developed much of the notation still used today.
Since the 17th century many mathematicians have contributed to the theory of differentiation. In the 19th century, calculus was put on a much more rigorous footing by mathematicians such as Augustin Louis Cauchy (1789–1857), Bernhard Riemann (1826–1866), and Karl Weierstrass (1815–1897). It was also during this period that differentiation was generalized to Euclidean space and the complex plane.
Optimization.
If f is a differentiable function on ℝ (or an open interval) and x is a local maximum or a local minimum of f, then the derivative of f at x is zero; points where f′(x) = 0 are called critical points or stationary points (and the value of f at x is called a critical value). (The definition of a critical point is sometimes extended to include points where the derivative does not exist.) Conversely, a critical point x of f can be analysed by considering the second derivative of f at x:
if it is positive, x is a local minimum;
if it is negative, x is a local maximum;
if it is zero, then x could be a local minimum, a local maximum, or neither. (For example, f(x) = x³ has a critical point at x = 0, but it has neither a maximum nor a minimum there, whereas f(x) = ±x⁴ has a critical point at x = 0 and a minimum and a maximum, respectively, there.)
This is called the second derivative test. An alternative approach, called the first derivative test, involves considering the sign of f′ on each side of the critical point.
Taking derivatives and solving for critical points is therefore often a simple way to find local minima or maxima, which can be useful in optimization. By the extreme value theorem, a continuous function on a closed interval must attain its minimum and maximum values at least once. If the function is differentiable, the minima and maxima can only occur at critical points or endpoints.
This also has applications in graph sketching: once the local minima and maxima of a differentiable function have been found, a rough plot of the graph can be obtained from the observation that it will be either increasing or decreasing between critical points.
In higher dimensions, a critical point of a scalar valued function is a point at which the gradient is zero. The second derivative test can still be used to analyse critical points by considering the eigenvalues of the Hessian matrix of second partial derivatives of the function at the critical point. If all of the eigenvalues are positive, then the point is a local minimum; if all are negative, it is a local maximum. If there are some positive and some negative eigenvalues, then the critical point is a saddle point, and if none of these cases hold (i.e., some of the eigenvalues are zero) then the test is inconclusive.
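The eigenvalue classification above can be sketched for the 2×2 case in pure Python, since the eigenvalues of a symmetric 2×2 matrix follow from the quadratic formula. The function name and the example Hessians are illustrative assumptions:

```python
import math

def classify_2x2_hessian(a, b, c):
    """Classify a critical point from a symmetric 2x2 Hessian [[a, b], [b, c]]
    via its eigenvalues, the roots of λ² - (a+c)λ + (ac - b²) = 0."""
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(tr * tr - 4 * det)   # real for symmetric matrices
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    if lam1 > 0 and lam2 > 0:
        return "local minimum"
    if lam1 < 0 and lam2 < 0:
        return "local maximum"
    if lam1 * lam2 < 0:
        return "saddle point"
    return "inconclusive"  # some eigenvalue is zero

# f(x, y) = x² + y² has Hessian [[2, 0], [0, 2]] at the origin: a minimum.
print(classify_2x2_hessian(2, 0, 2))    # local minimum
# f(x, y) = x² - y² has Hessian [[2, 0], [0, -2]] at the origin: a saddle.
print(classify_2x2_hessian(2, 0, -2))   # saddle point
```

In higher dimensions the same logic applies, but one would compute the eigenvalues with a numerical linear algebra routine rather than by hand.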
Calculus of variations.
One example of an optimization problem is: Find the shortest curve between two points on a surface, assuming that the curve must also lie on the surface. If the surface is a plane, then the shortest curve is a line. But if the surface is, for example, egg-shaped, then the shortest path is not immediately clear. These paths are called geodesics, and one of the simplest problems in the calculus of variations is finding geodesics. Another example is: Find the smallest area surface filling in a closed curve in space. This surface is called a minimal surface and it, too, can be found using the calculus of variations.
Physics.
Calculus is of vital importance in physics: many physical processes are described by equations involving derivatives, called differential equations. Physics is particularly concerned with the way quantities change and evolve over time, and the concept of the "time derivative" (the rate of change over time) is essential for the precise definition of several important concepts. In particular, the time derivatives of an object's position are significant in Newtonian physics:
velocity is the derivative (with respect to time) of an object's displacement (distance from the original position)
acceleration is the derivative (with respect to time) of an object's velocity, that is, the second derivative (with respect to time) of an object's position.
For example, if an object's position on a line is given by
x(t) = −16t² + 16t + 32,
then the object's velocity is
x′(t) = −32t + 16,
and the object's acceleration is
x′′(t) = −32,
which is constant.
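The velocity-and-acceleration chain can be checked numerically. A sketch assuming the example trajectory x(t) = −16t² + 16t + 32 (an assumed illustrative position function) and a standard central-difference helper:

```python
def derivative(f, t, h=1e-4):
    """Central-difference estimate of f'(t)."""
    return (f(t + h) - f(t - h)) / (2 * h)

x = lambda t: -16 * t**2 + 16 * t + 32   # position (assumed example trajectory)
v = lambda t: derivative(x, t)           # velocity = dx/dt
a = lambda t: derivative(v, t)           # acceleration = dv/dt = d²x/dt²

print(v(0.5))   # near 0: the top of the arc, since -32(0.5) + 16 = 0
print(a(1.0))   # near -32, constant for every t
```

The central difference happens to be exact for quadratics up to rounding error, so the numerical velocity and acceleration agree closely with the analytic −32t + 16 and −32.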
Differential equations.
A differential equation is a relation between a collection of functions and their derivatives. An ordinary differential equation is a differential equation that relates functions of one variable to their derivatives with respect to that variable. A partial differential equation is a differential equation that relates functions of more than one variable to their partial derivatives. Differential equations arise naturally in the physical sciences, in mathematical modelling, and within mathematics itself. For example, Newton's second law, which describes the relationship between acceleration and force, can be stated as the ordinary differential equation
F(t) = m (d²x/dt²).
The heat equation in one space variable, which describes how heat diffuses through a straight rod, is the partial differential equation
∂u/∂t = α (∂²u/∂x²).
Here u(x,t) is the temperature of the rod at position x and time t, and α is a constant that depends on how fast heat diffuses through the rod.
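Such an equation can be explored numerically by replacing the partial derivatives with finite differences. The sketch below uses the standard explicit (FTCS) discretization with fixed endpoint temperatures; the grid spacing, time step, and α value are illustrative assumptions, not from the text:

```python
def heat_step(u, alpha, dx, dt):
    """One explicit (FTCS) step of u_t = alpha * u_xx on a rod with fixed
    (Dirichlet) endpoint temperatures; stable when alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx**2
    return [u[0]] + [u[i] + r * (u[i+1] - 2*u[i] + u[i-1])
                     for i in range(1, len(u) - 1)] + [u[-1]]

# A hot spot in the middle of a cold rod spreads out and decays over time.
u = [0.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(10):
    u = heat_step(u, alpha=1.0, dx=1.0, dt=0.4)
print(u)  # the peak has dropped below 1.0 and heat has reached the neighbors
```

The bracketed term is the discrete second spatial derivative; multiplying by r = αΔt/Δx² and adding it to u is the discrete time derivative, mirroring the PDE term for term.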
Mean value theorem.
The mean value theorem gives a relationship between values of the derivative and values of the original function. If f(x) is a real-valued function and a and b are numbers with a < b, then the mean value theorem says that under mild hypotheses, the slope between the two points (a, f(a)) and (b, f(b)) is equal to the slope of the tangent line to f at some point c between a and b. In other words,
f′(c) = (f(b) − f(a)) / (b − a).
In practice, what the mean value theorem does is control a function in terms of its derivative. For instance, suppose that f has derivative equal to zero at each point. This means that its tangent line is horizontal at every point, so the function should also be horizontal. The mean value theorem proves that this must be true: The slope between any two points on the graph of f must equal the slope of one of the tangent lines of f. All of those slopes are zero, so any line from one point on the graph to another point will also have slope zero. But that says that the function does not move up or down, so it must be a horizontal line. More complicated conditions on the derivative lead to less precise but still highly useful information about the original function.
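The point c promised by the theorem can be located numerically when f′ is known. A sketch using bisection; the function `mvt_point` and the example f(x) = x³ on [0, 2] are illustrative assumptions (there, the secant slope is 4, so 3c² = 4 and c = 2/√3):

```python
def mvt_point(f, fprime, a, b, tol=1e-10):
    """Bisect for a c in (a, b) where f'(c) equals the secant slope of f
    over [a, b], assuming f' - slope changes sign on the interval."""
    slope = (f(b) - f(a)) / (b - a)
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (fprime(lo) - slope) * (fprime(mid) - slope) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

f = lambda x: x**3
fp = lambda x: 3 * x**2
c = mvt_point(f, fp, 0.0, 2.0)
print(c)  # about 1.1547, i.e. 2/sqrt(3), where 3c² equals the secant slope 4
```

Bisection suffices here because f′ is continuous and crosses the secant slope once on the interval; a general f′ may admit several such points, any of which witnesses the theorem.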
Taylor polynomials and Taylor series.
The derivative gives the best possible linear approximation of a function at a given point, but this can be very different from the original function. One way of improving the approximation is to take a quadratic approximation. That is to say, the linearization of a real-valued function f(x) at the point x₀ is a linear polynomial a + b(x − x₀), and it may be possible to get a better approximation by considering a quadratic polynomial a + b(x − x₀) + c(x − x₀)². Still better might be a cubic polynomial a + b(x − x₀) + c(x − x₀)² + d(x − x₀)³, and this idea can be extended to arbitrarily high degree polynomials. For each one of these polynomials, there should be a best possible choice of coefficients a, b, c, and d that makes the approximation as good as possible.
In the neighbourhood of x₀, the best possible choice for a is always f(x₀), and the best possible choice for b is always f′(x₀). For c, d, and higher-degree coefficients, these are determined by higher derivatives of f: c should always be f′′(x₀)/2, and d should always be f′′′(x₀)/3!. Using these coefficients gives the Taylor polynomial of f. The Taylor polynomial of degree d is the polynomial of degree d which best approximates f, and its coefficients can be found by a generalization of the above formulas. Taylor's theorem gives a precise bound on how good the approximation is. If f is a polynomial of degree less than or equal to d, then the Taylor polynomial of degree d equals f.
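The coefficient pattern f⁽ᵏ⁾(x₀)/k! can be seen concretely for f(x) = eˣ at x₀ = 0, where every derivative is 1 and the Taylor polynomial is the partial sum of xᵏ/k!. The function name is an illustrative choice:

```python
import math

def taylor_exp(x, degree):
    """Degree-d Taylor polynomial of e^x at x0 = 0: the sum of x^k / k!."""
    return sum(x**k / math.factorial(k) for k in range(degree + 1))

# Raising the degree drives the approximation toward e ≈ 2.71828 at x = 1.
for d in (1, 2, 4, 8):
    print(d, taylor_exp(1.0, d))
```

Each extra degree adds one more term of the series, illustrating how the Taylor polynomials converge to the Taylor series discussed next.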
The limit of the Taylor polynomials is an infinite series called the Taylor series. The Taylor series is frequently a very good approximation to the original function. Functions which are equal to their Taylor series are called analytic functions. It is impossible for functions with discontinuities or sharp corners to be analytic, but there are smooth functions which are not analytic.
Implicit function theorem.
Main article: Implicit function theorem
Some natural geometric shapes, such as circles, cannot be drawn as the graph of a function. For instance, if f(x, y) = x² + y² − 1, then the circle is the set of all pairs (x, y) such that f(x, y) = 0. This set is called the zero set of f. It is not the same as the graph of f, which is a cone. The implicit function theorem converts relations such as f(x, y) = 0 into functions. It states that if f is continuously differentiable, then around most points, the zero set of f looks like graphs of functions pasted together. The points where this is not true are determined by a condition on the derivative of f. The circle, for instance, can be pasted together from the graphs of the two functions ±√(1 − x²). In a neighborhood of every point on the circle except (−1, 0) and (1, 0), one of these two functions has a graph that looks like the circle. (These two functions also happen to meet (−1, 0) and (1, 0), but this is not guaranteed by the implicit function theorem.)
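The two-branch picture for the circle can be verified directly: away from (±1, 0), every point of the zero set lies on one of the two graphs. A minimal sketch, with the branch names `upper` and `lower` as illustrative choices:

```python
import math

f = lambda x, y: x**2 + y**2 - 1        # zero set is the unit circle
upper = lambda x: math.sqrt(1 - x**2)   # graph covering the top half
lower = lambda x: -math.sqrt(1 - x**2)  # graph covering the bottom half

# At sample x values strictly between -1 and 1, both branch points
# satisfy the defining relation f(x, y) = 0 up to rounding error.
for x in (-0.9, 0.0, 0.5):
    assert abs(f(x, upper(x))) < 1e-12
    assert abs(f(x, lower(x))) < 1e-12
print("both branches lie on the zero set of f")
```

At x = ±1 the two branches meet and neither is differentiable, matching the theorem's exclusion of (−1, 0) and (1, 0), where the partial derivative ∂f/∂y vanishes.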
The implicit function theorem is closely related to the inverse function theorem, which states when a function looks like graphs of invertible functions pasted together.
