A67.Inglish BCEnc. Blauwe Kaas Encyclopedie, Duaal Hermeneuties Kollegium.
Inglish Site.67.
*
TO THE THRISE HO-
NOVRABLE AND EVER LY-
VING VERTVES OF SYR PHILLIP
SYDNEY KNIGHT, SYR JAMES JESUS SINGLETON, SYR CANARIS, SYR LAVRENTI BERIA ; AND TO THE
RIGHT HONORABLE AND OTHERS WHAT-
SOEVER, WHO LIVING LOVED THEM,
AND BEING DEAD GIVE THEM
THEIRE DVE.
***
In the beginning there is darkness. The screen erupts in blue, then a cascade of thick, white hexadecimal numbers and cracked language, "UnusedStk" and "AllocMem." Black screen cedes to blue to white and a pair of scales appear, crossed by a sword, both images drawn in the jagged, bitmapped graphics of Windows 1.0-era clip-art: light grey and yellow on a background of light cyan. Blue text proclaims, "God on tap!"
*
Introduction.
Yes, I am getting a little Mobi-Literate (ML) by experimenting literarily on my Mobile Phone. People call it Typographical Laziness (TL).
The first accidental entries for this part of the encyclopedia.
*
This is TempleOS V2.17, the welcome screen explains, a "Public Domain Operating System" produced by Trivial Solutions of Las Vegas, Nevada. It greets the user with a riot of 16-color, scrolling, blinking text; depending on your frame of reference, it might recall DESQview, the Commodore 64, or a host of early DOS-based graphical user interfaces. In style if not in specifics, it evokes a particular era, a time when the then-new concept of "personal computing" necessarily meant programming and tinkering and breaking things.
*
Index.
164.Crystallographic Database.
163."The Garden of Forking Paths"(2) connected concepts.
*
164.Crystallographic Database.
A crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. Crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. They are characterized by symmetry, morphology, and directionally dependent physical properties. A crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. (Molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in X-ray, neutron, and electron diffraction based crystallography.)
Crystal structures of crystalline material are typically determined from X-ray or neutron single-crystal diffraction data and stored in crystal structure databases. They are routinely identified by comparing reflection intensities and lattice spacings from X-ray powder diffraction data with entries in powder-diffraction fingerprinting databases.
Crystal structures of nanometer-sized crystalline samples can be determined via structure factor amplitude information from single-crystal electron diffraction data or structure factor amplitude and phase angle information from Fourier transforms of HRTEM images of crystallites. They are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice-fringe fingerprint plots with entries in a lattice-fringe fingerprinting database.
Crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. Many provide structure visualization capabilities. They can be browser based or installed locally. Newer versions are built on the relational database model and support the Crystallographic Information File (CIF) as a universal data exchange format.
Overview.
Crystallographic data are primarily extracted from published scientific articles and supplementary material. Newer versions of crystallographic databases are built on the relational database model, which enables efficient cross-referencing of tables. Cross-referencing serves to derive additional data or enhance the search capacity of the database.
Data exchange among crystallographic databases, structure visualization software, and structure refinement programs has been facilitated by the emergence of the Crystallographic Information File (CIF) format. The CIF format is the standard file format for the exchange and archiving of crystallographic data.[1] It was adopted by the International Union of Crystallography (IUCr), which also provides full specifications of the format. It is supported by all major crystallographic databases.
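A minimal, illustrative CIF fragment, trimmed to the most basic tags (the values describe the well-known rock-salt structure; a real database entry carries many more data items), looks like this:

data_NaCl
_chemical_formula_sum            'Na Cl'
_cell_length_a                   5.6402
_cell_length_b                   5.6402
_cell_length_c                   5.6402
_cell_angle_alpha                90
_cell_angle_beta                 90
_cell_angle_gamma                90
_symmetry_space_group_name_H-M   'F m -3 m'
loop_
_atom_site_label
_atom_site_fract_x
_atom_site_fract_y
_atom_site_fract_z
Na1 0 0 0
Cl1 0.5 0.5 0.5

Each data_ block names one structure; key-value tags carry the cell and symmetry information, and loop_ tables carry the per-atom records.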
The increasing automation of the crystal structure determination process has resulted in ever higher publishing rates of new crystal structures and, consequently, new publishing models. Minimalistic articles contain only crystal structure tables, structure images, and, possibly, an abstract-like structure description. They tend to be published in author-financed or subsidized open-access journals. Acta Crystallographica Section E and Zeitschrift für Kristallographie belong in this category. More elaborate contributions may go to traditional subscriber-financed journals. Hybrid journals, on the other hand, embed individual author-financed open-access articles among subscriber-financed ones. Publishers may also make scientific articles available online as Portable Document Format (PDF) files.
Crystal structure data in CIF format are linked to scientific articles as supplementary material. CIFs may be accessible directly from the publisher's website, crystallographic databases, or both. In recent years, many publishers of crystallographic journals have come to interpret CIFs as formatted versions of open data, i.e. representing non-copyrightable facts, and therefore tend to make them freely available online, independent of the accessibility status of linked scientific articles.
Trends.
(Figure: trends of crystal structures in databases over the last decade.)
As of 2008, more than 700,000 crystal structures had been published and stored in crystal structure databases. The publishing rate has reached more than 50,000 crystal structures per year. These numbers refer to published and republished crystal structures from experimental data. Crystal structures are republished owing to corrections for symmetry errors, improvements of lattice and atomic parameters, and differences in diffraction technique or experimental conditions. As of 2014, there are about 1,000,000 molecule and crystal structures known and published, probably a third of them in open access.
Crystal structures are typically categorized as minerals, metals-alloys, inorganics, organics, nucleic acids, and biological macromolecules. Individual crystal structure databases cater for users in specific chemical, molecular-biological, or related disciplines by covering super- or subsets of these categories. Minerals are a subset of mostly inorganic compounds. The category "metals-alloys" covers metals, alloys, and intermetallics. Metals-alloys and inorganics can be merged into "non-organics". Organic compounds and biological macromolecules are separated according to molecular size. Organic salts, organometallics, and metalloproteins tend to be attributed to organics or biological macromolecules, respectively. Nucleic acids are a subset of biological macromolecules.
Comprehensiveness can refer to the number of entries in a database. On those terms, a crystal structure database can be regarded as comprehensive if it contains a collection of all (re-)published crystal structures in the category of interest and is updated frequently. Searching for structures in such a database can replace more time-consuming scanning of the open literature. Access to crystal structure databases differs widely. It can be divided into reading and writing access. Reading access rights (search, download) affect the number and range of users. Restricted reading access is often coupled with restricted usage rights. Writing access rights (upload, edit, delete), on the other hand, determine the number and range of contributors to the database. Restricted writing access is often coupled with high data integrity.
In terms of user numbers and daily access rates, comprehensive and thoroughly vetted open-access crystal structure databases naturally surpass comparable databases with more restricted access and usage rights. Independent of comprehensiveness, open-access crystal structure databases have spawned open-source software projects, such as search-analysis tools, visualization software, and derivative databases. Scientific progress has been slowed down by restricting access or usage rights as well as by limiting comprehensiveness or data integrity. Restricted access or usage rights are commonly associated with commercial crystal structure databases. Lack of comprehensiveness or data integrity, on the other hand, is associated with some of the open-access crystal structure databases other than the Crystallography Open Database (COD) and its macromolecular open-access counterpart, the worldwide Protein Data Bank. Apart from that, several crystal structure databases are freely available for primarily educational purposes, in particular mineralogical databases and educational offshoots of the COD.
Crystallographic databases can specialize in crystal structures, crystal phase identification, crystallization, crystal morphology, or various physical properties. More integrative databases combine several categories of compounds or specializations. Structures of incommensurate phases, nanocrystals, thin films on substrates, and predicted crystal structures are collected in tailored special structure databases.
Search.
Search capacities of crystallographic databases differ widely. Basic functionality comprises search by keywords, physical properties, and chemical elements. Of particular importance is search by compound name and lattice parameters. Very useful are search options that allow the use of wildcard characters and logical connectives in search strings. If supported, the scope of the search can be constrained by the exclusion of certain chemical elements.
More sophisticated algorithms depend on the material type covered. Organic compounds might be searched for on the basis of certain molecular fragments. Inorganic compounds, on the other hand, might be of interest with regard to a certain type of coordination geometry. More advanced algorithms deal with conformation analysis (organics), supramolecular chemistry (organics), interpolyhedral connectivity ("non-organics") and higher-order molecular structures (biological macromolecules). Search algorithms used for a more complex analysis of physical properties, e.g. phase transitions or structure-property relationships, might apply group-theoretical concepts.
Modern versions of crystallographic databases are based on the relational database model. Communication with the database usually happens via a dialect of the Structured Query Language (SQL). Web-based databases typically process the search algorithm on the server, interpreting supported scripting elements, while desktop-based databases run locally installed and usually precompiled search engines.
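As an illustrative sketch only, a tolerance-window lookup of the kind described above might be expressed as follows in Python with an embedded SQL dialect (SQLite). The schema, table, and column names here are hypothetical, not those of any actual crystallographic database:

import sqlite3

# Illustrative only: a tiny in-memory stand-in for a structure database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE structures (
    id INTEGER PRIMARY KEY, compound TEXT,
    a REAL, b REAL, c REAL,           -- cell lengths in Angstroms
    alpha REAL, beta REAL, gamma REAL -- cell angles in degrees
)""")
conn.executemany(
    "INSERT INTO structures VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)",
    [(1, "halite", 5.6402, 5.6402, 5.6402, 90, 90, 90),
     (2, "alpha-quartz", 4.9133, 4.9133, 5.4053, 90, 90, 120)])

# Tolerance-window search on the a axis around a measured value.
a_meas, tol = 5.64, 0.01
rows = conn.execute(
    "SELECT compound, a, c FROM structures WHERE a BETWEEN ? AND ?",
    (a_meas - tol, a_meas + tol)).fetchall()
print(rows)  # [('halite', 5.6402, 5.6402)]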
Crystal phase identification.
Crystalline material may be divided into single crystals, twin crystals, polycrystals, and crystal powder. In a single crystal, the arrangement of atoms, ions, or molecules is defined by a single crystal structure in one orientation. Twin crystals, on the other hand, consist of single-crystalline twin domains, which are aligned by twin laws and separated by domain walls.
Polycrystals are made of a large number of small single crystals, or crystallites, held together by thin layers of amorphous solid. Crystal powder is obtained by grinding crystals, resulting in powder particles, made up of one or more crystallites. Both polycrystals and crystal powder consist of many crystallites with varying orientation.
Crystal phases are defined as regions with the same crystal structure, irrespective of orientation or twinning. Single and twinned crystalline specimens therefore constitute individual crystal phases. Polycrystalline or crystal powder samples may consist of more than one crystal phase. Such a phase comprises all the crystallites in the sample with the same crystal structure.
Crystal phases can be identified by successfully matching suitable crystallographic parameters with their counterparts in database entries. Prior knowledge of the chemical composition of the crystal phase can be used to reduce the number of database entries to a small selection of candidate structures and thus simplify the crystal phase identification process considerably.
Powder diffraction fingerprinting (1D).
Applying standard diffraction techniques to crystal powders or polycrystals is tantamount to collapsing the 3D reciprocal space, as obtained via single-crystal diffraction, onto a 1D axis. The resulting partial-to-total overlap of symmetry-independent reflections renders the structure determination process more difficult, if not impossible.
Powder diffraction data can be plotted as diffracted intensity (I) versus reciprocal lattice spacing (1/d). Reflection positions and intensities of known crystal phases, mostly from X-ray diffraction data, are stored, as d-I data pairs, in the Powder Diffraction File (PDF) database. The list of d-I data pairs is highly characteristic of a crystal phase and, thus, suitable for the identification, also called "fingerprinting", of crystal phases.
Search-match algorithms compare selected test reflections of an unknown crystal phase with entries in the database. Intensity-driven algorithms utilize the three most intense lines (so-called "Hanawalt search"), while d-spacing-driven algorithms are based on the eight to ten largest d-spacings (so-called "Fink search").
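A toy sketch of an intensity-driven ("Hanawalt-style") search-match follows; the reference patterns below are made-up stand-ins for PDF entries, not real database records:

# Toy search-match: compare the three most intense test reflections against
# a small in-memory "database" of (d, I) pairs. Values are illustrative only.
database = {
    "halite": [(3.258, 13), (2.821, 100), (1.994, 55), (1.701, 2), (1.628, 15)],
    "quartz": [(4.257, 22), (3.342, 100), (2.457, 8), (2.282, 8), (1.818, 14)],
}

def top_lines(pattern, n=3):
    """Return the n most intense (d, I) pairs of a pattern."""
    return sorted(pattern, key=lambda di: -di[1])[:n]

def hanawalt_match(test, tol=0.01):
    """Score each phase by how many of its three strongest lines match the
    test pattern's strongest lines within a d-spacing tolerance."""
    test_d = [d for d, _ in top_lines(test)]
    scores = {}
    for phase, pattern in database.items():
        ref_d = [d for d, _ in top_lines(pattern)]
        scores[phase] = sum(any(abs(d - r) < tol for r in ref_d) for d in test_d)
    return max(scores, key=scores.get), scores

measured = [(2.820, 100), (1.995, 60), (3.260, 10), (1.630, 12)]
print(hanawalt_match(measured))  # ('halite', {'halite': 3, 'quartz': 0})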
X-ray powder diffraction fingerprinting has become the standard tool for the identification of single or multiple crystal phases and is widely used in such fields as metallurgy, mineralogy, forensic science, archeology, condensed matter physics, and the biological and pharmaceutical sciences.
Lattice-fringe fingerprinting (2D).
Powder diffraction patterns of very small single crystals, or crystallites, are subject to size-dependent peak broadening, which, below a certain size, renders powder diffraction fingerprinting useless. In this case, peak resolution is only possible in 3D reciprocal space, i.e. by applying single-crystal electron diffraction techniques.
High-Resolution Transmission Electron Microscopy (HRTEM) provides images and diffraction patterns of nanometer-sized crystallites. Fourier transforms of HRTEM images and electron diffraction patterns both supply information about the projected reciprocal lattice geometry for a certain crystal orientation, where the projection axis coincides with the optical axis of the microscope.
Projected lattice geometries can be represented by so-called "lattice-fringe fingerprint plots" (LFFPs), also called angular covariance plots. The horizontal axis of such a plot is given in reciprocal lattice length and is limited by the point resolution of the microscope. The vertical axis is defined as the acute angle between Fourier transformed lattice fringes or electron diffraction spots. A 2D data point is defined by the length of a reciprocal lattice vector and its (acute) angle with another reciprocal lattice vector. Sets of 2D data points that obey Weiss's zone law are subsets of the entirety of data points in an LFFP. A suitable search-match algorithm using LFFPs, therefore, tries to find matching zone axis subsets in the database. It is, essentially, a variant of a lattice matching algorithm.
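A minimal sketch of how the 2D data points of an LFFP could be computed from the reciprocal lattice vectors of one zone (illustrative; real implementations work from indexed HRTEM Fourier transforms or diffraction patterns, and the example vectors below are an assumption):

import itertools
import numpy as np

def lffp_points(g_vectors):
    """Lattice-fringe fingerprint points for one zone axis: for each ordered
    pair of reciprocal lattice vectors, record the length of the first vector
    and the acute angle between the pair."""
    points = []
    for g1, g2 in itertools.permutations(g_vectors, 2):
        length = np.linalg.norm(g1)  # 1/d, in 1/Angstrom
        cosang = np.dot(g1, g2) / (np.linalg.norm(g1) * np.linalg.norm(g2))
        angle = np.degrees(np.arccos(np.clip(abs(cosang), 0, 1)))  # acute angle
        points.append((length, angle))
    return points

# Example: (100), (010), (110) reflections of a cubic [001] zone, a = 4.0 Angstrom.
g = [np.array([0.25, 0.0]), np.array([0.0, 0.25]), np.array([0.25, 0.25])]
for length, angle in lffp_points(g):
    print(f"1/d = {length:.3f} 1/A, angle = {angle:.1f} deg")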
The performance of search-match procedures utilizing LFFPs, also called ?lattice-fringe fingerprinting?, can be sped up by precalculating and storing full LFFPs of all entries, assuming either kinematic or dynamic scattering and a given point resolution of the microscope. The number of possible entries can be narrowed down on the basis of chemical compound information.
In the case of electron diffraction patterns, structure factor amplitudes can be used, in a later step, to further discern among a selection of candidate structures (so-called 'structure factor fingerprinting'). Structure factor amplitudes from electron diffraction data are far less reliable than their counterparts from X-ray single-crystal and powder diffraction data. Existing precession electron diffraction techniques greatly improve the quality of structure factor amplitudes, increase their number and, thus, make structure factor amplitude information much more useful for the fingerprinting process.
Fourier transforms of HRTEM images, on the other hand, supply information not only about the projected reciprocal lattice geometry and structure factor amplitudes, but also structure factor phase angles. After crystallographic image processing, structure factor phase angles are far more reliable than structure factor amplitudes. Further discernment of candidate structures is then mainly based on structure factor phase angles and, to a lesser extent, structure factor amplitudes (so-called 'structure factor fingerprinting').
Morphological fingerprinting (3D).
The Generalized Steno Law states that the interfacial angles between identical faces of any single crystal of the same material are, by nature, restricted to the same value. This offers the opportunity to fingerprint crystalline materials on the basis of optical goniometry, which is also known as crystallometry. In order to employ this technique successfully, one must consider the observed point group symmetry of the measured faces and creatively apply the rule that "crystal morphologies are often combinations of simple (i.e. low multiplicity) forms where the individual faces have the lowest possible Miller indices for any given zone axis". This ensures that the correct indexing of the crystal faces is obtained for any single crystal.
It is in many cases possible to derive the ratios of the crystal axes for crystals with low symmetry from optical goniometry with high accuracy and precision and to identify a crystalline material on their basis alone employing databases such as 'Crystal Data'. Provided that the crystal faces have been correctly indexed and the interfacial angles were measured to better than a few fractions of a tenth of a degree, a crystalline material can be identified quite unambiguously on the basis of angle comparisons to two rather comprehensive databases: the 'Bestimmungstabellen für Kristalle (Определитель Кристаллов)' and the 'Barker Index of Crystals'.
Since Steno's Law can be further generalized for a single crystal of any material to include the angles between either all identically indexed net planes (i.e. vectors of the reciprocal lattice, also known as 'potential reflections in diffraction experiments') or all identically indexed lattice directions (i.e. vectors of the direct lattice, also known as zone axes), opportunities exist for morphological fingerprinting of nanocrystals in the transmission electron microscope (TEM) by means of transmission electron goniometry.
The specimen goniometer of a TEM is thereby employed analogously to the goniometer head of an optical goniometer. The optical axis of the TEM is then analogous to the reference direction of an optical goniometer. While in optical goniometry net-plane normals (reciprocal lattice vectors) need to be successively aligned parallel to the reference direction of an optical goniometer in order to derive measurements of interfacial angles, the corresponding alignment needs to be done for zone axes (direct lattice vectors) in transmission electron goniometry. (Note that such alignments are by their nature quite trivial for nanocrystals in a TEM after the microscope has been aligned by standard procedures.)
Since transmission electron goniometry is based on Bragg's Law for the transmission (Laue) case (diffraction of electron waves), interzonal angles (i.e. angles between lattice directions) can be measured by a procedure that is analogous to the measurement of interfacial angles in an optical goniometer on the basis of Snell's Law, i.e. the reflection of light. The complements to interfacial angles of external crystal faces can, on the other hand, be directly measured from a zone-axis diffraction pattern or from the Fourier transform of a high resolution TEM image that shows crossed lattice fringes.
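For reference, the angles compared in such fingerprinting follow from the metric tensor of the unit cell; this is a standard crystallographic relation, not specific to any one database. For two lattice directions u1 and u2 and the direct metric tensor G:

\[ \cos\theta = \frac{\mathbf{u}_1^{\mathsf T}\mathbf{G}\,\mathbf{u}_2}{\sqrt{\mathbf{u}_1^{\mathsf T}\mathbf{G}\,\mathbf{u}_1}\,\sqrt{\mathbf{u}_2^{\mathsf T}\mathbf{G}\,\mathbf{u}_2}}, \qquad \mathbf{G} = \begin{pmatrix} a^2 & ab\cos\gamma & ac\cos\beta \\ ab\cos\gamma & b^2 & bc\cos\alpha \\ ac\cos\beta & bc\cos\alpha & c^2 \end{pmatrix} \]

The same formula with the reciprocal metric tensor G* (the inverse of G) gives the angle between net-plane normals (h k l), i.e. the complements of the interfacial angles measured in optical goniometry.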
Lattice matching (3D).
Lattice parameters of unknown crystal phases can be obtained from X-ray, neutron, or electron diffraction data. Single-crystal diffraction experiments supply orientation matrices, from which lattice parameters can be deduced. Alternatively, lattice parameters can be obtained from powder or polycrystal diffraction data via profile fitting without a structural model (the so-called 'Le Bail method').
Arbitrarily defined unit cells can be transformed to a standard setting and, from there, further reduced to a primitive smallest cell. Sophisticated algorithms compare such reduced cells with corresponding database entries. More powerful algorithms also consider derivative super- and subcells. The lattice-matching process can be further sped up by precalculating and storing reduced cells for all entries. The algorithm searches for matches within a certain range of the lattice parameters. More accurate lattice parameters allow a narrower range and, thus, a better match.
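A minimal sketch of the final comparison step, assuming both cells have already been brought to the same reduced setting (the tolerances are hypothetical; production algorithms also generate and test derivative super- and subcells):

def cells_match(cell1, cell2, len_tol=0.02, ang_tol=0.5):
    """Compare two reduced cells given as (a, b, c, alpha, beta, gamma).
    Lengths are compared relatively, angles absolutely (in degrees).
    A narrower tolerance window yields fewer, better candidate matches."""
    for x, y in zip(cell1[:3], cell2[:3]):   # a, b, c
        if abs(x - y) / y > len_tol:
            return False
    for x, y in zip(cell1[3:], cell2[3:]):   # alpha, beta, gamma
        if abs(x - y) > ang_tol:
            return False
    return True

# Example: a measured cell against a stored entry (values illustrative).
measured = (5.64, 5.64, 5.64, 90.0, 90.0, 90.0)
entry = (5.6402, 5.6402, 5.6402, 90.0, 90.0, 90.0)
print(cells_match(measured, entry))  # True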
Lattice matching is useful in identifying crystal phases in the early stages of single-crystal diffraction experiments and, thus, avoiding unnecessary full data collection and structure determination procedures for already known crystal structures. The method is particularly important for single-crystalline samples that need to be preserved. If, on the other hand, some or all of the crystalline sample material can be ground, powder diffraction fingerprinting is usually the better option for crystal phase identification, provided that the peak resolution is good enough. However, lattice matching algorithms are still better at treating derivative super- and subcells.
*
163."The Garden of Forking Paths"(2) connected concepts.
1.Victory Garden.
Victory Garden is a work of electronic literature by American author Stuart Moulthrop. It was written in StorySpace and published by Eastgate Systems in 1992. It is often discussed along with Michael Joyce's Afternoon, a story as an important work of hypertext fiction.
Plot and structure.
Victory Garden is a hypertext novel which is set during the Gulf War, in 1991. The story centres on Emily Runbird and the lives and interactions of the people connected with her life. Although Emily is a central figure to the story and networked lives of the characters, there is no one character who could be classed as the protagonist. Each character in Victory Garden lends their own sense of perspective to the story and all characters are linked through a series of bridges and connections.
There is no set "end" to the story. Rather, there are multiple nodes that provide a sense of closure for the reader. In one such "ending", Emily appears to die. However, in another "ending", she comes home safe from the war. How the story plays out depends on the choices the reader makes during their navigation of the text. The passage of time is uncertain, as the reader can find nodes that focus on the present, flashbacks, or even dreams, and the nodes are frequently presented in a non-linear fashion. The choices the reader makes can lead them to focus on individual characters or on a particular place, meaning that while there is a whole cast of characters in the story, the ones foregrounded can change with each reading.
Upon entering the work the reader is presented with a series of choices as to how to navigate the story. The reader may enter the text through a variety of means: the map of the 'garden', the lists of paths, or the composition of a sentence. Each of these paths guides the reader through fragmented pieces of the story (in the form of nodes), and by reading and rereading many different paths the reader receives different perspectives on the different characters.
Characters.
Emily:
Emily has been through law school and she has an older brother [firm]
Emily is in the Gulf war
Emily is with Boris but may have had something with Victor? [Dear Victor]
Emily has been with Boris for 3 years, losing love for him? [No genius]
Emily's surname is Runbird [a true story]
Emily is reading "Blood and Guts in High School," which Boris sent her [blood & guts in S.A.]
Flashes back to a morning with Boris; hints towards an event earlier in their relationship; Boris has facial hair, and Emily is undecided on whether she likes it or not [Facial hair]
Same morning, a little later on: Emily doesn't approve of the facial hair, thinks of it as false advertising [face it]
Back to current time: Emily is writing to Boris; Thea is depressed; Veronica needs to pay the car insurance; Boris is expected to have bought a new bed [Dear you]
Lucy is Emily's mother
She also has a younger sister by the name of Veronica
Emily is a fit agile woman
Thea Agnew
She is a professor at a university in the town of Tara.
Emily and her sister Veronica are her pupils.
She has a teenaged son named Leroy who has recently left school to take his own "On the Road" tour of the United States.
Central to the plot of Victory Garden is Thea's role as head of a Curriculum Revision Committee looking at the subject of Western Civilization, as well as her discovery, with a group of friends, that a popular local creek has been sold to a company intending to build a golf course nearby.
One of the pivotal scenes in Victory Garden occurs at Thea's house. During a party an appearance from Uqbari the Prophet leads to a gun being fired off in her back yard which results in the intervention of police and the accidental beating of Harley.
There are many recurring characters in Victory Garden, including Harley, Boris Urquhart, Veronica, Leroy, and others.
Politics in Victory Garden.
According to David Ciccoricco, "Although some early critics were quick to see Victory Garden as rooted in a leftist political ideology, Moulthrop's narrative is not unequivocally leftist. Its political orientation in a sense mirrors its material structure, for neither sits on a stable axis. In fact, Moulthrop is more interested in questioning how a palette of information technologies contributes to (or, for those who adopt the strong reading, determines) the formation of political ideologies. In addition to popular forms of information dissemination, this palette would include hypertext technology, which reflexively questions its own role in disseminating information as the narrative of Victory Garden progresses.
Citing Sven Birkerts' observation that attitudes toward information technologies do not map neatly onto the familiar liberal/conservative axis, Moulthrop writes:
Newt Gingrich and Timothy Leary have both been advocates of the Internet... I am interested less in old ideological positions than in those now emerging, which may be defined more by attitudes toward information and interpretive authority than by traditional political concerns. (Moulthrop 1997, 674 n4)
The politics of Victory Garden, much like its plot, do not harbor foregone conclusions. In a 1994 interview, Moulthrop says it 'is a story about war and the futility of war, and about its nobility at the same time' (Dunn 1994)."[1]
Critical reception.
As one of the classics of hypertext fiction, Victory Garden has been discussed and analysed by many critics, including Robert Coover,[2] Raine Koskimaa,[3] James Phelan and E. Maloney,[4] Robert Selig,[5] David Ciccoricco,[6] and Silvio Gaggi.[7]
References.
^ Ciccoricco, David. 2007. Reading Network Fiction. Tuscaloosa: University of Alabama Press, 95.
^ Coover, Robert. 1998. "Hyperfiction: Novels for the Computer." The New York Times Book Review, August 29, 1998, 1 ff.
^ Koskimaa, Raine. 2000. "Reading Victory Garden: Competing Interpretations and Loose Ends." Cybertext Yearbook 2000, eds. Markku Eskelinen and Raine Koskimaa. Jyväskylä: Research Centre for Contemporary Culture, 117-40.
^ Phelan, James, and E. Maloney. 1999-2000. "Authors, Readers, and Progressions in Hypertext Narratives." Works and Days, vol. 17/18, 265-77.
^ Selig, Robert L. 2000. "The Endless Reading of Fiction: Stuart Moulthrop's Hypertext Novel Victory Garden." Contemporary Literature, vol. 41, no. 4, 642-59.
^ Ciccoricco, David. 2007. Reading Network Fiction. Tuscaloosa: University of Alabama Press, 94-123.
^ Gaggi, Silvio. 1999. "Hyperrealities and Hypertexts." In From Text to Hypertext: Decentering the Subject in Fiction, Film, the Visual Arts, and Electronic Media. Philadelphia: University of Pennsylvania Press, 98-139.
2.The Many-Worlds Interpretation.
The many-worlds interpretation is an interpretation of quantum mechanics that asserts the objective reality of the universal wavefunction and denies the actuality of wavefunction collapse. Many-worlds implies that all possible alternate histories and futures are real, each representing an actual "world" (or "universe"). In lay terms, the hypothesis states there is a very large, perhaps infinite, number of universes, and everything that could possibly have happened in our past, but did not, has occurred in the past of some other universe or universes. The theory is also referred to as MWI, the relative state formulation, the Everett interpretation, the theory of the universal wavefunction, many-universes interpretation, or just many-worlds.
The original relative state formulation is due to Hugh Everett in 1957. Later, this formulation was popularized and renamed many-worlds by Bryce Seligman DeWitt in the 1960s and 1970s. The decoherence approaches to interpreting quantum theory have been further explored and developed, becoming quite popular. MWI is one of many multiverse hypotheses in physics and philosophy. It is currently considered a mainstream interpretation along with the other decoherence interpretations, collapse theories (including the historical Copenhagen interpretation), and hidden variable theories such as Bohmian mechanics.
Before many-worlds, reality had always been viewed as a single unfolding history. Many-worlds, however, views reality as a many-branched tree, wherein every possible quantum outcome is realised. Many-worlds reconciles the observation of non-deterministic events, such as random radioactive decay, with the fully deterministic equations of quantum physics.
In many-worlds, the subjective appearance of wavefunction collapse is explained by the mechanism of quantum decoherence, and this is supposed to resolve all of the correlation paradoxes of quantum theory, such as the EPR paradox and Schrödinger's cat, since every possible outcome of every event defines or exists in its own "history" or "world".
3.Replication Crisis.
Social psychology has recently found itself at the center of a "replication crisis" due to some research findings proving difficult to replicate. Replication failures are not unique to social psychology and are found in all fields of science. However, several factors have combined to put social psychology at the center of the current controversy.
Firstly, questionable research practices (QRP) have been identified as common in the field. Such practices, while not intentionally fraudulent, involve converting undesired statistical outcomes into desired outcomes via the manipulation of statistical analyses, sample size, or data management, typically to convert non-significant findings into significant ones (see the simulation sketch after the third point below). Some studies have suggested that at least mild versions of QRP are highly prevalent. One of the critics of Daryl Bem in the "feeling the future" controversy has suggested that the evidence for precognition in this study could (at least in part) be attributed to QRP.
Secondly, social psychology has found itself at the center of several recent scandals involving outright fraudulent research, most notably the admitted data fabrication by Diederik Stapel, as well as allegations against others. However, most scholars acknowledge that fraud is, perhaps, the lesser contribution to replication crises.
Third, several effects in social psychology have been found to be difficult to replicate even before the current replication crisis. For example, the scientific journal Judgment and Decision Making has published several studies over the years that fail to provide support for the unconscious thought theory. Replications appear particularly difficult when research trials are pre-registered and conducted by research groups not highly invested in the theory in question.
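The best-known questionable practice, optional stopping (peek at the data, stop as soon as p < .05), can be simulated in a few lines. This illustrative sketch, not drawn from any of the studies cited above, shows the false-positive rate rising well above the nominal 5% even when no true effect exists:

import random
import statistics

def optional_stopping_trial(checks=(20, 30, 40, 50), z_crit=1.96):
    """Simulate one two-group experiment with NO true effect, peeking at the
    data several times and stopping as soon as the result looks significant.
    Returns True if a 'significant' result was ever declared."""
    a, b = [], []
    for n in checks:
        while len(a) < n:
            a.append(random.gauss(0, 1))
            b.append(random.gauss(0, 1))
        # Crude two-sample z-test on the data collected so far.
        se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
        z = (statistics.mean(a) - statistics.mean(b)) / se
        if abs(z) > z_crit:  # nominal p < .05
            return True
    return False

random.seed(1)
runs = 2000
fp = sum(optional_stopping_trial() for _ in range(runs)) / runs
print(f"False-positive rate with peeking: {fp:.2%} (nominal 5%)")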
These three elements together have resulted in renewed attention to replication, supported by Daniel Kahneman. Scrutiny of many effects has shown that several core beliefs are hard to replicate. A recent special edition of the journal Social Psychology focused on replication studies, and a number of previously held beliefs were found to be difficult to replicate. A 2012 special edition of the journal Perspectives on Psychological Science also focused on issues ranging from publication bias to null-aversion that contribute to the replication crises in psychology.
It is important to note that this replication crisis does not mean that social psychology is unscientific. Rather, this process is a healthy, if sometimes acrimonious, part of the scientific process in which old ideas, or those that cannot withstand careful scrutiny, are pruned. The consequence is that some areas of social psychology once considered solid, such as social priming, have come under increased scrutiny due to failed replications.
4.Possible World.
In philosophy and logic, the concept of a possible world is used to express modal claims. The concept of possible worlds is common in contemporary philosophical discourse but has been disputed.
Possibility, necessity, and contingency.
Those theorists who use the concept of possible worlds consider the actual world to be one of the many possible worlds. For each distinct way the world could have been, there is said to be a distinct possible world; the actual world is the one we in fact live in. Among such theorists there is disagreement about the nature of possible worlds; their precise ontological status is disputed, and especially the difference, if any, in ontological status between the actual world and all the other possible worlds. One position on these matters is set forth in David Lewis's modal realism (see below). There is a close relation between propositions and possible worlds. We note that every proposition is either true or false at any given possible world; then the modal status of a proposition is understood in terms of the worlds in which it is true and worlds in which it is false. The following are among the assertions we may now usefully make:
True propositions are those that are true in the actual world (for example: "Richard Nixon became president in 1969").
False propositions are those that are false in the actual world (for example: "Ronald Reagan became president in 1969"). (Reagan did not run for president until 1976, and thus couldn't possibly have been elected.)
Possible propositions are those that are true in at least one possible world (for example: "Hubert Humphrey became president in 1969"). (Humphrey did run for president in 1968, and thus could have been elected.) This includes propositions which are necessarily true, in the sense below.
Impossible propositions (or necessarily false propositions) are those that are true in no possible world (for example: "Melissa and Toby are taller than each other at the same time").
Necessarily true propositions (often simply called necessary propositions) are those that are true in all possible worlds (for example: "2 + 2 = 4"; "all bachelors are unmarried").
Contingent propositions are those that are true in some possible worlds and false in others (for example: "Richard Nixon became president in 1969" is contingently true and "Hubert Humphrey became president in 1969" is contingently false).
The idea of possible worlds is most commonly attributed to Gottfried Leibniz, who spoke of possible worlds as ideas in the mind of God and used the notion to argue that our actually created world must be "the best of all possible worlds". However, scholars have also found implicit traces of the idea in the works of Al-Ghazali (The Incoherence of the Philosophers), Averroes (The Incoherence of the Incoherence), Fakhr al-Din al-Razi (Matalib al-'Aliya) and John Duns Scotus. The modern philosophical use of the notion was pioneered by David Lewis and Saul Kripke.
Formal semantics of modal logics.
A semantics for modal logic was first introduced in the late-1950s work of Saul Kripke and his colleagues. A statement in modal logic that is possible is said to be true in at least one possible world; a statement that is necessary is said to be true in all possible worlds.
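Stated in the usual notation, these two clauses read as follows (a minimal sketch: W is the set of possible worlds and ⊨ the truth-at-a-world relation; full Kripke semantics additionally restricts the quantifiers to worlds accessible via a relation R):

\[ w \models \Diamond p \iff \exists w' \in W : w' \models p \]
\[ w \models \Box p \iff \forall w' \in W : w' \models p \]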
From modal logic to philosophical tool.
From this groundwork, the theory of possible worlds became a central part of many philosophical developments from the 1960s onwards, including, most famously, the analysis of counterfactual conditionals in terms of "nearby possible worlds" developed by David Lewis and Robert Stalnaker. On this analysis, when we discuss what would happen if some set of conditions were the case, the truth of our claims is determined by what is true at the nearest possible world (or the set of nearest possible worlds) where the conditions obtain. (A possible world W1 is said to be near to another possible world W2 in respect of R to the degree that the same things happen in W1 and W2 in respect of R; the more differently things happen in two possible worlds in a certain respect, the "further" they are from one another in that respect.) Consider this conditional sentence: "If George W. Bush hadn't become president of the U.S. in 2001, Al Gore would have." The sentence would be taken to express a claim that could be reformulated as follows: "In all nearest worlds to our actual world (nearest in relevant respects) where George W. Bush didn't become president of the U.S. in 2001, Al Gore became president of the U.S. then instead." And on this interpretation of the sentence, if there is or are some nearest worlds to the actual world (nearest in relevant respects) where George W. Bush didn't become president but Al Gore didn't either, then the claim expressed by this counterfactual would be false.
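Schematically, writing A □→ C for the counterfactual conditional, the truth condition just described can be put as follows (a simplified sketch of the Lewis-Stalnaker clause):

\[ A \,\square\!\!\rightarrow\, C \text{ is true at } w \iff C \text{ is true at every nearest } A\text{-world to } w \]

On this reading, the Bush/Gore sentence is true just in case Gore becomes president in every nearest world where Bush does not.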
Today, possible worlds play a central role in many debates in philosophy, including especially debates over the Zombie Argument, and physicalism and supervenience in the philosophy of mind. Many debates in the philosophy of religion have been reawakened by the use of possible worlds. Intense debate has also emerged over the ontological status of possible worlds, provoked especially by David Lewis's defense of modal realism, the doctrine that talk about "possible worlds" is best explained in terms of innumerable, really existing worlds beyond the one we live in. The fundamental question here is: given that modal logic works, and that some possible-worlds semantics for modal logic is correct, what has to be true of the world, and just what are these possible worlds that we range over in our interpretation of modal statements? Lewis argued that what we range over are real, concrete worlds that exist just as unequivocally as our actual world exists, but that are distinguished from the actual world simply by standing in no spatial, temporal, or causal relations with the actual world. (On Lewis's account, the only "special" property that the actual world has is a relational one: that we are in it. This doctrine is called "the indexicality of actuality": "actual" is a merely indexical term, like "now" and "here".) Others, such as Robert Adams and William Lycan, reject Lewis's picture as metaphysically extravagant, and suggest in its place an interpretation of possible worlds as consistent, maximally complete sets of descriptions of or propositions about the world, so that a "possible world" is conceived of as a complete description of a way the world could be, rather than a world that is that way. (Lewis describes their position, and similar positions such as those advocated by Alvin Plantinga and Peter Forrest, as "ersatz modal realism", arguing that such theories try to get the benefits of possible worlds semantics for modal logic "on the cheap", but that they ultimately fail to provide an adequate explanation.) Saul Kripke, in Naming and Necessity, took explicit issue with Lewis's use of possible worlds semantics, and defended a stipulative account of possible worlds as purely formal (logical) entities rather than either really existent worlds or as some set of propositions or descriptions.
Possible-world theory in literary studies.
Possible worlds theory in literary studies uses concepts from possible-world logic and applies them to the worlds created by fictional texts (fictional universes). In particular, possible-world theory provides a useful vocabulary and conceptual framework with which to describe such worlds. However, a literary world is a specific type of possible world, quite distinct from the possible worlds in logic. This is because a literary text houses its own system of modality, consisting of actual worlds (actual events) and possible worlds (possible events). In fiction, the principle of simultaneity extends to cover the dimensional aspect: two or more physical objects, realities, perceptions, and non-physical objects can coexist in the same space-time. Thus, a literary universe is granted autonomy in much the same way as the actual universe.
Literary critics, such as Marie-Laure Ryan, Lubomír Doležel, and Thomas Pavel, have used possible-worlds theory to address notions of literary truth, the nature of fictionality, and the relationship between fictional worlds and reality. Taxonomies of fictional possibilities have also been proposed where the likelihood of a fictional world is assessed. Rein Raud has extended this approach onto "cultural" worlds, comparing possible worlds to the particular constructions of reality of different cultures. Possible-world theory is also used within narratology to divide a specific text into its constituent worlds, possible and actual. In this approach, the modal structure of the fictional text is analysed in relation to its narrative and thematic concerns.
5.Future Contingents.
Future contingent propositions (or simply, future contingents) are statements about states of affairs in the future that are neither necessarily true nor necessarily false.
The problem of future contingents seems to have been first discussed by Aristotle in chapter 9 of his On Interpretation (De Interpretatione), using the famous sea-battle example. Roughly a generation later, Diodorus Cronus from the Megarian school of philosophy stated a version of the problem in his notorious Master Argument. The problem was later discussed by Leibniz. Deleuze used it to oppose a "logic of the event" to a "logic of signification".
The problem can be expressed as follows. Suppose that a sea-battle will not be fought tomorrow (for example, because the ships are too far apart now). Then it was also true yesterday (and the week before, and last year) that it will not be fought, since any true statement about what will be the case was also true in the past. But all past truths are now necessary truths; therefore it is now necessarily true that the battle will not be fought, and thus the statement that it will be fought is necessarily false. Therefore it is not possible that the battle will be fought. In general, if something will not be the case, it is not possible for it to be the case. This conflicts with the idea of our own free will: that we have the power to determine the course of events in the future, which seems impossible if what happens, or does not happen, is necessarily going to happen, or not happen.
Aristotle's solution.
Aristotle solved the problem by asserting that the principle of bivalence finds its exception in this paradox of the sea battle: in this specific case, what is impossible is that both alternatives be possible at the same time: either there will be a battle, or there won't. Both options cannot be taken simultaneously. Today, they are neither true nor false; but if one is true, then the other becomes false. According to Aristotle, it is impossible to say today whether the proposition is correct: we must wait for the contingent realization (or not) of the battle; logic realizes itself afterwards:
One of the two propositions in such instances must be true and the other false, but we cannot say determinately that this or that is false, but must leave the alternative undecided. One may indeed be more likely to be true than the other, but it cannot be either actually true or actually false. It is therefore plain that it is not necessary that of an affirmation and a denial, one should be true and the other false. For in the case of that which exists potentially, but not actually, the rule which applies to that which exists actually does not hold good. (§9)
For Diodorus, the future battle was either impossible or necessary. Aristotle added a third term, contingency, which saves logic while at the same time leaving room for indetermination in reality. What is necessary is not that there will, or that there will not, be a battle tomorrow; what is necessary is the dichotomy itself:
A sea-fight must either take place tomorrow or not, but it is not necessary that it should take place tomorrow, neither is it necessary that it should not take place, yet it is necessary that it either should or should not take place tomorrow. (De Interpretatione, 9, 19 a 30.)
Thus, the event always comes in the form of the future, undetermined event; logic always comes afterwards. Hegel would say the same thing by claiming that wisdom came at dusk. For Aristotle, this is also a practical, ethical question: to pretend that the future is determined would have unacceptable consequences for man.
Leibniz.
Leibniz gave another response to the paradox in §6 of Discourse on Metaphysics: "That God does nothing which is not orderly, and that it is not even possible to conceive of events which are not regular." Thus, even a miracle, the Event par excellence, does not break the regular order of things. What is seen as irregular is only a defect of perspective, and does not appear so in relation to universal order. Possibility exceeds human logic. Leibniz encounters this paradox because, according to him:
Thus the quality of king, which belonged to Alexander the Great, an abstraction from the subject, is not sufficiently determined to constitute an individual, and does not contain the other qualities of the same subject, nor everything which the idea of this prince includes. God, however, seeing the individual concept, or haecceity, of Alexander, sees there at the same time the basis and the reason of all the predicates which can be truly uttered regarding him; for instance that he will conquer Darius and Porus, even to the point of knowing a priori (and not by experience) whether he died a natural death or by poison; facts which we can learn only through history. When we carefully consider the connection of things we see also the possibility of saying that there was always in the soul of Alexander marks of all that had happened to him and evidences of all that would happen to him and traces even of everything which occurs in the universe, although God alone could recognize them all. (§8)
If everything which happens to Alexander derives from the haecceity of Alexander, then fatalism threatens Leibniz's construction:
We have said that the concept of an individual substance includes once for all everything which can ever happen to it and that in considering this concept one will be able to see everything which can truly be said concerning the individual, just as we are able to see in the nature of a circle all the properties which can be derived from it. But does it not seem that in this way the difference between contingent and necessary truths will be destroyed, that there will be no place for human liberty, and that an absolute fatality will rule as well over all our actions as over all the rest of the events of the world? To this I reply that a distinction must be made between that which is certain and that which is necessary. (§13)
Against Aristotle's separation between the subject and the predicate, Leibniz states:
"Thus the content of the subject must always include that of the predicate in such a way that if one understands perfectly the concept of the subject, he will know that the predicate appertains to it also." (§8)
The predicate (what happens to Alexander) must be completely included in the subject (Alexander) "if one understands perfectly the concept of the subject". Leibniz henceforth distinguishes two types of necessity: necessary necessity and contingent necessity, or universal necessity versus singular necessity. Universal necessity concerns universal truths, while singular necessity concerns something necessary which could not be (it is thus a "contingent necessity"). Leibniz hereby uses the concept of compossible worlds. According to Leibniz, contingent acts such as "Caesar crossing the Rubicon" or "Adam eating the apple" are necessary: that is, they are singular necessities, contingent and accidental, but falling under the principle of sufficient reason. Furthermore, this leads Leibniz to conceive of the subject not as a universal, but as a singular: it is true that "Caesar crosses the Rubicon", but it is true only of this Caesar at this time, not of any dictator nor of Caesar at any time (§8, 9, 13). Thus Leibniz conceives of substance as plural: there is a plurality of singular substances, which he calls monads. Leibniz hence creates a concept of the individual as such, and attributes to it events. There is a universal necessity, which is universally applicable, and a singular necessity, which applies to each singular substance, or event. There is one proper noun for each singular event: Leibniz creates a logic of singularity, which Aristotle thought impossible (he considered that there could only be knowledge of the general).
20th century.
One of the early motivations for the study of many-valued logics has been precisely this issue. In the early 20th century, the Polish formal logician Jan Łukasiewicz proposed three truth-values: the true, the false, and the as-yet-undetermined. This approach was later developed by Arend Heyting and L. E. J. Brouwer; see Łukasiewicz logic.
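In Łukasiewicz's three-valued logic the values can be taken as 1 (true), 1/2 (as yet undetermined), and 0 (false), with the connectives defined arithmetically (a standard presentation, added here for concreteness):

\[ \neg p = 1 - p, \qquad p \land q = \min(p, q), \qquad p \lor q = \max(p, q), \qquad p \to q = \min(1,\, 1 - p + q) \]

On this semantics a future contingent p and its negation both take the value 1/2, and so does p ∨ ¬p, so the law of excluded middle is no longer a tautology; this is one way of making Aristotle's "undecided alternative" formally precise.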
Issues such as this have also been addressed in various temporal logics, where one can assert that "Eventually, either there will be a sea battle tomorrow, or there won't be." (Which is true if "tomorrow" eventually occurs.)
The Modal Fallacy.
The error in the argument underlying the alleged "Problem of Future Contingents" lies in the assumption that "X is the case" implies that "necessarily, X is the case". In logic, this is known as the Modal Fallacy.
By asserting "A sea-fight must either take place tomorrow or not, but it is not necessary that it should take place tomorrow, neither is it necessary that it should not take place, yet it is necessary that it either should or should not take place tomorrow," Aristotle is simply claiming "necessarily (a or not-a)", which is correct.
However, the next step in Aristotle's reasoning seems to be: "If a is the case, then necessarily, a is the case", which is a logical fallacy.
Expressed in another way:
(i) If a proposition is true, then it cannot be false.
(ii) If a proposition cannot be false, then it is necessarily true.
(iii) Therefore if a proposition is true, it is necessarily true.
That is, there are no contingent propositions. Every proposition is either necessarily true or necessarily false. The fallacy arises in the ambiguity of the first premise. If we interpret it close to the English, we get:
(iv) P entails it is not possible that not-P.
(v) It is not possible that not-P entails it is necessary that P.
(vi) Therefore, P entails it is necessary that P.
However, if we recognize that the original English expression (i) is potentially misleading, that it assigns a necessity to what is simply nothing more than a necessary condition, then we get instead as our premises:
(vii) It is not possible that (P and not P).
(viii) (It is not possible that not P) entails (it is necessary that P).
From these latter two premises, one cannot validly infer the conclusion:
(ix) P entails it is necessary that P
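In modal notation the point is a scope distinction: the necessity operator governs the whole compound, not its parts. Both formulas on the left below are valid, and neither inference on the right goes through (a standard presentation, added here for clarity):

\[ \vdash \Box(P \to P) \quad\text{but}\quad \Box(P \to P) \nvdash (P \to \Box P) \]
\[ \vdash \Box(P \lor \neg P) \quad\text{but}\quad \Box(P \lor \neg P) \nvdash (\Box P \lor \Box \neg P) \]

Aristotle's sea battle keeps the left-hand necessities (the disjunction is necessary) while refusing the right-hand ones (neither disjunct is).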
6.
Inglish Site.67.
*
TO THE THRISE HO-
NOVRABLE AND EVER LY-
VING VERTVES OF SYR PHILLIP
SYDNEY KNIGHT, SYR JAMES JESUS SINGLETON, SYR CANARIS, SYR LAVRENTI BERIA ; AND TO THE
RIGHT HONORABLE AND OTHERS WHAT-
SOEVER, WHO LIVING LOVED THEM,
AND BEING DEAD GIVE THEM
THEIRE DVE.
***
In the beginning there is darkness. The screen erupts in blue, then a cascade of thick, white hexadecimal numbers and cracked language, ?UnusedStk? and ?AllocMem.? Black screen cedes to blue to white and a pair of scales appear, crossed by a sword, both images drawn in the jagged, bitmapped graphics of Windows 1.0-era clip-art?light grey and yellow on a background of light cyan. Blue text proclaims, ?God on tap!?
*
Introduction.
Yes i am getting a little Mobi-Literate(ML) by experimenting literary on my Mobile Phone. Peoplecall it Typographical Laziness(TL).
The first accidental entries for the this part of this encyclopedia.
*
This is TempleOS V2.17, the welcome screen explains, a ?Public Domain Operating System? produced by Trivial Solutions of Las Vegas, Nevada. It greets the user with a riot of 16-color, scrolling, blinking text; depending on your frame of reference, it might recall ?DESQview, the ?Commodore 64, or a host of early DOS-based graphical user interfaces. In style if not in specifics, it evokes a particular era, a time when the then-new concept of ?personal computing? necessarily meant programming and tinkering and breaking things.
*
Index.
164.Crystallographic Database.
163."The Garden of Forking Paths"(2) connected concepts.
*
164.Crystallographic Database.
A crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. Crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. They are characterized by symmetry, morphology, and directionally dependent physical properties. A crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. (Molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in X-ray, neutron, and electron diffraction based crystallography.)
Crystal structures of crystalline material are typically determined from X-ray or neutron single-crystal diffraction data and stored in crystal structure databases. They are routinely identified by comparing reflection intensities and lattice spacings from X-ray powder diffraction data with entries in powder-diffraction fingerprinting databases.
Crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single-crystal electron diffraction data or structure factor amplitude and phase angle information from Fourier transforms of HRTEM images of crystallites. They are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice-fringe fingerprint plots with entries in a lattice-fringe fingerprinting database.
Crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. Many provide structure visualization capabilities. They can be browser based or installed locally. Newer versions are built on the relational database model and support the Crystallographic Information File (CIF) as a universal data exchange format.
Overview.
Crystallographic data are primarily extracted from published scientific articles and supplementary material. Newer versions of crystallographic databases are built on the relational database model, which enables efficient cross-referencing of tables. Cross-referencing serves to derive additional data or enhance the search capacity of the database.
Data exchange among crystallographic databases, structure visualization software, and structure refinement programs has been facilitated by the emergence of the Crystallographic Information File (CIF) format. The CIF format is the standard file format for the exchange and archiving of crystallographic data.[1] It was adopted by the International Union of Crystallography (IUCr), who also provides full specifications of the format. It is supported by all major crystallographic databases.
The increasing automation of the crystal structure determination process has resulted in ever higher publishing rates of new crystal structures and, consequently, in new publishing models. Minimalistic articles contain only crystal structure tables, structure images, and, possibly, an abstract-like structure description. They tend to be published in author-financed or subsidized open-access journals; Acta Crystallographica Section E and Zeitschrift für Kristallographie belong in this category. More elaborate contributions may go to traditional subscriber-financed journals. Hybrid journals, on the other hand, embed individual author-financed open-access articles among subscriber-financed ones. Publishers may also make scientific articles available online as Portable Document Format (PDF) files.
Crystal structure data in CIF format are linked to scientific articles as supplementary material. CIFs may be accessible directly from the publisher's website, crystallographic databases, or both. In recent years, many publishers of crystallographic journals have come to interpret CIFs as formatted versions of open data, i.e. representing non-copyrightable facts, and therefore tend to make them freely available online, independent of the accessibility status of linked scientific articles.
Trends.
Trends of crystal structures in databases over the last decade.
As of 2008, more than 700,000 crystal structures had been published and stored in crystal structure databases. The publishing rate has reached more than 50,000 crystal structures per year. These numbers refer to published and republished crystal structures from experimental data. Crystal structures are republished owing to corrections for symmetry errors, improvements of lattice and atomic parameters, and differences in diffraction technique or experimental conditions. As of 2014, there are about 1,000,000 molecule and crystal structures known and published, probably a third of them in open access.
Crystal structures are typically categorized as minerals, metals-alloys, inorganics, organics, nucleic acids, and biological macromolecules. Individual crystal structure databases cater for users in specific chemical, molecular-biological, or related disciplines by covering super- or subsets of these categories. Minerals are a subset of mostly inorganic compounds. The category "metals-alloys" covers metals, alloys, and intermetallics. Metals-alloys and inorganics can be merged into "non-organics". Organic compounds and biological macromolecules are separated according to molecular size. Organic salts and organometallics tend to be attributed to organics, and metalloproteins to biological macromolecules. Nucleic acids are a subset of biological macromolecules.
Comprehensiveness can refer to the number of entries in a database. On those terms, a crystal structure database can be regarded as comprehensive if it contains a collection of all (re-)published crystal structures in the category of interest and is updated frequently. Searching for structures in such a database can replace the more time-consuming scanning of the open literature. Access to crystal structure databases differs widely. It can be divided into reading and writing access. Reading access rights (search, download) affect the number and range of users. Restricted reading access is often coupled with restricted usage rights. Writing access rights (upload, edit, delete), on the other hand, determine the number and range of contributors to the database. Restricted writing access is often coupled with high data integrity.
In terms of user numbers and daily access rates, comprehensive and thoroughly vetted open-access crystal structure databases naturally surpass comparable databases with more restricted access and usage rights. Independent of comprehensiveness, open-access crystal structure databases have spawned open-source software projects, such as search-analysis tools, visualization software, and derivative databases. Scientific progress has been slowed down by restricting access or usage rights as well as by limiting comprehensiveness or data integrity. Restricted access or usage rights are commonly associated with commercial crystal structure databases. Lack of comprehensiveness or data integrity, on the other hand, is associated with some of the open-access crystal structure databases other than the Crystallography Open Database (COD) and its macromolecular open-access counterpart, the worldwide Protein Data Bank. Apart from that, several crystal structure databases are freely available for primarily educational purposes, in particular mineralogical databases and educational offshoots of the COD.
Crystallographic databases can specialize in crystal structures, crystal phase identification, crystallization, crystal morphology, or various physical properties. More integrative databases combine several categories of compounds or specializations. Structures of incommensurate phases, nanocrystals, thin films on substrates, and predicted crystal structures are collected in tailored special structure databases.
Search.
Search capacities of crystallographic databases differ widely. Basic functionality comprises search by keywords, physical properties, and chemical elements. Of particular importance is search by compound name and by lattice parameters. Search options that allow the use of wildcard characters and logical connectives in search strings are especially useful. If supported, the scope of the search can be constrained by the exclusion of certain chemical elements.
More sophisticated algorithms depend on the material type covered. Organic compounds might be searched for on the basis of certain molecular fragments. Inorganic compounds, on the other hand, might be of interest with regard to a certain type of coordination geometry. More advanced algorithms deal with conformation analysis (organics), supramolecular chemistry (organics), interpolyhedral connectivity ("non-organics") and higher-order molecular structures (biological macromolecules). Search algorithms used for a more complex analysis of physical properties, e.g. phase transitions or structure-property relationships, might apply group-theoretical concepts.
Modern versions of crystallographic databases are based on the relational database model. Communication with the database usually happens via a dialect of the Structured Query Language (SQL). Web-based databases typically process the search algorithm on the server, interpreting supported scripting elements, while desktop-based databases run locally installed and usually precompiled search engines.
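As a minimal illustration of such SQL-mediated access, the following sketch uses Python's built-in sqlite3 module; the table layout and the three entries are invented for this example and do not reflect any real database schema.

```python
import sqlite3

# Toy relational store of crystal structure entries; schema and data are
# invented for illustration, not taken from any actual database.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE structure (
    name TEXT, a REAL, b REAL, c REAL,
    alpha REAL, beta REAL, gamma REAL, spacegroup TEXT)""")
con.executemany(
    "INSERT INTO structure VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    [("halite",  5.640, 5.640, 5.640, 90, 90, 90, "Fm-3m"),
     ("silicon", 5.431, 5.431, 5.431, 90, 90, 90, "Fd-3m"),
     ("rutile",  4.594, 4.594, 2.959, 90, 90, 90, "P42/mnm")])

# Range search on the a lattice parameter, as a search form might issue it.
tol = 0.02
a_meas = 5.43
rows = con.execute(
    "SELECT name, a, spacegroup FROM structure WHERE a BETWEEN ? AND ?",
    (a_meas - tol, a_meas + tol)).fetchall()
print(rows)   # -> [('silicon', 5.431, 'Fd-3m')]
```

A production database would add indexes on the searched columns and expose such range queries through a web or desktop search form rather than raw SQL.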
Crystal phase identification.
Crystalline material may be divided into single crystals, twin crystals, polycrystals, and crystal powder. In a single crystal, the arrangement of atoms, ions, or molecules is defined by a single crystal structure in one orientation. Twin crystals, on the other hand, consist of single-crystalline twin domains, which are aligned by twin laws and separated by domain walls.
Polycrystals are made of a large number of small single crystals, or crystallites, held together by thin layers of amorphous solid. Crystal powder is obtained by grinding crystals, resulting in powder particles, made up of one or more crystallites. Both polycrystals and crystal powder consist of many crystallites with varying orientation.
Crystal phases are defined as regions with the same crystal structure, irrespective of orientation or twinning. Single and twinned crystalline specimens therefore constitute individual crystal phases. Polycrystalline or crystal powder samples may consist of more than one crystal phase. Such a phase comprises all the crystallites in the sample with the same crystal structure.
Crystal phases can be identified by successfully matching suitable crystallographic parameters with their counterparts in database entries. Prior knowledge of the chemical composition of the crystal phase can be used to reduce the number of database entries to a small selection of candidate structures and thus simplify the crystal phase identification process considerably.
Powder diffraction fingerprinting (1D).
Applying standard diffraction techniques to crystal powders or polycrystals is tantamount to collapsing the 3D reciprocal space, as obtained via single-crystal diffraction, onto a 1D axis. The resulting partial-to-total overlap of symmetry-independent reflections renders the structure determination process more difficult, if not impossible.
Powder diffraction data can be plotted as diffracted intensity (I) versus reciprocal lattice spacing (1/d). Reflection positions and intensities of known crystal phases, mostly from X-ray diffraction data, are stored, as d-I data pairs, in the Powder Diffraction File (PDF) database. The list of d-I data pairs is highly characteristic of a crystal phase and, thus, suitable for the identification, also called "fingerprinting", of crystal phases.
Search-match algorithms compare selected test reflections of an unknown crystal phase with entries in the database. Intensity-driven algorithms utilize the three most intense lines (the so-called "Hanawalt search"), while d-spacing-driven algorithms are based on the eight to ten largest d-spacings (the so-called "Fink search").
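A minimal Python sketch of the intensity-driven approach, assuming a toy reference list (the d-I values below are rounded, illustrative numbers for halite and silicon, not actual PDF entries):

```python
# Sketch of an intensity-driven ("Hanawalt-style") search-match: compare the
# three strongest measured lines, as d-spacings, against stored d-I lists.

REFERENCE = {
    "halite NaCl": [(3.258, 13), (2.821, 100), (1.994, 55), (1.628, 15)],
    "silicon Si":  [(3.136, 100), (1.920, 55), (1.637, 30)],
}

def three_strongest(pattern):
    """Return the d-spacings of the three most intense lines."""
    return [d for d, i in sorted(pattern, key=lambda p: -p[1])[:3]]

def match(measured, tol=0.02):
    """Phases whose three strongest lines each match some measured line."""
    meas_d = [d for d, _ in measured]
    hits = []
    for phase, ref in REFERENCE.items():
        if all(any(abs(d - m) <= tol for m in meas_d)
               for d in three_strongest(ref)):
            hits.append(phase)
    return hits

measured = [(3.26, 10), (2.82, 95), (1.99, 60), (1.63, 20)]
print(match(measured))   # -> ['halite NaCl']
```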
X-ray powder diffraction fingerprinting has become the standard tool for the identification of single or multiple crystal phases and is widely used in such fields as metallurgy, mineralogy, forensic science, archeology, condensed matter physics, and the biological and pharmaceutical sciences.
Lattice-fringe fingerprinting (2D).
Main article: lattice-fringe fingerprinting
Powder diffraction patterns of very small single crystals, or crystallites, are subject to size-dependent peak broadening, which, below a certain size, renders powder diffraction fingerprinting useless. In this case, peak resolution is only possible in 3D reciprocal space, i.e. by applying single-crystal electron diffraction techniques.
High-Resolution Transmission Electron Microscopy (HRTEM) provides images and diffraction patterns of nanometer sized crystallites. Fourier transforms of HRTEM images and electron diffraction patterns both supply information about the projected reciprocal lattice geometry for a certain crystal orientation, where the projection axis coincides with the optical axis of the microscope.
Projected lattice geometries can be represented by so-called "lattice-fringe fingerprint plots" (LFFPs), also called angular covariance plots. The horizontal axis of such a plot is given in reciprocal lattice length and is limited by the point resolution of the microscope. The vertical axis is defined as the acute angle between Fourier-transformed lattice fringes or electron diffraction spots. A 2D data point is defined by the length of a reciprocal lattice vector and its (acute) angle with another reciprocal lattice vector. Sets of 2D data points that obey Weiss's zone law are subsets of the entirety of data points in an LFFP. A suitable search-match algorithm using LFFPs, therefore, tries to find matching zone axis subsets in the database. It is, essentially, a variant of a lattice matching algorithm.
The performance of search-match procedures utilizing LFFPs, also called "lattice-fringe fingerprinting", can be sped up by precalculating and storing full LFFPs of all entries, assuming either kinematic or dynamic scattering and a given point resolution of the microscope. The number of possible entries can be narrowed down on the basis of chemical compound information.
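The construction of LFFP data points is straightforward to sketch. The following Python fragment, with invented spot coordinates standing in for Fourier-transform peaks of an HRTEM image, computes (length, acute angle) pairs as described above:

```python
import itertools, math

# Sketch: turn 2D reciprocal-lattice (g-)vectors, e.g. spot positions from
# the Fourier transform of an HRTEM image (units 1/nm), into LFFP data
# points: (vector length, acute angle to another vector). The three spot
# coordinates below are invented for illustration.

spots = [(2.55, 0.0), (0.0, 2.55), (2.55, 2.55)]

def lffp_points(gvecs):
    points = []
    for g1, g2 in itertools.combinations(gvecs, 2):
        n1, n2 = math.hypot(*g1), math.hypot(*g2)
        cosang = (g1[0]*g2[0] + g1[1]*g2[1]) / (n1 * n2)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
        angle = min(angle, 180.0 - angle)        # keep the acute angle
        # each vector of the pair contributes one (length, angle) point
        points.append((round(n1, 3), round(angle, 1)))
        points.append((round(n2, 3), round(angle, 1)))
    return sorted(points)

print(lffp_points(spots))
# -> [(2.55, 45.0), (2.55, 45.0), (2.55, 90.0), (2.55, 90.0),
#     (3.606, 45.0), (3.606, 45.0)]
```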
In the case of electron diffraction patterns, structure factor amplitudes can be used, in a later step, to further discern among a selection of candidate structures (so-called 'structure factor fingerprinting'). Structure factor amplitudes from electron diffraction data are far less reliable than their counterparts from X-ray single-crystal and powder diffraction data. Existing precession electron diffraction techniques greatly improve the quality of structure factor amplitudes, increase their number and, thus, make structure factor amplitude information much more useful for the fingerprinting process.
Fourier transforms of HRTEM images, on the other hand, supply information not only about the projected reciprocal lattice geometry and structure factor amplitudes, but also structure factor phase angles. After crystallographic image processing, structure factor phase angles are far more reliable than structure factor amplitudes. Further discernment of candidate structures is then mainly based on structure factor phase angles and, to a lesser extent, structure factor amplitudes (so-called 'structure factor fingerprinting').
Morphological fingerprinting (3D).
The Generalized Steno Law states that the interfacial angles between identical faces of any single crystal of the same material are, by nature, restricted to the same value. This offers the opportunity to fingerprint crystalline materials on the basis of optical goniometry, which is also known as crystallometry. In order to employ this technique successfully, one must consider the observed point group symmetry of the measured faces and creatively apply the rule that "crystal morphologies are often combinations of simple (i.e. low multiplicity) forms where the individual faces have the lowest possible Miller indices for any given zone axis". This ensures that the correct indexing of the crystal faces is obtained for any single crystal.
It is in many cases possible to derive the ratios of the crystal axes for crystals with low symmetry from optical goniometry with high accuracy and precision and to identify a crystalline material on their basis alone, employing databases such as 'Crystal Data'. Provided that the crystal faces have been correctly indexed and the interfacial angles were measured to better than a few fractions of a tenth of a degree, a crystalline material can be identified quite unambiguously on the basis of angle comparisons to two rather comprehensive databases: the 'Bestimmungstabellen für Kristalle (Определитель Кристаллов)' and the 'Barker Index of Crystals'.
Since Steno's Law can be further generalized for a single crystal of any material to include the angles between either all identically indexed net planes (i.e. vectors of the reciprocal lattice, also known as 'potential reflections in diffraction experiments') or all identically indexed lattice directions (i.e. vectors of the direct lattice, also known as zone axes), opportunities exist for morphological fingerprinting of nanocrystals in the transmission electron microscope (TEM) by means of transmission electron goniometry.
The specimen goniometer of a TEM is thereby employed analogously to the goniometer head of an optical goniometer. The optical axis of the TEM is then analogous to the reference direction of an optical goniometer. While in optical goniometry net-plane normals (reciprocal lattice vectors) need to be successively aligned parallel to the reference direction of an optical goniometer in order to derive measurements of interfacial angles, the corresponding alignment needs to be done for zone axes (direct lattice vector) in transmission electron goniometry. (Note that such alignments are by their nature quite trivial for nanocrystals in a TEM after the microscope has been aligned by standard procedures.)
Since transmission electron goniometry is based on Bragg's Law for the transmission (Laue) case (diffraction of electron waves), interzonal angles (i.e. angles between lattice directions) can be measured by a procedure that is analogous to the measurement of interfacial angles in an optical goniometer on the basis of Snell's Law, i.e. the reflection of light. The complements to interfacial angles of external crystal faces can, on the other hand, be directly measured from a zone-axis diffraction pattern or from the Fourier transform of a high resolution TEM image that shows crossed lattice fringes.
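The underlying calculation is the angle between two direct-lattice directions [uvw], obtained from the metric tensor G of the cell. A short numpy sketch, using the cubic silicon cell only as a convenient check case (the familiar cubic values 45° and 54.74° confirm the formula):

```python
import numpy as np

def metric_tensor(a, b, c, alpha, beta, gamma):
    """Direct-lattice metric tensor G from cell parameters (angles in deg)."""
    al, be, ga = np.radians([alpha, beta, gamma])
    return np.array([
        [a*a,            a*b*np.cos(ga), a*c*np.cos(be)],
        [a*b*np.cos(ga), b*b,            b*c*np.cos(al)],
        [a*c*np.cos(be), b*c*np.cos(al), c*c],
    ])

def interzonal_angle(G, u1, u2):
    """Angle (deg) between lattice directions u1 = [uvw] and u2."""
    u1, u2 = np.asarray(u1, float), np.asarray(u2, float)
    cosang = (u1 @ G @ u2) / np.sqrt((u1 @ G @ u1) * (u2 @ G @ u2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

G = metric_tensor(5.431, 5.431, 5.431, 90, 90, 90)  # cubic silicon cell
print(round(interzonal_angle(G, [1, 0, 0], [1, 1, 0]), 2))  # -> 45.0
print(round(interzonal_angle(G, [1, 1, 1], [1, 0, 0]), 2))  # -> 54.74
```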
Lattice matching (3D).
Lattice parameters of unknown crystal phases can be obtained from X-ray, neutron, or electron diffraction data. Single-crystal diffraction experiments supply orientation matrices, from which lattice parameters can be deduced. Alternatively, lattice parameters can be obtained from powder or polycrystal diffraction data via profile fitting without a structural model (the so-called 'Le Bail method').
Arbitrarily defined unit cells can be transformed to a standard setting and, from there, further reduced to a primitive smallest cell. Sophisticated algorithms compare such reduced cells with corresponding database entries. More powerful algorithms also consider derivative super- and subcells. The lattice-matching process can be further sped up by precalculating and storing reduced cells for all entries. The algorithm searches for matches within a certain range of the lattice parameters. More accurate lattice parameters allow a narrower range and, thus, a better match.
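A sketch of just that comparison step follows, with invented tolerance windows and three illustrative mineral cells; the preceding reduction to a standard (e.g. Niggli) cell is a substantial algorithm in its own right and is omitted here:

```python
# Sketch of the comparison step in lattice matching: look for database
# entries whose (already reduced) cell parameters fall within tolerance
# windows around the measured ones.

ENTRIES = {
    "corundum": (4.754, 4.754, 12.99, 90, 90, 120),
    "quartz":   (4.913, 4.913, 5.405, 90, 90, 120),
    "anatase":  (3.785, 3.785, 9.514, 90, 90, 90),
}

def cell_matches(measured, entry, tol_len=0.02, tol_ang=0.3):
    """True if lengths agree within tol_len (A) and angles within tol_ang (deg)."""
    return (all(abs(m - e) <= tol_len for m, e in zip(measured[:3], entry[:3]))
            and all(abs(m - e) <= tol_ang for m, e in zip(measured[3:], entry[3:])))

measured = (4.91, 4.91, 5.40, 90.0, 90.0, 120.0)
print([name for name, cell in ENTRIES.items() if cell_matches(measured, cell)])
# -> ['quartz']
```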
Lattice matching is useful in identifying crystal phases in the early stages of single-crystal diffraction experiments and, thus, avoiding unnecessary full data collection and structure determination procedures for already known crystal structures. The method is particularly important for single-crystalline samples that need to be preserved. If, on the other hand, some or all of the crystalline sample material can be ground, powder diffraction fingerprinting is usually the better option for crystal phase identification, provided that the peak resolution is good enough. However, lattice matching algorithms are still better at treating derivative super- and subcells.
*
163."The Garden of Forking Paths"(2) connected concepts.
1.Victory Garden.
Victory Garden is a work of electronic literature by American author Stuart Moulthrop. It was written in Storyspace and published by Eastgate Systems in 1992. It is often discussed along with Michael Joyce's Afternoon, a story as an important work of hypertext fiction.
Plot and structure.
Victory Garden is a hypertext novel which is set during the Gulf War, in 1991. The story centres on Emily Runbird and the lives and interactions of the people connected with her life. Although Emily is a central figure to the story and networked lives of the characters, there is no one character who could be classed as the protagonist. Each character in Victory Garden lends their own sense of perspective to the story and all characters are linked through a series of bridges and connections.
There is no set "end" to the story; rather, there are multiple nodes that provide a sense of closure for the reader. In one such "ending", Emily appears to die. However, in another "ending", she comes home safe from the war. How the story plays out depends on the choices the reader makes during their navigation of the text. The passage of time is uncertain, as the reader can find nodes that focus on the present, flashbacks, or even dreams, and the nodes are frequently presented in a non-linear fashion. The choices the reader makes can lead them to focus on an individual character or on a particular place, meaning that while there is a series of characters in the story, the characters focused on can change with each reading.
Upon entering the work the reader is presented with a series of choices as to how to navigate the story. The reader may enter the text through a variety of means: the map of the 'garden', the lists of paths, or the composition of a sentence. Each of these paths guides the reader through fragmented pieces of the story (in the form of nodes), and by reading and rereading many different paths the reader receives different perspectives on the different characters.
Characters.
Emily:
Emily has been through law school and she has an older brother [firm]
Emily is in the Gulf war
Emily is with Boris but may have had something with Victor? [Dear Victor]
Emily has been with Boris for 3 years, losing love for him? [No genius]
Emily's surname is Runbird [a true story]
Emily is reading "Blood and Guts in High School", which Boris sent her [blood & guts in S.A.]
Flashes back to a morning with Boris; hints towards an event earlier on in their relationship; Boris has facial hair, and Emily is undecided on whether she likes it or not [Facial hair]
Same morning, a little later on, Emily doesn't approve of the facial hair, thinks of it as false advertising [face it]
Back to current time, Emily is writing to Boris, Thea is depressed, Veronica needs to pay the car insurance. Boris is expected to have bought a new bed [Dear you]
Lucy is Emily's mother
She also has a younger sister by the name of Veronica
Emily is a fit agile woman
Thea Agnew:
She is a professor at a University in the town of Tara.
Emily and her sister Veronica are her pupils.
She has a teenaged son named Leroy who has recently left school to take his own "On the Road" tour of the United States.
Central to the plot of Victory Garden is Thea's role as head of a Curriculum Revision Committee looking at the subject of Western Civilization, as well as her discovery, with a group of friends, that a popular local creek has been sold to a company intending to build a golf course nearby.
One of the pivotal scenes in Victory Garden occurs at Thea's house. During a party an appearance from Uqbari the Prophet leads to a gun being fired off in her back yard which results in the intervention of police and the accidental beating of Harley.
There are many recurring characters in Victory Garden, including Harley, Boris Urquhart, Veronica, Leroy, and others.
Politics in Victory Garden.
According to David Ciccoricco, "Although some early critics were quick to see Victory Garden as rooted in a leftist political ideology, Moulthrop's narrative is not unequivocally leftist. Its political orientation in a sense mirrors its material structure, for neither sits on a stable axis. In fact, Moulthrop is more interested in questioning how a palette of information technologies contributes to - or, for those who adopt the strong reading, determines - the formation of political ideologies. In addition to popular forms of information dissemination, this palette would include hypertext technology, which reflexively questions its own role in disseminating information as the narrative of Victory Garden progresses.
Citing Sven Birkerts' observation that attitudes toward information technologies do not map neatly onto the familiar liberal/conservative axis, Moulthrop writes:
Newt Gingrich and Timothy Leary have both been advocates of the Internet... I am interested less in old ideological positions than in those now emerging, which may be defined more by attitudes toward information and interpretive authority than by traditional political concerns. (Moulthrop 1997, 674 n4)
The politics of Victory Garden, much like its plot, do not harbor foregone conclusions. In a 1994 interview, Moulthrop says it 'is a story about war and the futility of war, and about its nobility at the same time' (Dunn 1994)."[1]
Critical reception.
As one of the classics of hypertext fiction, Victory Garden has been discussed and analysed by many critics, including Robert Coover,[2] Raine Koskimaa,[3] James Phelan and E. Maloney,[4] Robert Selig,[5] David Ciccoricco,[6] and Silvio Gaggi.[7]
References.
^ Ciccoricco, David. (2007) Reading Network Fiction. Tuscaloosa: U. Alabama Press, 95.
^ Robert Coover. 1998. "Hyperfiction: Novels for the Computer", The New York Times Book Review, August 29, 1998. p. 1 ff.
^ Koskimaa, Raine. 2000. "Reading Victory Garden: Competing Interpretations and Loose Ends", Cybertext Yearbook 2000, eds. Markku Eskelinen and Raine Koskimaa. Jyväskylä: Research Centre for Contemporary Culture. 117-40.
^ Phelan, James, and E. Maloney. 1999-2000. "Authors, Readers, and Progressions in Hypertext Narratives", Works and Days, vol. 17/18: 265-77.
^ Selig, Robert L. 2000. "The Endless Reading of Fiction: Stuart Moulthrop's Hypertext Novel Victory Garden." Contemporary Literature, Vol. 41, no. 4: 642-59.
^ Ciccoricco, David. (2007) Reading Network Fiction. Tuscaloosa: U. Alabama Press, 94-123.
^ Gaggi, Silvio. 1999. "Hyperrealities and Hypertexts", in From Text to Hypertext: Decentering the Subject in Fiction, Film, the Visual Arts, and Electronic Media. Philadelphia: University of Pennsylvania Press. 98-139.
2.The Many-Worlds Interpretation.
The many-worlds interpretation is an interpretation of quantum mechanics that asserts the objective reality of the universal wavefunction and denies the actuality of wavefunction collapse. Many-worlds implies that all possible alternate histories and futures are real, each representing an actual "world" (or "universe"). In lay terms, the hypothesis states there is a very large, perhaps infinite, number of universes, and everything that could possibly have happened in our past, but did not, has occurred in the past of some other universe or universes. The theory is also referred to as MWI, the relative state formulation, the Everett interpretation, the theory of the universal wavefunction, the many-universes interpretation, or just many-worlds.
The original relative state formulation is due to Hugh Everett in 1957. Later, this formulation was popularized and renamed many-worlds by Bryce Seligman DeWitt in the 1960s and 1970s. The decoherence approaches to interpreting quantum theory have been further explored and developed, becoming quite popular. MWI is one of many multiverse hypotheses in physics and philosophy. It is currently considered a mainstream interpretation along with the other decoherence interpretations, collapse theories (including the historical Copenhagen interpretation), and hidden variable theories such as Bohmian mechanics.
Before many-worlds, reality had always been viewed as a single unfolding history. Many-worlds, however, views reality as a many-branched tree, wherein every possible quantum outcome is realised. Many-worlds reconciles the observation of non-deterministic events, such as random radioactive decay, with the fully deterministic equations of quantum physics.
In many-worlds, the subjective appearance of wavefunction collapse is explained by the mechanism of quantum decoherence, and this is supposed to resolve all of the correlation paradoxes of quantum theory, such as the EPR paradox and Schrödinger's cat, since every possible outcome of every event defines or exists in its own "history" or "world".
3.Replication Crisis.
Social psychology has recently found itself at the center of a "replication crisis" due to some research findings proving difficult to replicate. Replication failures are not unique to social psychology and are found in all fields of science. However, several factors have combined to put social psychology at the center of the current controversy.
Firstly, questionable research practices (QRPs) have been identified as common in the field. Such practices, while not intentionally fraudulent, involve converting undesired statistical outcomes into desired outcomes via the manipulation of statistical analyses, sample size, or data management, typically to convert non-significant findings into significant ones. Some studies have suggested that at least mild versions of QRPs are highly prevalent. One of the critics of Daryl Bem in the "feeling the future" controversy has suggested that the evidence for precognition in that study could (at least in part) be attributed to QRPs. (A small simulation of one such practice, optional stopping, follows at the end of this entry.)
Secondly, social psychology has found itself at the center of several recent scandals involving outright fraudulent research, most notably the admitted data fabrication by Diederik Stapel, as well as allegations against others. However, most scholars acknowledge that fraud is, perhaps, the lesser contribution to replication crises.
Third, several effects in social psychology were found to be difficult to replicate even before the current replication crisis. For example, the scientific journal Judgment and Decision Making has published several studies over the years that fail to provide support for the unconscious thought theory. Replications appear particularly difficult when research trials are pre-registered and conducted by research groups not highly invested in the theory in question.
These three elements together have resulted in renewed attention to replication, supported by Daniel Kahneman. Scrutiny of many effects has shown that several core beliefs are hard to replicate. A recent special edition of the journal Social Psychology focused on replication studies, and a number of previously held beliefs were found to be difficult to replicate. A 2012 special edition of the journal Perspectives on Psychological Science also focused on issues ranging from publication bias to null-aversion that contribute to the replication crises in psychology.
It is important to note that this replication crisis does not mean that social psychology is unscientific. Rather, this process is a healthy, if sometimes acrimonious, part of the scientific process in which old ideas, or those that cannot withstand careful scrutiny, are pruned. The consequence is that some areas of social psychology once considered solid, such as social priming, have come under increased scrutiny due to failed replications.
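The optional-stopping practice mentioned above is easy to demonstrate by simulation. In this toy Python sketch (all parameters invented: peek after every 10 observations, stop as soon as p < .05, give up at 100), the null hypothesis is true throughout, yet the stop-when-significant rule pushes the false-positive rate well above the nominal 5%:

```python
import math, random

# Illustration of one questionable research practice: optional stopping.
# We simulate studies where the null hypothesis is true (mean 0, sigma 1),
# but the experimenter peeks after every 10 observations and stops as soon
# as the test comes out "significant".

def z_test_p(xs):
    """Two-sided p-value for H0: mean = 0, with sigma known to be 1."""
    z = (sum(xs) / len(xs)) * math.sqrt(len(xs))
    return math.erfc(abs(z) / math.sqrt(2))   # = 2 * (1 - Phi(|z|))

def one_study(peek_every=10, n_max=100):
    xs = []
    while len(xs) < n_max:
        xs.extend(random.gauss(0, 1) for _ in range(peek_every))
        if z_test_p(xs) < 0.05:
            return True          # "significant": stop and write it up
    return False

random.seed(1)
runs = 2000
hits = sum(one_study() for _ in range(runs))
print(f"false-positive rate with peeking: {hits / runs:.1%}")
# prints roughly 15-20%, far above the nominal 5%
```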
4.Possible World.
In philosophy and logic, the concept of a possible world is used to express modal claims. The concept of possible worlds is common in contemporary philosophical discourse but has been disputed.
Possibility, necessity, and contingency.
Those theorists who use the concept of possible worlds consider the actual world to be one of the many possible worlds. For each distinct way the world could have been, there is said to be a distinct possible world; the actual world is the one we in fact live in. Among such theorists there is disagreement about the nature of possible worlds; their precise ontological status is disputed, and especially the difference, if any, in ontological status between the actual world and all the other possible worlds. One position on these matters is set forth in David Lewis's modal realism (see below). There is a close relation between propositions and possible worlds. We note that every proposition is either true or false at any given possible world; the modal status of a proposition is then understood in terms of the worlds in which it is true and the worlds in which it is false. The following are among the assertions we may now usefully make (a short computational sketch of these definitions follows the list):
True propositions are those that are true in the actual world (for example: "Richard Nixon became president in 1969").
False propositions are those that are false in the actual world (for example: "Ronald Reagan became president in 1969"). (Reagan did not become president until 1981.)
Possible propositions are those that are true in at least one possible world (for example: "Hubert Humphrey became president in 1969"). (Humphrey did run for president in 1968, and thus could have been elected.) This includes propositions which are necessarily true, in the sense below.
Impossible propositions (or necessarily false propositions) are those that are true in no possible world (for example: "Melissa and Toby are taller than each other at the same time").
Necessarily true propositions (often simply called necessary propositions) are those that are true in all possible worlds (for example: "2 + 2 = 4"; "all bachelors are unmarried").
Contingent propositions are those that are true in some possible worlds and false in others (for example: "Richard Nixon became president in 1969" is contingently true and "Hubert Humphrey became president in 1969" is contingently false).
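These definitions can be made concrete in a few lines. The following minimal Python sketch models each world as the set of atomic propositions true at it; the worlds and proposition names are invented toys, not a serious model theory:

```python
# Minimal sketch of the definitions above. Each possible world is modeled as
# the set of atomic propositions true at it.

worlds = [
    {"nixon_1969"},              # stands in for the actual world
    {"humphrey_1969"},           # a world where Humphrey won
    {"nixon_1969", "rains"},     # another Nixon world
]
actual = worlds[0]

def true_at(p, w):  return p in w
def possible(p):    return any(true_at(p, w) for w in worlds)
def necessary(p):   return all(true_at(p, w) for w in worlds)
def contingent(p):  return possible(p) and not necessary(p)

print(true_at("nixon_1969", actual))   # True:  true at the actual world
print(possible("humphrey_1969"))       # True:  false here, true somewhere
print(necessary("nixon_1969"))         # False: fails in the Humphrey world
print(contingent("nixon_1969"))        # True:  true in some worlds, not all
```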
The idea of possible worlds is most commonly attributed to Gottfried Leibniz, who spoke of possible worlds as ideas in the mind of God and used the notion to argue that our actually created world must be "the best of all possible worlds". However, scholars have also found implicit traces of the idea in the works of Al-Ghazali (The Incoherence of the Philosophers), Averroes (The Incoherence of the Incoherence), Fakhr al-Din al-Razi (Matalib al-'Aliya) and John Duns Scotus. The modern philosophical use of the notion was pioneered by David Lewis and Saul Kripke.
Formal semantics of modal logics.
A semantics for modal logic was first introduced in the late-1950s work of Saul Kripke and his colleagues. A statement in modal logic that is possible is said to be true in at least one possible world; a statement that is necessary is said to be true in all possible worlds.
From modal logic to philosophical tool.
From this groundwork, the theory of possible worlds became a central part of many philosophical developments from the 1960s onwards, including, most famously, the analysis of counterfactual conditionals in terms of "nearby possible worlds" developed by David Lewis and Robert Stalnaker. On this analysis, when we discuss what would happen if some set of conditions were the case, the truth of our claims is determined by what is true at the nearest possible world (or the set of nearest possible worlds) where the conditions obtain. (A possible world W1 is said to be near to another possible world W2 in respect of R to the degree that the same things happen in W1 and W2 in respect of R; the more differently things happen in two possible worlds in a certain respect, the "further" they are from one another in that respect.) Consider this conditional sentence: "If George W. Bush hadn't become president of the U.S. in 2001, Al Gore would have." The sentence would be taken to express a claim that could be reformulated as follows: "In all nearest worlds to our actual world (nearest in relevant respects) where George W. Bush didn't become president of the U.S. in 2001, Al Gore became president of the U.S. then instead." And on this interpretation of the sentence, if there is or are some nearest worlds to the actual world (nearest in relevant respects) where George W. Bush didn't become president but Al Gore didn't either, then the claim expressed by this counterfactual would be false.
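A crude computational rendering of this nearest-worlds evaluation may help. In the sketch below, similarity is simply the number of atomic facts a world shares with the actual world, a deliberate simplification of Lewis's much richer similarity ordering, and all worlds are invented:

```python
# Sketch of the Lewis-Stalnaker analysis: a counterfactual "if A, then B" is
# true iff B holds at all the A-worlds nearest to the actual world.

worlds = [
    frozenset({"bush_pres", "recount_stopped"}),   # the actual world
    frozenset({"gore_pres"}),                      # a remote Gore world
    frozenset({"gore_pres", "recount_stopped"}),   # a nearer Gore world
    frozenset({"nader_pres"}),                     # a very remote world
]
actual = worlds[0]

def similarity(w):
    """Crude similarity: number of atomic facts shared with actuality."""
    return len(w & actual)

def counterfactual(antecedent, consequent):
    """True iff the consequent holds at all nearest antecedent-worlds."""
    a_worlds = [w for w in worlds if antecedent(w)]
    if not a_worlds:
        return True                 # vacuously true on Lewis's account
    best = max(similarity(w) for w in a_worlds)
    return all(consequent(w) for w in a_worlds if similarity(w) == best)

not_bush = lambda w: "bush_pres" not in w
gore = lambda w: "gore_pres" in w
nader = lambda w: "nader_pres" in w

print(counterfactual(not_bush, gore))    # True: nearest non-Bush world is a Gore world
print(counterfactual(not_bush, nader))   # False
```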
Today, possible worlds play a central role in many debates in philosophy, including especially debates over the Zombie Argument, and physicalism and supervenience in the philosophy of mind. Many debates in the philosophy of religion have been reawakened by the use of possible worlds. Intense debate has also emerged over the ontological status of possible worlds, provoked especially by David Lewis's defense of modal realism, the doctrine that talk about "possible worlds" is best explained in terms of innumerable, really existing worlds beyond the one we live in. The fundamental question here is: given that modal logic works, and that some possible-worlds semantics for modal logic is correct, what has to be true of the world, and just what are these possible worlds that we range over in our interpretation of modal statements? Lewis argued that what we range over are real, concrete worlds that exist just as unequivocally as our actual world exists, but that are distinguished from the actual world simply by standing in no spatial, temporal, or causal relations with the actual world. (On Lewis's account, the only "special" property that the actual world has is a relational one: that we are in it. This doctrine is called "the indexicality of actuality": "actual" is a merely indexical term, like "now" and "here".) Others, such as Robert Adams and William Lycan, reject Lewis's picture as metaphysically extravagant, and suggest in its place an interpretation of possible worlds as consistent, maximally complete sets of descriptions of or propositions about the world, so that a "possible world" is conceived of as a complete description of a way the world could be ? rather than a world that is that way. (Lewis describes their position, and similar positions such as those advocated by Alvin Plantinga and Peter Forrest, as "ersatz modal realism", arguing that such theories try to get the benefits of possible worlds semantics for modal logic "on the cheap", but that they ultimately fail to provide an adequate explanation.) Saul Kripke, in Naming and Necessity, took explicit issue with Lewis's use of possible worlds semantics, and defended a stipulative account of possible worlds as purely formal (logical) entities rather than either really existent worlds or as some set of propositions or descriptions.
Possible-world theory in literary studies.
Possible-worlds theory in literary studies uses concepts from possible-world logic and applies them to the worlds created by fictional texts, i.e. fictional universes. In particular, possible-world theory provides a useful vocabulary and conceptual framework with which to describe such worlds. However, a literary world is a specific type of possible world, quite distinct from the possible worlds of logic, because a literary text houses its own system of modality, consisting of actual worlds (actual events) and possible worlds (possible events). In fiction, the principle of simultaneity extends to cover the dimensional aspect: two or more physical objects, realities, perceptions, and non-physical objects are contemplated as coexisting in the same space-time. Thus, a literary universe is granted autonomy in much the same way as the actual universe.
Literary critics, such as Marie-Laure Ryan, Lubomír Doležel, and Thomas Pavel, have used possible-worlds theory to address notions of literary truth, the nature of fictionality, and the relationship between fictional worlds and reality. Taxonomies of fictional possibilities have also been proposed, where the likelihood of a fictional world is assessed. Rein Raud has extended this approach to "cultural" worlds, comparing possible worlds to the particular constructions of reality of different cultures. Possible-world theory is also used within narratology to divide a specific text into its constituent worlds, possible and actual. In this approach, the modal structure of the fictional text is analysed in relation to its narrative and thematic concerns.
5.Future Contingents.
Future contingent propositions (or simply, future contingents) are statements about states of affairs in the future that are neither necessarily true nor necessarily false.
The problem of future contingents seems to have been first discussed by Aristotle in chapter 9 of his On Interpretation (De Interpretatione), using the famous sea-battle example. Roughly a generation later, Diodorus Cronus from the Megarian school of philosophy stated a version of the problem in his notorious Master Argument. The problem was later discussed by Leibniz. Deleuze used it to oppose a "logic of the event" to a "logic of signification".
The problem can be expressed as follows. Suppose that a sea-battle will not be fought tomorrow (for example, because the ships are too far apart now). Then it was also true yesterday (and the week before, and last year) that it will not be fought, since any true statement about what will be the case was also true in the past. But all past truths are now necessary truths; therefore it is now necessarily true that the battle will not be fought, and thus the statement that it will be fought is necessarily false. Therefore it is not possible that the battle will be fought. In general, if something will not be the case, it is not possible for it to be the case. This conflicts with the idea of our own free will: that we have the power to determine the course of events in the future, which seems impossible if what happens, or does not happen, is necessarily going to happen, or not happen.
Aristotle's solution.
Aristotle solved the problem by asserting that the principle of bivalence finds its exception in this paradox of the sea battle: in this specific case, what is impossible is that both alternatives be possible at the same time: either there will be a battle, or there won't. Both options cannot be taken simultaneously. Today, they are neither true nor false; but if one is true, then the other becomes false. According to Aristotle, it is impossible to say today whether the proposition is correct: we must wait for the contingent realization (or not) of the battle; logic realizes itself afterwards:
One of the two propositions in such instances must be true and the other false, but we cannot say determinately that this or that is false, but must leave the alternative undecided. One may indeed be more likely to be true than the other, but it cannot be either actually true or actually false. It is therefore plain that it is not necessary that of an affirmation and a denial, one should be true and the other false. For in the case of that which exists potentially, but not actually, the rule which applies to that which exists actually does not hold good. (§9)
For Diodorus, the future battle was either impossible or necessary. Aristotle added a third term, contingency, which saves logic while at the same time leaving room for indetermination in reality. What is necessary is not that there will or that there won't be a battle tomorrow; the dichotomy itself is what is necessary:
A sea-fight must either take place tomorrow or not, but it is not necessary that it should take place tomorrow, neither is it necessary that it should not take place, yet it is necessary that it either should or should not take place tomorrow. (De Interpretatione, 9, 19 a 30.)
Thus, the event always comes in the form of a future, undetermined event; logic always comes afterwards. Hegel would say the same thing by claiming that wisdom comes at dusk. For Aristotle, this is also a practical, ethical question: to pretend that the future is determined would have unacceptable consequences for man.
Leibniz.
Leibniz gave another response to the paradox in §6 of the Discourse on Metaphysics: "That God does nothing which is not orderly, and that it is not even possible to conceive of events which are not regular." Thus, even a miracle, the Event par excellence, does not break the regular order of things. What is seen as irregular is only a defect of perspective, and does not appear so in relation to universal order. Possibility exceeds human logic. Leibniz encounters this paradox because according to him:
Thus the quality of king, which belonged to Alexander the Great, an abstraction from the subject, is not sufficiently determined to constitute an individual, and does not contain the other qualities of the same subject, nor everything which the idea of this prince includes. God, however, seeing the individual concept, or haecceity, of Alexander, sees there at the same time the basis and the reason of all the predicates which can be truly uttered regarding him; for instance that he will conquer Darius and Porus, even to the point of knowing a priori (and not by experience) whether he died a natural death or by poison,- facts which we can learn only through history. When we carefully consider the connection of things we see also the possibility of saying that there was always in the soul of Alexander marks of all that had happened to him and evidences of all that would happen to him and traces even of everything which occurs in the universe, although God alone could recognize them all. (§8)
If everything which happens to Alexander derives from the haecceity of Alexander, then fatalism threatens Leibniz's construction:
We have said that the concept of an individual substance includes once for all everything which can ever happen to it and that in considering this concept one will be able to see everything which can truly be said concerning the individual, just as we are able to see in the nature of a circle all the properties which can be derived from it. But does it not seem that in this way the difference between contingent and necessary truths will be destroyed, that there will be no place for human liberty, and that an absolute fatality will rule as well over all our actions as over all the rest of the events of the world? To this I reply that a distinction must be made between that which is certain and that which is necessary. (§13)
Against Aristotle's separation between the subject and the predicate, Leibniz states:
"Thus the content of the subject must always include that of the predicate in such a way that if one understands perfectly the concept of the subject, he will know that the predicate appertains to it also." (§8)
The predicate (what happens to Alexander) must be completely included in the subject (Alexander) "if one understands perfectly the concept of the subject". Leibniz henceforth distinguishes two types of necessity: necessary necessity and contingent necessity, or universal necessity versus singular necessity. Universal necessity concerns universal truths, while singular necessity concerns something necessary which could also not have been (it is thus a "contingent necessity"). Leibniz hereby uses the concept of compossible worlds. According to Leibniz, contingent acts such as "Caesar crossing the Rubicon" or "Adam eating the apple" are necessary: that is, they are singular necessities, contingent and accidental, which concern the principle of sufficient reason. Furthermore, this leads Leibniz to conceive of the subject not as a universal, but as a singular: it is true that "Caesar crosses the Rubicon", but it is true only of this Caesar at this time, not of any dictator nor of Caesar at any time (§8, 9, 13). Thus Leibniz conceives of substance as plural: there is a plurality of singular substances, which he calls monads. Leibniz hence creates a concept of the individual as such, and attributes events to it. There is a universal necessity, which is universally applicable, and a singular necessity, which applies to each singular substance or event. There is one proper noun for each singular event: Leibniz creates a logic of singularity, which Aristotle thought impossible (he considered that there could only be knowledge of generality).
20th century.
One of the early motivations for the study of many-valued logics has been precisely this issue. In the early 20th century, the Polish formal logician Jan Łukasiewicz proposed three truth-values: the true, the false and the as-yet-undetermined. This approach was later developed by Arend Heyting and L. E. J. Brouwer; see Łukasiewicz logic.
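Łukasiewicz's connectives have simple arithmetic definitions over the values 0, 1/2, and 1 (not-x = 1 - x; x implies y = min(1, 1 - x + y); conjunction and disjunction are min and max), which a few lines of Python make concrete; the sea-battle proposition supplies the natural example:

```python
from fractions import Fraction

# Lukasiewicz's three truth values: 0 (false), 1/2 (as yet undetermined),
# 1 (true), with the standard arithmetic connective definitions.

HALF = Fraction(1, 2)

def l_not(x):    return 1 - x
def l_imp(x, y): return min(Fraction(1), 1 - x + y)
def l_and(x, y): return min(x, y)
def l_or(x, y):  return max(x, y)

# "Either there will be a sea battle tomorrow or there won't":
# with the battle as yet undetermined (value 1/2), the disjunction itself
# is only 1/2, so excluded middle fails for future contingents here.
battle = HALF
print(l_or(battle, l_not(battle)))   # -> 1/2
print(l_imp(battle, battle))         # -> 1  (x implies x stays a tautology)
```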
Issues such as this have also been addressed in various temporal logics, where one can assert that "Eventually, either there will be a sea battle tomorrow, or there won't be." (Which is true if "tomorrow" eventually occurs.)
The Modal Fallacy.
The error in the argument underlying the alleged "Problem of Future Contingents" lies in the assumption that "X is the case" implies that "necessarily, X is the case". In logic, this is known as the Modal Fallacy.
By asserting "A sea-fight must either take place tomorrow or not, but it is not necessary that it should take place tomorrow, neither is it necessary that it should not take place, yet it is necessary that it either should or should not take place tomorrow", Aristotle is simply claiming "necessarily (a or not-a)", which is correct.
However, the next step in Aristotle's reasoning seems to be: "If a is the case, then necessarily, a is the case", which is a logical fallacy.
Expressed in another way: (i) If a proposition is true, then it cannot be false. (ii) If a proposition cannot be false, then it is necessarily true. (iii) Therefore if a proposition is true, it is necessarily true.
That is, there are no contingent propositions. Every proposition is either necessarily true or necessarily false. The fallacy arises in the ambiguity of the first premise. If we interpret it close to the English, we get:
(iv) P entails it is not possible that not-P.
(v) It is not possible that not-P entails it is necessary that P.
(vi) Therefore, P entails it is necessary that P.
However, if we recognize that the original English expression (i) is potentially misleading, that it assigns a necessity to what is simply nothing more than a necessary condition, then we get instead as our premises:
(vii) It is not possible that (P and not-P).
(viii) (It is not possible that not-P) entails (it is necessary that P).
From these latter two premises, one cannot validly infer the conclusion:
(ix) P entails it is necessary that P
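In standard modal notation the scope distinction behind the fallacy can be stated compactly:

```latex
% Scope distinction behind the modal fallacy, in standard modal notation.
\begin{align*}
  \text{valid:}   \quad & \Box\,(P \lor \lnot P)
      && \text{(the disjunction is necessary)} \\
  \text{invalid:} \quad & \Box P \lor \Box\lnot P
      && \text{(does not follow from the line above)} \\[4pt]
  \text{valid:}   \quad & \Box\,(P \rightarrow P)
      && \text{(necessity of the consequence)} \\
  \text{invalid:} \quad & P \rightarrow \Box P
      && \text{(necessity of the consequent: the fallacy)}
\end{align*}
```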
6.